Sample records for previous models based

  1. Effects of linking a soil-water-balance model with a groundwater-flow model

    USGS Publications Warehouse

    Stanton, Jennifer S.; Ryter, Derek W.; Peterson, Steven M.

    2013-01-01

    A previously published regional groundwater-flow model in north-central Nebraska was sequentially linked with the recently developed soil-water-balance (SWB) model to analyze effects on groundwater-flow model parameters and calibration results. The linked models provided a more detailed spatial and temporal distribution of simulated recharge based on hydrologic processes, improvement of simulated groundwater-level changes and base flows at specific sites in agricultural areas, and a physically based assessment of the relative magnitude of recharge for grassland, nonirrigated cropland, and irrigated cropland areas. Root-mean-squared (RMS) differences between the simulated and estimated or measured target values for the previously published model and linked models were relatively similar and did not improve for all types of calibration targets. However, without any adjustment to the SWB-generated recharge, the RMS difference between simulated and estimated base-flow target values for the groundwater-flow model was slightly smaller than for the previously published model, possibly indicating that the volume of recharge simulated by the SWB code was closer to actual hydrogeologic conditions than that of the previously published model. Groundwater-level and base-flow hydrographs showed that temporal patterns of simulated groundwater levels and base flows were more accurate for the linked models than for the previously published model at several sites, particularly in agricultural areas.

  2. Boosting with Averaged Weight Vectors

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. Some researchers have pointed out the intuition that it is probably better to construct a distribution that is orthogonal to the mistake vectors of all the previous base models, but that this is not always possible. We present an algorithm that attempts to come as close as possible to this goal in an efficient manner. We present experimental results demonstrating significant improvement over AdaBoost and the Totally Corrective boosting algorithm, which also attempts to satisfy this goal.
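
    To make the key step concrete: below is a minimal sketch of the standard AdaBoost reweighting rule (hypothetical variable names, not code from the paper). Mistakes are scaled up and correct examples down so that, under the new distribution, the previous base model's weighted error is exactly 1/2, which is the orthogonality property referred to above.

    ```python
    import numpy as np

    def adaboost_reweight(d, mistakes, epsilon):
        """One AdaBoost reweighting step.

        d        : current distribution over training examples (sums to 1)
        mistakes : boolean vector, True where the previous base model erred
        epsilon  : weighted error of that model, i.e. d[mistakes].sum()
        """
        d_next = d.copy()
        d_next[mistakes] *= 1.0 / (2.0 * epsilon)           # boost the mistakes
        d_next[~mistakes] *= 1.0 / (2.0 * (1.0 - epsilon))  # shrink the rest
        return d_next / d_next.sum()
    ```

    Under `d_next`, the mistake set carries probability mass epsilon * 1/(2*epsilon) = 1/2, so the previous model performs no better than chance against the new distribution.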

  3. Gaze data reveal distinct choice processes underlying model-based and model-free reinforcement learning

    PubMed Central

    Konovalov, Arkady; Krajbich, Ian

    2016-01-01

    Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383
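
    The two strategies contrasted above are easy to state in code. The sketch below uses a made-up two-stage task with illustrative names and parameters (it is not the authors' paradigm): the model-free update directly reinforces rewarded actions, while the model-based evaluation looks forward through learned action-state transition probabilities.

    ```python
    import numpy as np

    n_actions, n_states = 2, 2
    q_mf = np.zeros(n_actions)               # model-free values of first-stage actions
    T = np.full((n_actions, n_states), 0.5)  # learned action -> state transition probabilities
    q_stage2 = np.zeros(n_states)            # learned second-stage state values
    alpha = 0.1                              # learning rate (illustrative)

    def update_model_free(action, reward):
        # Mere reinforcement of previously rewarded actions.
        q_mf[action] += alpha * (reward - q_mf[action])

    def update_transitions(action, state):
        # Track which second-stage state each action tends to lead to.
        T[action] += alpha * (np.eye(n_states)[state] - T[action])

    def model_based_values():
        # Forward-looking evaluation over action-state transition probabilities.
        return T @ q_stage2
    ```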

  4. Agent-based modeling of the spread of the 1918-1919 flu in three Canadian fur trading communities.

    PubMed

    O'Neil, Caroline A; Sattenspiel, Lisa

    2010-01-01

    Previous attempts to study the 1918-1919 flu in three small communities in central Manitoba have used both three-community population-based and single-community agent-based models. These studies identified critical factors influencing epidemic spread, but they also left important questions unanswered. The objective of this project was to design a more realistic agent-based model that would overcome limitations of earlier models and provide new insights into these outstanding questions. The new model extends the previous agent-based model to three communities so that results can be compared to those from the population-based model. Sensitivity testing was conducted, and the new model was used to investigate the influence of seasonal settlement and mobility patterns, the geographic heterogeneity of the observed 1918-1919 epidemic in Manitoba, and other questions addressed previously. Results confirm outcomes from the population-based model that suggest that (a) social organization and mobility strongly influence the timing and severity of epidemics and (b) the impact of the epidemic would have been greater if it had arrived in the summer rather than the winter. New insights from the model suggest that the observed heterogeneity among communities in epidemic impact was not unusual and would have been the expected outcome given settlement structure and levels of interaction among communities. Application of an agent-based computer simulation has helped to better explain observed patterns of spread of the 1918-1919 flu epidemic in central Manitoba. Contrasts between agent-based and population-based models illustrate the advantages of agent-based models for the study of small populations. © 2010 Wiley-Liss, Inc.
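
    As a rough illustration of the modeling approach only (not the published model), the sketch below implements a minimal agent-based SIR process in three linked communities with a simple mobility rule; every parameter value is invented for the example.

    ```python
    import random

    def simulate(n_per_community=100, beta=0.3, gamma=0.1,
                 travel_prob=0.02, days=200, seed=1):
        """Minimal three-community agent-based SIR sketch."""
        random.seed(seed)
        agents = [{"home": c, "loc": c, "state": "S"}
                  for c in range(3) for _ in range(n_per_community)]
        agents[0]["state"] = "I"  # index case in community 0
        history = []
        for _ in range(days):
            # Mobility: agents occasionally visit another community.
            for a in agents:
                a["loc"] = random.randrange(3) if random.random() < travel_prob else a["home"]
            # Transmission within each community.
            for c in range(3):
                local = [a for a in agents if a["loc"] == c]
                n_inf = sum(a["state"] == "I" for a in local)
                for a in local:
                    if a["state"] == "S" and random.random() < beta * n_inf / max(len(local), 1):
                        a["state"] = "I"
            # Recovery.
            for a in agents:
                if a["state"] == "I" and random.random() < gamma:
                    a["state"] = "R"
            history.append(sum(a["state"] == "I" for a in agents))
        return history  # daily count of infectious agents
    ```

    Varying `travel_prob` and the seeding season in such a toy model is the kind of experiment the abstract describes for settlement and mobility patterns.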

  5. Directivity models produced for the Next Generation Attenuation West 2 (NGA-West 2) project

    USGS Publications Warehouse

    Spudich, Paul A.; Watson-Lamprey, Jennie; Somerville, Paul G.; Bayless, Jeff; Shahi, Shrey; Baker, Jack W.; Rowshandel, Badie; Chiou, Brian

    2012-01-01

    Five new directivity models are being developed for the NGA-West 2 project. All are based on the NGA-West 2 data base, which is considerably expanded from the original NGA-West data base, containing about 3,000 more records from earthquakes having finite-fault rupture models. All of the new directivity models have parameters based on fault dimension in km, not normalized fault dimension. This feature removes a peculiarity of previous models which made them inappropriate for modeling large magnitude events on long strike-slip faults. Two models are explicitly, and one is implicitly, 'narrowband' models, in which the effect of directivity does not monotonically increase with spectral period but instead peaks at a specific period that is a function of earthquake magnitude. These narrowband models' functional forms are capable of simulating directivity over a wider range of earthquake magnitude than previous models. The functional forms of the five models are presented.

  6. Differentiation and Exploration of Model MACP for HE VER 1.0 on Prototype Performance Measurement Application for Higher Education

    NASA Astrophysics Data System (ADS)

    El Akbar, R. Reza; Anshary, Muhammad Adi Khairul; Hariadi, Dennis

    2018-02-01

    Model MACP for HE ver.1 is a model that describes how to measure and monitor performance in Higher Education. Based on a review of research related to the model, several components of the model warrant further development, so this research has four main objectives. The first objective is to differentiate the CSF (critical success factor) components of the previous model; the second is to explore the KPIs (key performance indicators) of the previous model; the third, building on the previous two objectives, is to design a new and more detailed model. The fourth and final objective is to design a prototype application for performance measurement in higher education based on the new model. The method used is exploratory research, with the application designed using a prototyping method. The results of this study are, first, a new and more detailed model for measuring and monitoring performance in higher education, obtained through differentiation and exploration of the Model MACP for HE Ver.1. The second result is a dictionary of college performance measurement, compiled by re-evaluating the existing indicators. The third result is the design of a prototype application for performance measurement in higher education.

  7. A mathematical programming method for formulating a fuzzy regression model based on distance criterion.

    PubMed

    Chen, Liang-Hsuan; Hsueh, Chan-Ching

    2007-06-01

    Fuzzy regression models are useful to investigate the relationship between explanatory and response variables with fuzzy observations. Different from previous studies, this correspondence proposes a mathematical programming method to construct a fuzzy regression model based on a distance criterion. The objective of the mathematical programming is to minimize the sum of distances between the estimated and observed responses on the X axis, such that the fuzzy regression model constructed has the minimal total estimation error in distance. Only several alpha-cuts of fuzzy observations are needed as inputs to the mathematical programming model; therefore, the applications are not restricted to triangular fuzzy numbers. Three examples, adopted in the previous studies, and a larger example, modified from the crisp case, are used to illustrate the performance of the proposed approach. The results indicate that the proposed model has better performance than those in the previous studies based on either distance criterion or Kim and Bishu's criterion. In addition, the efficiency and effectiveness for solving the larger example by the proposed model are also satisfactory.

  8. Developing a Physiologically-Based Pharmacokinetic Model Knowledgebase in Support of Provisional Model Construction

    EPA Science Inventory

    Developing physiologically-based pharmacokinetic (PBPK) models for chemicals can be resource-intensive, as neither chemical-specific parameters nor in vivo pharmacokinetic data are easily available for model construction. Previously developed, well-parameterized, and thoroughly-v...

  9. Preliminary Multivariable Cost Model for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2010-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. Previously, the authors published two single-variable cost models based on 19 flight missions. The current paper presents the development of a multivariable space telescope cost model. The validity of previously published models is tested, cost estimating relationships which are and are not significant cost drivers are identified, and interrelationships between variables are explored.

  10. Developing a Physiologically-Based Pharmacokinetic Model Knowledgebase in Support of Provisional Model Construction - poster

    EPA Science Inventory

    Building new physiologically based pharmacokinetic (PBPK) models requires a lot data, such as the chemical-specific parameters and in vivo pharmacokinetic data. Previously-developed, well-parameterized, and thoroughly-vetted models can be great resource for supporting the constr...

  11. The Impact of Secondary School Students' Preconceptions on the Evolution of their Mental Models of the Greenhouse effect and Global Warming

    NASA Astrophysics Data System (ADS)

    Reinfried, Sibylle; Tempelmann, Sebastian

    2014-01-01

    This paper provides a video-based learning process study that investigates the kinds of mental models of the atmospheric greenhouse effect 13-year-old learners have and how these mental models change with a learning environment, which is optimised in regard to instructional psychology. The objective of this explorative study was to observe and analyse the learners' learning pathways according to their previous knowledge in detail and to understand the mental model formation processes associated with them more precisely. For the analysis of the learning pathways, drawings, texts, video and interview transcripts from 12 students were studied using qualitative methods. The learning pathways pursued by the learners significantly depend on their domain-specific previous knowledge. The learners' preconceptions could be typified based on specific characteristics, whereby three preconception types could be formed. The 'isolated pieces of knowledge' type of learners, who have very little or no previous knowledge about the greenhouse effect, build new mental models that are close to the target model. 'Reduced heat output' type of learners, who have previous knowledge that indicates compliances with central ideas of the normative model, reconstruct their knowledge by reorganising and interpreting their existing knowledge structures. 'Increasing heat input' type of learners, whose previous knowledge consists of subjective worldly knowledge, which has a greater personal explanatory value than the information from the learning environment, have more difficulties changing their mental models. They have to fundamentally reconstruct their mental models.

  12. Identifying Multiple Levels of Discussion-Based Teaching Strategies for Constructing Scientific Models

    ERIC Educational Resources Information Center

    Williams, Grant; Clement, John

    2015-01-01

    This study sought to identify specific types of discussion-based strategies that two successful high school physics teachers using a model-based approach utilized in attempting to foster students' construction of explanatory models for scientific concepts. We found evidence that, in addition to previously documented dialogical strategies that…

  13. AveBoost2: Boosting for Noisy Data

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2004-01-01

    AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. In previous work, we developed an algorithm, AveBoost, that constructed distributions orthogonal to the mistake vectors of all the previous models, and then averaged them to create the next base model's distribution. Our experiments demonstrated the superior accuracy of our approach. In this paper, we slightly revise our algorithm to allow us to obtain non-trivial theoretical results: bounds on the training error and generalization error (difference between training and test error). Our averaging process has a regularizing effect which, as expected, leads us to a worse training error bound for our algorithm than for AdaBoost but a superior generalization error bound. For this paper, we experimented with the data used previously, both as originally supplied and with added label noise (a small fraction of the data has its original label changed). Noisy data are notoriously difficult for AdaBoost to learn. Our algorithm's performance improvement over AdaBoost is even greater on the noisy data than on the original data.
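
    The averaging idea reduces to a running mean over the per-round distributions. The sketch below shows one plausible form (hypothetical names; the published AveBoost recurrence may differ in detail).

    ```python
    import numpy as np

    def aveboost_distribution(c_prev, d_orth, t):
        """Average of the per-round distributions after t rounds: c_prev is
        the averaged distribution from rounds 1..t-1 and d_orth is the new
        distribution built orthogonal to round t's mistake vector. A sketch,
        not the exact published recurrence."""
        c = ((t - 1) * c_prev + d_orth) / t
        return c / c.sum()
    ```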

  14. Spectral Classes for FAA's Integrated Noise Model Version 6.0.

    DOT National Transportation Integrated Search

    1999-12-07

    The starting point in any empirical model such as the Federal Aviation Administration's (FAA) Integrated Noise Model (INM) is a reference data base. In Version 5.2 and in previous versions the reference data base consisted solely of a set of no...

  15. A continuum mechanics-based musculo-mechanical model for esophageal transport

    NASA Astrophysics Data System (ADS)

    Kou, Wenjun; Griffith, Boyce E.; Pandolfino, John E.; Kahrilas, Peter J.; Patankar, Neelesh A.

    2017-11-01

    In this work, we extend our previous esophageal transport model using an immersed boundary (IB) method with a discrete fiber-based structural model to one using a continuum mechanics-based model that is approximated based on finite elements (IB-FE). To deal with the leakage of flow when the Lagrangian mesh becomes coarser than the fluid mesh, we employ adaptive interaction quadrature points to handle the Lagrangian-Eulerian interaction equations, based on a previous work (Griffith and Luo [1]). In particular, we introduce a new anisotropic adaptive interaction quadrature rule. The new rule permits us to vary the interaction quadrature points not only at each time-step and element but also at different orientations per element. This helps to avoid the leakage issue without sacrificing computational efficiency and accuracy in dealing with the interaction equations. For the material model, we extend our previous fiber-based model to a continuum-based model. We present formulations for general fiber-reinforced material models in the IB-FE framework. The new material model can handle non-linear elasticity and fiber-matrix interactions, and thus permits us to consider more realistic material behavior of biological tissues. To validate our method, we first study a case in which a three-dimensional short tube is dilated. Results on the pressure-displacement relationship and the stress distribution match very well with those obtained from the implicit FE method. We remark that in our IB-FE case, the three-dimensional tube undergoes a very large deformation and the Lagrangian mesh size becomes about six times the Eulerian mesh size in the circumferential orientation. To validate the performance of the method in handling fiber-matrix material models, we perform a second study on dilating a long fiber-reinforced tube. Errors are small when we compare numerical solutions with analytical solutions. The technique is then applied to the problem of esophageal transport. We use two fiber-reinforced models for the esophageal tissue: a bi-linear model and an exponential model. We present three cases on esophageal transport that differ in the material model and the muscle fiber architecture. The overall transport features are consistent with those observed from the previous model. We remark that the continuum-based model can handle more realistic and complicated material behavior. This is demonstrated in our third case, where a spatially varying fiber architecture is included based on experimental study. We find that this unique muscle fiber architecture could generate a so-called pressure transition zone, which is a luminal pressure pattern that is of clinical interest. This suggests an important role of muscle fiber architecture in esophageal transport.

  16. Agent-based model for rural-urban migration: A dynamic consideration

    NASA Astrophysics Data System (ADS)

    Cai, Ning; Ma, Hai-Ying; Khan, M. Junaid

    2015-10-01

    This paper develops a dynamic agent-based model for rural-urban migration, based on the previous relevant works. The model conforms to the typical dynamic linear multi-agent systems model concerned extensively in systems science, in which the communication network is formulated as a digraph. Simulations reveal that consensus of certain variable could be harmful to the overall stability and should be avoided.
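
    For readers unfamiliar with the dynamic linear multi-agent framing, the sketch below shows the standard linear consensus protocol on a digraph, which is the kind of dynamics referred to above; the graph, initial states, and step size are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical 4-agent communication digraph: A[i, j] = 1 if agent i listens to j.
    A = np.array([[0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [1, 0, 0, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian of the digraph

    x = np.array([0.1, 0.9, 0.4, 0.6])      # illustrative agent states
    dt = 0.01
    for _ in range(5000):
        x = x - dt * (L @ x)                # consensus protocol: x' = -Lx
    print(x)                                # states converge to a common value
    ```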

  17. Trace Assessment for BWR ATWS Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, L. Y.; Diamond, D.; Cuadra, Arantxa; Raitses, Gilad; Aronson, Arnold

    2010-04-22

    A TRACE/PARCS input model has been developed in order to be able to analyze anticipated transients without scram (ATWS) in a boiling water reactor. The model is based on one developed previously for the Browns Ferry reactor for doing loss-of-coolant accident analysis. This model was updated by adding the control systems needed for ATWS and a core model using PARCS. The control systems were based on models previously developed for the TRAC-B code. The PARCS model is based on information (e.g., exposure and moderator density (void) history distributions) obtained from General Electric Hitachi and cross sections for GE14 fuel obtained from an independent source. The model is able to calculate an ATWS, initiated by the closure of main steam isolation valves, with recirculation pump trip, water level control, injection of borated water from the standby liquid control system and actuation of the automatic depressurization system. The model is not considered complete and recommendations are made on how it should be improved.

  18. Predicting Plywood Properties with Wood-based Composite Models

    Treesearch

    Christopher Adam Senalik; Robert J. Ross

    2015-01-01

    Previous research revealed that stress wave nondestructive testing techniques could be used to evaluate the tensile and flexural properties of wood-based composite materials. Regression models were developed that related stress wave transmission characteristics (velocity and attenuation) to modulus of elasticity and strength. The developed regression models accounted...

  19. Development and evaluation of a physics-based windblown dust emission scheme implemented in the CMAQ modeling system

    EPA Science Inventory

    A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of t...

  20. Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions.

    PubMed

    Krajbich, Ian; Rangel, Antonio

    2011-08-16

    How do we make decisions when confronted with several alternatives (e.g., on a supermarket shelf)? Previous work has shown that accumulator models, such as the drift-diffusion model, can provide accurate descriptions of the psychometric data for binary value-based choices, and that the choice process is guided by visual attention. However, the computational processes used to make choices in more complicated situations involving three or more options are unknown. We propose a model of trinary value-based choice that generalizes what is known about binary choice, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixation data using the same parameters that were estimated in previous work on binary choice. Our findings suggest that the brain uses similar computational processes to make binary and trinary choices.
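
    A simplified simulation in the spirit of the attention-guided accumulator described above might look like the following (illustrative parameters, not the fitted values from the paper): evidence for each option grows with its value, discounted when the option is not fixated, and a choice is made once one option sufficiently leads the best alternative.

    ```python
    import random

    def addm_trinary(values, d=0.0002, theta=0.3, sigma=0.02,
                     barrier=1.0, seed=None):
        """Toy attention-weighted accumulator for three options."""
        rng = random.Random(seed)
        E = [0.0, 0.0, 0.0]            # accumulated evidence per option
        fixated = rng.randrange(3)
        t = 0
        while True:
            t += 1
            if t % 300 == 0:           # switch gaze every ~300 steps
                fixated = rng.randrange(3)
            for i, v in enumerate(values):
                w = 1.0 if i == fixated else theta   # attentional discount
                E[i] += d * w * v + rng.gauss(0.0, sigma)
            best = max(range(3), key=lambda i: E[i])
            rest = max(E[i] for i in range(3) if i != best)
            if E[best] - rest >= barrier:
                return best, t         # choice and reaction time (in steps)

    print(addm_trinary((1.0, 2.0, 3.0), seed=0))
    ```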

  1. The Nature of Study Programmes in Vocational Education: Evaluation of the Model for Comprehensive Competence-Based Vocational Education in the Netherlands

    ERIC Educational Resources Information Center

    Sturing, Lidwien; Biemans, Harm J. A.; Mulder, Martin; de Bruijn, Elly

    2011-01-01

    In a previous series of studies, a model of comprehensive competence-based vocational education (CCBE model) was developed, consisting of eight principles of competence-based vocational education (CBE) that were elaborated for four implementation levels (Wesselink et al. "European journal of vocational training" 40:38-51 2007a). The…

  2. An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression

    ERIC Educational Resources Information Center

    Weiss, Brandi A.; Dardick, William

    2016-01-01

    This article introduces an entropy-based measure of data-model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify…
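
    One way such an entropy-based fuzziness measure can be written is the average binary entropy of the fitted probabilities; this is a sketch consistent with the description above, not necessarily the article's exact formula.

    ```python
    import numpy as np

    def fuzziness_entropy(p_hat, eps=1e-12):
        """Mean binary entropy of fitted probabilities from a logistic
        regression: 0 means crisp predictions (p near 0 or 1), 1 means
        maximally fuzzy predictions (p near 0.5)."""
        p = np.clip(np.asarray(p_hat, dtype=float), eps, 1 - eps)
        h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
        return float(h.mean())
    ```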

  3. A component-based, integrated spatially distributed hydrologic/water quality model: AgroEcoSystem-Watershed (AgES-W) overview and application

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality simulation components. The AgES-W model was previously evaluated for streamflow and recently has been enhanced with the addition of nitrogen (N) and sediment modeling compo...

  4. A More Pedagogically Sound Treatment of Beer's Law: A Derivation Based on a Corpuscular-Probability Model

    NASA Astrophysics Data System (ADS)

    Bare, William D.

    2000-07-01

    An argument is presented which suggests that the commonly seen calculus-based derivations of Beer's law may not be adequately useful to students and may in fact contribute to widely held misconceptions about the interaction of light with absorbing samples. For this reason, an alternative derivation of Beer's law based on a corpuscular model and the laws of probability is presented. Unlike many previously reported derivations, that presented here does not require the use of calculus, nor does it require the assumption of absorption properties in an infinitesimally thin film. The corpuscular-probability model and its accompanying derivation of Beer's law are believed to comprise a more pedagogically effective presentation than those presented previously.
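
    One calculus-free way the corpuscular-probability argument can run is sketched below; this is a generic reconstruction consistent with the description above, not necessarily the article's exact derivation.

    ```latex
    % A photon crossing one layer that contains m independent absorbers,
    % each intercepting it with probability p, survives with probability
    % q = (1 - p)^m. Crossing n such layers (path length b proportional
    % to n) then gives
    \[
      T \;=\; q^{\,n} \;=\; (1-p)^{\,nm},
      \qquad
      A \;=\; -\log_{10} T \;=\; n\,m\,\bigl(-\log_{10}(1-p)\bigr).
    \]
    % Since n tracks path length b and m tracks concentration c, the
    % absorbance is jointly proportional to both, recovering Beer's law
    % A = \varepsilon b c with no calculus and no infinitesimal layers.
    ```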

  5. Probabilistic representation in syllogistic reasoning: A theory to integrate mental models and heuristics.

    PubMed

    Hattori, Masasi

    2016-12-01

    This paper presents a new theory of syllogistic reasoning. The proposed model assumes there are probabilistic representations of given signature situations. Instead of conducting an exhaustive search, the model constructs an individual-based "logical" mental representation that expresses the most probable state of affairs, and derives a necessary conclusion that is not inconsistent with the model using heuristics based on informativeness. The model is a unification of previous influential models. Its descriptive validity has been evaluated against existing empirical data and two new experiments, and by qualitative analyses based on previous empirical findings, all of which supported the theory. The model's behavior is also consistent with findings in other areas, including working memory capacity. The results indicate that people assume the probabilities of all target events mentioned in a syllogism to be almost equal, which suggests links between syllogistic reasoning and other areas of cognition. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.

  6. Full velocity difference model for a car-following theory.

    PubMed

    Jiang, R; Wu, Q; Zhu, Z

    2001-07-01

    In this paper, we present a full velocity difference model for a car-following theory based on the previous models in the literature. To our knowledge, the model is a theoretical improvement over the previous ones because it considers more aspects of the car-following process than the others; this point is verified by numerical simulation. We then investigate the properties of the model using both analytic and numerical methods, and find that the model can describe the phase transition of traffic flow and estimate the evolution of traffic congestion.
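
    The full velocity difference model is usually written as an acceleration law that combines relaxation toward an optimal velocity with a velocity-difference term, as in the sketch below (the optimal-velocity function and parameter values are illustrative, not the paper's calibration).

    ```python
    import numpy as np

    def optimal_velocity(dx):
        # A common tanh-shaped optimal-velocity function; constants illustrative.
        return 16.8 * (np.tanh(0.086 * (dx - 25.0)) + 0.913)

    def fvd_acceleration(v, dx, dv, kappa=0.41, lam=0.5):
        """FVD acceleration for one follower: v is its speed, dx the gap to
        the leader, and dv the leader's speed minus the follower's speed."""
        return kappa * (optimal_velocity(dx) - v) + lam * dv
    ```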

  7. HLPI-Ensemble: Prediction of human lncRNA-protein interactions based on ensemble strategy.

    PubMed

    Hu, Huan; Zhang, Li; Ai, Haixin; Zhang, Hui; Fan, Yetian; Zhao, Qi; Liu, Hongsheng

    2018-03-27

    LncRNAs play important roles in many biological processes and in disease progression by binding to related proteins. However, the experimental methods for studying lncRNA-protein interactions are time-consuming and expensive. Although there are a few models designed to predict the interactions of ncRNA-protein, they all have some common drawbacks that limit their predictive performance. In this study, we present a model called HLPI-Ensemble designed specifically for human lncRNA-protein interactions. HLPI-Ensemble adopts an ensemble strategy based on three mainstream machine learning algorithms, Support Vector Machines (SVM), Random Forests (RF) and Extreme Gradient Boosting (XGB), to generate HLPI-SVM Ensemble, HLPI-RF Ensemble and HLPI-XGB Ensemble, respectively. The results of 10-fold cross-validation show that HLPI-SVM Ensemble, HLPI-RF Ensemble and HLPI-XGB Ensemble achieved AUCs of 0.95, 0.96 and 0.96, respectively, on the test dataset. Furthermore, we compared the performance of the HLPI-Ensemble models with previous models on an external validation dataset. The results show that the false positives (FPs) of the HLPI-Ensemble models are much lower than those of the previous models, and the other evaluation indicators of the HLPI-Ensemble models are also higher. This further shows that the HLPI-Ensemble models are superior to previous models in predicting human lncRNA-protein interactions. HLPI-Ensemble is publicly available at: http://ccsipb.lnu.edu.cn/hlpiensemble/ .
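
    A minimal sketch of a three-algorithm soft-voting ensemble of the kind described above, on stand-in data (sklearn's GradientBoostingClassifier substitutes for XGBoost so the example is self-contained; nothing here reproduces the paper's features or tuning):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import (GradientBoostingClassifier,
                                  RandomForestClassifier, VotingClassifier)
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Stand-in data; the paper uses features of lncRNA-protein pairs.
    X, y = make_classification(n_samples=500, n_features=40, random_state=0)

    ensemble = VotingClassifier(
        estimators=[
            ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),
        ],
        voting="soft",  # average predicted probabilities across the three models
    )
    print(cross_val_score(ensemble, X, y, cv=10, scoring="roc_auc").mean())
    ```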

  8. Exploring the effects of transducer models when training convolutional neural networks to eliminate reflection artifacts in experimental photoacoustic images

    NASA Astrophysics Data System (ADS)

    Allman, Derek; Reiter, Austin; Bell, Muyinatu

    2018-02-01

    We previously proposed a method of removing reflection artifacts in photoacoustic images that uses deep learning. Our approach generally relies on using simulated photoacoustic channel data to train a convolutional neural network (CNN) that is capable of distinguishing sources from artifacts based on unique differences in their spatial impulse responses (manifested as depth-based differences in wavefront shapes). In this paper, we directly compare a CNN trained with our previous continuous transducer model to a CNN trained with an updated discrete acoustic receiver model that more closely matches an experimental ultrasound transducer. These two CNNs were trained with simulated data and tested on experimental data. The CNN trained using the continuous receiver model correctly classified 100% of sources and 70.3% of artifacts in the experimental data. In contrast, the CNN trained using the discrete receiver model correctly classified 100% of sources and 89.7% of artifacts in the experimental images. The 19.4% increase in artifact classification accuracy indicates that an acoustic receiver model that closely mimics the experimental transducer plays an important role in improving the classification of artifacts in experimental photoacoustic data. Results are promising for developing a method to display CNN-based images that remove artifacts in addition to only displaying network-identified sources as previously proposed.

  9. Search algorithm complexity modeling with application to image alignment and matching

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2014-05-01

    Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
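
    In its simplest textbook form, a penetration-rate estimate reduces to the expected normalized rank of the true match under the search ordering. The sketch below shows only that reduced form (it does not reproduce the paper's hierarchical grid-search derivation).

    ```python
    import numpy as np

    def expected_penetration_rate(rank_probs):
        """rank_probs[k] is the probability that the true match occupies
        position k+1 after the hypothesis bins are ranked by matching
        probability. Returns the expected fraction of the database that
        must be searched before reaching the true match."""
        p = np.asarray(rank_probs, dtype=float)
        p = p / p.sum()
        ranks = np.arange(1, len(p) + 1)
        return float((p * ranks).sum() / len(p))
    ```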

  10. Categorical QSAR models for skin sensitization based on local lymph node assay measures and both ground and excited state 4D-fingerprint descriptors

    NASA Astrophysics Data System (ADS)

    Liu, Jianzhong; Kern, Petra S.; Gerberick, G. Frank; Santos-Filho, Osvaldo A.; Esposito, Emilio X.; Hopfinger, Anton J.; Tseng, Yufeng J.

    2008-06-01

    In previous studies we have developed categorical QSAR models for predicting skin-sensitization potency based on 4D-fingerprint (4D-FP) descriptors and in vivo murine local lymph node assay (LLNA) measures. Only 4D-FP derived from the ground state (GMAX) structures of the molecules were used to build the QSAR models. In this study we have generated 4D-FP descriptors from the first excited state (EMAX) structures of the molecules. The GMAX, EMAX and the combined ground and excited state 4D-FP descriptors (GEMAX) were employed in building categorical QSAR models. Logistic regression (LR) and partial least square coupled logistic regression (PLS-CLR), found to be effective model building for the LLNA skin-sensitization measures in our previous studies, were used again in this study. This also permitted comparison of the prior ground state models to those involving first excited state 4D-FP descriptors. Three types of categorical QSAR models were constructed for each of the GMAX, EMAX and GEMAX datasets: a binary model (2-state), an ordinal model (3-state) and a binary-binary model (two-2-state). No significant differences exist among the LR 2-state model constructed for each of the three datasets. However, the PLS-CLR 3-state and 2-state models based on the EMAX and GEMAX datasets have higher predictivity than those constructed using only the GMAX dataset. These EMAX and GMAX categorical models are also more significant and predictive than corresponding models built in our previous QSAR studies of LLNA skin-sensitization measures.

  11. Physiologically based Pharmacokinetic Modeling of 1,4-Dioxane in Rats, Mice, and Humans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sweeney, Lisa M.; Thrall, Karla D.; Poet, Torka S.

    2008-01-01

    1,4-Dioxane (CAS No. 123-91-1) is used primarily as a solvent or as a solvent stabilizer. It can cause lung, liver and kidney damage at sufficiently high exposure levels. Two physiologically-based pharmacokinetic (PBPK) models of 1,4-dioxane and its major metabolite, hydroxyethoxyacetic acid (HEAA), were published in 1990. These models have uncertainties and deficiencies that could be addressed and the model strengthened for use in a contemporary cancer risk assessment for 1,4-dioxane. Studies were performed to fill data gaps and reduce uncertainties pertaining to the pharmacokinetics of 1,4-dioxane and HEAA in rats, mice, and humans. Three types of studies were performed: partition coefficient measurements, blood time course in mice, and in vitro pharmacokinetics using rat, mouse, and human hepatocytes. Updated PBPK models were developed based on these new data and previously available data. The optimized rate of metabolism for the mouse was significantly higher than the value previously estimated. The optimized rat kinetic parameters were similar to those in the 1990 models. Only two human studies were identified. Model predictions were consistent with one study, but did not fit the second as well. In addition, a rat nasal exposure study was completed. The results confirmed that water directly contacts rat nasal tissues during drinking-water bioassays. Consistent with previous PBPK models, nasal tissues were not specifically included in the model. Use of these models will reduce the uncertainty in future 1,4-dioxane risk assessments.

  12. Applying Model Analysis to a Resource-Based Analysis of the Force and Motion Conceptual Evaluation

    ERIC Educational Resources Information Center

    Smith, Trevor I.; Wittmann, Michael C.; Carter, Tom

    2014-01-01

    Previously, we analyzed the Force and Motion Conceptual Evaluation in terms of a resources-based model that allows for clustering of questions so as to provide useful information on how students correctly or incorrectly reason about physics. In this paper, we apply model analysis to show that the associated model plots provide more information…

  13. Previous Experience Not Required: Contextualizing the Choice to Teach School-Based Agricultural Education

    ERIC Educational Resources Information Center

    Marx, Adam A.; Smith, Amy R.; Smalley, Scott W.; Miller, Courtney

    2017-01-01

    The purpose of this study was to identify key career choice items which lead students without previous experience in school-based agricultural education (SBAE) to pursuing agricultural education. The Ag Ed FIT-Choice® model adapted by Lawver (2009) and developed by Richardson and Watt (2006) provided the investigative framework to design this…

  14. Presentation a New Model to Measure National Power of the Countries

    NASA Astrophysics Data System (ADS)

    Hafeznia, Mohammad Reza; Hadi Zarghani, Seyed; Ahmadipor, Zahra; Roknoddin Eftekhari, Abdelreza

    In this research, based on an assessment of previous models for the evaluation of national power, a new model is presented to measure national power that improves substantially on previous models. Attention to all aspects of national power (economic, social, cultural, political, military, astro-space, territorial, scientific-technological and transnational) and the use of 87 factors, with an emphasis on new, strategically relevant variables appropriate to the current time, are some of the benefits of this model. In addition, the use of the Delphi method and expert opinion to determine the role and importance of the variables affecting national power, and the option of mapping the global power structure, are further advantages of this model compared to previous ones.

  15. Synchrotron Protein Footprinting Supports Substrate Translocation by ClpA via ATP-Induced Movements of the D2 Loop

    PubMed Central

    Bohon, Jen; Jennings, Laura D.; Phillips, Christine M.; Licht, Stuart; Chance, Mark R.

    2010-01-01

    Synchrotron x-ray protein footprinting is used to study structural changes upon formation of the ClpA hexamer. Comparative solvent accessibilities between ClpA monomer and ClpA hexamer samples are in agreement throughout most of the sequence with calculations based on two previously proposed hexameric models. The data differ substantially from the proposed models in two parts of the structure: the D1 sensor 1 domain and the D2 loop region. The results suggest that these two regions can access alternate conformations in which their solvent protection is greater than in the structural models based on crystallographic data. In combination with previously reported structural data, the footprinting data provide support for a revised model in which the D2 loop contacts the D1 sensor 1 domain in the ATP-bound form of the complex. These data provide the first direct experimental support for the nucleotide-dependent D2 loop conformational change previously proposed to mediate substrate translocation. PMID:18682217

  16. A bicycle safety index for evaluating urban street facilities.

    PubMed

    Asadi-Shekari, Zohreh; Moeinaddini, Mehdi; Zaly Shah, Muhammad

    2015-01-01

    The objectives of this research are to conceptualize a Bicycle Safety Index (BSI) that considers all parts of the street and to propose a universal guideline with microscale details. A point-system method that compares existing safety facilities to a defined standard is proposed to estimate the BSI. Two streets, in Singapore and Malaysia, are chosen to examine this model. Most previous measures for evaluating street conditions for cyclists cannot cover all parts of the street, including segments and intersections, and previous models did not consider all safety indicators and cycling facilities at a microlevel. This study introduces a new, practical BSI that complements previous studies with easy-to-follow, point-system-based outputs. This practical model can be used in different urban settings to estimate the level of safety for cycling and to suggest improvements based on the standards.

  17. Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Linear Analysis.

    PubMed

    Henley, B C; Shin, D C; Zhang, R; Marmarelis, V Z

    Compartmental and data-based modeling of cerebral hemodynamics are alternative approaches that utilize distinct model forms and have been employed in the quantitative study of cerebral hemodynamics. This paper examines the relation between a compartmental equivalent-circuit and a data-based input-output model of dynamic cerebral autoregulation (DCA) and CO2-vasomotor reactivity (DVR). The compartmental model is constructed as an equivalent-circuit utilizing putative first principles and previously proposed hypothesis-based models. The linear input-output dynamics of this compartmental model are compared with data-based estimates of the DCA-DVR process. This comparative study indicates that there are some qualitative similarities between the two-input compartmental model and experimental results.

  18. Quasispecies dynamics on a network of interacting genotypes and idiotypes: formulation of the model

    NASA Astrophysics Data System (ADS)

    Barbosa, Valmir C.; Donangelo, Raul; Souza, Sergio R.

    2015-01-01

    A quasispecies is the stationary state of a set of interrelated genotypes that evolve according to the usual principles of selection and mutation. Quasispecies studies have for the most part concentrated on the possibility of errors during genotype replication and their role in promoting either the survival or the demise of the quasispecies. In a previous work, we introduced a network model of quasispecies dynamics, based on a single probability parameter (p) and capable of addressing several plausibility issues of previous models. Here we extend that model by pairing its network with another one aimed at modeling the dynamics of the immune system when confronted with the quasispecies. The new network is based on the idiotypic-network model of immunity and, together with the previous one, constitutes a network model of interacting genotypes and idiotypes. The resulting model requires further parameters and as a consequence leads to a vast phase space. We have focused on a particular niche in which it is possible to observe the trade-offs involved in the quasispecies' survival or destruction. Within this niche, we give simulation results that highlight some key preconditions for quasispecies survival. These include a minimum initial abundance of genotypes relative to that of the idiotypes and a minimum value of p. The latter, in particular, is to be contrasted with the stand-alone quasispecies network of our previous work, in which arbitrarily low values of p constitute a guarantee of quasispecies survival.

  19. Hemorrhage and Hemorrhagic Shock in Swine: A Review

    DTIC Science & Technology

    1989-11-01

    [Table-of-contents fragment from the report: Temperature Regulation; Blood Gas and Acid-Base Status; Electrolytes; Renal Function; Hepatic Function; Central Nervous System Function.] MODELS: Most porcine hemorrhage models are based on concepts and procedures previously developed in other species, especially the dog. As a consequence

  20. Predicting RNA folding thermodynamics with a reduced chain representation model

    PubMed Central

    CAO, SONG; CHEN, SHI-JIE

    2005-01-01

    Based on the virtual bond representation for the nucleotide backbone, we develop a reduced conformational model for RNA. We use the experimentally measured atomic coordinates to model the helices and use the self-avoiding walks in a diamond lattice to model the loop conformations. The atomic coordinates of the helices and the lattice representation for the loops are matched at the loop–helix junction, where steric viability is accounted for. Unlike the previous simplified lattice-based models, the present virtual bond model can account for the atomic details of realistic three-dimensional RNA structures. Based on the model, we develop a statistical mechanical theory for RNA folding energy landscapes and folding thermodynamics. Tests against experiments show that the theory can give much more improved predictions for the native structures, the thermal denaturation curves, and the equilibrium folding/unfolding pathways than the previous models. The application of the model to the P5abc region of Tetrahymena group I ribozyme reveals the misfolded intermediates as well as the native-like intermediates in the equilibrium folding process. Moreover, based on the free energy landscape analysis for each and every loop mutation, the model predicts five lethal mutations that can completely alter the free energy landscape and the folding stability of the molecule. PMID:16251382

  1. Revisiting the global surface energy budgets with maximum-entropy-production model of surface heat fluxes

    NASA Astrophysics Data System (ADS)

    Huang, Shih-Yu; Deng, Yi; Wang, Jingfeng

    2017-09-01

    The maximum-entropy-production (MEP) model of surface heat fluxes, based on contemporary non-equilibrium thermodynamics, information theory, and atmospheric turbulence theory, is used to re-estimate the global surface heat fluxes. The surface fluxes predicted by the MEP model automatically balance the surface energy budgets at all time and space scales without explicit use of near-surface temperature and moisture gradients, wind speed, or surface roughness data. The new MEP-based global annual mean fluxes over the land surface, using input data of surface radiation and temperature from the National Aeronautics and Space Administration Clouds and the Earth's Radiant Energy System (NASA CERES), supplemented by surface specific humidity data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA), agree closely with previous estimates. The new estimate of ocean evaporation, which does not use the MERRA reanalysis data as model input, is lower than previous estimates, while the new estimate of ocean sensible heat flux is higher than previously reported. The MEP model also produces the first global map of ocean surface heat flux that is not available from existing global reanalysis products.

  2. Development and external validation of new ultrasound-based mathematical models for preoperative prediction of high-risk endometrial cancer.

    PubMed

    Van Holsbeke, C; Ameye, L; Testa, A C; Mascilini, F; Lindqvist, P; Fischerova, D; Frühauf, F; Fransis, S; de Jonge, E; Timmerman, D; Epstein, E

    2014-05-01

    To develop and validate strategies, using new ultrasound-based mathematical models, for the prediction of high-risk endometrial cancer and compare them with strategies using previously developed models or the use of preoperative grading only. Women with endometrial cancer were prospectively examined using two-dimensional (2D) and three-dimensional (3D) gray-scale and color Doppler ultrasound imaging. More than 25 ultrasound, demographic and histological variables were analyzed. Two logistic regression models were developed: one 'objective' model using mainly objective variables; and one 'subjective' model including subjective variables (i.e. subjective impression of myometrial and cervical invasion, preoperative grade and demographic variables). The following strategies were validated: a one-step strategy using only preoperative grading and two-step strategies using preoperative grading as the first step and one of the new models, subjective assessment or previously developed models as a second step. One-hundred and twenty-five patients were included in the development set and 211 were included in the validation set. The 'objective' model retained preoperative grade and minimal tumor-free myometrium as variables. The 'subjective' model retained preoperative grade and subjective assessment of myometrial invasion. On external validation, the performance of the new models was similar to that on the development set. Sensitivity for the two-step strategy with the 'objective' model was 78% (95% CI, 69-84%) at a cut-off of 0.50, 82% (95% CI, 74-88%) for the strategy with the 'subjective' model and 83% (95% CI, 75-88%) for that with subjective assessment. Specificity was 68% (95% CI, 58-77%), 72% (95% CI, 62-80%) and 71% (95% CI, 61-79%) respectively. The two-step strategies detected up to twice as many high-risk cases as preoperative grading only. The new models had a significantly higher sensitivity than did previously developed models, at the same specificity. Two-step strategies with 'new' ultrasound-based models predict high-risk endometrial cancers with good accuracy and do this better than do previously developed models. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.

  3. CAT Model with Personalized Algorithm for Evaluation of Estimated Student Knowledge

    ERIC Educational Resources Information Center

    Andjelic, Svetlana; Cekerevac, Zoran

    2014-01-01

    This article presents the original model of the computer adaptive testing and grade formation, based on scientifically recognized theories. The base of the model is a personalized algorithm for selection of questions depending on the accuracy of the answer to the previous question. The test is divided into three basic levels of difficulty, and the…

  4. An Amorphous Model for Morphological Processing in Visual Comprehension Based on Naive Discriminative Learning

    ERIC Educational Resources Information Center

    Baayen, R. Harald; Milin, Petar; Durdevic, Dusica Filipovic; Hendrix, Peter; Marelli, Marco

    2011-01-01

    A 2-layer symbolic network model based on the equilibrium equations of the Rescorla-Wagner model (Danks, 2003) is proposed. The study first presents 2 experiments in Serbian, which reveal for sentential reading the inflectional paradigmatic effects previously observed by Milin, Filipovic Durdevic, and Moscoso del Prado Martin (2009) for unprimed…

  5. Genetic demographic networks: Mathematical model and applications.

    PubMed

    Kimmel, Marek; Wojdyła, Tomasz

    2016-10-01

    Recent improvement in the quality of genetic data obtained from extinct human populations and their ancestors encourages searching for answers to basic questions regarding human population history. The most common and successful are model-based approaches, in which genetic data are compared to the data obtained from the assumed demography model. Using such an approach, it is possible to either validate or adjust the assumed demography. Model fit to data can be obtained based on reverse-time coalescent simulations or forward-time simulations. In this paper we introduce a computational method based on a mathematical equation that allows obtaining joint distributions of pairs of individuals under a specified demography model, each of them characterized by a genetic variant at a chosen locus. The two individuals are randomly sampled from either the same or two different populations. The model assumes three types of demographic events (split, merge and migration). Populations evolve according to the time-continuous Moran model with drift and Markov-process mutation. This latter process is described by the Lyapunov-type equation introduced by O'Brien and generalized in our previous works. Application of this equation constitutes an original contribution. In the results section of the paper we present sample applications of our model to both simulated and literature-based demographies. Among others, we include a study of the Slavs-Balts-Finns genetic relationship, in which we model the split of, and migrations between, the Balts and Slavs. We also include another example that involves the migration rates between farmers and hunter-gatherers, based on modern and ancient DNA samples. This latter process was previously studied using coalescent simulations. Our results are in general agreement with the previous method, which provides validation of our approach. Although our model is not an alternative to simulation methods in the practical sense, it provides an algorithm to compute pairwise distributions of alleles, in the case of haploid non-recombining loci such as mitochondrial and Y-chromosome loci in humans. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. An Interdisciplinary Model for Teaching Evolutionary Ecology.

    ERIC Educational Resources Information Center

    Coletta, John

    1992-01-01

    Describes a general systems evolutionary model and demonstrates how a previously established ecological model is a function of its past development based on the evolution of the rock, nutrient, and water cycles. Discusses the applications of the model in environmental education. (MDH)

  7. PharmDock: a pharmacophore-based docking program

    PubMed Central

    2014-01-01

    Background: Protein-based pharmacophore models are enriched with the information of potential interactions between ligands and the protein target. We have shown in a previous study that protein-based pharmacophore models can be applied for ligand pose prediction and pose ranking. In this publication, we present a new pharmacophore-based docking program PharmDock that combines pose sampling and ranking based on optimized protein-based pharmacophore models with local optimization using an empirical scoring function. Results: Tests of PharmDock on ligand pose prediction, binding affinity estimation, compound ranking and virtual screening yielded comparable or better performance to existing and widely used docking programs. The docking program comes with an easy-to-use GUI within PyMOL. Two features have been incorporated in the program suite that allow for user-defined guidance of the docking process based on previous experimental data. Docking with those features demonstrated superior performance compared to unbiased docking. Conclusion: A protein pharmacophore-based docking program, PharmDock, has been made available with a PyMOL plugin. PharmDock and the PyMOL plugin are freely available from http://people.pharmacy.purdue.edu/~mlill/software/pharmdock. PMID:24739488

  8. Approaches to Classroom-Based Computational Science.

    ERIC Educational Resources Information Center

    Guzdial, Mark

    Computational science includes the use of computer-based modeling and simulation to define and test theories about scientific phenomena. The challenge for educators is to develop techniques for implementing computational science in the classroom. This paper reviews some previous work on the use of simulation alone (without modeling), modeling…

  9. An alternate metabolic hypothesis for a binary mixture of trichloroethylene and carbon tetrachloride: application of physiologically based pharmacokinetic (PBPK) modeling in rats.

    EPA Science Inventory

    Carbon tetrachloride (CC4) and trichloroethylene (TCE) are hepatotoxic volatile organic compounds (VOCs) and environmental contaminants. Previous physiologically based pharmacokinetic (PBPK) models describe the kinetics ofindividual chemical disposition and metabolic clearance fo...

  10. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
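
    The EKF cycle itself is standard. The sketch below gives a generic predict/update step of the kind described (a textbook form with hypothetical function arguments, not the C-MAPSS40k implementation): f and h are the nonlinear on-board state and measurement models, and F_jac and H_jac return their Jacobians.

    ```python
    import numpy as np

    def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
        # Predict: propagate the state through the nonlinear model.
        x_pred = f(x, u)
        F = F_jac(x, u)
        P_pred = F @ P @ F.T + Q
        # Update: correct the prediction with the sensor measurement z.
        H = H_jac(x_pred)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
        x_new = x_pred + K @ (z - h(x_pred))
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new
    ```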

  12. Model-Based In Situ Parameter Estimation of Ultrasonic Guided Waves in an Isotropic Plate

    NASA Astrophysics Data System (ADS)

    Hall, James S.; Michaels, Jennifer E.

    2010-02-01

    Most ultrasonic systems employing guided waves for flaw detection require information such as dispersion curves, transducer locations, and expected propagation loss. Degraded system performance may result if assumed parameter values do not accurately reflect the actual environment. By characterizing the propagating environment in situ at the time of test, potentially erroneous a priori estimates are avoided and the performance of ultrasonic guided wave systems can be improved. A four-part model-based algorithm is described in the context of previous work; it estimates the parameters of an assumed propagation model that describes the received signals. This approach builds upon previous work by demonstrating the ability to estimate parameters for the case of single-mode propagation. Performance is demonstrated on signals obtained from theoretical dispersion curves, finite element modeling, and experimental data.

  13. An Investigation of State-Space Model Fidelity for SSME Data

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander

    2008-01-01

    In previous studies, a variety of unsupervised anomaly detection techniques were applied to SSME (Space Shuttle Main Engine) data. The results indicated that the identification of certain anomalies was specific to the algorithmic method under consideration. For this reason, one of the follow-on goals of these previous investigations was to build an architecture that supports the best capabilities of all algorithms. We pursue that goal here by investigating a cascade, serial architecture for the best-performing and most suitable candidates from previous studies. As a precursor to a formal ROC (Receiver Operating Characteristic) curve analysis for validation of the resulting anomaly detection algorithms, our primary focus here is to investigate model fidelity as measured by variants of the AIC (Akaike Information Criterion) for state-space-based models. We show that placing constraints on a state-space model during or after training introduces a modest level of suboptimality. Furthermore, we compare the fidelity of all candidate models, including those embodying the cascade, serial architecture. We make recommendations on the most suitable candidates for subsequent anomaly detection studies as measured by AIC-based criteria.
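
    For reference, the AIC trades off fit against complexity as AIC = 2k - 2 ln L. The snippet below gives the textbook definition and its small-sample correction; the study evaluates variants of this criterion, and the exact variants used there are not reproduced here.

```python
def aic(log_likelihood, n_params):
    # Akaike Information Criterion: lower is better; the 2k term penalizes
    # extra parameters (e.g., those freed by relaxing model constraints).
    return 2 * n_params - 2 * log_likelihood

def aicc(log_likelihood, n_params, n_obs):
    # Small-sample corrected AIC; converges to plain AIC as n_obs grows.
    return aic(log_likelihood, n_params) + (
        2 * n_params * (n_params + 1) / (n_obs - n_params - 1))
```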

  14. Internet-based system for simulation-based medical planning for cardiovascular disease.

    PubMed

    Steele, Brooke N; Draney, Mary T; Ku, Joy P; Taylor, Charles A

    2003-06-01

    Current practice in vascular surgery utilizes only diagnostic and empirical data to plan treatments, which does not enable quantitative a priori prediction of the outcomes of interventions. We have previously described simulation-based medical planning methods to model blood flow in arteries and plan medical treatments based on physiologic models. An important consideration for the design of these patient-specific modeling systems is the accessibility to physicians with modest computational resources. We describe a simulation-based medical planning environment developed for the World Wide Web (WWW) using the Virtual Reality Modeling Language (VRML) and the Java programming language.

  15. Performance of Renormalization Group Algebraic Turbulence Model on Boundary Layer Transition Simulation

    NASA Technical Reports Server (NTRS)

    Ahn, Kyung H.

    1994-01-01

    The RNG-based algebraic turbulence model, with a new method of solving the cubic equation and applying new length scales, is introduced. An analysis is made of the RNG length scale which was previously reported and the resulting eddy viscosity is compared with those from other algebraic turbulence models. Subsequently, a new length scale is introduced which actually uses the two previous RNG length scales in a systematic way to improve the model performance. The performance of the present RNG model is demonstrated by simulating the boundary layer flow over a flat plate and the flow over an airfoil.

  16. Modeling the Response of Primary Production and Sedimentation to Variable Nitrate Loading in the Mississippi River Plume

    DTIC Science & Technology

    2008-03-06

    ...oped based on previous observational studies in the MRP. ... annual variations in hypoxic zone size ... resulted in suggestions ... Our model was developed by ... nitrate loading. The nitrogen-based model consisted of nine compartments (nitrate, ammonium, labile dissolved organic nitrogen, bacteria, small ...) ... independent dataset of primary production measurements for different riverine NO3 loads. Based on simulations over the range of observed springtime NO3 ...

  17. Integrated Planning Model (IPM) Base Case v.4.10

    EPA Pesticide Factsheets

    Learn about EPA's IPM Base Case v.4.10, including Proposed Transport Rule results, documentation, the National Electric Energy Data System (NEEDS) database and user's guide, and run results using previous base cases.

  18. Modeling spatio-temporal wildfire ignition point patterns

    Treesearch

    Amanda S. Hering; Cynthia L. Bell; Marc G. Genton

    2009-01-01

    We analyze and model the structure of spatio-temporal wildfire ignitions in the St. Johns River Water Management District in northeastern Florida. Previous studies, based on the K-function and an assumption of homogeneity, have shown that wildfire events occur in clusters. We revisit this analysis based on an inhomogeneous K-...

  19. Update on Parametric Cost Models for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda

    2011-01-01

    Since the June 2010 Astronomy Conference, an independent review of our cost database discovered some inaccuracies and inconsistencies which can modify our previously reported results. This paper will review changes to the database, our confidence in those changes, and their effect on various parametric cost models.

  20. Application of a Tenax Model to Assess Bioavailability of Polychlorinated Biphenyls in Field Sediments

    EPA Science Inventory

    Recent literature has shown that bioavailability-based techniques, such as Tenax extraction, can estimate sediment exposure to benthos. In a previous study by the authors, Tenax extraction was used to create and validate a literature-based Tenax model to predict oligochaete bioac...

  1. Simulation optimization of PSA-threshold based prostate cancer screening policies

    PubMed Central

    Zhang, Jingyu; Denton, Brian T.; Shah, Nilay D.; Inman, Brant A.

    2013-01-01

    We describe a simulation optimization method to design PSA screening policies based on expected quality-adjusted life years (QALYs). Our method embeds a simulation model within a genetic algorithm that uses a probabilistic method to select the best policy. We present computational results on the efficiency of our algorithm. The best policy generated by our algorithm is compared to previously recommended screening policies. Using the policies determined by our model, we present evidence that patients should be screened more aggressively but for a shorter length of time than previously published guidelines recommend. PMID:22302420
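
    As a rough illustration of simulation optimization of this kind, the sketch below evolves (start age, stop age, PSA threshold) triples with a genetic algorithm whose selection step averages repeated noisy simulation runs. The fitness function is a toy stand-in, not the patient-level QALY model from the paper.

```python
import random

def simulate_qalys(policy):
    # Toy stand-in for the stochastic patient-level simulation; in the study
    # a policy's fitness is its expected quality-adjusted life years.
    start, stop, threshold = policy
    return -abs(start - 50) - abs(stop - 70) - abs(threshold - 4.0) \
        + random.gauss(0, 0.1)

def mutate(policy):
    start, stop, threshold = policy
    return (start + random.choice([-1, 0, 1]),
            stop + random.choice([-1, 0, 1]),
            round(threshold + random.uniform(-0.25, 0.25), 2))

def genetic_search(pop_size=30, generations=50):
    pop = [(random.randint(40, 60), random.randint(65, 80),
            round(random.uniform(2.5, 6.0), 2)) for _ in range(pop_size)]
    for _ in range(generations):
        # Probabilistic selection: rank by the mean of several noisy
        # evaluations so one lucky run cannot crown the best policy.
        pop.sort(key=lambda p: -sum(simulate_qalys(p) for _ in range(5)))
        elite = pop[:pop_size // 2]
        pop = elite + [mutate(random.choice(elite)) for _ in elite]
    return pop[0]

print(genetic_search())
```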

  2. Brief introductory guide to agent-based modeling and an illustration from urban health research.

    PubMed

    Auchincloss, Amy H; Garcia, Leandro Martin Totaro

    2015-11-01

    There is growing interest among urban health researchers in addressing complex problems using conceptual and computational models from the field of complex systems. Agent-based modeling (ABM) is one computational modeling tool that has received a lot of interest. However, many researchers remain unfamiliar with developing and carrying out an ABM, which hinders its understanding and application. This paper first presents a brief introductory guide to carrying out a simple agent-based model. Then, the method is illustrated by discussing a previously developed agent-based model, which explored inequalities in diet in the context of urban residential segregation.
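
    To make the "brief introductory guide" concrete, here is a deliberately minimal ABM loop in Python: agents repeatedly adjust a diet-quality state toward their local food environment and their neighbors' behavior. It is a stylized illustration of the agent/environment/interaction ingredients of an ABM, not the segregation model discussed in the paper.

```python
import random

class Agent:
    def __init__(self, income):
        self.income = income          # fixed attribute
        self.diet_quality = 0.5       # evolving state, 0 (poor) to 1 (good)

    def step(self, neighbors, store_quality):
        # Diet drifts toward a blend of the local store environment and the
        # average neighbor diet, the two stylized mechanisms in this toy.
        social = sum(n.diet_quality for n in neighbors) / len(neighbors)
        target = 0.5 * store_quality + 0.5 * social
        self.diet_quality += 0.1 * (target - self.diet_quality)

def run(n_agents=100, steps=50):
    agents = [Agent(random.uniform(0, 1)) for _ in range(n_agents)]
    for _ in range(steps):
        for a in agents:
            neighbors = random.sample(agents, 5)
            # Crude segregation proxy: richer agents face better stores.
            a.step(neighbors, store_quality=a.income)
    return sum(a.diet_quality for a in agents) / n_agents

print(run())
```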

  3. Brief introductory guide to agent-based modeling and an illustration from urban health research

    PubMed Central

    Auchincloss, Amy H.; Garcia, Leandro Martin Totaro

    2017-01-01

    There is growing interest among urban health researchers in addressing complex problems using conceptual and computational models from the field of complex systems. Agent-based modeling (ABM) is one computational modeling tool that has received a lot of interest. However, many researchers remain unfamiliar with developing and carrying out an ABM, which hinders its understanding and application. This paper first presents a brief introductory guide to carrying out a simple agent-based model. Then, the method is illustrated by discussing a previously developed agent-based model, which explored inequalities in diet in the context of urban residential segregation. PMID:26648364

  4. Phase-field-based multiple-relaxation-time lattice Boltzmann model for incompressible multiphase flows.

    PubMed

    Liang, H; Shi, B C; Guo, Z L; Chai, Z H

    2014-05-01

    In this paper, a phase-field-based multiple-relaxation-time lattice Boltzmann (LB) model is proposed for incompressible multiphase flow systems. In this model, one distribution function is used to solve the Cahn-Hilliard equation and the other is adopted to solve the Navier-Stokes equations. Unlike previous phase-field-based LB models, a proper source term is incorporated in the interfacial evolution equation such that the Cahn-Hilliard equation can be derived exactly, and a pressure distribution is also designed to recover the correct hydrodynamic equations. Furthermore, the pressure and velocity fields can be calculated explicitly. A series of numerical tests, including Zalesak's disk rotation, a single vortex, a deformation field, and a static droplet, have been performed to test the accuracy and stability of the present model. The results show that, compared with the previous models, the present model is more stable and achieves an overall improvement in the accuracy of interface capturing. In addition, compared to the single-relaxation-time LB model, the present model can effectively reduce the spurious velocity and the fluctuation of the kinetic energy. Finally, as an application, the Rayleigh-Taylor instability at high Reynolds numbers is investigated.

  5. Validation of a Clinical Scoring System for Outcome Prediction in Dogs with Acute Kidney Injury Managed by Hemodialysis.

    PubMed

    Segev, G; Langston, C; Takada, K; Kass, P H; Cowgill, L D

    2016-05-01

    A scoring system for outcome prediction in dogs with acute kidney injury (AKI) recently has been developed but has not been validated. We hypothesized that the previously developed scoring system would accurately predict outcome in a validation cohort of dogs with AKI managed with hemodialysis. One hundred fifteen client-owned dogs with AKI were studied. Medical records of dogs with AKI treated by hemodialysis between 2011 and 2015 were reviewed. Dogs were included only if all variables required to calculate the final predictive score were available and the 30-day outcome was known. A predictive score for each of 3 models was calculated for each dog. Logistic regression was used to evaluate the association of the final predictive score with outcome for each model. Receiver operating characteristic (ROC) analyses were performed to determine sensitivity and specificity for each model based on previously established cut-off values. Higher scores for each model were associated with decreased survival probability (P < .001). Based on previously established cut-off values, the 3 models (A, B, C) had sensitivities/specificities of 73/75%, 71/80%, and 75/86%, respectively, and correctly classified 74-80% of the dogs. All models were simple to apply and allowed outcome prediction that closely corresponded with actual outcome in an independent cohort. As expected, accuracies were slightly lower than those from the previously reported cohort used initially to develop the models. Copyright © 2016 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
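
    The validation workflow in this abstract (score each subject, apply a pre-established cut-off, report sensitivity and specificity) is easy to reproduce on synthetic data; the sketch below uses scikit-learn with made-up scores and outcomes standing in for the cohort, and the cut-off value is arbitrary.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical scores: 60 survivors (outcome 0) and 55 non-survivors (1).
rng = np.random.default_rng(0)
score = np.concatenate([rng.normal(3, 1, 60), rng.normal(5, 1, 55)])
y_true = np.concatenate([np.zeros(60), np.ones(55)])

print("AUC:", roc_auc_score(y_true, score))

# Sensitivity/specificity at a previously established cut-off:
cutoff = 4.0
pred = score >= cutoff
sens = (pred & (y_true == 1)).sum() / (y_true == 1).sum()
spec = (~pred & (y_true == 0)).sum() / (y_true == 0).sum()
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```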

  6. Cognitive Modeling for Agent-Based Simulation of Child Maltreatment

    NASA Astrophysics Data System (ADS)

    Hu, Xiaolin; Puddy, Richard

    This paper extends previous work to develop cognitive modeling for agent-based simulation of child maltreatment (CM). The developed model is inspired by parental efficacy, parenting stress, and the theory of planned behavior. It provides an explanatory, process-oriented model of CM and incorporates causal relationships and feedback loops among different factors in the social ecology in order to simulate the dynamics of CM. We describe the model and present simulation results to demonstrate its features.

  7. A Bridge from Optical to Infrared Galaxies: Explaining Local Properties, Predicting Galaxy Counts and the Cosmic Background Radiation

    NASA Astrophysics Data System (ADS)

    Totani, T.; Takeuchi, T. T.

    2001-12-01

    A new model of infrared galaxy counts and the cosmic background radiation (CBR) is developed by extending a model for optical/near-infrared galaxies. Important new characteristics of this model are that mass-scale dependence of dust extinction is introduced based on the size-luminosity relation of optical galaxies, and that the big-grain dust temperature T_dust is calculated based on a physical consideration of energy balance, rather than using the empirical relation between T_dust and total infrared luminosity L_IR found in local galaxies, which has been employed in most previous works. Consequently, the local properties of infrared galaxies, i.e., optical/infrared luminosity ratios, the L_IR-T_dust correlation, and the infrared luminosity function, are outputs predicted by the model, while these have been inputs in a number of previous models. Our model indeed reproduces these local properties reasonably well. We then make predictions for faint infrared counts (at 15, 60, 90, 170, 450, and 850 μm) and CBR with this model. We find results considerably different from most previous works based on the empirical L_IR-T_dust relation; in particular, the dust temperature of starbursting primordial elliptical galaxies is expected to be very high (40-80 K). This indicates that intense starbursts of forming elliptical galaxies should have occurred at z ~ 2-3, in contrast to the previous results that significant starbursts beyond z ~ 1 tend to overproduce the far-infrared (FIR) CBR detected by COBE/FIRAS. On the other hand, our model predicts that the mid-infrared (MIR) flux from warm/nonequilibrium dust is relatively weak in such galaxies making the FIR CBR, and this effect reconciles the prima facie conflict between the upper limit on MIR CBR from TeV gamma-ray observations and the COBE detections of FIR CBR. The authors thank the Japan Society for the Promotion of Science for financial support.

  8. Phenomenological and molecular-level Petri net modeling and simulation of long-term potentiation.

    PubMed

    Hardy, S; Robillard, P N

    2005-10-01

    Petri net-based modeling methods have been used in many research projects to represent biological systems. Among these, the hybrid functional Petri net (HFPN) was developed especially for biological modeling, in order to provide biologists with a more intuitive Petri net-based method. In the literature, HFPNs are used to represent kinetic models at the molecular level. We present two models of long-term potentiation, previously represented by differential equations, which we have transformed into HFPN models: a phenomenological synapse model and a molecular-level model of the CaMKII regulation pathway. Through simulation, we obtained results similar to those of previous studies using these models. Our results open the way to a new type of modeling for systems biology in which HFPNs are used to combine different levels of abstraction within one model. This approach can be useful for fully modeling a system at the molecular level when kinetic data are missing, or when a full study of the system at the molecular level is not within the scope of the research.

  9. Simulation-based instruction of technical skills

    NASA Technical Reports Server (NTRS)

    Towne, Douglas M.; Munro, Allen

    1991-01-01

    A rapid intelligent tutoring development system (RAPIDS) was developed to facilitate the production of interactive, real-time graphical device models for use in instructing the operation and maintenance of complex systems. The tools allowed subject matter experts to produce device models by creating instances of previously defined objects and positioning them in the emerging device model. These simulation authoring functions, as well as those associated with demonstrating procedures and functional effects on the completed model, required no previous programming experience or use of frame-based instructional languages. Three large simulations were developed in RAPIDS, each involving more than a dozen screen-sized sections. Seven small, single-view applications were developed to explore the range of applicability. Three workshops were conducted to train others in the use of the authoring tools. Participants learned to employ the authoring tools in three to four days and were able to produce small working device models on the fifth day.

  10. Modeling spatiotemporal covariance for magnetoencephalography or electroencephalography source analysis.

    PubMed

    Plis, Sergey M; George, J S; Jun, S C; Paré-Blagoev, J; Ranken, D M; Wood, C C; Schmidt, D M

    2007-01-01

    We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, one that better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances. Variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite its increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated under the assumption that spatial components of background noise have uncorrelated time courses. Another version, which gives a closer approximation, is based on the assumption that time courses are statistically independent. The accuracy of the structural approximation is compared to that of an existing model based on a single Kronecker product, using both the Frobenius norm of the difference between the spatiotemporal sample covariance and the model, and scatter plots. The performance of our model and previous models is compared in source analysis of a large number of single-dipole problems with simulated time courses and with background from authentic magnetoencephalography data.
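
    The computational point exploited by such models is that a Kronecker-product covariance inverts factor by factor, so only the small spatial and temporal matrices are ever inverted; the paper's contribution is keeping the inverse manageable while summing several such products. The snippet below demonstrates the single-product identity numerically (matrix sizes are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(1)
n_space, n_time = 4, 3

def random_spd(n):
    a = rng.normal(size=(n, n))
    return a @ a.T + n * np.eye(n)    # symmetric positive definite

C_spatial = random_spd(n_space)
C_temporal = random_spd(n_time)

# (A kron B)^-1 == (A^-1 kron B^-1): the large (12 x 12) inverse is
# assembled from two small inverses rather than computed directly.
C = np.kron(C_spatial, C_temporal)
C_inv = np.kron(np.linalg.inv(C_spatial), np.linalg.inv(C_temporal))
assert np.allclose(C @ C_inv, np.eye(n_space * n_time))
```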

  11. On the Connection Between One-and Two-Equation Models of Turbulence

    NASA Technical Reports Server (NTRS)

    Menter, F. R.; Rai, Man Mohan (Technical Monitor)

    1994-01-01

    A formalism will be presented that allows the transformation of two-equation eddy viscosity turbulence models into one-equation models. The transformation is based on an assumption that is widely accepted over a large range of boundary layer flows and that has been shown to actually improve predictions when incorporated into two-equation models of turbulence. Based on that assumption, a new one-equation turbulence model will be derived. The new model will be tested in great detail against a previously introduced one-equation model and against its parent two-equation model.

  12. An improved simulation based biomechanical model to estimate static muscle loadings

    NASA Technical Reports Server (NTRS)

    Rajulu, Sudhakar L.; Marras, William S.; Woolford, Barbara

    1991-01-01

    The objectives of this study are to show that the characteristics of an intact muscle differ from those of an isolated muscle and to describe a simulation-based model. This model, unlike optimization-based models, accounts for the redundancy in the musculoskeletal system when predicting the forces generated within a muscle. The results of this study show that the loading of the primary muscle is increased by the presence of other muscle activities. Hence, previous models based on optimization techniques may underestimate the severity of the muscle and joint loadings which occur during manual material handling tasks.

  13. A Comparison of Computational Cognitive Models: Agent-Based Systems Versus Rule-Based Architectures

    DTIC Science & Technology

    2003-03-01

    Java™ How To Program, Prentice Hall, 1999. Friedman-Hill, E., Jess, The Expert System Shell for the Java Platform, Sandia National Laboratories, 2001. ... The transition from the descriptive NDM theory to a computational model raises several questions: Who is an experienced decision maker? How do you model the ... progression from being a novice to an experienced decision maker? How does the model account for previous experiences? Are there situations where ...

  14. The Development and Application of the Coping with Bullying Scale for Children

    ERIC Educational Resources Information Center

    Parris, Leandra N.

    2013-01-01

    The Multidimensional Model for Coping with Bullying (MMCB; Parris, in development) was conceptualized based on a literature review of coping with bullying and by combining relevant aspects of previous models. Strategies were described based on their focus (problem-focused vs. emotion-focused) and orientation (avoidance, approach-self,…

  15. The Actualization of Literary Learning Model Based on Verbal-Linguistic Intelligence

    ERIC Educational Resources Information Center

    Hali, Nur Ihsan

    2017-01-01

    This article is inspired by Howard Gardner's concept of linguistic intelligence and by several authors' previous writings, all of which served as references in developing ideas for constructing a literary learning model based on linguistic intelligence. The writing of this article is not done by collecting data empirically, but by…

  16. Chapter 6: Implementation of Model-Based Instruction--The Induction Years

    ERIC Educational Resources Information Center

    Gurvitch, Rachel; Blankenship, Bonnie Tjeerdsma

    2008-01-01

    In previous chapters, student teachers' views and the use of model-based instruction (MBI) were determined to be largely positive. But do these positive attitudes and the actual use of MBI continue after completing a teacher education program? Many novice teachers experience "washout" when the attitudes and instructional practices they…

  17. Modified optimal control pilot model for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Schmidt, David K.

    1992-01-01

    This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input-compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm designed for implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model are shown to compare more favorably to the measured experimental results than those of the other previously proposed simplified models evaluated.

  18. Variability in Dopamine Genes Dissociates Model-Based and Model-Free Reinforcement Learning

    PubMed Central

    Bath, Kevin G.; Daw, Nathaniel D.; Frank, Michael J.

    2016-01-01

    Considerable evidence suggests that multiple learning systems can drive behavior. Choice can proceed reflexively from previous actions and their associated outcomes, as captured by “model-free” learning algorithms, or flexibly from prospective consideration of outcomes that might occur, as captured by “model-based” learning algorithms. However, differential contributions of dopamine to these systems are poorly understood. Dopamine is widely thought to support model-free learning by modulating plasticity in striatum. Model-based learning may also be affected by these striatal effects, or by other dopaminergic effects elsewhere, notably on prefrontal working memory function. Indeed, prominent demonstrations linking striatal dopamine to putatively model-free learning did not rule out model-based effects, whereas other studies have reported dopaminergic modulation of verifiably model-based learning, but without distinguishing a prefrontal versus striatal locus. To clarify the relationships between dopamine, neural systems, and learning strategies, we combine a genetic association approach in humans with two well-studied reinforcement learning tasks: one isolating model-based from model-free behavior and the other sensitive to key aspects of striatal plasticity. Prefrontal function was indexed by a polymorphism in the COMT gene, differences of which reflect dopamine levels in the prefrontal cortex. This polymorphism has been associated with differences in prefrontal activity and working memory. Striatal function was indexed by a gene coding for DARPP-32, which is densely expressed in the striatum where it is necessary for synaptic plasticity. We found evidence for our hypothesis that variations in prefrontal dopamine relate to model-based learning, whereas variations in striatal dopamine function relate to model-free learning. SIGNIFICANCE STATEMENT Decisions can stem reflexively from their previously associated outcomes or flexibly from deliberative consideration of potential choice outcomes. Research implicates a dopamine-dependent striatal learning mechanism in the former type of choice. Although recent work has indicated that dopamine is also involved in flexible, goal-directed decision-making, it remains unclear whether it also contributes via striatum or via the dopamine-dependent working memory function of prefrontal cortex. We examined genetic indices of dopamine function in these regions and their relation to the two choice strategies. We found that striatal dopamine function related most clearly to the reflexive strategy, as previously shown, and that prefrontal dopamine related most clearly to the flexible strategy. These findings suggest that dissociable brain regions support dissociable choice strategies. PMID:26818509

  19. Two Strain Dengue Model with Temporary Cross Immunity and Seasonality

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastien; Stollenwerk, Nico

    2010-09-01

    Models of dengue fever epidemiology have previously shown critical fluctuations with power law distributions and also deterministic chaos in some parameter regions, due to the multi-strain structure of the disease pathogen. In our first model including well-known biological features, we found a rich dynamical structure, including limit cycles, symmetry-breaking bifurcations, torus bifurcations, coexisting attractors including isola solutions, and deterministic chaos (as indicated by positive Lyapunov exponents), in a much larger parameter region that is also biologically more plausible than in previous results by other researchers. Based on these findings we will investigate the model structures further, including seasonality.

  20. A novel simplified model for torsional vibration analysis of a series-parallel hybrid electric vehicle

    NASA Astrophysics Data System (ADS)

    Tang, Xiaolin; Yang, Wei; Hu, Xiaosong; Zhang, Dejiu

    2017-02-01

    In this study, based on our previous work, a novel simplified torsional vibration dynamic model is established to study the torsional vibration characteristics of a compound planetary hybrid propulsion system. The main frequencies of the hybrid driveline are determined. Compared with the previous 16-degree-of-freedom model, the simplified model can be used to accurately describe the low-frequency vibration properties of this hybrid powertrain. This study provides a basis for further vibration control of the hybrid powertrain during engine start/stop.

  1. Two Strain Dengue Model with Temporary Cross Immunity and Seasonality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguiar, Maira; Ballesteros, Sebastien; Stollenwerk, Nico

    Models of dengue fever epidemiology have previously shown critical fluctuations with power law distributions and also deterministic chaos in some parameter regions, due to the multi-strain structure of the disease pathogen. In our first model including well-known biological features, we found a rich dynamical structure, including limit cycles, symmetry-breaking bifurcations, torus bifurcations, coexisting attractors including isola solutions, and deterministic chaos (as indicated by positive Lyapunov exponents), in a much larger parameter region that is also biologically more plausible than in previous results by other researchers. Based on these findings we will investigate the model structures further, including seasonality.

  2. z'-BAND GROUND-BASED DETECTION OF THE SECONDARY ECLIPSE OF WASP-19b

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burton, J. R.; Watson, C. A.; Pollacco, D.

    2012-08-01

    We present the ground-based detection of the secondary eclipse of the transiting exoplanet WASP-19b. The observations were made in the Sloan z' band using the ULTRACAM triple-beam CCD camera mounted on the New Technology Telescope. The measurement shows a 0.088% ± 0.019% eclipse depth, matching previous predictions based on H- and K-band measurements. We discuss in detail our approach to the removal of errors arising from systematics in the data set, in addition to fitting a model transit to our data. This fit returns an eclipse center, T_0, of 2455578.7676 HJD, consistent with a circular orbit. Our measurement of the secondary eclipse depth is also compared to model atmospheres of WASP-19b and is found to be consistent with previous measurements at longer wavelengths for the model atmospheres we investigated.

  3. An error bound for a discrete reduced order model of a linear multivariable system

    NASA Technical Reports Server (NTRS)

    Al-Saggaf, Ubaid M.; Franklin, Gene F.

    1987-01-01

    The design of feasible controllers for high dimension multivariable systems can be greatly aided by a method of model reduction. In order for the design based on the order reduction to include a guarantee of stability, it is sufficient to have a bound on the model error. Previous work has provided such a bound for continuous-time systems for algorithms based on balancing. In this note an L-infinity bound is derived for model error for a method of order reduction of discrete linear multivariable systems based on balancing.
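
    For context, the continuous-time bound referred to here is the classical balanced-truncation result: if the Hankel singular values of the balanced realization are ordered and the reduced model keeps the first r states, the error is bounded by twice the tail of the discarded values,

```latex
\| G - G_r \|_{\infty} \;\le\; 2 \sum_{i=r+1}^{n} \sigma_i ,
```

    and the note derives a bound of this type for the discrete-time case. With such a bound in hand, a controller designed on the reduced model with sufficient robustness margin carries a stability guarantee for the full-order system.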

  4. PHYSIOLOGICALLY-BASED PHARMACOKINETIC (PBPK) MODEL FOR METHYL TERTIARY BUTYL ETHER (MTBE): A REVIEW OF EXISTING MODELS

    EPA Science Inventory

    MTBE is a volatile organic compound used as an oxygenate additive to gasoline, added to comply with the 1990 Clean Air Act. Previous PBPK models for MTBE were reviewed and incorporated into the Exposure Related Dose Estimating Model (ERDEM) software. This model also included an e...

  5. Programming biological models in Python using PySB.

    PubMed

    Lopez, Carlos F; Muhlich, Jeremy L; Bachman, John A; Sorger, Peter K

    2013-01-01

    Mathematical equations are fundamental to modeling biological networks, but as networks get large and revisions frequent, it becomes difficult to manage equations directly or to combine previously developed models. Multiple simultaneous efforts to create graphical standards, rule-based languages, and integrated software workbenches aim to simplify biological modeling but none fully meets the need for transparent, extensible, and reusable models. In this paper we describe PySB, an approach in which models are not only created using programs, they are programs. PySB draws on programmatic modeling concepts from little b and ProMot, the rule-based languages BioNetGen and Kappa and the growing library of Python numerical tools. Central to PySB is a library of macros encoding familiar biochemical actions such as binding, catalysis, and polymerization, making it possible to use a high-level, action-oriented vocabulary to construct detailed models. As Python programs, PySB models leverage tools and practices from the open-source software community, substantially advancing our ability to distribute and manage the work of testing biochemical hypotheses. We illustrate these ideas using new and previously published models of apoptosis.
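
    PySB's "models are programs" idea is easiest to see in code. The sketch below is a minimal reversible ligand-receptor binding model written in the PySB idiom (monomers, rules, initials, observables, then ODE simulation); it is an illustrative toy, not one of the apoptosis models from the paper.

```python
import numpy as np
from pysb import Model, Monomer, Parameter, Rule, Initial, Observable
from pysb.simulator import ScipyOdeSimulator

Model()  # PySB self-exports components (and `model`) into this namespace

Monomer('L', ['b'])            # ligand with one binding site
Monomer('R', ['b'])            # receptor with one binding site
Parameter('kf', 1e-3)
Parameter('kr', 1e-2)
Parameter('L_0', 100)
Parameter('R_0', 200)

# Reversible binding rule; '%' denotes a bond between sites.
Rule('L_binds_R', L(b=None) + R(b=None) | L(b=1) % R(b=1), kf, kr)

Initial(L(b=None), L_0)
Initial(R(b=None), R_0)
Observable('LR', L(b=1) % R(b=1))

sim = ScipyOdeSimulator(model, tspan=np.linspace(0, 100, 101))
print(sim.run().observables['LR'][-1])   # bound complex at t = 100
```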

  6. Programming biological models in Python using PySB

    PubMed Central

    Lopez, Carlos F; Muhlich, Jeremy L; Bachman, John A; Sorger, Peter K

    2013-01-01

    Mathematical equations are fundamental to modeling biological networks, but as networks get large and revisions frequent, it becomes difficult to manage equations directly or to combine previously developed models. Multiple simultaneous efforts to create graphical standards, rule-based languages, and integrated software workbenches aim to simplify biological modeling but none fully meets the need for transparent, extensible, and reusable models. In this paper we describe PySB, an approach in which models are not only created using programs, they are programs. PySB draws on programmatic modeling concepts from little b and ProMot, the rule-based languages BioNetGen and Kappa and the growing library of Python numerical tools. Central to PySB is a library of macros encoding familiar biochemical actions such as binding, catalysis, and polymerization, making it possible to use a high-level, action-oriented vocabulary to construct detailed models. As Python programs, PySB models leverage tools and practices from the open-source software community, substantially advancing our ability to distribute and manage the work of testing biochemical hypotheses. We illustrate these ideas using new and previously published models of apoptosis. PMID:23423320

  7. An ``Alternating-Curvature'' Model for the Nanometer-scale Structure of the Nafion Ionomer, Based on Backbone Properties Detected by NMR

    NASA Astrophysics Data System (ADS)

    Schmidt-Rohr, Klaus; Chen, Q.

    2006-03-01

    The perfluorinated ionomer, Nafion, which consists of a (-CF2-)n backbone and charged side branches, is useful as a proton exchange membrane in H2/O2 fuel cells. A modified model of the nanometer-scale structure of hydrated Nafion will be presented. It features hydrated ionic clusters familiar from some previous models, but is based most prominently on pronounced backbone rigidity between branch points and limited orientational correlation of local chain axes. These features have been revealed by solid-state NMR measurements, which take advantage of fast rotations of the backbones around their local axes. The resulting alternating curvature of the backbones towards the hydrated clusters also better satisfies the requirement of dense space filling in solids. Simulations based on this ``alternating curvature'' model reproduce orientational correlation data from NMR, as well as scattering features such as the ionomer peak and the I(q) ~ 1/q power law at small q values, which can be attributed to modulated cylinders resulting from the chain stiffness. The shortcomings of previous models, including Gierke's cluster model and more recent lamellar or bundle models, in matching all requirements imposed by the experimental data will be discussed.

  8. Applying STAMP in Accident Analysis

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy; Daouk, Mirna; Dulac, Nicolas; Marais, Karen

    2003-01-01

    Accident models play a critical role in accident investigation and analysis. Most traditional models are based on an underlying chain of events. These models, however, have serious limitations when used for complex, socio-technical systems. Previously, Leveson proposed a new accident model (STAMP) based on system theory. In STAMP, the basic concept is not an event but a constraint. This paper shows how STAMP can be applied to accident analysis using three different views or models of the accident process and proposes a notation for describing this process.

  9. Implementation of a model based fault detection and diagnosis for actuation faults of the Space Shuttle main engine

    NASA Technical Reports Server (NTRS)

    Duyar, A.; Guo, T.-H.; Merrill, W.; Musgrave, J.

    1992-01-01

    In a previous study (Guo, Merrill, and Duyar, 1990), a conceptual development of a fault detection and diagnosis system for actuation faults of the Space Shuttle main engine was reported. This study, a continuation of the previous work, implements the developed fault detection and diagnosis scheme for real-time actuation fault diagnosis of the Space Shuttle main engine. The scheme will be used as an integral part of an intelligent control system demonstration experiment at NASA Lewis. The diagnosis system utilizes a model-based method with real-time identification and hypothesis testing for actuation, sensor, and performance degradation faults.

  10. Comparison between field mill and corona point instrumentation at Kennedy Space Center - Use of these data with a model to determine cloudbase electric fields

    NASA Technical Reports Server (NTRS)

    Markson, R.; Anderson, B.; Govaert, J.; Fairall, C. W.

    1989-01-01

    A novel corona current instrument is being used at NASA-KSC which overcomes previous difficulties with wind sensitivity and a voltage-threshold 'deadband'. Mounting the corona needle at an elevated location reduces corona and electrode-layer space-charge influences on electric fields, making the measurement of space-charge density possible. In conjunction with a space-charge compensation model, these features allow a more realistic estimation of cloud-base electric fields and the potential for lightning strikes than has previously been possible with ground-based sensors.

  11. A Bridge from Optical to Infrared Galaxies: Explaining Local Properties and Predicting Galaxy Counts and the Cosmic Background Radiation

    NASA Astrophysics Data System (ADS)

    Totani, Tomonori; Takeuchi, Tsutomu T.

    2002-05-01

    We give an explanation for the origin of various properties observed in local infrared galaxies and make predictions for galaxy counts and cosmic background radiation (CBR) using a new model extended from that for optical/near-infrared galaxies. Important new characteristics of this study are that (1) mass scale dependence of dust extinction is introduced based on the size-luminosity relation of optical galaxies and that (2) the large-grain dust temperature Tdust is calculated based on a physical consideration for energy balance rather than by using the empirical relation between Tdust and total infrared luminosity LIR found in local galaxies, which has been employed in most previous works. Consequently, the local properties of infrared galaxies, i.e., optical/infrared luminosity ratios, LIR-Tdust correlation, and infrared luminosity function are outputs predicted by the model, while these have been inputs in a number of previous models. Our model indeed reproduces these local properties reasonably well. Then we make predictions for faint infrared counts (in 15, 60, 90, 170, 450, and 850 μm) and CBR using this model. We found results considerably different from those of most previous works based on the empirical LIR-Tdust relation; especially, it is shown that the dust temperature of starbursting primordial elliptical galaxies is expected to be very high (40-80 K), as often seen in starburst galaxies or ultraluminous infrared galaxies in the local and high-z universe. This indicates that intense starbursts of forming elliptical galaxies should have occurred at z~2-3, in contrast to the previous results that significant starbursts beyond z~1 tend to overproduce the far-infrared (FIR) CBR detected by COBE/FIRAS. On the other hand, our model predicts that the mid-infrared (MIR) flux from warm/nonequilibrium dust is relatively weak in such galaxies making FIR CBR, and this effect reconciles the prima facie conflict between the upper limit on MIR CBR from TeV gamma-ray observations and the COBE detections of FIR CBR. The intergalactic optical depth of TeV gamma rays based on our model is also presented.

  12. Energy Savings Forecast of Solid-State Lighting in General Illumination Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penning, Julie; Stober, Kelsey; Taylor, Victor

    2016-09-01

    The DOE report, Energy Savings Forecast of Solid-State Lighting in General Illumination Applications, is a biannual report which models the adoption of LEDs in the U.S. general-lighting market, along with associated energy savings, based on the full potential DOE has determined to be technically feasible over time. This version of the report uses an updated 2016 U.S. lighting-market model that is more finely calibrated and granular than previous models, and extends the forecast period to 2035 from the 2030 limit that was used in previous editions.

  13. Creating "Intelligent" Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, Noel; Taylor, Patrick

    2014-05-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and several radiative forcing Representative Concentration Pathway (RCP) scenarios. Ultimately, the goal of the framework is to advise better methods for ensemble averaging models and create better climate predictions.
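
    A minimal version of the weighting step reads as follows: score each model by how far its process metric falls from the observed value, convert the distances to weights, and average. The Gaussian weighting and all numbers below are illustrative assumptions, not the metric or data used in the study.

```python
import numpy as np

def metric_weights(model_metric, obs_metric, sigma=1.0):
    # Models whose process-based metric (e.g., the regression of outgoing
    # longwave radiation on surface temperature) lies closer to observations
    # receive exponentially larger weight.
    err = np.asarray(model_metric) - obs_metric
    w = np.exp(-0.5 * (err / sigma) ** 2)
    return w / w.sum()

# Hypothetical projections (K) and metric values for five models.
projections = np.array([2.1, 2.9, 3.4, 2.6, 4.0])
metrics = np.array([1.8, 2.2, 3.1, 2.0, 3.5])
w = metric_weights(metrics, obs_metric=2.1)
print("equal-weight mean:   ", projections.mean())
print("metric-weighted mean:", projections @ w)
```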

  14. Quality of asthma care under different primary care models in Canada: a population-based study.

    PubMed

    To, Teresa; Guan, Jun; Zhu, Jingqin; Lougheed, M Diane; Kaplan, Alan; Tamari, Itamar; Stanbrook, Matthew B; Simatovic, Jacqueline; Feldman, Laura; Gershon, Andrea S

    2015-02-14

    Previous research has shown variations in quality of care and patient outcomes under different primary care models. The objective of this study was to use previously validated, evidence-based performance indicators to measure quality of asthma care over time and to compare quality of care between different primary care models. Data were obtained for years 2006 to 2010 from the Ontario Asthma Surveillance Information System, which uses health administrative databases to track individuals with asthma living in the province of Ontario, Canada. Individuals with asthma (n=1,813,922) were divided into groups based on the practice model of their primary care provider (i.e., fee-for-service, blended fee-for-service, blended capitation). Quality of asthma care was measured using six validated, evidence-based asthma care performance indicators. All of the asthma performance indicators improved over time within each of the primary care models. Compared to the traditional fee-for-service model, the blended fee-for-service and blended capitation models had higher use of spirometry for asthma diagnosis and monitoring, higher rates of inhaled corticosteroid prescription, and lower outpatient claims. Emergency department visits were lowest in the blended fee-for-service group. Quality of asthma care improved over time within each of the primary care models. However, the amount by which they improved differed between the models. The newer primary care models (i.e., blended fee-for-service, blended capitation) appear to provide better quality of asthma care compared to the traditional fee-for-service model.

  15. Residence-time framework for modeling multicomponent reactive transport in stream hyporheic zones

    NASA Astrophysics Data System (ADS)

    Painter, S. L.; Coon, E. T.; Brooks, S. C.

    2017-12-01

    Process-based models for transport and transformation of nutrients and contaminants in streams require tractable representations of solute exchange between the stream channel and biogeochemically active hyporheic zones. Residence-time based formulations provide an alternative to detailed three-dimensional simulations and have had good success in representing hyporheic exchange of non-reacting solutes. We extend the residence-time formulation for hyporheic transport to accommodate general multicomponent reactive transport. To that end, the integro-differential form of previous residence time models is replaced by an equivalent formulation based on a one-dimensional advection dispersion equation along the channel coupled at each channel location to a one-dimensional transport model in Lagrangian travel-time form. With the channel discretized for numerical solution, the associated Lagrangian model becomes a subgrid model representing an ensemble of streamlines that are diverted into the hyporheic zone before returning to the channel. In contrast to the previous integro-differential forms of the residence-time based models, the hyporheic flowpaths have semi-explicit spatial representation (parameterized by travel time), thus allowing coupling to general biogeochemical models. The approach has been implemented as a stream-corridor subgrid model in the open-source integrated surface/subsurface modeling software ATS. We use bedform-driven flow coupled to a biogeochemical model with explicit microbial biomass dynamics as an example to show that the subgrid representation is able to represent redox zonation in sediments and resulting effects on metal biogeochemical dynamics in a tractable manner that can be scaled to reach scales.
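
    Schematically, the structure described here couples a one-dimensional channel equation to a travel-time (Lagrangian) hyporheic equation at every channel location; one way to write that coupling (with symbols chosen here, not the authors' notation) is

```latex
\frac{\partial C}{\partial t} + v\,\frac{\partial C}{\partial x}
  = D\,\frac{\partial^{2} C}{\partial x^{2}}
    - \alpha\,\bigl(C - \langle C_h \rangle\bigr),
\qquad
\frac{\partial C_h}{\partial t} + \frac{\partial C_h}{\partial \tau}
  = r(C_h),
```

    where $C$ is the channel concentration, $C_h(\tau)$ the concentration along hyporheic streamlines parameterized by travel time $\tau$ (with $C_h(\tau = 0) = C$), $r$ the biogeochemical reaction terms, and $\langle C_h \rangle$ a residence-time-weighted average of the concentration returning to the channel.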

  16. Thermodynamics-based models of transcriptional regulation with gene sequence.

    PubMed

    Wang, Shuqiang; Shen, Yanyan; Hu, Jinxing

    2015-12-01

    Quantitative models of gene regulatory activity have the potential to improve our mechanistic understanding of transcriptional regulation. However, the few models available today have been based on simplistic assumptions about the sequences being modeled, or on heuristic approximations of the underlying regulatory mechanisms. In this work, we have developed a thermodynamics-based model to predict gene expression driven by any DNA sequence. The proposed model relies on a continuous-time, differential-equation description of transcriptional dynamics. The sequence features of the promoter are exploited to derive the binding affinity, which is obtained from statistical molecular thermodynamics. Experimental results show that the proposed model can effectively identify the activity levels of transcription factors and the regulatory parameters. Compared with previous models, the proposed model yields more biologically meaningful results.
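
    A single-binding-site toy version of this scheme makes the two ingredients explicit: a Boltzmann-weighted promoter occupancy derived from a binding free energy, feeding a continuous-time mRNA balance. The sketch below is an illustrative simplification under assumed parameter values, not the paper's sequence-level model.

```python
import numpy as np

R_GAS, T = 1.987e-3, 298.0   # kcal/(mol*K), kelvin

def p_bound(tf_conc, dG):
    # Statistical-thermodynamic occupancy of one TF site:
    # w = [TF] * exp(-dG/RT), occupancy = w / (1 + w).
    w = tf_conc * np.exp(-dG / (R_GAS * T))
    return w / (1 + w)

def mrna_steady_state(tf_conc, dG, k_max=1.0, gamma=0.1):
    # Continuous-time dynamics d[mRNA]/dt = k_max * P_bound - gamma * [mRNA]
    # settle at k_max * P_bound / gamma.
    return k_max * p_bound(tf_conc, dG) / gamma

print(mrna_steady_state(tf_conc=0.5, dG=-2.0))
```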

  17. An agent-based computational model for tuberculosis spreading on age-structured populations

    NASA Astrophysics Data System (ADS)

    Graciani Rodrigues, C. C.; Espíndola, Aquino L.; Penna, T. J. P.

    2015-06-01

    In this work we present an agent-based computational model to study the spreading of the tuberculosis (TB) disease in age-structured populations. The proposed model is a merger of two previous models: an agent-based computational model for the spreading of tuberculosis and a bit-string model for biological aging. The combination of TB with population aging reproduces the coexistence of health states seen in real populations. In addition, the universal exponential behavior of mortality curves is still preserved. Finally, the population distribution as a function of age shows the prevalence of TB mostly in elders, for high-efficacy treatments.

  18. Inner Structure in the TW Hya Circumstellar Disk

    NASA Astrophysics Data System (ADS)

    Akeson, Rachel L.; Millan-Gabet, R.; Ciardi, D.; Boden, A.; Sargent, A.; Monnier, J.; McAlister, H.; ten Brummelaar, T.; Sturmann, J.; Sturmann, L.; Turner, N.

    2011-05-01

    TW Hya is a nearby (50 pc) young stellar object with an estimated age of 10 Myr and signs of active accretion. Previous modeling of the circumstellar disk has shown that the inner disk contains optically thin material, placing this object in the class of "transition disks". We present new near-infrared interferometric observations of the disk material and use these data, as well as previously published, spatially resolved data at 10 microns and 7 mm, to constrain disk models based on a standard flared disk structure. Our model demonstrates that the constraints imposed by the spatially resolved data can be met with a physically plausible disk but this requires a disk containing not only an inner gap in the optically thick disk as previously suggested, but also some optically thick material within this gap. Our model is consistent with the suggestion by previous authors of a planet with an orbital radius of a few AU. This work was conducted at the NASA Exoplanet Science Institute, California Institute of Technology.

  19. Hierarchical lattice models of hydrogen-bond networks in water

    NASA Astrophysics Data System (ADS)

    Dandekar, Rahul; Hassanali, Ali A.

    2018-06-01

    We develop a graph-based model of the hydrogen-bond network in water, with a view toward quantitatively modeling the molecular-level correlational structure of the network. The networks formed are studied by constructing the model on two infinite-dimensional lattices. Our models are built bottom up, based on microscopic information from atomistic simulations, and we show that the predictions of the model are consistent with known results from ab initio simulations of liquid water. We show that simple entropic models can predict the correlations and clustering of local-coordination defects around tetrahedral waters observed in the atomistic simulations. We also find that orientational correlations between bonds are longer ranged than density correlations, determine the directional correlations within closed loops, and show that the patterns of water wires within these structures are also consistent with previous atomistic simulations. Our models show the existence of density and compressibility anomalies, as seen in the real liquid, and the phase diagram of these models is consistent with the singularity-free scenario previously proposed by Sastry and coworkers [Phys. Rev. E 53, 6144 (1996), 10.1103/PhysRevE.53.6144].

  20. A Preliminary Validation of Attention, Relevance, Confidence and Satisfaction Model-Based Instructional Material Motivational Survey in a Computer-Based Tutorial Setting

    ERIC Educational Resources Information Center

    Huang, Wenhao; Huang, Wenyeh; Diefes-Dux, Heidi; Imbrie, Peter K.

    2006-01-01

    This paper describes a preliminary validation study of the Instructional Material Motivational Survey (IMMS) derived from the Attention, Relevance, Confidence and Satisfaction motivational design model. Previous studies related to the IMMS, however, suggest its practical application for motivational evaluation in various instructional settings…

  1. Imputation and Model-Based Updating Technique for Annual Forest Inventories

    Treesearch

    Ronald E. McRoberts

    2001-01-01

    The USDA Forest Service is developing an annual inventory system to establish the capability of producing annual estimates of timber volume and related variables. The inventory system features measurement of an annual sample of field plots with options for updating data for plots measured in previous years. One imputation and two model-based updating techniques are...

  2. Variability in Dopamine Genes Dissociates Model-Based and Model-Free Reinforcement Learning.

    PubMed

    Doll, Bradley B; Bath, Kevin G; Daw, Nathaniel D; Frank, Michael J

    2016-01-27

    Considerable evidence suggests that multiple learning systems can drive behavior. Choice can proceed reflexively from previous actions and their associated outcomes, as captured by "model-free" learning algorithms, or flexibly from prospective consideration of outcomes that might occur, as captured by "model-based" learning algorithms. However, differential contributions of dopamine to these systems are poorly understood. Dopamine is widely thought to support model-free learning by modulating plasticity in striatum. Model-based learning may also be affected by these striatal effects, or by other dopaminergic effects elsewhere, notably on prefrontal working memory function. Indeed, prominent demonstrations linking striatal dopamine to putatively model-free learning did not rule out model-based effects, whereas other studies have reported dopaminergic modulation of verifiably model-based learning, but without distinguishing a prefrontal versus striatal locus. To clarify the relationships between dopamine, neural systems, and learning strategies, we combine a genetic association approach in humans with two well-studied reinforcement learning tasks: one isolating model-based from model-free behavior and the other sensitive to key aspects of striatal plasticity. Prefrontal function was indexed by a polymorphism in the COMT gene, differences of which reflect dopamine levels in the prefrontal cortex. This polymorphism has been associated with differences in prefrontal activity and working memory. Striatal function was indexed by a gene coding for DARPP-32, which is densely expressed in the striatum where it is necessary for synaptic plasticity. We found evidence for our hypothesis that variations in prefrontal dopamine relate to model-based learning, whereas variations in striatal dopamine function relate to model-free learning. Decisions can stem reflexively from their previously associated outcomes or flexibly from deliberative consideration of potential choice outcomes. Research implicates a dopamine-dependent striatal learning mechanism in the former type of choice. Although recent work has indicated that dopamine is also involved in flexible, goal-directed decision-making, it remains unclear whether it also contributes via striatum or via the dopamine-dependent working memory function of prefrontal cortex. We examined genetic indices of dopamine function in these regions and their relation to the two choice strategies. We found that striatal dopamine function related most clearly to the reflexive strategy, as previously shown, and that prefrontal dopamine related most clearly to the flexible strategy. These findings suggest that dissociable brain regions support dissociable choice strategies.

  3. Brief Report: Further Evidence of Sensory Subtypes in Autism

    ERIC Educational Resources Information Center

    Lane, Alison E.; Dennis, Simon J.; Geraghty, Maureen E.

    2011-01-01

    Distinct sensory processing (SP) subtypes in autism have been reported previously. This study sought to replicate the previous findings in an independent sample of thirty children diagnosed with an Autism Spectrum Disorder. Model-based cluster analysis of parent-reported sensory functioning (measured using the Short Sensory Profile) confirmed the…

  4. A quantile count model of water depth constraints on Cape Sable seaside sparrows

    USGS Publications Warehouse

    Cade, B.S.; Dong, Q.

    2008-01-01

    1. A quantile regression model for counts of breeding Cape Sable seaside sparrows Ammodramus maritimus mirabilis (L.) as a function of water depth and previous year abundance was developed based on extensive surveys, 1992-2005, in the Florida Everglades. The quantile count model extends linear quantile regression methods to discrete response variables, providing a flexible alternative to discrete parametric distributional models, e.g. Poisson, negative binomial and their zero-inflated counterparts. 2. Estimates from our multiplicative model demonstrated that negative effects of increasing water depth in breeding habitat on sparrow numbers were dependent on recent occupation history. Upper 10th percentiles of counts (one to three sparrows) decreased with increasing water depth from 0 to 30 cm when sites were not occupied in previous years. However, upper 40th percentiles of counts (one to six sparrows) decreased with increasing water depth for sites occupied in previous years. 3. Greatest decreases (-50% to -83%) in upper quantiles of sparrow counts occurred as water depths increased from 0 to 15 cm when previous year counts were 1, but a small proportion of sites (5-10%) held at least one sparrow even as water depths increased to 20 or 30 cm. 4. A zero-inflated Poisson regression model provided estimates of conditional means that also decreased with increasing water depth, but rates of change were lower and decreased with increasing previous year counts compared to the quantile count model. Quantiles computed for the zero-inflated Poisson model enhanced interpretation of this model but had greater lack-of-fit for water depths > 0 cm and previous year counts of 1, conditions where the negative effect of water depth was readily apparent and which the quantile count model fitted better.
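
    Quantile regression for counts is commonly built on the jittering device of Machado and Santos Silva (2005): add uniform noise to the counts, fit a linear quantile regression to a transformed continuous variable, and average over jitter draws. The sketch below illustrates that idea only; the data are simulated, and the names depth and prev are hypothetical stand-ins for the survey covariates, not the paper's data or fitted model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
depth = rng.uniform(0, 30, n)                  # water depth, cm (simulated)
prev = rng.poisson(1.0, n)                     # previous-year count (simulated)
y = rng.poisson(np.exp(0.5 + 0.4 * prev - 0.06 * depth))

df = pd.DataFrame({"y": y, "depth": depth, "prev": prev})
tau = 0.9                                      # upper 10th percentile of counts
params = []
for _ in range(20):                            # average over jitter draws
    z = df["y"] + rng.uniform(0, 1, n)         # jitter counts to a continuous variable
    df["t"] = np.log(np.maximum(z - tau, 1e-6))
    params.append(smf.quantreg("t ~ depth + prev", df).fit(q=tau).params)

beta = pd.concat(params, axis=1).mean(axis=1)
print(beta)  # log-scale (multiplicative) effects on the tau-th count quantile
```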

  5. QUANTITATIVE PROCEDURES FOR NEUROTOXICOLOGY RISK ASSESSMENT

    EPA Science Inventory

    In this project, previously published information on a biologically based dose-response model for brain development was used to quantitatively evaluate critical neurodevelopmental processes, and to assess potential chemical impacts on early brain development. This model has been ex...

  6. Inequity aversion and the evolution of cooperation

    NASA Astrophysics Data System (ADS)

    Ahmed, Asrar; Karlapalem, Kamalakar

    2014-02-01

    Evolution of cooperation is a widely studied problem in biology, social science, economics, and artificial intelligence. Most of the existing approaches that explain cooperation rely on some notion of direct or indirect reciprocity. These reciprocity-based models assume agents recognize their partner and know their previous interactions, which requires advanced cognitive abilities. In this paper we are interested in developing a model that produces cooperation without requiring any explicit memory of previous game plays. Our model is based on the notion of inequity aversion, a concept introduced within behavioral economics, whereby individuals care about payoff equality in outcomes. Here we explore the effect of using income inequality to guide partner selection and interaction. We study our model by considering both the well-mixed and the spatially structured population and present the conditions under which cooperation becomes dominant. Our results support the hypothesis that inequity aversion promotes cooperative relationships among nonkin.

  7. Inequity aversion and the evolution of cooperation.

    PubMed

    Ahmed, Asrar; Karlapalem, Kamalakar

    2014-02-01

    Evolution of cooperation is a widely studied problem in biology, social science, economics, and artificial intelligence. Most of the existing approaches that explain cooperation rely on some notion of direct or indirect reciprocity. These reciprocity-based models assume agents recognize their partner and know their previous interactions, which requires advanced cognitive abilities. In this paper we are interested in developing a model that produces cooperation without requiring any explicit memory of previous game plays. Our model is based on the notion of inequity aversion, a concept introduced within behavioral economics, whereby individuals care about payoff equality in outcomes. Here we explore the effect of using income inequality to guide partner selection and interaction. We study our model by considering both the well-mixed and the spatially structured population and present the conditions under which cooperation becomes dominant. Our results support the hypothesis that inequity aversion promotes cooperative relationships among nonkin.
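
    Inequity aversion of the kind invoked in the two records above is commonly formalized with the Fehr-Schmidt utility, in which a payoff is discounted by both disadvantageous and advantageous inequity. The sketch below shows that standard formalization plus a simple inequality-guided partner choice; the alpha/beta values and the selection heuristic are illustrative assumptions, not the authors' exact update rule.

```python
def fehr_schmidt(own, other, alpha=0.8, beta=0.4):
    """Fehr-Schmidt inequity-averse utility: alpha penalizes disadvantageous
    inequity (other ahead), beta penalizes advantageous inequity (self ahead)."""
    return own - alpha * max(other - own, 0.0) - beta * max(own - other, 0.0)

def pick_partner(my_income, candidate_incomes):
    """Partner selection guided by income inequality: prefer the candidate
    whose accumulated payoff is closest to one's own."""
    return min(range(len(candidate_incomes)),
               key=lambda i: abs(candidate_incomes[i] - my_income))

print(fehr_schmidt(5, 9))                     # 1.8: behind the partner hurts more
print(fehr_schmidt(9, 5))                     # 7.4: being ahead hurts less
print(pick_partner(5.0, [9.0, 5.5, 1.0]))     # index 1: most equal candidate
```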

  8. The Model of Lake Operation in Water Transfer Projects Based on the Theory of Water- right

    NASA Astrophysics Data System (ADS)

    Bi-peng, Yan; Chao, Liu; Fang-ping, Tang

    Lake operation is a very important component of water transfer projects, yet previous studies have not addressed water rights or water pricing. In this paper, water rights are divided into three parts: initial water rights, water rights acquired through investment, and water rights redistributed by the government. A water-right distribution model is built on this basis. After analyzing the costs in a water transfer project, a model and computation method for the capacity price as well as the quantity price are proposed, and a model of lake operation in water transfer projects based on the theory of water rights is constructed. Simulated regulation of the lake was carried out using historical data and genetic algorithms, and a water supply and impoundment control line for the lake is proposed. The results can be used by south-to-north water transfer projects.

  9. Understanding transparency perception in architecture: presentation of the simplified perforated model.

    PubMed

    Brzezicki, Marcin

    2013-01-01

    Issues of transparency perception are addressed from an architectural perspective, pointing out previously neglected factors that greatly influence this phenomenon at the scale of a building. The simplified perforated model of a transparent surface presented in the paper has been based on previously developed theories and involves the balance of light reflected versus light transmitted. Its aim is to facilitate an understanding of non-intuitive phenomena related to transparency (e.g., dynamically changing reflectance) for readers without advanced knowledge of molecular physics. A verification of the presented model has been based on the comparison of optical performance of the model with the results of Fresnel's equations for light-transmitting materials. The presented methodology is intended to be used both in the design and explanatory stages of architectural practice and vision research. Incorporation of architectural issues could enrich the perspective of scientists representing other disciplines.

  10. Developing a Learning Progression for Number Sense Based on the Rule Space Model in China

    ERIC Educational Resources Information Center

    Chen, Fu; Yan, Yue; Xin, Tao

    2017-01-01

    The current study focuses on developing the learning progression of number sense for primary school students, and it applies a cognitive diagnostic model, the rule space model, to data analysis. The rule space model analysis firstly extracted nine cognitive attributes and their hierarchy model from the analysis of previous research and the…

  11. Modifying climate change habitat models using tree species-specific assessments of model uncertainty and life history-factors

    Treesearch

    Stephen N. Matthews; Louis R. Iverson; Anantha M. Prasad; Matthew P. Peters; Paul G. Rodewald

    2011-01-01

    Species distribution models (SDMs) to evaluate trees' potential responses to climate change are essential for developing appropriate forest management strategies. However, there is a great need to better understand these models' limitations and evaluate their uncertainties. We have previously developed statistical models of suitable habitat, based on both...

  12. ESTIMATION OF THE RATE OF VOC EMISSIONS FROM SOLVENT-BASED INDOOR COATING MATERIALS BASED ON PRODUCT FORMULATION

    EPA Science Inventory

    Two computational methods are proposed for estimation of the emission rate of volatile organic compounds (VOCs) from solvent-based indoor coating materials based on the knowledge of product formulation. The first method utilizes two previously developed mass transfer models with ...

  13. Dynamic model based on voltage transfer curve for pattern formation in dielectric barrier glow discharge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ben; He, Feng; Ouyang, Jiting, E-mail: jtouyang@bit.edu.cn

    2015-12-15

    Simulation work is very important for understanding the formation of self-organized discharge patterns. Previous works have applied various models derived from other systems to simulate discharge patterns, but most of these models are complicated and time-consuming. In this paper, we introduce a convenient phenomenological dynamic model based on the basic dynamic process of glow discharge and the voltage transfer curve (VTC) to study the dielectric barrier glow discharge (DBGD) pattern. VTC is an important characteristic of DBGD, which plots the change of wall voltage after a discharge as a function of the initial total gap voltage. In the modeling, the combined effect of the discharge conditions is included in the VTC, and the activation-inhibition effect is expressed by a spatial interaction term. Besides, the model reduces the dimensionality of the system by considering only the integrated effect of current flow. All of this greatly facilitates the construction of the model. Numerical simulations turn out to be in good accordance with our previous fluid modeling and experimental results.
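
    The record above describes a discrete update: each cell's wall voltage jumps by a VTC-determined amount each half-cycle, plus a lateral activation-inhibition term. A minimal schematic of that structure is sketched below; the piecewise-linear vtc shape, the breakdown threshold, the 1-D ring geometry, and all parameter values are invented for illustration and are not the paper's model.

```python
import numpy as np

def vtc(v_gap, v_th=1.0, s=0.6):
    """Hypothetical voltage transfer curve: wall-voltage jump per half-cycle
    as a function of total gap voltage; no discharge below threshold v_th."""
    over = np.abs(v_gap) - v_th
    return np.where(over > 0, s * np.sign(v_gap) * over, 0.0)

rng = np.random.default_rng(1)
n, half_cycles = 256, 400
v_wall = 0.01 * rng.standard_normal(n)   # 1-D ring of discharge cells
v_amp, D = 1.6, 0.15                     # drive amplitude, lateral coupling

for k in range(half_cycles):
    v_app = v_amp if k % 2 == 0 else -v_amp   # alternating (DBD) drive
    v_gap = v_app - v_wall
    lap = np.roll(v_wall, 1) + np.roll(v_wall, -1) - 2.0 * v_wall
    v_wall = v_wall + vtc(v_gap) + D * lap    # discharge jump + spatial term

print(v_wall.std())   # spatial structure in the stored wall charge
```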

  14. Inductive reasoning about causally transmitted properties.

    PubMed

    Shafto, Patrick; Kemp, Charles; Bonawitz, Elizabeth Baraff; Coley, John D; Tenenbaum, Joshua B

    2008-11-01

    Different intuitive theories constrain and guide inferences in different contexts. Formalizing simple intuitive theories as probabilistic processes operating over structured representations, we present a new computational model of category-based induction about causally transmitted properties. A first experiment demonstrates undergraduates' context-sensitive use of taxonomic and food web knowledge to guide reasoning about causal transmission and shows good qualitative agreement between model predictions and human inferences. A second experiment demonstrates strong quantitative and qualitative fits to inferences about a more complex artificial food web. A third experiment investigates human reasoning about complex novel food webs where species have known taxonomic relations. Results demonstrate a double-dissociation between the predictions of our causal model and a related taxonomic model [Kemp, C., & Tenenbaum, J. B. (2003). Learning domain structures. In Proceedings of the 25th annual conference of the cognitive science society]: the causal model predicts human inferences about diseases but not genes, while the taxonomic model predicts human inferences about genes but not diseases. We contrast our framework with previous models of category-based induction and previous formal instantiations of intuitive theories, and outline challenges in developing a complete model of context-sensitive reasoning.

  15. Are more complex physiological models of forest ecosystems better choices for plot and regional predictions?

    Treesearch

    Wenchi Jin; Hong S. He; Frank R. Thompson

    2016-01-01

    Process-based forest ecosystem models vary from simple physiological, complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at plot scale with a temporal extent of less than 10 years, however, it is largely untested as to whether complex models outperform the other two types of models...

  16. Generating human-like movements on an anthropomorphic robot using an interior point method

    NASA Astrophysics Data System (ADS)

    Costa e Silva, E.; Araújo, J. P.; Machado, D.; Costa, M. F.; Erlhagen, W.; Bicho, E.

    2013-10-01

    In previous work we have presented a model for generating human-like arm and hand movements on an anthropomorphic robot involved in human-robot collaboration tasks. This model was inspired by the Posture-Based Motion-Planning Model of human movements. Numerical results and simulations for reach-to-grasp movements with two different grip types have been presented previously. In this paper we extend our model in order to address the generation of more complex movement sequences which are challenged by scenarios cluttered with obstacles. The numerical results were obtained using the IPOPT solver, which was integrated in our MATLAB simulator of an anthropomorphic robot.

  17. Venus - Global gravity and topography

    NASA Technical Reports Server (NTRS)

    Mcnamee, J. B.; Borderies, N. J.; Sjogren, W. L.

    1993-01-01

    A new gravity field determination has been produced that combines both the Pioneer Venus Orbiter (PVO) and the Magellan Doppler radio data. Comparisons between this estimate, a spherical harmonic model of degree and order 21, and previous models show that significant improvements have been made. Results are displayed as gravity contours overlaying a topographic map. We also calculate a new spherical harmonic model of topography based on Magellan altimetry, with PVO altimetry included where gaps exist in the Magellan data. This model is also of degree and order 21, so in conjunction with the gravity model, Bouguer and isostatic anomaly maps can be produced. These results are very consistent with previous results, but reveal more spatial resolution in the higher latitudes.
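
    The Bouguer anomaly mentioned above subtracts the gravitational attraction of topography from the observed gravity; in the simplest approximation the topography is treated as an infinite slab, giving a correction of 2*pi*G*rho*h. A sketch of that arithmetic (the crustal density value is an assumption for illustration):

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def bouguer_correction(h, rho=2900.0):
    """Slab approximation of topographic attraction, in m/s^2.
    h: topographic height above the reference radius (m)."""
    return 2.0 * np.pi * G * rho * h

# e.g. 2 km of crust at 2900 kg/m^3 contributes roughly 243 mGal:
print(bouguer_correction(2000.0) * 1e5, "mGal")
```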

  18. Climate reconstruction from pollen and δ13C records using inverse vegetation modeling - Implication for past and future climates

    NASA Astrophysics Data System (ADS)

    Hatté, C.; Rousseau, D.-D.; Guiot, J.

    2009-04-01

    An improved inverse vegetation model has been designed to better specify both temperature and precipitation estimates from vegetation descriptions. It is based on the BIOME4 vegetation model and uses both vegetation δ13C and biome as constraints. Previous inverse models based on only one of the two proxies were already improvements over standard reconstruction methods such as the modern analog since these did not take into account some external forcings, for example CO2 concentration. This new approach makes it possible to describe a potential "isotopic niche" defined by analogy with the "climatic niche" theory. Boreal and temperate biomes simulated by BIOME4 are considered in this study. We demonstrate the impact of CO2 concentration on biome existence domains by replacing a "most likely biome" with another with increased CO2 concentration. Additionally, the climate imprint on δ13C between and within biomes is shown: the colder the biome, the lighter its potential isotopic niche; and the higher the precipitation, the lighter the δ13C. For paleoclimate purposes, previous inverse models based on either biome or δ13C did not allow informative paleoclimatic reconstructions of both precipitation and temperature. Application of the new approach to the Eemian of La Grande Pile palynological and geochemical records reduces the range in precipitation values by more than 50% and the range in temperatures by about 15% compared to previous inverse modeling approaches. This shows evidence of climate instabilities during the Eemian period that can be correlated with independent continental and marine records.

  19. Climate reconstruction from pollen and δ13C using inverse vegetation modeling. Implication for past and future climates

    NASA Astrophysics Data System (ADS)

    Hatté, C.; Rousseau, D.-D.; Guiot, J.

    2009-01-01

    An improved inverse vegetation model has been designed to better specify both temperature and precipitation estimates from vegetation descriptions. It is based on the BIOME4 vegetation model and uses both vegetation δ13C and biome as constraints. Previous inverse models based on only one of the two proxies were already improvements over standard reconstruction methods such as the modern analog since these did not take into account some external forcings, for example CO2 concentration. This new approach makes it possible to describe a potential "isotopic niche" defined by analogy with the "climatic niche" theory. Boreal and temperate biomes simulated by BIOME4 are considered in this study. We demonstrate the impact of CO2 concentration on biome existence domains by replacing a "most likely biome" with another with increased CO2 concentration. Additionally, the climate imprint on δ13C between and within biomes is shown: the colder the biome, the lighter its potential isotopic niche; and the higher the precipitation, the lighter the δ13C. For paleoclimate purposes, previous inverse models based on either biome or δ13C did not allow informative paleoclimatic reconstructions of both precipitation and temperature. Application of the new approach to the Eemian of La Grande Pile palynological and geochemical records reduces the range in precipitation values by more than 50% and the range in temperatures by about 15% compared to previous inverse modeling approaches. This shows evidence of climate instabilities during the Eemian period that can be correlated with independent continental and marine records.
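
    Inverse vegetation modeling of this kind amounts to searching climate space for values whose forward-model output matches both proxies at once. In the sketch below, forward_toy is a stand-in for the role BIOME4 plays in the paper, and its coefficients are invented purely for illustration; only the search structure is the point.

```python
import numpy as np

def forward_toy(T, P):
    """Stand-in for a forward vegetation model: maps annual temperature (degC)
    and precipitation (mm) to a biome and a vegetation d13C value.
    Both the biome rule and the isotope response are hypothetical."""
    biome = "boreal" if T < 3.0 else "temperate"
    d13c = -22.0 - 0.02 * P + 0.15 * T
    return biome, d13c

def invert(obs_biome, obs_d13c, tol=0.5):
    """Grid-search climates whose simulated biome and d13C match the proxies."""
    hits = [(T, P)
            for T in np.arange(-10.0, 25.0, 0.5)
            for P in np.arange(100.0, 2000.0, 25.0)
            for b, d in [forward_toy(T, P)]
            if b == obs_biome and abs(d - obs_d13c) < tol]
    return np.array(hits)

climates = invert("boreal", -26.0)
print(climates.mean(axis=0))      # central climate estimate (T, P)
print(np.ptp(climates, axis=0))   # residual range of T and P after both constraints
```

    Using two constraints narrows the admissible region more than either proxy alone, which is the mechanism behind the reported 50% and 15% range reductions.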

  20. Models of performance of evolutionary program induction algorithms based on indicators of problem difficulty.

    PubMed

    Graff, Mario; Poli, Riccardo; Flores, Juan J

    2013-01-01

    Modeling the behavior of algorithms is the realm of evolutionary algorithm theory. From a practitioner's point of view, theory must provide some guidelines regarding which algorithm/parameters to use in order to solve a particular problem. Unfortunately, most theoretical models of evolutionary algorithms are difficult to apply to realistic situations. However, in recent work (Graff and Poli, 2008, 2010), where we developed a method to practically estimate the performance of evolutionary program-induction algorithms (EPAs), we started addressing this issue. The method was quite general; however, it suffered from some limitations: it required the identification of a set of reference problems, it required hand picking a distance measure in each particular domain, and the resulting models were opaque, typically being linear combinations of 100 features or more. In this paper, we propose a significant improvement of this technique that overcomes the three limitations of our previous method. We achieve this through the use of a novel set of features for assessing problem difficulty for EPAs which are very general, essentially based on the notion of finite difference. To show the capabilities of our technique and to compare it with our previous performance models, we create models for the same two important classes of problems (symbolic regression on rational functions and Boolean function induction) used in our previous work. We model a variety of EPAs. The comparison showed that for the majority of the algorithms and problem classes, the new method produced much simpler and more accurate models than before. To further illustrate the practicality of the technique and its generality (beyond EPAs), we have also used it to predict the performance of both autoregressive models and EPAs on the problem of wind speed forecasting, obtaining simpler and more accurate models that outperform in all cases our previous performance models.

  1. A processing architecture for associative short-term memory in electronic noses

    NASA Astrophysics Data System (ADS)

    Pioggia, G.; Ferro, M.; Di Francesco, F.; DeRossi, D.

    2006-11-01

    Electronic nose (e-nose) architectures usually consist of several modules that process various tasks such as control, data acquisition, data filtering, feature selection and pattern analysis. Heterogeneous techniques derived from chemometrics, neural networks, and fuzzy rules used to implement such tasks may lead to issues concerning module interconnection and cooperation. Moreover, a new learning phase is mandatory once new measurements have been added to the dataset, thus causing changes in the previously derived model. Consequently, if a loss in the previous learning occurs (catastrophic interference), real-time applications of e-noses are limited. To overcome these problems this paper presents an architecture for dynamic and efficient management of multi-transducer data processing techniques and for saving an associative short-term memory of the previously learned model. The architecture implements an artificial model of a hippocampus-based working memory, enabling the system to be ready for real-time applications. Starting from the base models available in the architecture core, dedicated models for neurons, maps and connections were tailored to an artificial olfactory system devoted to analysing olive oil. In order to verify the ability of the processing architecture in associative and short-term memory, a paired-associate learning test was applied. The avoidance of catastrophic interference was observed.

  2. Residual Risk Assessments

    EPA Science Inventory

    Each source category previously subjected to a technology-based standard will be examined to determine if health or ecological risks are significant enough to warrant further regulation. These assessments utilize existing models and data bases to examine the multi-media and multi-...

  3. Strained layer relaxation effect on current crowding and efficiency improvement of GaN based LED

    NASA Astrophysics Data System (ADS)

    Aurongzeb, Deeder

    2012-02-01

    The efficiency droop of GaN-based LEDs at high power and high temperature has been addressed by several groups in terms of carrier delocalization and the photon recycling effect (radiative recombination). We extend the previous droop models to optical loss parameters. We correlate strained-layer relaxation at high temperature and high current density with carrier delocalization. We propose a third-order model and show that Shockley-Read-Hall and Auger recombination effects are not enough to account for the efficiency loss. Several strained-layer modification schemes are proposed based on the model.

  4. Improved theory of time domain reflectometry with variable coaxial cable length for electrical conductivity measurements

    USDA-ARS?s Scientific Manuscript database

    Although empirical models have been developed previously, a mechanistic model is needed for estimating electrical conductivity (EC) using time domain reflectometry (TDR) with variable lengths of coaxial cable. The goals of this study are to: (1) derive a mechanistic model based on multisection tra...

  5. Prediction of pesticide acute toxicity using two-dimensional chemical descriptors and target species classification

    EPA Science Inventory

    Previous modelling of the median lethal dose (oral rat LD50) has indicated that local class-based models yield better correlations than global models. We evaluated the hypothesis that dividing the dataset by pesticidal mechanisms would improve prediction accuracy. A linear discri...

  6. Predicting Naming Latencies with an Analogical Model

    ERIC Educational Resources Information Center

    Chandler, Steve

    2008-01-01

    Skousen's (1989, Analogical modeling of language, Kluwer Academic Publishers, Dordrecht) Analogical Model (AM) predicts behavior such as spelling pronunciation by comparing the characteristics of a test item (a given input word) to those of individual exemplars in a data set of previously encountered items. While AM and other exemplar-based models…

  7. Wheat mill stream properties for discrete element method modeling

    USDA-ARS?s Scientific Manuscript database

    A discrete phase approach based on individual wheat kernel characteristics is needed to overcome the limitations of previous statistical models and accurately predict the milling behavior of wheat. As a first step to develop a discrete element method (DEM) model for the wheat milling process, this s...

  8. Genomic selection models double the accuracy of predicted breeding values for bacterial cold water disease resistance compared to a traditional pedigree-based model in rainbow trout aquaculture

    USDA-ARS?s Scientific Manuscript database

    Previously we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative enabling exploitation...
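
    Genomic selection of the sort described is often implemented as ridge-regression BLUP on SNP genotypes, shrinking all marker effects toward zero and summing them into a genomic estimated breeding value (GEBV). The sketch below uses simulated markers and phenotypes; the sizes, shrinkage value, and trait model are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 2000
X = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 allele counts
X -= X.mean(axis=0)                                  # center genotypes
true_u = rng.normal(0, 0.05, p)                      # simulated marker effects
y = X @ true_u + rng.normal(0, 1.0, n)               # phenotypes (e.g. BCWD resistance)

lam = 50.0   # shrinkage, conceptually sigma_e^2 / sigma_u^2
u_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ (y - y.mean()))
gebv = X @ u_hat                                     # genomic estimated breeding values
print(np.corrcoef(gebv, X @ true_u)[0, 1])           # accuracy vs. true genetic values
```

    Because every marker contributes, within-family variation is captured, which is the advantage over pedigree-based BLUP noted in the record.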

  9. Bromamine Decomposition Revisited: A Holistic Approach for Analyzing Acid and Base Catalysis Kinetics.

    PubMed

    Wahman, David G; Speitel, Gerald E; Katz, Lynn E

    2017-11-21

    Chloramine chemistry is complex, with a variety of reactions occurring in series and parallel and many that are acid or base catalyzed, resulting in numerous rate constants. Bromide presence increases system complexity even further with possible bromamine and bromochloramine formation. Therefore, techniques for parameter estimation must address this complexity through thoughtful experimental design and robust data analysis approaches. The current research outlines a rational basis for constrained data fitting using Brønsted theory, application of the microscopic reversibility principle to reversible acid or base catalyzed reactions, and characterization of the relative significance of parallel reactions using fictive product tracking. This holistic approach was used on a comprehensive and well-documented data set for bromamine decomposition, allowing new interpretations of existing data by revealing that a previously published reaction scheme was not robust; it was not able to describe monobromamine or dibromamine decay outside of the conditions for which it was calibrated. The current research's simplified model (3 reactions, 17 constants) represented the experimental data better than the previously published model (4 reactions, 28 constants). A final model evaluation was conducted based on representative drinking water conditions to determine a minimal model (3 reactions, 8 constants) applicable for drinking water conditions.
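
    A compact scheme like the one described is calibrated by integrating its rate equations and comparing simulated decay with measured bromamine concentrations. The sketch below integrates a generic two-reaction, second-order decomposition scheme; the reactions and rate constants are placeholders for illustration, and the paper's pH-dependent (Brønsted-constrained) catalysis terms are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder scheme: 2 NH2Br -> NHBr2 + NH3 ; NH2Br + NHBr2 -> products.
k1, k2 = 1.0e2, 1.0e3   # M^-1 s^-1, illustrative values only

def rhs(t, c):
    mono, di = c          # [NH2Br], [NHBr2]
    r1 = k1 * mono**2
    r2 = k2 * mono * di
    return [-2.0 * r1 - r2, r1 - r2]

sol = solve_ivp(rhs, (0.0, 3600.0), [1.0e-4, 0.0], rtol=1e-8, atol=1e-12)
print(sol.y[:, -1])       # bromamine concentrations after one hour
```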

  10. Propagation of a Gaussian-beam wave in general anisotropic turbulence

    NASA Astrophysics Data System (ADS)

    Andrews, L. C.; Phillips, R. L.; Crabbs, R.

    2014-10-01

    Mathematical models for a Gaussian-beam wave propagating through anisotropic non-Kolmogorov turbulence have been developed in the past by several researchers. In previous publications, the anisotropic spatial power spectrum model was based on the assumption that propagation was in the z direction with circular symmetry maintained in the orthogonal xy-plane throughout the path. In the present analysis, however, the anisotropic spectrum model is no longer based on a single anisotropy parameter; instead, two such parameters are introduced in the orthogonal xy-plane so that circular symmetry in this plane is no longer required. In addition, deviations from the 11/3 power-law behavior in the spectrum model are allowed by assuming power-law index variations 3 < α < 4. In the current study we develop theoretical models for beam spot size, spatial coherence, and scintillation index that are valid in weak irradiance fluctuation regimes as well as in deep turbulence, or strong irradiance fluctuation regimes. These new results are compared with those derived from the more specialized anisotropic spectrum used in previous analyses.

  11. A historical reconstruction of ships' fuel consumption and emissions

    NASA Astrophysics Data System (ADS)

    Endresen, Øyvind; Sørgård, Eirik; Behrens, Hanna Lee; Brett, Per Olaf; Isaksen, Ivar S. A.

    2007-06-01

    Shipping activity has increased considerably over the last century and currently represents a significant contribution to the global emissions of pollutants and greenhouse gases. Despite this, information about the historical development of fuel consumption and emissions is generally limited, with little data published pre-1950 and large deviations reported for estimates covering the last 3 decades. To better understand the historical development in ship emissions and the uncertainties associated with the estimates, we present fuel-based CO2 and SO2 emission inventories from 1925 up to 2002 and activity-based estimates from 1970 up to 2000. The global CO2 emissions from ships in 1925 have been estimated at 229 Tg (CO2), growing to about 634 Tg (CO2) in 2002. The corresponding SO2 emissions are about 2.5 Tg (SO2) and 8.5 Tg (SO2), respectively. Our activity-based estimates of fuel consumption from 1970 to 2000, covering all oceangoing civil ships above or equal to 100 gross tonnage (GT), are lower compared to previous activity-based studies. We have applied a more detailed model approach, which includes variation in the demand for sea transport, as well as operational and technological changes of the past. This study concludes that the main reason for the large deviations found in reported inventories is the applied number of days at sea. Moreover, our modeling indicates that the ship size and the degree of utilization of the fleet, combined with the shift to diesel engines, have been the major factors determining yearly fuel consumption. Interestingly, the model results from around 1973 suggest that the fleet growth is not necessarily followed by increased fuel consumption, as technical and operational characteristics have changed. Results from this study indicate that reported sales over the last 3 decades seem not to be significantly underreported as previous simplified activity-based studies have suggested. The results confirm our previously reported modeling estimates for year 2000. Previous activity-based studies have not considered ships less than 100 GT (e.g., today some 1.3 million fishing vessels), and we suggest that this fleet could account for an important part of the total fuel consumption (~10%).
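
    Activity-based fuel estimates of this kind rest on a simple product: installed engine power, engine load, hours at sea, and specific fuel oil consumption (SFOC). A sketch with illustrative numbers (the function name and default SFOC are assumptions, not the paper's values):

```python
def annual_fuel_tonnes(p_kw, load, days_at_sea, sfoc_g_per_kwh=210.0):
    """Fuel burned per year, in tonnes: power x load x hours x SFOC."""
    hours = days_at_sea * 24.0
    return p_kw * load * hours * sfoc_g_per_kwh / 1e6   # g -> tonnes

# e.g. an 8 MW engine at 70% load, 180 days at sea: ~5,080 tonnes/year
print(annual_fuel_tonnes(p_kw=8000, load=0.7, days_at_sea=180))
```

    The sensitivity of the product to days at sea is why the record identifies that term as the main source of deviations between inventories.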

  12. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, beyond what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert opinion and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight with equal weighting for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  13. Validity of thermally-driven small-scale ventilated filling box models

    NASA Astrophysics Data System (ADS)

    Partridge, Jamie L.; Linden, P. F.

    2013-11-01

    The majority of previous work studying building ventilation flows at laboratory scale has used saline plumes in water. The production of buoyancy forces using salinity variations in water allows dynamic similarity between the small-scale models and the full-scale flows. However, in some situations, such as including the effects of non-adiabatic boundaries, the use of a thermal plume is desirable. The efficacy of using temperature differences to produce buoyancy-driven flows representing natural ventilation of a building in a small-scale model is examined here, with comparisons between previous theoretical results and new, heat-based experiments.

  14. Mars Propellant Liquefaction and Storage Performance Modeling using Thermal Desktop with an Integrated Cryocooler Model

    NASA Technical Reports Server (NTRS)

    Desai, Pooja; Hauser, Dan; Sutherlin, Steven

    2017-01-01

    NASA's current Mars architectures assume the production and storage of 23 tons of liquid oxygen on the surface of Mars over a duration of 500+ days. In order to do this in a mass-efficient manner, an energy-efficient refrigeration system will be required. Based on previous analysis, NASA has decided to do all liquefaction in the propulsion vehicle storage tanks. In order to allow for transient Martian environmental effects, a propellant liquefaction and storage system for a Mars Ascent Vehicle (MAV) was modeled using Thermal Desktop. The model consisted of a propellant tank containing a broad area cooling loop heat exchanger integrated with a reverse turbo Brayton cryocooler. Cryocooler sizing and performance modeling was conducted using MAV diurnal heat loads and radiator rejection temperatures predicted from a previous thermal model of the MAV. A system was also sized and modeled using an alternative heat rejection system that relies on a forced convection heat exchanger. Cryocooler mass, input power, and heat rejection for both systems were estimated and compared against non-transient sizing estimates.

  15. Mesoscopic modeling of DNA denaturation rates: Sequence dependence and experimental comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlen, Oda, E-mail: oda.dahlen@ntnu.no; Erp, Titus S. van, E-mail: titus.van.erp@ntnu.no

    Using rare event simulation techniques, we calculated DNA denaturation rate constants for a range of sequences and temperatures for the Peyrard-Bishop-Dauxois (PBD) model with two different parameter sets. We studied a larger variety of sequences compared to previous studies that only consider DNA homopolymers and DNA sequences containing an equal amount of weak AT- and strong GC-base pairs. Our results show that, contrary to previous findings, an even distribution of the strong GC-base pairs does not always result in the fastest possible denaturation. In addition, we applied an adaptation of the PBD model to study hairpin denaturation for which experimental data are available. This is the first quantitative study in which dynamical results from the mesoscopic PBD model have been compared with experiments. Our results show that present parameterized models, although giving good results regarding thermodynamic properties, overestimate denaturation rates by orders of magnitude. We believe that our dynamical approach is, therefore, an important tool for verifying DNA models and for developing next generation models that have higher predictive power than present ones.

  16. Polyglutamine Disease Modeling: Epitope Based Screen for Homologous Recombination using CRISPR/Cas9 System.

    PubMed

    An, Mahru C; O'Brien, Robert N; Zhang, Ningzhe; Patra, Biranchi N; De La Cruz, Michael; Ray, Animesh; Ellerby, Lisa M

    2014-04-15

    We have previously reported the genetic correction of Huntington's disease (HD) patient-derived induced pluripotent stem cells using traditional homologous recombination (HR) approaches. To extend this work, we have adopted a CRISPR-based genome editing approach to improve the efficiency of recombination in order to generate allelic isogenic HD models in human cells. Incorporation of a rapid antibody-based screening approach to measure recombination provides a powerful method to determine relative efficiency of genome editing for modeling polyglutamine diseases or understanding factors that modulate CRISPR/Cas9 HR.

  17. Sustainable Street Vendors Spatial Zoning Models in Surakarta

    NASA Astrophysics Data System (ADS)

    Rahayu, M. J.; Putri, R. A.; Rini, E. F.

    2018-02-01

    Various strategies have been carried out by Surakarta's government to organize street vendors, but they have not achieved the goal of comprehensive street vendor arrangement. Street vendor arrangement strategies consist of physical (spatial) and non-physical measures. One of the physical arrangements is to define street vendor zoning. Based on the street vendors' characteristics, there are two alternative stabilization locations (stabilization being one kind of street vendor arrangement) that can be used. The aim of this study is to examine those alternative locations in order to set the street vendor zoning models. A quantitative method is used to formulate the spatial zoning model. The street vendor zoning models are formulated based on two approaches: the distance to the vendors' residences and to their previous trading locations. A geographic information system is used to map all street vendors' residences and trading locations by type of goods. Using the proximity point distance tool in ArcGIS, we measure how close residences and previous trading locations are to the alternative stabilization locations. The results show that street vendors mainly consider proximity to their homes when choosing a location to sell their goods. The study also presents street vendor zoning models based on the type of goods sold.
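
    The proximity screening described above (done in the paper with ArcGIS's point-distance tool) reduces to a nearest-neighbour query. A sketch with simulated coordinates; the counts, coordinates, and site positions are hypothetical:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
vendors = rng.random((200, 2)) * 1000.0                 # residence coords, m (simulated)
sites = np.array([[200.0, 300.0], [700.0, 650.0]])      # candidate stabilization sites

dist, idx = cKDTree(sites).query(vendors)  # nearest site and distance per vendor
share = np.bincount(idx, minlength=len(sites)) / len(vendors)
print(share)          # fraction of vendors closest to each candidate site
print(dist.mean())    # average home-to-site distance
```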

  18. Identifying prebariatric subtypes based on temperament traits, emotion dysregulation, and disinhibited eating: A latent profile analysis.

    PubMed

    Schäfer, Lisa; Hübner, Claudia; Carus, Thomas; Herbig, Beate; Seyfried, Florian; Kaiser, Stefan; Schütz, Tatjana; Dietrich, Arne; Hilbert, Anja

    2017-10-01

    The efficacy of bariatric surgery has been proven; however, a subset of patients fails to achieve expected long-term weight loss postoperatively. As differences in surgery outcome may be influenced by heterogeneous psychological profiles in prebariatric patients, previous subtyping models differentiated patients based on temperament traits. The objective of this study was to expand these models by additionally considering emotion dysregulation and disinhibited eating behaviors for subtyping, as these factors were associated with maladaptive eating behaviors and poor postbariatric weight loss outcome. Within a prospective multicenter registry, N = 370 prebariatric patients were examined using interview and self-report questionnaires. A latent profile analysis was performed to identify subtypes based on temperament traits, emotion dysregulation, and disinhibited eating behaviors. Five prebariatric subtypes were identified with specific profiles regarding self-control, emotion dysregulation, and disinhibited eating behaviors. Subtypes were associated with different levels of eating disorder psychopathology, depression, and quality of life. The expanded model increased variance explanation compared to temperament-based models. By adding emotion dysregulation and disinhibited eating behaviors to previous subtyping models, specific prebariatric subtypes emerged with distinct psychological deficit patterns. Future investigations should test the predictive value of these subtypes for postbariatric weight loss and health-related outcomes.

  19. Efficient reinforcement learning of a reservoir network model of parametric working memory achieved with a cluster population winner-take-all readout mechanism.

    PubMed

    Cheng, Zhenbo; Deng, Zhidong; Hu, Xiaolin; Zhang, Bo; Yang, Tianming

    2015-12-01

    The brain often has to make decisions based on information stored in working memory, but the neural circuitry underlying working memory is not fully understood. Many theoretical efforts have been focused on modeling the persistent delay period activity in the prefrontal areas that is believed to represent working memory. Recent experiments reveal that the delay period activity in the prefrontal cortex is neither static nor homogeneous as previously assumed. Models based on reservoir networks have been proposed to model such a dynamical activity pattern. The connections between neurons within a reservoir are random and do not require explicit tuning. Information storage does not depend on the stable states of the network. However, it is not clear how the encoded information can be retrieved for decision making with a biologically realistic algorithm. We therefore built a reservoir-based neural network to model the neuronal responses of the prefrontal cortex in a somatosensory delayed discrimination task. We first illustrate that the neurons in the reservoir exhibit a heterogeneous and dynamical delay period activity observed in previous experiments. Then we show that a cluster population circuit decodes the information from the reservoir with a winner-take-all mechanism and contributes to the decision making. Finally, we show that the model achieves a good performance rapidly by shaping only the readout with reinforcement learning. Our model reproduces important features of previous behavior and neurophysiology data. We illustrate for the first time how task-specific information stored in a reservoir network can be retrieved with a biologically plausible reinforcement learning training scheme.

  20. DOUBLE SHELL TANK (DST) HYDROXIDE DEPLETION MODEL FOR CARBON DIOXIDE ABSORPTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    OGDEN DM; KIRCH NW

    2007-10-31

    This document generates a supernatant hydroxide ion depletion model based on mechanistic principles. The carbon dioxide absorption mechanistic model is developed in this report. The report also benchmarks the model against historical tank supernatant hydroxide data and vapor space carbon dioxide data. A comparison of the newly generated mechanistic model with previously applied empirical hydroxide depletion equations is also performed.
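
    A schematic of the mechanistic idea: CO2 absorbed from the tank vapor space is neutralized in the supernatant, consuming two hydroxide ions per CO2 (CO2 + 2 OH- -> CO3^2- + H2O), at a rate set by gas-liquid mass transfer. The coefficients below are hypothetical placeholders, not the report's benchmarked values:

```python
# Euler integration of a schematic hydroxide-depletion balance:
#   d[OH-]/dt = -2 * kLa * [CO2]_eq
kLa = 1.0e-6        # lumped gas-liquid transfer coefficient, 1/s (hypothetical)
co2_eq = 1.2e-5     # equilibrium dissolved CO2 for the vapor space, mol/L (hypothetical)
oh = 0.1            # initial supernatant hydroxide, mol/L

dt = 3600.0                         # 1-hour steps
t_end = 10 * 365 * 24 * 3600.0      # ten years
t = 0.0
while t < t_end and oh > 0.0:
    oh -= 2.0 * kLa * co2_eq * dt   # two OH- consumed per absorbed CO2
    t += dt
print(f"OH- after 10 years: {oh:.4f} mol/L")
```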

  1. None of the above: A Bayesian account of the detection of novel categories.

    PubMed

    Navarro, Daniel J; Kemp, Charles

    2017-10-01

    Every time we encounter a new object, action, or event, there is some chance that we will need to assign it to a novel category. We describe and evaluate a class of probabilistic models that detect when an object belongs to a category that has not previously been encountered. The models incorporate a prior distribution that is influenced by the distribution of previous objects among categories, and we present 2 experiments that demonstrate that people are also sensitive to this distributional information. Two additional experiments confirm that distributional information is combined with similarity when both sources of information are available. We compare our approach to previous models of unsupervised categorization and to several heuristic-based models, and find that a hierarchical Bayesian approach provides the best account of our data.
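
    A distribution-sensitive prior of the kind described can be sketched with a Chinese-restaurant-process style rule: the prior probability of a novel category is alpha / (n + alpha), and each known category gets mass proportional to its count. The likelihood values below stand in for similarity; alpha and the numbers are illustrative, not the paper's fitted model.

```python
import numpy as np

def category_posterior(counts, likelihoods, novel_likelihood, alpha=1.0):
    """Posterior over known categories plus a 'none of the above' option.
    counts: items previously seen per category; alpha: concentration parameter."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    prior = np.append(counts, alpha) / (n + alpha)   # CRP-style prior
    like = np.append(likelihoods, novel_likelihood)  # similarity stand-ins
    post = prior * like
    return post / post.sum()

# With one dominant category in past experience, the novelty posterior is low
# even when similarity is uninformative (all likelihoods equal):
print(category_posterior([18, 1, 1], [0.2, 0.2, 0.2], 0.2))
```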

  2. An enhanced beam model for constrained layer damping and a parameter study of damping contribution

    NASA Astrophysics Data System (ADS)

    Xie, Zhengchao; Shepard, W. Steve, Jr.

    2009-01-01

    An enhanced analytical model is presented based on an extension of previous models for constrained layer damping (CLD) in beam-like structures. Most existing CLD models are based on the assumption that shear deformation in the core layer is the only source of damping in the structure. However, previous research has shown that other types of deformation in the core layer, such as deformations from longitudinal extension and transverse compression, can also be important. In the enhanced analytical model developed here, shear, extension, and compression deformations are all included. This model can be used to predict the natural frequencies and modal loss factors. The numerical study shows that compared to other models, this enhanced model is accurate in predicting the dynamic characteristics. As a result, the model can be accepted as a general computation model. With all three types of damping included and the formulation used here, it is possible to study the impact of the structure's geometry and boundary conditions on the relative contribution of each type of damping. To that end, the relative contributions in the frequency domain for a few sample cases are presented.

  3. Compact continuum brain model for human electroencephalogram

    NASA Astrophysics Data System (ADS)

    Kim, J. W.; Shin, H.-B.; Robinson, P. A.

    2007-12-01

    A low-dimensional, compact brain model has recently been developed based on a physiologically based mean-field continuum formulation of the electrical activity of the brain. The essential feature of the new compact model is a second-order time-delayed differential equation that has physiologically plausible terms, such as rapid corticocortical feedback and delayed feedback via extracortical pathways. Due to its compact form, the model facilitates insight into complex brain dynamics via standard linear and nonlinear techniques. The model successfully reproduces many features of previous models and experiments. For example, experimentally observed typical rhythms of electroencephalogram (EEG) signals are reproduced in a physiologically plausible parameter region. In the nonlinear regime, onsets of seizures, which often develop into limit cycles, are illustrated by modulating model parameters. It is also shown that hysteresis can occur when the system has multiple attractors. As a further illustration of this approach, power spectra of the model are fitted to those of sleep EEGs of two subjects (one with apnea, the other with narcolepsy). The model parameters obtained from the fittings show good matches with the previous literature. Our results suggest that the compact model can provide a theoretical basis for analyzing complex EEG signals.
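
    The core object here is a second-order differential equation with a delayed feedback term. The sketch below integrates a schematic equation of that general form (x'' + a x' + b x = c x(t - tau) + noise) with semi-implicit Euler; the coefficients are chosen only so the linear resonance sits near the alpha band and are not the model's fitted parameters.

```python
import numpy as np

dt, tau, T = 1e-3, 0.08, 10.0            # step, delay (s), duration (s)
a, b, c = 120.0, 3600.0, 2400.0          # sqrt(b)/2pi ~ 9.5 Hz resonance
steps, d = int(T / dt), int(tau / dt)

x = np.zeros(steps)
v = 0.0
rng = np.random.default_rng(0)
for i in range(1, steps):
    x_delayed = x[i - 1 - d] if i - 1 >= d else 0.0   # delayed feedback term
    acc = -a * v - b * x[i - 1] + c * x_delayed + 50.0 * rng.standard_normal()
    v += dt * acc                         # semi-implicit Euler update
    x[i] = x[i - 1] + dt * v

# the power spectrum of x shows a damped resonance near 10 Hz for these values
print(x.std())
```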

  4. A Review of the Korean Cultural Syndrome Hwa-Byung: Suggestions for Theory and Intervention

    PubMed Central

    Lee, Jieun; Wachholtz, Amy; Choi, Keum-Hyeong

    2014-01-01

    The purpose of this paper is to review Hwa-Byung, a cultural syndrome specific to Koreans and Korean immigrants. Hwa-Byung is a unique diagnosis and differs from other DSM disorders. However, Hwa-Byung is frequently comorbid with other DSM disorders such as anger disorders, generalized anxiety disorder, and major depressive disorder. There are several risk factors for Hwa-Byung, including psychosocial stress caused by marital conflict and conflict with in-laws. Previous interventions for the Hwa-Byung syndrome were based primarily on the medical model. Therefore, based on previous research, we present a new ecological model of Hwa-Byung. We also recommend some areas of future research as well as present some limitations of our ecological model. Finally, we discuss some treatment issues, particularly for Korean women in the United States. PMID:25408922

  5. Improving Students' Understanding of Molecular Structure through Broad-Based Use of Computer Models in the Undergraduate Organic Chemistry Lecture

    ERIC Educational Resources Information Center

    Springer, Michael T.

    2014-01-01

    Several articles suggest how to incorporate computer models into the organic chemistry laboratory, but relatively few papers discuss how to incorporate these models broadly into the organic chemistry lecture. Previous research has suggested that "manipulating" physical or computer models enhances student understanding; this study…

  6. Method and apparatus for modeling interactions

    DOEpatents

    Xavier, Patrick G.

    2002-01-01

    The present invention provides a method and apparatus for modeling interactions that overcomes drawbacks of previous approaches. The method of the present invention comprises representing two bodies undergoing translations by two swept-volume representations. Interactions such as nearest approach and collision can be modeled based on the swept-body representations. The present invention is more robust and allows faster modeling than previous methods.

  7. Pure Misallocation of ''0'' in Number Transcoding: A New Symptom of Right Cerebral Dysfunction

    ERIC Educational Resources Information Center

    Furumoto, Hideharu

    2006-01-01

    To account for the mechanism of number transcoding, many authors have proposed various models, for example, semantic-abstract model, lexical-semantic model, triple-code model, and so on. However, almost all of them are based on the symptoms of patients with left cerebral damage. Previously, I reported two Japanese patients with right posterior…

  8. A multiscale strength model for tantalum over an extended range of strain rates

    NASA Astrophysics Data System (ADS)

    Barton, N. R.; Rhee, M.

    2013-09-01

    A strength model for tantalum is developed and exercised across a range of conditions relevant to various types of experimental observations. The model is based on previous multiscale modeling work combined with experimental observations. As such, the model's parameterization includes a hybrid of quantities that arise directly from predictive sub-scale physics models and quantities that are adjusted to align the model with experimental observations. Given current computing and experimental limitations, the response regions for sub-scale physics simulations and detailed experimental observations have been largely disjoint. In formulating the new model and presenting results here, attention is paid to integrated experimental observations that probe strength response at the elevated strain rates where a previous version of the model has generally been successful in predicting experimental data [Barton et al., J. Appl. Phys. 109(7), 073501 (2011)].

  9. Simplified ISCCP cloud regimes for evaluating cloudiness in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin

    2017-01-01

    We take advantage of ISCCP simulator data available for many models that participated in CMIP5, in order to introduce a framework for comparing model cloud output with corresponding ISCCP observations based on the cloud regime (CR) concept. Simplified global CRs are employed derived from the co-variations of three variables, namely cloud optical thickness, cloud top pressure and cloud fraction (τ, p_c, CF). Following evaluation criteria established in a companion paper of ours (Jin et al. 2016), we assess model cloud simulation performance based on how well the simplified CRs are simulated in terms of similarity of centroids, global values and map correlations of relative-frequency-of-occurrence, and long-term total cloud amounts. Mirroring prior results, modeled clouds tend to be too optically thick and not as extensive as in observations. CRs with high-altitude clouds from storm activity are not as well simulated here compared to the previous study, but other regimes containing near-overcast low clouds show improvement. Models that have performed well in the companion paper against CRs defined by joint τ-p_c histograms distinguish themselves again here, but improvements for previously underperforming models are also seen. Averaging across models does not yield a drastically better picture, except for cloud geographical locations. Cloud evaluation with simplified regimes seems thus more forgiving than that using histogram-based CRs while still strict enough to reveal model weaknesses.
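
    Cloud regimes of this sort are typically obtained by k-means clustering of the (τ, p_c, CF) triplets, after which each grid box is assigned to its nearest centroid and relative frequencies of occurrence are tallied. A sketch with random stand-in data; real use would feed normalized ISCCP or simulator grid-box values, and the cluster count is an assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((5000, 3))   # columns: tau, p_c, CF (already normalized; stand-ins)

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
centroids = km.cluster_centers_                # the cloud-regime centroids
rfo = np.bincount(km.labels_) / len(X)         # relative frequency of occurrence
print(centroids.round(2))
print(rfo.round(3))
```

    Model evaluation then reduces to comparing centroid positions and RFO maps between the simulator output and the observations.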

  10. An Earth-based Model of Microgravity Pulmonary Physiology

    NASA Technical Reports Server (NTRS)

    Hirschl, Ronald B.; Bull, Joseph L.; Grotberg, James B.

    2004-01-01

    There are currently only two practical methods of achieving microgravity for experimentation: parabolic flight in an aircraft or space flight, both of which have limitations. As a result, there are many important aspects of pulmonary physiology that have not been investigated in microgravity. We propose to develop an earth-based animal model of microgravity by using liquid ventilation, which will allow us to fill the lungs with perfluorocarbon, and submerging the animal in water such that the density of the lungs is the same as the surrounding environment. By so doing, we will eliminate the effects of gravity on respiration. We will first validate the model by comparing measures of pulmonary mechanics to previous space flight and parabolic flight measurements. After validating the model, we will investigate the impact of microgravity on aspects of lung physiology that have not been previously measured. These will include pulmonary blood flow distribution, ventilation distribution, pulmonary capillary wedge pressure, ventilation-perfusion matching and pleural pressures and flows. We expect that this earth-based model of microgravity will enhance our knowledge and understanding of lung physiology in space which will increase in importance as space flights increase in time and distance.

  11. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    NASA Astrophysics Data System (ADS)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

    The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by variations in the ground topography. Most previous studies modeling the microwave backscattering signatures of forest areas have been carried out over relatively flat areas. Therefore, a new algorithm for forest height estimation over mountain forest areas using the general model-based decomposition (GMBD) for PolInSAR images is proposed. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, which previous model-based decompositions have not achieved. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan area, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.

  12. Random walks based multi-image segmentation: Quasiconvexity results and GPU-based solutions

    PubMed Central

    Collins, Maxwell D.; Xu, Jia; Grady, Leo; Singh, Vikas

    2012-01-01

    We recast the Cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits Cosegmentation constraints (which impose consistency between the extracted objects from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence; the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to GPU architecture. We provide a highly specialized CUDA library for Cosegmentation exploiting this special structure, and report experimental results showing these advantages. PMID:25278742
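
    At its core, the Random Walker reduces to one sparse linear solve per label: partition the graph Laplacian into seeded and unseeded blocks, then solve for the unseeded probabilities. A minimal 1-D sketch on a chain with unit edge weights is below; on images the same system is assembled from pixel-similarity weights, and, as the record notes, these sparse operations map naturally to GPU kernels.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 10                                        # tiny 1-D "image": a chain of nodes
deg = np.r_[1.0, 2.0 * np.ones(n - 2), 1.0]   # node degrees on the chain
off = -np.ones(n - 1)                         # unit edge weights
L = sp.diags([off, deg, off], offsets=[-1, 0, 1], format="csr")

seeds = {0: 1.0, n - 1: 0.0}                  # foreground / background seeds
free = [i for i in range(n) if i not in seeds]
Lu = L[free][:, free]                         # unseeded block of the Laplacian
B = L[free][:, list(seeds)]                   # coupling to the seeded nodes
x = spsolve(Lu.tocsc(), -B @ np.array(list(seeds.values())))
print(x)  # foreground probability at each unseeded node (a linear ramp here)
```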

  13. A fuzzy case based reasoning tool for model based approach to rocket engine health monitoring

    NASA Technical Reports Server (NTRS)

    Krovvidy, Srinivas; Nolan, Adam; Hu, Yong-Lin; Wee, William G.

    1992-01-01

    We develop a fuzzy case-based reasoner that builds a case representation of several previously detected anomalies, together with fuzzy-set-based case retrieval methods that index a relevant case when a new problem (case) is presented. The choice of fuzzy sets is justified by the uncertainty in the data. The new problem can be solved using knowledge of the model along with the old cases. The system can then generalize the knowledge from previous cases and use this generalization to refine the existing model definition, which in turn can help detect failures using the model-based algorithms.

  14. VHDL-AMS modelling and simulation of a planar electrostatic micromotor

    NASA Astrophysics Data System (ADS)

    Endemaño, A.; Fourniols, J. Y.; Camon, H.; Marchese, A.; Muratet, S.; Bony, F.; Dunnigan, M.; Desmulliez, M. P. Y.; Overton, G.

    2003-09-01

    System level simulation results of a planar electrostatic micromotor, based on analytical models of the static and dynamic torque behaviours, are presented. A planar variable capacitance (VC) electrostatic micromotor designed, fabricated and tested at LAAS (Toulouse) in 1995 is simulated using the high level language VHDL-AMS (VHSIC (very high speed integrated circuits) hardware description language-analog mixed signal). The analytical torque model is obtained by first calculating the overlaps and capacitances between different electrodes based on a conformal mapping transformation. Capacitance values on the order of 10^-16 F and torque values on the order of 10^-11 N m have been calculated, in agreement with previous measurements and simulations of this type of motor. A dynamic model has been developed for the motor by calculating the inertia coefficient and estimating the friction coefficient based on values calculated previously for other similar devices. Starting voltage results obtained from experimental measurement are in good agreement with our proposed simulation model. Simulation results of starting voltage values, step response, switching response and continuous operation of the micromotor, based on the dynamic model of the torque, are also presented. Four VHDL-AMS blocks were created, validated and simulated for power supply, excitation control, micromotor torque creation and micromotor dynamics. These blocks can be considered as the initial phase towards the creation of intellectual property (IP) blocks for microsystems in general and electrostatic micromotors in particular.

  15. Machine Learning Model Analysis and Data Visualization with Small Molecules Tested in a Mouse Model of Mycobacterium tuberculosis Infection (2014–2015)

    PubMed Central

    2016-01-01

    The renewed urgency to develop new treatments for Mycobacterium tuberculosis (Mtb) infection has resulted in large-scale phenotypic screening and thousands of new compounds active in vitro. The next challenge is to identify candidates to pursue in a mouse in vivo efficacy model as a step to predicting clinical efficacy. We previously analyzed over 70 years of this mouse in vivo efficacy data, which we used to generate and validate machine learning models. Curation of 60 additional small molecules with in vivo data published in 2014 and 2015 was undertaken to further test these models; this represents a much larger test set than was used for the previous models. Several computational approaches have now been applied to analyze these molecules and compare their molecular properties beyond those attempted previously. Our previous machine learning models have been updated, and a novel aspect has been added in the form of a mouse liver microsomal half-life (MLM t1/2) model and in vitro-based Mtb models incorporating cytotoxicity data, which were used to predict in vivo activity for comparison. Our best Mtb in vivo models possess fivefold cross-validation ROC values > 0.7, sensitivity > 80%, and concordance > 60%, while the best specificity value is > 40%. Use of an MLM t1/2 Bayesian model affords comparable results for scoring the 60 compounds tested. Combining MLM stability and in vitro Mtb models in a novel consensus workflow in the best cases has a positive predictive value (hit rate) > 77%. Our results indicate that Bayesian models constructed with literature in vivo Mtb data generated by different laboratories in various mouse models can have predictive value and may be used alongside MLM t1/2 and in vitro-based Mtb models to assist in selecting antitubercular compounds with desirable in vivo efficacy. We demonstrate for the first time that consensus models of any kind can be used to predict in vivo activity for Mtb. In addition, we describe a new clustering method for data visualization and apply this to the in vivo training and test data, ultimately making the method accessible in a mobile app. PMID:27335215

  16. Improving the efficiency of a user-driven learning system with reconfigurable hardware. Application to DNA splicing.

    PubMed

    Lemoine, E; Merceron, D; Sallantin, J; Nguifo, E M

    1999-01-01

    This paper describes a new approach to problem solving that splits a problem's component parts between software and hardware. Our main idea arises from the combination of two previously published works. The first proposed a conceptual environment for concept modelling in which the machine and the human expert interact. The second reported an algorithm based on a reconfigurable hardware system that outperforms previously published genetic database scanning hardware and algorithms. Here we show how efficient the interaction between the machine and the expert becomes when the concept modelling is based on a reconfigurable hardware system; their cooperation is achieved at real-time interaction speed. The designed system has been partially applied to the recognition of primate splice junction sites in genetic sequences.

  17. Struggling To Understand Abstract Science Topics: A Roundhouse Diagram-Based Study.

    ERIC Educational Resources Information Center

    Ward, Robin E.; Wandersee, James H.

    2002-01-01

    Explores the effects of Roundhouse diagram construction on a previously low-performing middle school science student's struggles to understand abstract science concepts and principles. Based on a metacognition-based visual learning model, aims to elucidate the process by which Roundhouse diagramming helps learners bootstrap their current…

  18. A novel iterative mixed model to remap three complex orthopedic traits in dogs

    PubMed Central

    Huang, Meng; Hayward, Jessica J.; Corey, Elizabeth; Garrison, Susan J.; Wagner, Gabriela R.; Krotscheck, Ursula; Hayashi, Kei; Schweitzer, Peter A.; Lust, George; Boyko, Adam R.; Todhunter, Rory J.

    2017-01-01

    Hip dysplasia (HD), elbow dysplasia (ED), and rupture of the cranial (anterior) cruciate ligament (RCCL) are the most common complex orthopedic traits of dogs, and all result in debilitating osteoarthritis. We reanalyzed previously reported data: the Norberg angle (a quantitative measure of HD) in 921 dogs, ED in 113 cases and 633 controls, and RCCL in 271 cases and 399 controls, and their genotypes at ~185,000 single nucleotide polymorphisms. A novel fixed and random model with a circulating probability unification (FarmCPU) function, with marker-based principal components and a kinship matrix to correct for population stratification, was used. A Bonferroni correction at p < 0.01 gave a genome-wide significance threshold of P < 6.96 × 10^-8. Six loci were identified: three for HD and three for RCCL. An associated locus at CFA28:34,369,342 for HD was described previously in the same dogs using a conventional mixed model. No loci had been identified for RCCL in the previous report, and the two loci reported there for ED did not reach genome-wide significance under the FarmCPU model. These results were supported by simulation, which demonstrated that FarmCPU held no power advantage over the linear mixed model for the ED sample but provided additional power for the HD and RCCL samples. Candidate genes for HD and RCCL are discussed. When using the FarmCPU software, we recommend a resampling test, that a positive control be used to determine the optimum pseudo quantitative trait nucleotide-based covariate structure of the model, and that a negative control be used consisting of permutation testing and the identical resampling test applied to the non-permuted phenotypes. PMID:28614352

  19. Ghosts in the Machine II: Neural Correlates of Memory Interference from the Previous Trial.

    PubMed

    Papadimitriou, Charalampos; White, Robert L; Snyder, Lawrence H

    2017-04-01

    Previous memoranda interfere with working memory. For example, spatial memories are biased toward locations memorized on the previous trial. We predicted, based on attractor network models of memory, that activity in the frontal eye fields (FEFs) encoding a previous target location can persist into the subsequent trial and that this ghost will then bias the readout of the current target. Contrary to this prediction, we find that FEF memory representations appear biased away from (not toward) the previous target location. The behavioral and neural data can be reconciled by a model in which receptive fields of memory neurons converge toward remembered locations, much as receptive fields converge toward attended locations. Convergence increases the resources available to encode the relevant memoranda and decreases overall error in the network, but the residual convergence from the previous trial can give rise to an attractive behavioral bias on the next trial.

  20. Predicting chemical bioavailability using microarray gene expression data and regression modeling: A tale of three explosive compounds.

    PubMed

    Gong, Ping; Nan, Xiaofei; Barker, Natalie D; Boyd, Robert E; Chen, Yixin; Wilkins, Dawn E; Johnson, David R; Suedel, Burton C; Perkins, Edward J

    2016-03-08

    Chemical bioavailability is an important dose metric in environmental risk assessment. Although many approaches have been used to evaluate bioavailability, no single approach is free from limitations. Previously, we developed a new genomics-based approach that integrated microarray technology and regression modeling for predicting the bioavailability (tissue residue) of explosive compounds in exposed earthworms. In the present study, we further compared 18 different regression models and performed variable selection simultaneously with parameter estimation. This refined approach was applied to both previously collected and newly acquired earthworm microarray gene expression datasets for three explosive compounds. Our results demonstrate that a prediction accuracy of R^2 = 0.71-0.82 was achievable at relatively low model complexity, with as few as 3-10 predictor genes per model. These results are much more encouraging than our previous ones. This study demonstrates that our approach is promising for bioavailability measurement, which warrants further studies of mixed contamination scenarios in field settings.
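
    As an illustration of performing variable selection simultaneously with parameter estimation, the sketch below fits an L1-penalized regression that keeps only a few predictor genes. The data are synthetic and this particular model family is illustrative, not necessarily one of the 18 models the study compared.

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        n_samples, n_genes = 60, 2000
        X = rng.normal(size=(n_samples, n_genes))        # synthetic expression matrix
        informative = [10, 42, 77]                       # hypothetical informative genes
        y = X[:, informative] @ np.array([1.5, -2.0, 0.8]) \
            + rng.normal(0.0, 0.5, n_samples)            # synthetic tissue residue

        model = LassoCV(cv=5).fit(X, y)                  # penalty chosen by cross-validation
        selected = np.flatnonzero(model.coef_)           # only a few genes survive the L1 penalty
        print(len(selected), round(model.score(X, y), 2))  # model size and R^2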

  1. Anisotropic models of the upper mantle

    NASA Technical Reports Server (NTRS)

    Regan, J.; Anderson, D. L.

    1983-01-01

    Long period Rayleigh wave and Love wave dispersion data, particularly for oceanic areas, were not simultaneously satisfied by an isotropic structure. Available phase and group velocity data are inverted by a procedure which includes the effects of transverse anisotropy, anelastic dispersion, sphericity, and gravity. The resulting models, for the average Earth, the average ocean, and oceanic regions divided according to the age of the ocean floor, are quite different from previous results which ignore the above effects. The models show a low velocity zone with age-dependent anisotropy and velocities higher than those derived in previous surface wave studies. A correspondence between the anisotropy variation with age and a physical model based on flow-aligned olivine is suggested.

  2. Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, N. C.; Taylor, P. C.

    2014-12-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables, e.g., outgoing longwave radiation and surface temperature. Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. The weighted ensemble for this metric indicates lower simulated precipitation (up to 0.7 mm/day) in tropical regions than the unweighted ensemble: since CMIP5 models have been shown to overproduce precipitation, this result could indicate that the metric is effective in identifying models which simulate more realistic precipitation. Ultimately, the goal of the framework is to identify performance metrics that inform better methods for ensemble averaging of models and create better climate predictions.
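
    A minimal sketch of the unequal-weighting step itself, assuming a Gaussian skill-weighting function (one common choice; the framework's point is precisely to test which metrics deserve this treatment):

        import numpy as np

        def weighted_ensemble_mean(model_fields, model_metric, obs_metric, tau=1.0):
            # model_fields: (n_models, ...) projections; model_metric: (n_models,)
            # metric value per model; obs_metric: observed value (e.g., from CERES).
            err = np.abs(model_metric - obs_metric)
            w = np.exp(-(err / tau) ** 2)   # reward models that match the observation
            w /= w.sum()
            return np.tensordot(w, model_fields, axes=(0, 0))  # weighted model average

        # Example: three toy models, each with a 2x2 precipitation field.
        fields = np.arange(12, dtype=float).reshape(3, 2, 2)
        print(weighted_ensemble_mean(fields, np.array([0.9, 1.4, 2.0]), obs_metric=1.0))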

  3. SEASONAL NH 3 EMISSIONS FOR THE CONTINENTAL UNITED STATES: INVERSE MODEL ESTIMATION AND EVALUATION

    EPA Science Inventory

    An inverse modeling study has been conducted here to evaluate a prior estimate of seasonal ammonia (NH3) emissions. The prior estimates were based on a previous inverse modeling study and two other bottom-up inventory studies. The results suggest that the prior estim...

  4. The Relationship between Reciprocity and the Emotional and Behavioural Responses of Staff

    ERIC Educational Resources Information Center

    Thomas, Cathryn; Rose, John

    2010-01-01

    Background: The current study examines a model relating to the concept of reciprocity and burnout in staff, incorporating previous research findings based upon Weiner's (1980, 1986) cognitive-emotional model linking emotions, optimism and helping behaviour, with the aim of testing the model. Materials: Staff working in community homes within the…

  5. The Sherborne Developmental Movement (SDM) Teaching Model for Pre-Service Teachers

    ERIC Educational Resources Information Center

    Hen, Meirav; Walter, Ofra

    2012-01-01

    Previously, the Sherborne Developmental Movement (SDM) has been found to contribute to the development of emotional competencies in higher education. This study presents and evaluates a teaching model based on SDM for the development of emotional competencies in teacher education. The study examined the contributions of this model to the increase…

  6. The study of human venous system dynamics using hybrid computer modeling

    NASA Technical Reports Server (NTRS)

    Snyder, M. F.; Rideout, V. C.

    1972-01-01

    A computer-based model of the cardiovascular system was created, with emphasis on the systemic venous system. Several physiological effects were examined: heart rate, tilting, changes in respiration, and leg muscle contractions. The results from the model showed close correlation with findings previously reported in the literature.

  7. Mars Propellant Liquefaction Modeling in Thermal Desktop

    NASA Technical Reports Server (NTRS)

    Desai, Pooja; Hauser, Dan; Sutherlin, Steven

    2017-01-01

    NASA's current Mars architectures assume the production and storage of 23 tons of liquid oxygen on the surface of Mars over a duration of 500+ days. In order to do this in a mass-efficient manner, an energy-efficient refrigeration system will be required. Based on previous analysis, NASA has decided to perform all liquefaction in the propulsion vehicle storage tanks. To account for transient Martian environmental effects, a propellant liquefaction and storage system for a Mars Ascent Vehicle (MAV) was modeled using Thermal Desktop. The model consisted of a propellant tank containing a broad-area cooling loop heat exchanger integrated with a reverse turbo-Brayton cryocooler. Cryocooler sizing and performance modeling was conducted using MAV diurnal heat loads and radiator rejection temperatures predicted from a previous thermal model of the MAV. A system was also sized and modeled using an alternative heat rejection system that relies on a forced-convection heat exchanger. Cryocooler mass, input power, and heat rejection for both systems were estimated and compared against sizing based on non-transient estimates.

  8. A novel life cycle modeling system for Ebola virus shows a genome length-dependent role of VP24 in virus infectivity.

    PubMed

    Watt, Ari; Moukambi, Felicien; Banadyga, Logan; Groseth, Allison; Callison, Julie; Herwig, Astrid; Ebihara, Hideki; Feldmann, Heinz; Hoenen, Thomas

    2014-09-01

    Work with infectious Ebola viruses is restricted to biosafety level 4 (BSL4) laboratories, presenting a significant barrier for studying these viruses. Life cycle modeling systems, including minigenome systems and transcription- and replication-competent virus-like particle (trVLP) systems, allow modeling of the virus life cycle under BSL2 conditions; however, all current systems model only certain aspects of the virus life cycle, rely on plasmid-based viral protein expression, and have been used to model only single infectious cycles. We have developed a novel life cycle modeling system allowing continuous passaging of infectious trVLPs containing a tetracistronic minigenome that encodes a reporter and the viral proteins VP40, VP24, and GP1,2. This system is ideally suited for studying morphogenesis, budding, and entry, in addition to genome replication and transcription. Importantly, the specific infectivity of trVLPs in this system was ∼ 500-fold higher than that in previous systems. Using this system for functional studies of VP24, we showed that, contrary to previous reports, VP24 only very modestly inhibits genome replication and transcription when expressed in a regulated fashion, which we confirmed using infectious Ebola viruses. Interestingly, we also discovered a genome length-dependent effect of VP24 on particle infectivity, which was previously undetected due to the short length of monocistronic minigenomes and which is due at least partially to a previously unknown function of VP24 in RNA packaging. Based on our findings, we propose a model for the function of VP24 that reconciles all currently available data regarding the role of VP24 in nucleocapsid assembly as well as genome replication and transcription. Ebola viruses cause severe hemorrhagic fevers in humans, with no countermeasures currently being available, and must be studied in maximum-containment laboratories. Only a few of these laboratories exist worldwide, limiting our ability to study Ebola viruses and develop countermeasures. Here we report the development of a novel reverse genetics-based system that allows the study of Ebola viruses without maximum-containment laboratories. We used this system to investigate the Ebola virus protein VP24, showing that, contrary to previous reports, it only modestly inhibits virus genome replication and transcription but is important for packaging of genomes into virus particles, which constitutes a previously unknown function of VP24 and a potential antiviral target. We further propose a comprehensive model for the function of VP24 in nucleocapsid assembly. Importantly, on the basis of this approach, it should easily be possible to develop similar experimental systems for other viruses that are currently restricted to maximum-containment laboratories.

  9. Atmospheric Dispersion Modeling of the February 2014 Waste Isolation Pilot Plant Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nasstrom, John; Piggott, Tom; Simpson, Matthew

    2015-07-22

    This report presents the results of a simulation of the atmospheric dispersion and deposition of radioactivity released from the Waste Isolation Pilot Plant (WIPP) site in New Mexico in February 2014. These simulations were made by the National Atmospheric Release Advisory Center (NARAC) at Lawrence Livermore National Laboratory (LLNL), and supersede NARAC simulation results published in a previous WIPP report (WIPP, 2014). The results presented in this report use additional, more detailed data from WIPP on the specific radionuclides released, radioactivity release amounts, and release times. Compared to the previous NARAC simulations, the new simulation results in this report are based on more detailed modeling of the winds, turbulence, and particle dry deposition. In addition, the initial plume rise from the exhaust vent was considered in the new simulations, but not in the previous NARAC simulations. The new model results show some small differences compared to previous results, but do not change the conclusions in the WIPP (2014) report. Presented are the data and assumptions used in these model simulations, as well as the model-predicted dose and deposition on and near the WIPP site. A comparison of predicted and measured radionuclide-specific air concentrations is also presented.

  10. Replicating Health Economic Models: Firm Foundations or a House of Cards?

    PubMed

    Bermejo, Inigo; Tappenden, Paul; Youn, Ji-Hee

    2017-11-01

    Health economic evaluation is a framework for the comparative analysis of the incremental health gains and costs associated with competing decision alternatives. The process of developing health economic models is usually complex, financially expensive and time-consuming. For these reasons, model development is sometimes based on previous model-based analyses; this endeavour is usually referred to as model replication. Such model replication activity may involve the comprehensive reproduction of an existing model or 'borrowing' all or part of a previously developed model structure. Generally speaking, the replication of an existing model may require substantially less effort than developing a new de novo model by bypassing, or undertaking in only a perfunctory manner, certain aspects of model development such as the development of a complete conceptual model and/or comprehensive literature searching for model parameters. A further motivation for model replication may be to draw on the credibility or prestige of previous analyses that have been published and/or used to inform decision making. The acceptability and appropriateness of replicating models depends on the decision-making context: there exists a trade-off between the 'savings' afforded by model replication and the potential 'costs' associated with reduced model credibility due to the omission of certain stages of model development. This paper provides an overview of the different levels of, and motivations for, replicating health economic models, and discusses the advantages, disadvantages and caveats associated with this type of modelling activity. Irrespective of whether replicated models should be considered appropriate or not, complete replicability is generally accepted as a desirable property of health economic models, as reflected in critical appraisal checklists and good practice guidelines. To this end, the feasibility of comprehensive model replication is explored empirically across a small number of recent case studies. Recommendations are put forward for improving reporting standards to enhance comprehensive model replicability.

  11. Model-based choices involve prospective neural activity

    PubMed Central

    Doll, Bradley B.; Duncan, Katherine D.; Simon, Dylan A.; Shohamy, Daphna; Daw, Nathaniel D.

    2015-01-01

    Decisions may arise via “model-free” repetition of previously reinforced actions, or by “model-based” evaluation, which is widely thought to follow from prospective anticipation of action consequences using a learned map or model. While choices and neural correlates of decision variables sometimes reflect knowledge of their consequences, it remains unclear whether this actually arises from prospective evaluation. Using functional MRI and a sequential reward-learning task in which paths contained decodable object categories, we found that humans’ model-based choices were associated with neural signatures of future paths observed at decision time, suggesting a prospective mechanism for choice. Prospection also covaried with the degree of model-based influences on neural correlates of decision variables, and was inversely related to prediction error signals thought to underlie model-free learning. These results dissociate separate mechanisms underlying model-based and model-free evaluation and support the hypothesis that model-based influences on choices and neural decision variables result from prospection. PMID:25799041

  12. Crop weather models of barley and spring wheat yield for agrophysical units in North Dakota

    NASA Technical Reports Server (NTRS)

    Leduc, S. (Principal Investigator)

    1982-01-01

    Models based on multiple regression were developed to estimate barley yield and spring wheat yield from weather data for Agrophysical Units (APU) in North Dakota. The predictor variables are derived from monthly average temperature and monthly total precipitation data at meteorological stations in the cooperative network. The models are similar in form to the previous models developed for Crop Reporting Districts (CRD). The trends and derived variables were the same, and the approach to selecting the significant predictors was similar to that used in developing the CRD models. The APU models show slight improvements in some of the model statistics, e.g., explained variation. These models are to be independently evaluated and compared to the previously evaluated CRD models. The comparison will indicate the preferred model area for this application, i.e., APU or CRD.
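
    The form of such a regression model is easy to sketch. The predictors below (a technology trend plus monthly weather departures) and all numbers are synthetic stand-ins for the APU/CRD predictor variables, which the record does not list in detail.

        import numpy as np

        years = np.arange(1950, 1981)
        temp_jul = np.random.normal(21.0, 1.5, years.size)   # July mean temperature (C)
        prcp_jun = np.random.gamma(4.0, 15.0, years.size)    # June total precipitation (mm)
        yield_obs = 18 + 0.1*(years - 1950) - 0.8*(temp_jul - 21) + 0.02*prcp_jun \
                    + np.random.normal(0, 1, years.size)     # synthetic barley yield (q/ha)

        X = np.column_stack([np.ones(years.size),
                             years - years[0],               # technology trend
                             temp_jul - temp_jul.mean(),     # temperature departure
                             prcp_jun - prcp_jun.mean()])    # precipitation departure
        beta, *_ = np.linalg.lstsq(X, yield_obs, rcond=None)
        r2 = 1 - np.sum((yield_obs - X @ beta)**2) / np.sum((yield_obs - yield_obs.mean())**2)
        print(beta, r2)   # explained variation, one statistic used to compare APU vs CRD models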

  13. Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Malsbury, T.; Atencio, A., Jr.

    1992-01-01

    A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling qualities sensitivity function to a previously developed model-based assessment criterion allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to analytically assess the effects of modifying simulator characteristics on simulator fidelity.

  14. A comparative study of theoretical graph models for characterizing structural networks of human brain.

    PubMed

    Li, Xiaojin; Hu, Xintao; Jin, Changfeng; Han, Junwei; Liu, Tianming; Guo, Lei; Hao, Wei; Li, Lingjiang

    2013-01-01

    Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, an important issue has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of the human brain? In this paper, we perform a comparative study to address this question. First, large-scale cortical regions of interest (ROIs) are localized using the recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL), addressing the limitations in the identification of brain network ROIs in previous studies. We then construct structural brain networks based on diffusion tensor imaging (DTI) data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using state-of-the-art graph analysis algorithms and tools and are compared with seven popular theoretical graph models. In addition, we compare the topological properties of the two graph models, the stickiness-index-based model (STICKY) and the scale-free gene duplication model (SF-GD), that show the highest similarity to the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that, among the seven theoretical graph models compared in this study, the STICKY and SF-GD models perform best in characterizing the structural human brain network.
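
    The comparison the study performs can be sketched with networkx: compute the same global properties for the measured network and for a candidate generative model. Barabási-Albert is used here purely for illustration (the STICKY and SF-GD models are not library built-ins), and the edge-list path is hypothetical.

        import networkx as nx

        brain = nx.read_edgelist("brain_rois.edgelist")   # hypothetical DTI-derived network
        model = nx.barabasi_albert_graph(brain.number_of_nodes(), 3)  # candidate model graph

        for name, g in [("brain", brain), ("model", model)]:
            # Global properties; average path length assumes the graph is connected.
            print(name,
                  nx.average_clustering(g),
                  nx.average_shortest_path_length(g))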

  15. Theory of planned behavior-based models for breastfeeding duration among Hong Kong mothers.

    PubMed

    Dodgson, Joan E; Henly, Susan J; Duckett, Laura; Tarrant, Marie

    2003-01-01

    The theory of planned behavior (TPB) has been used to explain breastfeeding behaviors in Western cultures. Theoretically based investigations in other groups are sparse. To evaluate cross-cultural application of TPB-based models for breastfeeding duration among new mothers in Hong Kong. First-time breastfeeding mothers (N = 209) with healthy newborns provided self-reports of TPB predictor variables during postpartum hospitalization and information about breastfeeding experiences at 1, 3, 6, 9, and 12 months postdelivery or until they weaned. Three predictive models were proposed: (a) a strict interpretation of the TPB with two added proximal predictors of breastfeeding duration; (b) a replication with modification of the TPB-based model for more fully employed breastfeeding mothers from a previous study (Duckett et al., 1998); and (c) a model that posited perceived control (PC) as a mediating factor linking TPB motivational variables for breastfeeding with breastfeeding intentions and behavior. LISREL was used for the structural equation modeling analyses. Explained variance in PC and duration was high in all models. Overall fit of the strict TPB model was poor (GOFI = 0.85). The TPB for breastfeeding employed women and the PC-mediated models fit equally well (GOFI = 0.94; 0.95) and residuals were small (RMSR = 0.07). All hypothesized paths in the PC-mediated model were significant (p < .05); explained variance was 0.40 for perceived control and 0.36 for breastfeeding duration. Models were interpreted in light of the TPB, previous findings, the social context for breastfeeding in Hong Kong, and statistical model-building. Cross-cultural measurement issues and the need for prospective designs are continuing challenges in breastfeeding research.

  16. AgroEcoSystem-Watershed (AgES-W) model evaluation for streamflow and nitrogen/sediment dynamics on a Midwest agricultural watershed

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality simulation components under the Object Modeling System Version 3 (OMS3). The AgES-W model was previously evaluated for streamflow and recently has been enhanced with the ad...

  17. A Comprehensive Model for Developing and Evaluating Study Abroad Programs in Counselor Education

    ERIC Educational Resources Information Center

    Santos, Syntia Dinora

    2014-01-01

    This paper introduces a model to guide the process of designing and evaluating study abroad programs, addressing particular stages and influential factors. The main purpose of the model is to serve as a basic structure for those who want to develop their own program or evaluate previous cultural immersion experiences. The model is based on the…

  18. A New Model for Simulating Gas Metal Arc Welding based on Phase Field Model

    NASA Astrophysics Data System (ADS)

    Jiang, Yongyue; Li, Li; Zhao, Zhijiang

    2017-11-01

    Many physical processes occur in gas metal arc welding (GMAW), such as metal melting, multiphase fluid flow, heat and mass transfer, and the thermocapillary (Marangoni) effect, so the weld should be treated as a mixture system. In this paper, based on previous work, we propose a new model to simulate GMAW comprising the Navier-Stokes equations, a phase field model, and an energy equation. Unlike most previous work, we incorporate the thermocapillary effect into the phase field model through a mixture energy, which differs from the volume-of-fluid (VOF) method widely used for GMAW before. We also consider gravity, electromagnetic force, surface tension, buoyancy, and arc pressure in the momentum equation. Spray transfer, especially projected transfer, in GMAW is computed as a numerical example with a continuous finite element method and a modified midpoint scheme. A pulsed current is used as the welding current in the numerical example, and the simulated metal transfer fits GMAW theory well. From comparison with high-speed photography data and a VOF model, the accuracy and stability of the model and scheme are readily validated, and the new model achieves higher precision.

  19. CABS-fold: Server for the de novo and consensus-based prediction of protein structure.

    PubMed

    Blaszczyk, Maciej; Jamroz, Michal; Kmiecik, Sebastian; Kolinski, Andrzej

    2013-07-01

    The CABS-fold web server provides tools for protein structure prediction from sequence only (de novo modeling) and also using alternative templates (consensus modeling). The web server is based on the CABS modeling procedures ranked in previous Critical Assessment of techniques for protein Structure Prediction competitions as one of the leading approaches for de novo and template-based modeling. Except for template data, fragmentary distance restraints can also be incorporated into the modeling process. The web server output is a coarse-grained trajectory of generated conformations, its Jmol representation and predicted models in all-atom resolution (together with accompanying analysis). CABS-fold can be freely accessed at http://biocomp.chem.uw.edu.pl/CABSfold.

  20. CABS-fold: server for the de novo and consensus-based prediction of protein structure

    PubMed Central

    Blaszczyk, Maciej; Jamroz, Michal; Kmiecik, Sebastian; Kolinski, Andrzej

    2013-01-01

    The CABS-fold web server provides tools for protein structure prediction from sequence only (de novo modeling) and also using alternative templates (consensus modeling). The web server is based on the CABS modeling procedures ranked in previous Critical Assessment of techniques for protein Structure Prediction competitions as one of the leading approaches for de novo and template-based modeling. Except for template data, fragmentary distance restraints can also be incorporated into the modeling process. The web server output is a coarse-grained trajectory of generated conformations, its Jmol representation and predicted models in all-atom resolution (together with accompanying analysis). CABS-fold can be freely accessed at http://biocomp.chem.uw.edu.pl/CABSfold. PMID:23748950

  1. From Agents to Continuous Change via Aesthetics: Learning Mechanics with Visual Agent-Based Computational Modeling

    ERIC Educational Resources Information Center

    Sengupta, Pratim; Farris, Amy Voss; Wright, Mason

    2012-01-01

    Novice learners find motion as a continuous process of change challenging to understand. In this paper, we present a pedagogical approach based on agent-based, visual programming to address this issue. Integrating agent-based programming, in particular, Logo programming, with curricular science has been shown to be challenging in previous research…

  2. Model-free and model-based reward prediction errors in EEG.

    PubMed

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain.
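
    A toy sketch of the two error signals being contrasted, using a two-step task with illustrative parameter values; this is generic Q-learning bookkeeping, not the study's actual task or regressors.

        import numpy as np

        alpha = 0.1                        # learning rate (illustrative)
        Q_mf = np.zeros(2)                 # cached (model-free) action values
        T = np.array([[0.7, 0.3],          # assumed action -> state transition model
                      [0.3, 0.7]])
        V_state = np.zeros(2)              # learned second-stage state values

        def prediction_errors(action, state, reward):
            # Model-free RPE: outcome vs. the cached value of the chosen action.
            rpe_mf = reward - Q_mf[action]
            Q_mf[action] += alpha * rpe_mf
            # Model-based RPE: outcome vs. a value computed through the world model.
            rpe_mb = reward - T[action] @ V_state
            V_state[state] += alpha * (reward - V_state[state])
            return rpe_mf, rpe_mb

        print(prediction_errors(action=0, state=1, reward=1.0))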

  3. The space-dependent model and output characteristics of intra-cavity pumped dual-wavelength lasers

    NASA Astrophysics Data System (ADS)

    He, Jin-Qi; Dong, Yuan; Zhang, Feng-Dong; Yu, Yong-Ji; Jin, Guang-Yong; Liu, Li-Da

    2016-01-01

    We previously proposed and published an intra-cavity pumping scheme for simultaneously generating dual-wavelength lasers, and constructed a space-independent model of quasi-three-level and four-level intra-cavity pumped dual-wavelength lasers based on that scheme. In this paper, to make the previous study more rigorous, a space-dependent model is adopted. As an example, the output characteristics of 946 nm and 1064 nm dual-wavelength lasers under different output mirror transmittances are numerically simulated using the derived formulas, and the results are nearly identical to those previously reported.

  4. Convergence of methods for coupling of microscopic and mesoscopic reaction-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Flegg, Mark B.; Hellander, Stefan; Erban, Radek

    2015-05-01

    In this paper, three multiscale methods for coupling mesoscopic (compartment-based) and microscopic (molecular-based) stochastic reaction-diffusion simulations are investigated. Two of the three methods that will be discussed in detail have been previously reported in the literature: the two-regime method (TRM) and the compartment-placement method (CPM). The third method, introduced and analysed in this paper, is called the ghost cell method (GCM), since it works by constructing a "ghost cell" in which molecules can disappear and jump into the compartment-based simulation. A comparison of the sources of error is presented. The convergence properties of this error are studied as the time step Δt (for updating the molecular-based part of the model) approaches zero. It is found that the error behaviour depends on another fundamental computational parameter h, the compartment size in the mesoscopic part of the model. Two important limiting cases, which appear in applications, are considered: (i) Δt → 0 with h fixed; and (ii) Δt → 0 and h → 0 such that √(Δt)/h is fixed. The error for the previously developed approaches (the TRM and CPM) converges to zero only in limiting case (ii), but not in case (i). It is shown that the error of the GCM converges in limiting case (i). Thus the GCM is superior to previous coupling techniques if the mesoscopic description is much coarser than the microscopic part of the model.

  5. ECHO: A reference-free short-read error correction algorithm

    PubMed Central

    Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.

    2011-01-01

    Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short reads without the need of a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO is able to improve the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. Also, it is shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625

  6. REFINED PBPK MODEL OF AGGREGATE EXPOSURE TO METHYL TERTIARY-BUTYL ETHER

    EPA Science Inventory

    Aggregate (multiple pathway) exposures to methyl tertiary-butyl ether (MTBE) in air and water occur via dermal, inhalation, and oral routes. Previously, physiologically-based pharmacokinetic (PBPK) models have been used to quantify the kinetic behavior of MTBE and its primary met...

  7. Quantifying uncertainties in streamflow predictions through signature based inference of hydrological model parameters

    NASA Astrophysics Data System (ADS)

    Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo

    2016-04-01

    The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad-hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications of which signatures are most appropriate to represent the information content of the hydrograph.
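
    The ABC step can be sketched as rejection sampling against signature summaries. The scheme below is the simplest ABC variant (the paper's scheme is more sophisticated), and every name in it is illustrative.

        import numpy as np

        def abc_rejection(simulate, summary, obs_signature, prior_sample,
                          n_draws=100_000, eps=0.05):
            # simulate(theta) -> synthetic streamflow series for parameters theta
            # summary(series) -> signature vector (e.g., points on the FDC)
            # prior_sample()  -> one draw from the prior over model parameters
            # Keeps draws whose simulated signatures fall within eps of the observed.
            s_obs = np.asarray(obs_signature)
            accepted = []
            for _ in range(n_draws):
                theta = prior_sample()
                s_sim = np.asarray(summary(simulate(theta)))
                if np.linalg.norm(s_sim - s_obs) / np.linalg.norm(s_obs) < eps:
                    accepted.append(theta)
            return np.array(accepted)   # sample from the approximate posterior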

  8. Timing fungicide application intervals based on airborne Erysiphe necator concentrations

    USDA-ARS?s Scientific Manuscript database

    Management of grape powdery mildew (Erysiphe necator) and other polycyclic diseases relies on numerous fungicide applications that follow a calendar or model-based application intervals, both of which assume that inoculum is always present. Quantitative molecular assays have been previously develope...

  9. Gryphon: A Hybrid Agent-Based Modeling and Simulation Platform for Infectious Diseases

    NASA Astrophysics Data System (ADS)

    Yu, Bin; Wang, Jijun; McGowan, Michael; Vaidyanathan, Ganesh; Younger, Kristofer

    In this paper we present Gryphon, a hybrid agent-based stochastic modeling and simulation platform developed for characterizing the geographic spread of infectious diseases and the effects of interventions. We study both local and non-local transmission dynamics in stochastic simulations based on published parameters and data for SARS. The results suggest that the expected numbers of infections and the timeline of control strategies predicted by our stochastic model are in reasonably good agreement with previous studies. These preliminary results indicate that Gryphon is able to characterize future infectious diseases and identify endangered regions in advance.
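
    A chain-binomial stochastic SIR gives the flavor of the disease dynamics inside a single region of such a hybrid model; all parameters here are illustrative, not the SARS values used in the study.

        import numpy as np

        def stochastic_sir(n=10_000, i0=5, beta=0.4, gamma=0.2, days=120, seed=0):
            # Chain-binomial stochastic SIR: a toy stand-in for one simulated patch.
            rng = np.random.default_rng(seed)
            S, I, R = n - i0, i0, 0
            history = [(S, I, R)]
            for _ in range(days):
                p_inf = 1 - np.exp(-beta * I / n)   # per-susceptible daily infection prob.
                new_inf = rng.binomial(S, p_inf)
                new_rec = rng.binomial(I, 1 - np.exp(-gamma))
                S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
                history.append((S, I, R))
            return np.array(history)

        # Example: final (S, I, R) counts for one simulated patch.
        print(stochastic_sir()[-1])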

  10. Macroscopic neural mass model constructed from a current-based network model of spiking neurons.

    PubMed

    Umehara, Hiroaki; Okada, Masato; Teramae, Jun-Nosuke; Naruse, Yasushi

    2017-02-01

    Neural mass models (NMMs) are efficient frameworks for describing macroscopic cortical dynamics, including electroencephalogram and magnetoencephalogram signals. Originally, these models were formulated on an empirical basis of synaptic dynamics with relatively long time constants. By clarifying the relations between NMMs and the dynamics of microscopic structures such as neurons and synapses, we can better understand cortical and neural mechanisms from a multi-scale perspective. In a previous study, NMMs were analytically derived by averaging the equations of synaptic dynamics over the neurons in the population and further averaging the equations of the membrane-potential dynamics. However, the averaging of synaptic current assumes that the neuron membrane potentials are nearly time invariant and that they remain at sub-threshold levels to retain the conductance-based model. This approximation limits the NMM to the non-firing state. In the present study, we propose a new derivation of an NMM by instead approximating the synaptic current as independent of the membrane potential, thus adopting a current-based model. Our proposed model removes the constraint of a nearly constant membrane potential. We confirm that the obtained model reduces to the previous model in the non-firing situation and that it reproduces the temporal mean values and relative power spectral densities of the average membrane potentials for the spiking neurons. It is further ensured that the existing NMM properly models the dynamics averaged over individual neurons even if they are spiking in the populations.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hess, Peter

    An improved microscopic cleavage model, based on a Morse-type and Lennard-Jones-type interaction instead of the previously employed half-sine function, is used to determine the maximum cleavage strength for the brittle materials diamond, tungsten, molybdenum, silicon, GaAs, silica, and graphite. The results of both interaction potentials are in much better agreement with the theoretical strength values obtained by ab initio calculations for diamond, tungsten, molybdenum, and silicon than the previous model. Reasonable estimates of the intrinsic strength are presented for GaAs, silica, and graphite, where first principles values are not available.

  12. An improved classification tree analysis of high cost modules based upon an axiomatic definition of complexity

    NASA Technical Reports Server (NTRS)

    Tian, Jianhui; Porter, Adam; Zelkowitz, Marvin V.

    1992-01-01

    Identification of high cost modules has been viewed as one mechanism to improve overall system reliability, since such modules tend to produce more than their share of problems. A decision tree model was previously used to identify such modules. In the current paper, a previously developed axiomatic model of program complexity is merged with the decision tree process, improving the ability to identify such modules. This improvement was tested using data from the NASA Software Engineering Laboratory.

  13. GEO Collisional Risk Assessment Based on Analysis of NASA-WISE Data and Modeling

    DTIC Science & Technology

    2015-10-18

    GEO Collisional Risk Assessment Based on Analysis of NASA-WISE Data and Modeling. Jeremy Murray Krezan, Samantha Howard, Phan D. Dao (AFRL Space Vehicles Directorate); Derek Surka (Applied Technology Associates Incorporated). From December 2009 through 2011 the NASA Wide-Field Infrared ... of known debris. The NASA-WISE GEO belt debris population adds potentially thousands of previously uncataloged objects. This paper describes

  14. Integrated Formulation of Beacon-Based Exception Analysis for Multimissions

    NASA Technical Reports Server (NTRS)

    Mackey, Ryan; James, Mark; Park, Han; Zak, Mickail

    2003-01-01

    Further work on beacon-based exception analysis for multimissions (BEAM), a method of real-time, automated diagnosis of complex electromechanical systems, has greatly expanded its capability and suitability of application. This expanded formulation, which fully integrates physical models and symbolic analysis, is described. The new formulation of BEAM expands upon previously developed advanced techniques for the analysis of signal data, utilizing mathematical modeling of the system physics and expert-system reasoning.

  15. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electrical activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and an iterative re-weighting strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the previous iteration's source solution at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source-location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic three-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulation experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
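
    For context, the re-weighted iteration underlying FOCUSS, which CMOSS modifies by pooling weights over spatial neighbors, is a few lines of linear algebra. This is a sketch of the classic FOCUSS algorithm, not of CMOSS itself.

        import numpy as np

        def focuss(L, b, n_iter=20, lam=1e-6):
            # Classic FOCUSS iteration for the underdetermined system L s = b,
            # where L is the (m x n) lead-field matrix and b the (m,) EEG data.
            m, n = L.shape
            s = np.ones(n)
            for _ in range(n_iter):
                W = np.diag(s)                  # weight built from the previous solution
                A = L @ W
                # Weighted minimum-norm (Tikhonov-regularized) solution.
                s = W @ A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), b)
            return s                            # sparse source estimate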

  16. New molecular descriptors based on local properties at the molecular surface and a boiling-point model derived from them.

    PubMed

    Ehresmann, Bernd; de Groot, Marcel J; Alex, Alexander; Clark, Timothy

    2004-01-01

    New molecular descriptors based on statistical descriptions of the local ionization potential, the local electron affinity, and the local polarizability at the molecular surface are proposed. The significance of these descriptors has been tested by calculating them for the Maybridge database in addition to our set of 26 descriptors reported previously. The new descriptors show little correlation with those already in use. Furthermore, the principal components of the extended descriptor set for the Maybridge data show that the descriptors based on the local electron affinity, in particular, extend the variance in our set of descriptors, which we have previously shown to be relevant to physical properties. The first nine principal components are shown to be the most significant. As an example of the usefulness of the new descriptors, we have set up a QSPR model for boiling points using both the old and new descriptors.

  17. Spatial variability in nutrient transport by HUC8, state, and subbasin based on Mississippi/Atchafalaya River Basin SPARROW models

    USGS Publications Warehouse

    Robertson, Dale M.; Saad, David A.; Schwarz, Gregory E.

    2014-01-01

    Nitrogen (N) and phosphorus (P) loading from the Mississippi/Atchafalaya River Basin (MARB) has been linked to hypoxia in the Gulf of Mexico. With geospatial datasets for 2002, including inputs from wastewater treatment plants (WWTPs), and monitored loads throughout the MARB, SPAtially Referenced Regression On Watershed attributes (SPARROW) watershed models were constructed specifically for the MARB, which reduced simulation errors from previous models. Based on these models, N loads/yields were highest from the central part (centered over Iowa and Indiana) of the MARB (Corn Belt), and the highest P yields were scattered throughout the MARB. Spatial differences in yields from previous studies resulted from different descriptions of the dominant sources (N yields are highest with crop-oriented agriculture and P yields are highest with crop and animal agriculture and major WWTPs) and different descriptions of downstream transport. Delivered loads/yields from the MARB SPARROW models are used to rank subbasins, states, and eight-digit Hydrologic Unit Code basins (HUC8s) by N and P contributions and then rankings are compared with those from other studies. Changes in delivered yields result in an average absolute change of 1.3 (N) and 1.9 (P) places in state ranking and 41 (N) and 69 (P) places in HUC8 ranking from those made with previous national-scale SPARROW models. This information may help managers decide where efforts could have the largest effects (highest ranked areas) and thus reduce hypoxia in the Gulf of Mexico.

  18. Reverse engineering of logic-based differential equation models using a mixed-integer dynamic optimization approach

    PubMed Central

    Henriques, David; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R.

    2015-01-01

    Motivation: Systems biology models can be used to test new hypotheses formulated on the basis of previous knowledge or of new experimental data that contradict a previously existing model. New hypotheses often come in the shape of a set of possible regulatory mechanisms. The search is usually not limited to finding a single regulation link, but rather a combination of links, subject to great uncertainty or to a lack of information about the kinetic parameters. Results: In this work, we combine a logic-based formalism, which can describe all the possible regulatory structures for a given dynamic model of a pathway, with mixed-integer dynamic optimization (MIDO). This framework aims to simultaneously identify the regulatory structure (represented by binary parameters) and the real-valued parameters that are consistent with the available experimental data, resulting in a logic-based differential equation model. The alternative would be to perform real-valued parameter estimation for each possible model structure, which is not tractable for models of the size presented in this work. The performance of the method is illustrated with several case studies: a synthetic pathway problem of signaling regulation, a two-component signal transduction pathway in bacterial homeostasis, and a signaling network in liver cancer cells. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: julio@iim.csic.es or saezrodriguez@ebi.ac.uk PMID:26002881

  19. Reverse engineering of logic-based differential equation models using a mixed-integer dynamic optimization approach.

    PubMed

    Henriques, David; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R

    2015-09-15

    Systems biology models can be used to test new hypotheses formulated on the basis of previous knowledge or of new experimental data that contradict a previously existing model. New hypotheses often come in the shape of a set of possible regulatory mechanisms. The search is usually not limited to finding a single regulation link, but rather a combination of links, subject to great uncertainty or to a lack of information about the kinetic parameters. In this work, we combine a logic-based formalism, which can describe all the possible regulatory structures for a given dynamic model of a pathway, with mixed-integer dynamic optimization (MIDO). This framework aims to simultaneously identify the regulatory structure (represented by binary parameters) and the real-valued parameters that are consistent with the available experimental data, resulting in a logic-based differential equation model. The alternative would be to perform real-valued parameter estimation for each possible model structure, which is not tractable for models of the size presented in this work. The performance of the method is illustrated with several case studies: a synthetic pathway problem of signaling regulation, a two-component signal transduction pathway in bacterial homeostasis, and a signaling network in liver cancer cells. Supplementary data are available at Bioinformatics online. julio@iim.csic.es or saezrodriguez@ebi.ac.uk. © The Author 2015. Published by Oxford University Press.
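
    As a minimal illustration of what a logic-based differential equation with a binary structural parameter looks like, the sketch below encodes a toy two-species pathway in which a binary parameter w switches a candidate regulation on or off. In the MIDO framework the binary and kinetic parameters are optimized jointly; here the two candidate structures are simply enumerated.

      import numpy as np
      from scipy.integrate import solve_ivp

      def hill(x, k=0.5, n=2):
          # normalized activation term commonly used in logic-based ODE models
          return x**n / (k**n + x**n)

      def rhs(t, y, w, k1, k2):
          a, b = y
          da = 1.0 - k1 * a              # constitutive production and decay of A
          db = w * hill(a) - k2 * b      # w in {0, 1}: does A activate B?
          return [da, db]

      for w in (0, 1):                   # enumerate the candidate regulatory structures
          sol = solve_ivp(rhs, (0.0, 10.0), [0.1, 0.1], args=(w, 1.0, 0.8),
                          t_eval=np.linspace(0.0, 10.0, 50))
          print(w, round(sol.y[1, -1], 3))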

  20. A Decision Support Model and Tool to Assist Financial Decision-Making in Universities

    ERIC Educational Resources Information Center

    Bhayat, Imtiaz; Manuguerra, Maurizio; Baldock, Clive

    2015-01-01

    In this paper, a model and tool are proposed to assist universities and other mission-based organisations to ascertain systematically the optimal portfolio of projects, in any year, meeting the organisation's risk tolerances and available funds. The model and tool presented build on previous work on university operations and decision support systems…

  1. Theories for Deep Change in Affect-sensitive Cognitive Machines: A Constructivist Model.

    ERIC Educational Resources Information Center

    Kort, Barry; Reilly, Rob

    2002-01-01

    There is an interplay between emotions and learning, but this interaction is far more complex than previous learning theories have articulated. This article proffers a novel model by which to regard the interplay of emotions upon learning and discusses the larger practical aim of crafting computer-based models that will recognize a learner's…

  2. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast of contrail formation over the contiguous United States (CONUS) is created using hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and the Rapid Update Cycle (RUC), together with GOES water vapor channel measurements and surface and satellite observations of contrails. Two groups of logistic models were created. The first group (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies of both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
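
    A hedged sketch of a logistic contrail model of the kind described, with synthetic predictors standing in for the analysis fields; relative humidity with respect to ice and temperature are plausible inputs here, not the papers' exact feature set.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      rhi = rng.uniform(40, 140, 1000)       # relative humidity w.r.t. ice (%), synthetic
      temp = rng.uniform(-70, -30, 1000)     # upper-tropospheric temperature (C), synthetic
      # synthetic truth: persistence favored by ice supersaturation and colder air
      p = 1 / (1 + np.exp(-(0.08 * (rhi - 100) - 0.1 * (temp + 50))))
      y = rng.random(1000) < p

      X = np.column_stack([rhi, temp])
      model = LogisticRegression().fit(X, y)
      print(round(model.score(X, y), 2))     # cf. the ~75% accuracies reported above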

  3. Small-signal model for the series resonant converter

    NASA Technical Reports Server (NTRS)

    King, R. J.; Stuart, T. A.

    1985-01-01

    The results of a previous discrete-time model of the series resonant dc-dc converter are reviewed, and from these a small-signal dynamic model is derived. This model is valid for low frequencies and is based on the modulation of the diode conduction angle for control. The basic converter is modeled separately from its output filter to facilitate the use of these results for design purposes. Experimental results are presented.

  4. Cost Modeling for Space Telescope

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2011-01-01

    Parametric cost models are an important tool for planning missions, comparing concepts, and justifying technology investments. This paper presents ongoing efforts to develop single-variable and multi-variable cost models for the space telescope optical telescope assembly (OTA). These models are based on data collected from historical space telescope missions. Standard statistical methods are used to derive cost estimating relationships (CERs) for OTA cost versus aperture diameter and mass. The results are compared with previously published models.
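
    A sketch of a single-variable CER of the kind described, assuming a power-law form cost = a * D^b fitted as a straight line in log-log space; the diameters, costs, and resulting coefficients are synthetic placeholders, not the paper's values.

      import numpy as np

      diam = np.array([0.85, 1.0, 1.8, 2.4, 3.5, 6.5])   # aperture diameters (m), illustrative
      rng = np.random.default_rng(3)
      cost = 50 * diam**1.7 * np.exp(rng.normal(0, 0.1, diam.size))  # synthetic costs ($M)

      b, log_a = np.polyfit(np.log(diam), np.log(cost), 1)  # slope is the diameter exponent
      print(f"CER: cost ~ {np.exp(log_a):.1f} * D^{b:.2f}")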

  5. Modelling Influence and Opinion Evolution in Online Collective Behaviour

    PubMed Central

    Gend, Pascal; Rentfrow, Peter J.; Hendrickx, Julien M.; Blondel, Vincent D.

    2016-01-01

    Opinion evolution and judgment revision are mediated through social influence. Based on a large crowdsourced in vitro experiment (n = 861), it is shown how a consensus model can be used to predict opinion evolution in online collective behaviour. It is the first time the predictive power of a quantitative model of opinion dynamics has been tested against a real dataset. Unlike previous research on the topic, the model was validated on data which did not serve to calibrate it. This avoids favoring more complex models over simpler ones and prevents overfitting. The model is parametrized by the influenceability of each individual, a factor representing to what extent individuals incorporate external judgments. The prediction accuracy depends on prior knowledge of the participants' past behaviour. Several situations reflecting data availability are compared. When data are scarce, data from previous participants are used to predict how a new participant will behave. Judgment revision includes unpredictable variations which limit the potential for prediction. A first measure of unpredictability is proposed, based on a specific control experiment. More than two thirds of the prediction errors are found to occur due to unpredictability of the human judgment revision process rather than to model imperfection. PMID:27336834
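
    A minimal sketch of the influenceability idea, assuming a simple convex-combination revision rule x' = (1 - s)*x + s*g, where s is an individual's influenceability and g the external judgment; this is a generic consensus-model form, not necessarily the paper's exact specification.

      def revise(own, external, influenceability):
          # s = 0 ignores the external judgment entirely; s = 1 adopts it fully
          return (1.0 - influenceability) * own + influenceability * external

      print(revise(own=10.0, external=20.0, influenceability=0.3))  # -> 13.0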

  6. An experimentally based nonlinear viscoelastic model of joint passive moment.

    PubMed

    Esteki, A; Mansour, J M

    1996-04-01

    Previous investigations have not converged on a generally accepted model of the dissipative part of joint passive moment. To provide a basis for developing a model, a series of measurements were performed to characterize the passive moment at the metacarpophalangeal joint of the index finger. Two measurement procedures were used, one in moment relaxation over a range of fixed joint angles and the other at a series of constant joint velocities. Fung's quasi-linear viscoelastic theory motivated the development of the passive moment model. Using this approach, it was not necessary to make restrictive assumptions regarding the viscoelastic behavior of the passive moment. The generality of the formulation allowed specific functions to be chosen based on experimental data rather than finding coefficients which attempted to fit a preselected model of the data. It was shown that a nonlinear viscoelastic model described the passive stiffness. No significant frictional effects were found. Of particular importance was the nonlinear behavior of the dissipative part of the passive moment which was modeled by joint speed raised to a power less than one. This result could explain the differing findings among previous investigations, and may have important implications for control of limb movement.
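
    A sketch of the kind of nonlinear dissipative term reported, with the passive moment combining an elastic part and a dissipative part proportional to joint speed raised to a power below one; the linear elastic term and all coefficients are placeholders, not the fitted functions.

      import numpy as np

      def passive_moment(theta, omega, k=0.5, b=0.3, n=0.4):
          # elastic part (placeholder spring) plus nonlinear dissipative part with n < 1
          return -k * theta - b * np.sign(omega) * np.abs(omega) ** n

      print(round(passive_moment(theta=0.2, omega=1.5), 3))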

  7. Connecting the dots between genes, biochemistry, and disease susceptibility: systems biology modeling in human genetics.

    PubMed

    Moore, Jason H; Boczko, Erik M; Summar, Marshall L

    2005-02-01

    Understanding how DNA sequence variations impact human health through a hierarchy of biochemical and physiological systems is expected to improve the diagnosis, prevention, and treatment of common, complex human diseases. We have previously developed a hierarchical dynamic systems approach based on Petri nets for generating biochemical network models that are consistent with genetic models of disease susceptibility. This modeling approach uses an evolutionary computation approach called grammatical evolution as a search strategy for optimal Petri net models. We have previously demonstrated that this approach routinely identifies biochemical network models that are consistent with a variety of genetic models in which disease susceptibility is determined by nonlinear interactions between two or more DNA sequence variations. We review here this approach and then discuss how it can be used to model biochemical and metabolic data in the context of genetic studies of human disease susceptibility.

  8. Modeling the erythemal surface diffuse irradiance fraction for Badajoz, Spain

    NASA Astrophysics Data System (ADS)

    Sanchez, Guadalupe; Serrano, Antonio; Cancillo, María Luisa

    2017-10-01

    Despite its important role in human health and numerous biological processes, the diffuse component of erythemal ultraviolet irradiance (UVER) is scarcely measured at standard radiometric stations and therefore needs to be estimated. This study proposes and compares 10 empirical models to estimate the UVER diffuse fraction. These models are inspired by mathematical expressions originally used to estimate the total diffuse fraction, but here they are applied to the UVER case and tested against experimental measurements. In addition to adapting the various independent variables in these models to the UVER range, the total ozone column has been added in order to account for its strong impact on the attenuation of ultraviolet radiation. The proposed models are fitted to experimental measurements and validated against an independent subset. The best-performing model (RAU3) is based on a model proposed by Ruiz-Arias et al. (2010) and shows an r2 of 0.91 and a relative root-mean-square error (rRMSE) of 6.1 %. This entirely empirical model performs better than previous semi-empirical approaches and needs no additional information from other physically based models. This study extends previous research to the ultraviolet range and provides reliable empirical models to accurately estimate the UVER diffuse fraction.

  9. A simple non-Markovian computational model of the statistics of soccer leagues: Emergence and scaling effects

    NASA Astrophysics Data System (ADS)

    da Silva, Roberto; Vainstein, Mendeli H.; Lamb, Luis C.; Prado, Sandra D.

    2013-03-01

    We propose a novel probabilistic model that outputs the final standings of a soccer league, based on a simple dynamics that mimics a soccer tournament. In our model, a team is created with a defined potential (ability) which is updated during the tournament according to the results of previous games. The updated potential modifies a team's future winning/losing probabilities. We show that this evolutionary game is able to reproduce the statistical properties of the final standings of actual editions of the Brazilian tournament (Brasileirão) if the starting potential is the same for all teams. Other leagues, such as the Italian (Calcio) and the Spanish (La Liga) tournaments, have notoriously non-Gaussian traces and cannot be straightforwardly reproduced by this evolutionary non-Markovian model with simple initial conditions. However, we show that by setting the initial abilities based on data from previous tournaments, our model is able to capture the stylized statistical features of double round robin system (DRRS) tournaments in general. A complete understanding of these phenomena deserves much more attention, but we suggest a simple explanation based on data collected in Brazil: there, several teams have been crowned champion in previous editions, corroborating that the champion typically emerges from random fluctuations that partly preserve the Gaussian traces during the tournament. In the Italian and Spanish cases, by contrast, only a few teams in recent history have won their league tournaments; these leagues are based on more robust and hierarchical structures established even before the beginning of the tournament. For completeness, we also elaborate a totally Gaussian model (which equalizes the winning, drawing, and losing probabilities) and show that it cannot reproduce the scores of the Brazilian tournament. This shows that the evolutionary aspects are not superfluous and play an important role which must be considered in other alternative models. Finally, we analyze the distortions of our model in situations where a large number of teams is considered, showing the existence of a transition from a single-peaked to a double-peaked histogram of the final classification scores. An interesting scaling is presented for different-sized tournaments.
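
    A compact sketch of the evolutionary dynamics described, assuming win probability proportional to relative potential and a fixed increment to the winner's potential; draws are omitted and all parameters are illustrative, not the authors' exact update rule.

      import numpy as np

      rng = np.random.default_rng(4)
      n_teams = 20
      potential = np.ones(n_teams)        # equal starting abilities (the Brasileirao case)
      points = np.zeros(n_teams)

      for home in range(n_teams):         # double round robin: every ordered pair plays once
          for away in range(n_teams):
              if home == away:
                  continue
              p_home = potential[home] / (potential[home] + potential[away])
              winner = home if rng.random() < p_home else away
              points[winner] += 3
              potential[winner] += 0.1    # success reinforces future winning probability

      print(np.sort(points)[::-1][:5])    # top of the final standings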

  10. A state-based probabilistic model for tumor respiratory motion prediction

    NASA Astrophysics Data System (ADS)

    Kalet, Alan; Sandison, George; Wu, Huanmei; Schmitz, Ruth

    2010-12-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite-state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables, such that a prediction of state also enables a prediction of tumor velocity. A time-average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit on the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Predicted positions are compared with actual positions in terms of overall RMS error. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no-prediction, Kalman-filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). The average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency, and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than those of the linear and no-prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time-average HMM prediction outperforms the linear, no-prediction, and more general HMM-type predictive models. RMS errors for the time-average model approach the theoretical limit of the HMM, and predicted state sequences are well correlated with sequences known to fit the data.
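
    A stripped-down sketch of the state-based idea: estimate a transition matrix over discrete breathing states from past data, predict the most likely next state, and read off that state's mean velocity (standing in for the k-means association of observables with states). A full HMM would add hidden states and emission models, and the toy sequence below is invented.

      import numpy as np

      states = np.array([0, 1, 2, 0, 1, 2, 2, 0, 1, 2, 0, 1])  # 0=inhale, 1=exhale, 2=end-of-exhale
      mean_velocity = {0: 8.0, 1: -6.0, 2: -0.5}                # mm/s per state (illustrative)

      T = np.zeros((3, 3))
      for a, b in zip(states[:-1], states[1:]):                 # count observed transitions
          T[a, b] += 1
      T /= T.sum(axis=1, keepdims=True)

      nxt = int(np.argmax(T[states[-1]]))                       # most likely next state
      print(nxt, mean_velocity[nxt])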

  11. Not just the norm: exemplar-based models also predict face aftereffects.

    PubMed

    Ross, David A; Deroche, Mickael; Palmeri, Thomas J

    2014-02-01

    The face recognition literature has considered two competing accounts of how faces are represented within the visual system: Exemplar-based models assume that faces are represented via their similarity to exemplars of previously experienced faces, while norm-based models assume that faces are represented with respect to their deviation from an average face, or norm. Face identity aftereffects have been taken as compelling evidence in favor of a norm-based account over an exemplar-based account. After a relatively brief period of adaptation to an adaptor face, the perceived identity of a test face is shifted toward a face with attributes opposite to those of the adaptor, suggesting an explicit psychological representation of the norm. Surprisingly, despite near universal recognition that face identity aftereffects imply norm-based coding, there have been no published attempts to simulate the predictions of norm- and exemplar-based models in face adaptation paradigms. Here, we implemented and tested variations of norm and exemplar models. Contrary to common claims, our simulations revealed that both an exemplar-based model and a version of a two-pool norm-based model, but not a traditional norm-based model, predict face identity aftereffects following face adaptation.

  12. Not Just the Norm: Exemplar-Based Models also Predict Face Aftereffects

    PubMed Central

    Ross, David A.; Deroche, Mickael; Palmeri, Thomas J.

    2014-01-01

    The face recognition literature has considered two competing accounts of how faces are represented within the visual system: Exemplar-based models assume that faces are represented via their similarity to exemplars of previously experienced faces, while norm-based models assume that faces are represented with respect to their deviation from an average face, or norm. Face identity aftereffects have been taken as compelling evidence in favor of a norm-based account over an exemplar-based account. After a relatively brief period of adaptation to an adaptor face, the perceived identity of a test face is shifted towards a face with opposite attributes to the adaptor, suggesting an explicit psychological representation of the norm. Surprisingly, despite near universal recognition that face identity aftereffects imply norm-based coding, there have been no published attempts to simulate the predictions of norm- and exemplar-based models in face adaptation paradigms. Here we implemented and tested variations of norm and exemplar models. Contrary to common claims, our simulations revealed that both an exemplar-based model and a version of a two-pool norm-based model, but not a traditional norm-based model, predict face identity aftereffects following face adaptation. PMID:23690282
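
    To make the two accounts concrete, the toy sketch below contrasts the representations in a 2-D face space: exemplar coding as summed Gaussian similarity to stored faces versus norm-based coding as a deviation vector from the average face. This illustrates the representational difference only and makes no claim about the adaptation simulations themselves.

      import numpy as np

      rng = np.random.default_rng(5)
      faces = rng.standard_normal((100, 2))     # stored exemplars in a toy 2-D face space
      norm = faces.mean(axis=0)                 # the average face

      def exemplar_activation(probe, sigma=1.0):
          # summed Gaussian similarity of the probe to every stored exemplar
          d2 = ((faces - probe) ** 2).sum(axis=1)
          return np.exp(-d2 / (2 * sigma**2)).sum()

      def norm_code(probe):
          # identity as a direction and distance from the norm
          v = probe - norm
          return v / np.linalg.norm(v), np.linalg.norm(v)

      probe = np.array([1.5, -0.5])
      print(round(exemplar_activation(probe), 2), norm_code(probe))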

  13. Fuel thermal conductivity (FTHCON). Status report. [PWR; BWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagrman, D. L.

    1979-02-01

    An improvement of the fuel thermal conductivity subcode is described which is part of the fuel rod behavior modeling task performed at EG and G Idaho, Inc. The original version was published in the Materials Properties (MATPRO) Handbook, Section A-2 (Fuel Thermal Conductivity). The improved version incorporates data which were not included in the previous work and omits some previously used data which are believed to come from cracked specimens. The models for the effect of porosity on thermal conductivity and for the electronic contribution to thermal conductivity have been completely revised in order to place these models on a more mechanistic basis. As a result of the modeling improvements, the standard error of the model with respect to its data base has been significantly reduced.

  14. RESIDUAL RISK ASSESSMENTS - FINAL RESIDUAL RISK ASSESSMENT FOR SECONDARY LEAD SMELTERS

    EPA Science Inventory

    This source category, previously subjected to a technology-based standard, will be examined to determine if health or ecological risks are significant enough to warrant further regulation for Secondary Lead Smelters. These assessments utilize existing models and data bases to examin...

  15. Integrating Partial Polarization into a Metal-Ferroelectric-Semiconductor Field Effect Transistor Model

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd C.; Ho, Fat Duen

    1999-01-01

    The ferroelectric channel in a Metal-Ferroelectric-Semiconductor Field Effect Transistor (MFSFET) can partially change its polarization when the gate voltage is near the polarization threshold voltage. This causes the MFSFET drain current to change with repeated pulses of the same gate voltage near the polarization threshold voltage. A previously developed model [1], based on the Fermi-Dirac function, assumed that for a given gate voltage and channel polarization, a single drain current value would be generated. A study has been done to characterize the effects of partial polarization on the drain current of a MFSFET. These effects have been described mathematically, and the resulting equations have been incorporated into a more comprehensive mathematical model of the MFSFET. The model takes into account the hysteresis nature of the MFSFET and the time-dependent decay as well as the effects of partial polarization. The model defines the drain current based on the degree of polarization calculated from previous gate pulses, the present gate voltage, and the amount of time since the last gate voltage pulse.
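
    A sketch of partial-polarization bookkeeping consistent with the description above, assuming a Fermi-Dirac (logistic) target polarization and a fractional step toward it per gate pulse; the parameter names and values are hypothetical, not the published model's.

      import numpy as np

      def fermi_dirac(vg, v_th=1.0, width=0.2):
          # equilibrium polarization the channel would reach at gate voltage vg
          return 1.0 / (1.0 + np.exp(-(vg - v_th) / width))

      def apply_gate_pulse(p, vg, rate=0.3):
          # each pulse moves polarization only part-way to its target: partial polarization
          return p + rate * (fermi_dirac(vg) - p)

      p = 0.0
      for pulse in range(5):                    # repeated identical pulses near threshold
          p = apply_gate_pulse(p, vg=1.05)
          print(f"pulse {pulse + 1}: polarization = {p:.3f}")  # drain current tracks p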

  16. REHABILITATION FOLLOWING KNEE DISLOCATION WITH LATERAL SIDE INJURY: IMPLEMENTATION OF THE KNEE SYMMETRY MODEL

    PubMed Central

    Jenkins, Walter; Urch, Scott E.; Shelbourne, K. Donald

    2010-01-01

    Rehabilitation following lateral side knee ligament repair or reconstruction has traditionally utilized a conservative approach. An article outlining a new concept in rehabilitation following ACL reconstruction, called the Knee Symmetry Model, was recently published [13]. The Knee Symmetry Model can also be applied to rehabilitation of other knee pathologies, including a knee dislocation with a lateral side injury. This Clinical Commentary describes the rehabilitation procedures used with patients who underwent surgery to repair lateral side ligaments, based upon the Knee Symmetry Model. These procedures were used previously to rehabilitate a group of patients with lateral side ligament repair as reported by Shelbourne et al. [10]. Outcome data and subjective knee scores for these patients were recorded via the International Knee Documentation Committee (IKDC) guidelines and modified Noyes survey scores and are summarized in this paper, as previously published. Rehabilitation following lateral side knee ligament repair using guidelines based upon the Knee Symmetry Model appears to provide patients with excellent long-term stability, normal ROM and strength, and a high level of function. PMID:21589671

  17. Deep Visual Attention Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Wenguan; Shen, Jianbing

    2018-05-01

    In this work, we aim to predict human eye fixations in free-viewing of natural scenes with an end-to-end deep learning architecture. Although convolutional neural networks (CNNs) have brought substantial improvements to human attention prediction, CNN-based attention models can still be improved by efficiently leveraging multi-scale features. Our visual attention network is designed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency responses. The model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. The final saliency prediction is achieved via the cooperation of these global and local predictions. The model is trained with deep supervision, where supervision is fed directly into multi-level layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. The model thus incorporates multi-level saliency predictions within a single network, which significantly reduces the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
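
    A condensed PyTorch sketch of a skip-layer, deeply supervised saliency network in the spirit described above; the layer sizes, three-level backbone, and fusion scheme are placeholders, not the paper's architecture.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class SkipLayerSaliency(nn.Module):
          def __init__(self):
              super().__init__()
              self.blocks = nn.ModuleList([
                  nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
                  for c_in, c_out in ((3, 16), (16, 32), (32, 64))])
              # one 1x1 side-output head per level: shallow/local to deep/global saliency
              self.side = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])
              self.fuse = nn.Conv2d(3, 1, 1)    # combine the upsampled side predictions

          def forward(self, x):
              size, feats, preds = x.shape[-2:], x, []
              for block, head in zip(self.blocks, self.side):
                  feats = block(feats)
                  preds.append(F.interpolate(head(feats), size=size, mode='bilinear',
                                             align_corners=False))
              fused = self.fuse(torch.cat(preds, dim=1))
              return fused, preds               # deep supervision: loss on every side output

      fused, sides = SkipLayerSaliency()(torch.randn(1, 3, 64, 64))
      print(fused.shape, [tuple(s.shape) for s in sides])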

  18. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    PubMed

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented in an Excel Visual Basic for Applications (VBA) algorithm that uses trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of the time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of the preceding recession segment. The new method and algorithm continue the development of methods and algorithms for MRC generation, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case-study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRCs using the trigonometry approach is implemented in a spreadsheet tool (MRCTools v3.0, written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free-of-charge software. © 2017, National Ground Water Association.
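
    A sketch of the horizontal-translation step described above, assuming each recession segment is a (times, values) pair sorted by time with its vertex (highest value) first; np.interp locates the time at which the preceding curve passes the new segment's vertex value.

      import numpy as np

      def translate_segment(master_t, master_v, seg_t, seg_v):
          # master values decrease with time, so reverse both arrays for np.interp
          t_on_master = np.interp(seg_v[0], master_v[::-1], master_t[::-1])
          shift = t_on_master - seg_t[0]        # horizontal translation only
          return seg_t + shift, seg_v

      t = np.linspace(0.0, 10.0, 50)            # toy exponential recessions
      master_t, master_v = t, 100 * np.exp(-0.3 * t)
      seg_t, seg_v = t[:20], 60 * np.exp(-0.3 * t[:20])
      new_t, _ = translate_segment(master_t, master_v, seg_t, seg_v)
      print(round(new_t[0], 2))                 # vertex now sits where the master curve hits 60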

  19. Effects of a spin-flavour-dependent interaction on light-flavoured baryon helicity amplitudes

    NASA Astrophysics Data System (ADS)

    Ronniger, Michael; Metsch, Bernard Ch.

    2013-01-01

    This paper is a continuation of a previous work about the effects of a phenomenological flavour-dependent force in a relativistically covariant constituent quark model based on the Salpeter equation on the structure of light-flavoured baryon resonances. Here the longitudinal and transverse helicity amplitudes as studied experimentally in the electro-excitation of nucleon- and Δ-resonances are calculated. In particular the amplitudes for the excitation of three- and four-star resonances as calculated in a previous model A are compared to those of the novel model C as well as to existing and partially new experimental data such as, e.g., determined by the CB-ELSA Collaboration. A brief discussion on some improvements to model C is given after the introduction.

  20. An Earth-Based Model of Microgravity Pulmonary Physiology

    NASA Technical Reports Server (NTRS)

    Hirschl, Ronald B.; Bull, Joseph L.; Grothberg, James B.

    2004-01-01

    There are currently only two practical methods of achieving micro G for experimentation: parabolic flight in an aircraft or space flight, both of which have limitations. As a result, there are many important aspects of pulmonary physiology that have not been investigated in micro G. We propose to develop an earth-based animal model of micro G by using liquid ventilation, which will allow us to fill the lungs with perfluorocarbon, and submerging the animal in water such that the density of the lungs is the same as that of the surrounding environment. By so doing, we will eliminate the effects of gravity on respiration. We will first validate the model by comparing measures of pulmonary physiology, including cardiac output, central venous pressures, lung volumes, and pulmonary mechanics, to previous space flight and parabolic flight measurements. After validating the model, we will investigate the impact of micro G on aspects of lung physiology that have not been previously measured. These will include pulmonary blood flow distribution, ventilation distribution, pulmonary capillary wedge pressure, ventilation-perfusion matching, and pleural pressures and flows. We expect that this earth-based model of micro G will enhance our knowledge and understanding of lung physiology in space, which will increase in importance as space flights increase in time and distance.

  1. An attachment-based model of complicated grief including the role of avoidance

    PubMed Central

    Monk, Timothy; Houck, Patricia; Melhem, Nadine; Frank, Ellen; Reynolds, Charles; Sillowash, Russell

    2009-01-01

    Introduction Complicated grief is a prolonged grief disorder with elements of a stress response syndrome. We have previously proposed a biobehavioral model showing the pathway to complicated grief. Avoidance is a component that can be difficult to assess and is pivotal to treatment. Therefore we developed an avoidance questionnaire to characterize avoidance among patients with CG. Methods We further explain our complicated grief model and provide results of a study of 128 participants in a treatment study of CG who completed a 15-item Grief-related Avoidance Questionnaire (GRAQ). Results of Avoidance Assessment Mean (SD) GRAQ score was 25.0 ± 12.5 with a range of 0–60. Cronbach's alpha was 0.87 and test re-test correlation was 0.88. Correlation analyses showed good convergent and discriminant validity. Avoidance of reminders of the loss contributed to functional impairment after controlling for other symptoms of complicated grief. Discussion In this paper we extend our previously described attachment-based biobehavioral model of CG. We envision CG as a stress response syndrome that results from failure to integrate information about death of an attachment figure into an effectively functioning secure base schema and/or to effectively re-engage the exploratory system in a world without the deceased. Avoidance is a key element of the model. PMID:17629727

  2. [The effect of self-reflection on depression mediated by hardiness].

    PubMed

    Nakajima, Miho; Hattori, Yosuke; Tanno, Yoshihiko

    2015-10-01

    Previous studies have shown that two types of private self-consciousness have opposing effects on depression: self-rumination, which has a maladaptive effect, and self-reflection, which has an adaptive effect. Although a number of studies have examined the mechanism of the maladaptive effect of self-rumination, only a few have examined the mechanism of the adaptive effect of self-reflection. The present study examined the process by which self-reflection affects depression adaptively. Based on previous findings, we proposed a hypothetical model in which hardiness acts as a mediator of self-reflection. To test the validity of the model, structural equation modeling was performed on cross-sectional data from 155 undergraduate students. The results suggest that the hypothetical model is valid. Together with previous findings, this suggests that self-reflection is associated with low levels of depression, mediated by "rich commitment", one component of hardiness.

  3. The HIrisPlex-S system for eye, hair and skin colour prediction from DNA: Introduction and forensic developmental validation.

    PubMed

    Chaitanya, Lakshmi; Breslin, Krystal; Zuñiga, Sofia; Wirken, Laura; Pośpiech, Ewelina; Kukla-Bartoszek, Magdalena; Sijen, Titia; Knijff, Peter de; Liu, Fan; Branicki, Wojciech; Kayser, Manfred; Walsh, Susan

    2018-07-01

    Forensic DNA Phenotyping (FDP), i.e. the prediction of human externally visible traits from DNA, has become a fast growing subfield within forensic genetics due to the intelligence information it can provide from DNA traces. FDP outcomes can help focus police investigations in search of unknown perpetrators, who are generally unidentifiable with standard DNA profiling. Therefore, we previously developed and forensically validated the IrisPlex DNA test system for eye colour prediction and the HIrisPlex system for combined eye and hair colour prediction from DNA traces. Here we introduce and forensically validate the HIrisPlex-S DNA test system (S for skin) for the simultaneous prediction of eye, hair, and skin colour from trace DNA. This FDP system consists of two SNaPshot-based multiplex assays targeting a total of 41 SNPs via a novel multiplex assay for 17 skin colour predictive SNPs and the previous HIrisPlex assay for 24 eye and hair colour predictive SNPs, 19 of which also contribute to skin colour prediction. The HIrisPlex-S system further comprises three statistical prediction models, the previously developed IrisPlex model for eye colour prediction based on 6 SNPs, the previous HIrisPlex model for hair colour prediction based on 22 SNPs, and the recently introduced HIrisPlex-S model for skin colour prediction based on 36 SNPs. In the forensic developmental validation testing, the novel 17-plex assay performed in full agreement with the Scientific Working Group on DNA Analysis Methods (SWGDAM) guidelines, as previously shown for the 24-plex assay. Sensitivity testing of the 17-plex assay revealed complete SNP profiles from as little as 63 pg of input DNA, equalling the previously demonstrated sensitivity threshold of the 24-plex HIrisPlex assay. Testing of simulated forensic casework samples such as blood, semen, saliva stains, of inhibited DNA samples, of low quantity touch (trace) DNA samples, and of artificially degraded DNA samples as well as concordance testing, demonstrated the robustness, efficiency, and forensic suitability of the new 17-plex assay, as previously shown for the 24-plex assay. Finally, we provide an update to the publicly available HIrisPlex website https://hirisplex.erasmusmc.nl/, now allowing the estimation of individual probabilities for 3 eye, 4 hair, and 5 skin colour categories from HIrisPlex-S input genotypes. The HIrisPlex-S DNA test represents the first forensically validated tool for skin colour prediction, and reflects the first forensically validated tool for simultaneous eye, hair and skin colour prediction from DNA. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Recurrent personality dimensions in inclusive lexical studies: indications for a big six structure.

    PubMed

    Saucier, Gerard

    2009-10-01

    Previous evidence for both the Big Five and the alternative six-factor model has been drawn from lexical studies with relatively narrow selections of attributes. This study examined factors from previous lexical studies using a wider selection of attributes in 7 languages (Chinese, English, Filipino, Greek, Hebrew, Spanish, and Turkish) and found 6 recurrent factors, each with common conceptual content across most of the studies. The previous narrow-selection-based six-factor model outperformed the Big Five in capturing the content of the 6 recurrent wideband factors. Adjective markers of the 6 recurrent wideband factors showed substantial incremental prediction of important criterion variables over and above the Big Five. Correspondence between wideband 6 and narrowband 6 factors indicate they are variants of a "Big Six" model that is more general across variable-selection procedures and may be more general across languages and populations.

  5. An Extension of SIC Predictions to the Wiener Coactive Model

    PubMed Central

    Houpt, Joseph W.; Townsend, James T.

    2011-01-01

    The survivor interaction contrast (SIC) is a powerful measure for distinguishing among candidate models of human information processing. One class of models to which SIC analysis can be applied is the coactive, or channel summation, class of models of human information processing. In general, parametric forms of coactive models assume that responses are made based on the first passage time of a sum of stochastic processes across a fixed threshold. Previous work has shown that the SIC for a coactive model based on the sum of Poisson processes has a distinctive down-up-down form, with an early negative region that is smaller than the later positive region. In this note, we demonstrate that a coactive process based on the sum of two Wiener processes has the same SIC form. PMID:21822333
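
    For reference, the SIC is the double difference of survivor functions across the four factorial conditions (each of two channels at a high or low salience level); stated in LaTeX, this standard definition from the systems factorial technology literature reads:

      \mathrm{SIC}(t) = \left[ S_{LL}(t) - S_{LH}(t) \right] - \left[ S_{HL}(t) - S_{HH}(t) \right]

    where $S_{ij}(t)$ is the survivor function of the response times when the first channel is at level $i$ and the second at level $j$.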

  6. An Extension of SIC Predictions to the Wiener Coactive Model.

    PubMed

    Houpt, Joseph W; Townsend, James T

    2011-06-01

    The survivor interaction contrast (SIC) is a powerful measure for distinguishing among candidate models of human information processing. One class of models to which SIC analysis can be applied is the coactive, or channel summation, class of models of human information processing. In general, parametric forms of coactive models assume that responses are made based on the first passage time of a sum of stochastic processes across a fixed threshold. Previous work has shown that the SIC for a coactive model based on the sum of Poisson processes has a distinctive down-up-down form, with an early negative region that is smaller than the later positive region. In this note, we demonstrate that a coactive process based on the sum of two Wiener processes has the same SIC form.

  7. Cognitive components underpinning the development of model-based learning.

    PubMed

    Potter, Tracey C S; Bryce, Nessa V; Hartley, Catherine A

    2017-06-01

    Reinforcement learning theory distinguishes "model-free" learning, which fosters reflexive repetition of previously rewarded actions, from "model-based" learning, which recruits a mental model of the environment to flexibly select goal-directed actions. Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9-25, we examined whether the abilities to infer sequential regularities in the environment ("statistical learning"), maintain information in an active state ("working memory") and integrate distant concepts to solve problems ("fluid reasoning") predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  8. Design and damping force characterization of a new magnetorheological damper activated by permanent magnet flux dispersion

    NASA Astrophysics Data System (ADS)

    Lee, Tae-Hoon; Han, Chulhee; Choi, Seung-Bok

    2018-01-01

    This work proposes a novel type of tunable magnetorheological (MR) damper whose operation is based solely on the location of a permanent magnet incorporated into the piston. To create a larger damping-force variation than the previous model, a different design configuration of the permanent-magnet-based MR (PMMR) damper is introduced to provide magnetic flux dispersion in two magnetic circuits by utilizing two materials with different magnetic reluctance. After discussing the design configuration and some advantages of the newly designed mechanism, the magnetic dispersion principle is analyzed through both a formulated analytical model of the magnetic circuit and computer simulation based on the magnetic finite element method. The principal design parameters of the damper are then determined and fabricated, and experiments are conducted to evaluate the variation in damping force depending on the location of the magnet. It is demonstrated that the new design and the magnetic dispersion concept are valid, showing a higher damping force than the previous model. In addition, a curved structure of the two materials is fabricated and tested to realize linearity of the damping-force variation.

  9. The evolution of human phenotypic plasticity: age and nutritional status at maturity.

    PubMed

    Gage, Timothy B

    2003-08-01

    Several evolutionary optimal models of human plasticity in age and nutritional status at reproductive maturation are proposed and their dynamics examined. These models differ from previously published models because fertility is not assumed to be a function of body size or nutritional status. Further, the models are based on explicitly human demographic patterns, that is, model human life tables, model human fertility tables, and a nutrient flow-based model of maternal nutritional status. Infant survival (instead of fertility, as in previous models) is assumed to be a function of maternal nutritional status. Two basic models are examined. In the first, the cost of reproduction is assumed to be a constant proportion of total nutrient flow. In the second, the cost of reproduction is constant for each birth. The constant proportion model predicts a negative slope of age on nutritional status at maturation; the constant cost per birth model predicts a positive slope. Either model can account for the secular decline in age at menarche observed over the last several centuries in Europe. A search of the growth literature failed to find definitive empirical documentation of human phenotypic plasticity in age and nutritional status at maturation; most research strategies confound genetics with phenotypic plasticity. The one study that reports secular trends suggests a marginally nonsignificant but positive slope. This view tends to support the constant cost per birth model.

  10. Generic Safety Requirements for Developing Safe Insulin Pump Software

    PubMed Central

    Zhang, Yi; Jetley, Raoul; Jones, Paul L; Ray, Arnab

    2011-01-01

    Background The authors previously introduced a highly abstract generic insulin infusion pump (GIIP) model that identified common features and hazards shared by most insulin pumps on the market. The aim of this article is to extend our previous work on the GIIP model by articulating safety requirements that address the identified GIIP hazards. These safety requirements can be validated by manufacturers, and may ultimately serve as a safety reference for insulin pump software. Together, these two publications can serve as a basis for discussing insulin pump safety in the diabetes community. Methods In our previous work, we established a generic insulin pump architecture that abstracts functions common to many insulin pumps currently on the market and near-future pump designs. We then carried out a preliminary hazard analysis based on this architecture that included consultations with many domain experts. Further consultation with domain experts resulted in the safety requirements used in the modeling work presented in this article. Results Generic safety requirements for the GIIP model are presented, as appropriate, in parameterized format to accommodate clinical practices or specific insulin pump criteria important to safe device performance. Conclusions We believe that there is considerable value in having the diabetes, academic, and manufacturing communities consider and discuss these generic safety requirements. We hope that the communities will extend and revise them, make them more representative and comprehensive, experiment with them, and use them as a means for assessing the safety of insulin pump software designs. One potential use of these requirements is to integrate them into model-based engineering (MBE) software development methods. We believe, based on our experiences, that implementing safety requirements using MBE methods holds promise in reducing design/implementation flaws in insulin pump development and evolutionary processes, therefore improving overall safety of insulin pump software. PMID:22226258

  11. Position-based dynamic of a particle system: a configurable algorithm to describe complex behaviour of continuum material starting from swarm robotics

    NASA Astrophysics Data System (ADS)

    dell'Erba, Ramiro

    2018-04-01

    In a previous work, we considered a two-dimensional lattice of particles and calculated its time evolution using an interaction law based on the spatial positions of the particles themselves. The model reproduced the behaviour of deformable bodies according to both the standard Cauchy model and second gradient theory; this success led us to apply the method to more complex cases. This work is the natural evolution of the previous one: we consider energy aspects and coherence with the Saint-Venant principle, and we begin to develop a more general tool that can be adapted to different physical phenomena, supporting complex effects such as lateral contraction, anisotropy, and elastoplasticity.
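
    A minimal sketch of a position-based update for a particle lattice, assuming a pairwise rule that nudges near neighbors toward a rest distance; it conveys the position-only flavor of the approach, not the authors' exact interaction law.

      import numpy as np

      rng = np.random.default_rng(6)
      pos = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), axis=-1).reshape(-1, 2)
      pos += rng.normal(0, 0.05, pos.shape)     # slightly perturbed 2-D lattice
      rest, stiffness = 1.0, 0.5

      def step(pos):
          new = pos.copy()
          for i in range(len(pos)):
              for j in range(i + 1, len(pos)):
                  d = pos[j] - pos[i]
                  dist = np.linalg.norm(d)
                  if dist < 1.5 * rest:         # only near neighbors interact
                      corr = stiffness * (dist - rest) / 2 * d / dist
                      new[i] += corr            # positions, not forces, are updated
                      new[j] -= corr
          return new

      for _ in range(10):
          pos = step(pos)
      print(pos[:3].round(3))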

  12. Evaluation of observed blast loading effects on NIF x-ray diagnostic collimators.

    PubMed

    Masters, N D; Fisher, A; Kalantar, D; Prasad, R; Stölken, J S; Wlodarczyk, C

    2014-11-01

    We present the "debris wind" models used to estimate the impulsive load to which x-ray diagnostics and other structures are subjected during National Ignition Facility experiments. These models are used as part of the engineering design process. Isotropic models, based on simulations or simplified "expanding shell" models, are augmented by debris wind multipliers to account for directional anisotropy. We present improvements to these multipliers based on measurements of the permanent deflections of diagnostic components: 4× in the polar direction and 2× within the equatorial plane; the latter relaxes the previous heuristic debris wind multiplier.

  13. Measuring Student Course Evaluations: The Use of a Loglinear Model

    ERIC Educational Resources Information Center

    Ting, Ding Hooi; Abella, Mireya Sosa

    2007-01-01

    In this paper, the researchers attempt to incorporate marketing theory (specifically the service quality model) into the education system. Service quality measurements have been employed to investigate their applicability in the education environment. Most previous studies employ regression-based analysis to test the effectiveness of…

  14. Difficulties associated with predicting forage intake by grazing beef cows

    USDA-ARS?s Scientific Manuscript database

    The current National Research Council (NRC) model is based on a single equation that relates dry matter intake (DMI) to metabolic size and net energy density of the diet offered and was a significant improvement over previous models. However, observed DMI by grazing animals can be conceptualized by...

  15. An Observation-based investigation of nudging in WRF for downscaling surface climate information to 12-km Grid Spacing

    EPA Science Inventory

    Previous research has demonstrated the ability to use the Weather Research and Forecast (WRF) model and contemporary dynamical downscaling methods to refine global climate modeling results to a horizontal resolution of 36 km. Environmental managers and urban planners have expre...

  16. Hierarchical Bayesian Models of Subtask Learning

    ERIC Educational Resources Information Center

    Anglim, Jeromy; Wynton, Sarah K. A.

    2015-01-01

    The current study used Bayesian hierarchical methods to challenge and extend previous work on subtask learning consistency. A general model of individual-level subtask learning was proposed focusing on power and exponential functions with constraints to test for inconsistency. To study subtask learning, we developed a novel computer-based booking…

  17. Beliefs Held by Associate Degree Nursing Students about Role Models.

    ERIC Educational Resources Information Center

    Bellinger, Kathleen; And Others

    1985-01-01

    Reports on a study of the professional socialization of associate degree nursing (ADN) students. Reviews previous research on the process of nursing socialization. Presents study findings based on responses from 1,877 nursing students in 20 ADN programs, focusing on students' characteristics and ideal and actual role models. (DMM)

  18. Data publication and dissemination of interactive keys under the open access model

    USDA-ARS?s Scientific Manuscript database

    The concepts of publication, citation and dissemination of interactive keys and other online keys are discussed and illustrated by a sample paper published in the present issue (doi: 10.3897/zookeys.21.271). The present model is based on previous experience with several existing examples of publishi...

  19. Establishing an Explanatory Model for Mathematics Identity

    ERIC Educational Resources Information Center

    Cribbs, Jennifer D.; Hazari, Zahra; Sonnert, Gerhard; Sadler, Philip M.

    2015-01-01

    This article empirically tests a previously developed theoretical framework for mathematics identity based on students' beliefs. The study employs data from more than 9,000 college calculus students across the United States to build a robust structural equation model. While it is generally thought that students' beliefs about their own competence…

  20. Problem Solving Under Time-Constraints.

    ERIC Educational Resources Information Center

    Richardson, Michael; Hunt, Earl

    A model of how automated and controlled processing can be mixed in computer simulations of problem solving is proposed. It is based on previous work by Hunt and Lansman (1983), who developed a model of problem solving that could reproduce the data obtained with several attention and performance paradigms, extending production-system notation to…

  1. Phonetics Information Base and Lexicon

    ERIC Educational Resources Information Center

    Moran, Steven Paul

    2012-01-01

    In this dissertation, I investigate the linguistic and technological challenges involved in creating a cross-linguistic data set to undertake phonological typology. I then address the question of whether more sophisticated, knowledge-based approaches to data modeling, coupled with a broad cross-linguistic data set, can extend previous typological…

  2. A combined M5P tree and hazard-based duration model for predicting urban freeway traffic accident durations.

    PubMed

    Lin, Lei; Wang, Qian; Sadek, Adel W

    2016-06-01

    The duration of freeway traffic accidents is an important factor affecting traffic congestion, environmental pollution, and secondary accidents. Among previous studies, the M5P algorithm has been shown to be an effective tool for predicting incident duration. M5P builds a tree-based model, like the traditional classification and regression tree (CART) method, but with multiple linear regression models as its leaves. The problem with M5P for accident duration prediction, however, is that whereas linear regression assumes that the conditional distribution of accident durations is normal, the distribution of a "time-to-event" variable is almost certainly nonsymmetrical. A hazard-based duration model (HBDM) is a better choice for this kind of time-to-event modeling scenario, and HBDMs have previously been applied to analyze and predict traffic accident durations. Previous research, however, has not applied HBDMs to accident duration prediction in association with clustering or classification of the dataset to minimize data heterogeneity. The current paper proposes a novel approach to accident duration prediction which improves on the original M5P tree algorithm through the construction of an M5P-HBDM model, in which the leaves of the M5P tree are HBDMs instead of linear regression models. Such a model offers the advantage of minimizing data heterogeneity through dataset classification and avoids the incorrect assumption of normality for traffic accident durations. The proposed model was tested on two freeway accident datasets. For each dataset, the first 500 records were used to train the following three models: (1) an M5P tree; (2) an HBDM; and (3) the proposed M5P-HBDM; the remainder of the data were used for testing. The results show that the proposed M5P-HBDM identified more significant and meaningful variables than either M5P or HBDM alone. Moreover, the M5P-HBDM had the lowest overall mean absolute percentage error (MAPE). Copyright © 2016 Elsevier Ltd. All rights reserved.
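
    A rough sketch of the M5P-HBDM idea under stated assumptions: a regression tree partitions the records, and a Weibull duration model (a common hazard-based choice, standing in here for the paper's HBDM specification) is fitted within each leaf; the data, depth, and covariates are synthetic.

      import numpy as np
      from scipy import stats
      from sklearn.tree import DecisionTreeRegressor

      rng = np.random.default_rng(7)
      X = rng.uniform(0, 1, (500, 3))                  # synthetic accident covariates
      dur = stats.weibull_min.rvs(1.5, scale=30 + 40 * X[:, 0], random_state=7)

      tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=60).fit(X, dur)
      leaf_of = tree.apply(X)

      leaf_models = {}
      for leaf in np.unique(leaf_of):                  # one duration model per leaf
          c, _, scale = stats.weibull_min.fit(dur[leaf_of == leaf], floc=0)
          leaf_models[leaf] = (c, scale)

      x_new = np.array([[0.9, 0.2, 0.5]])
      c, scale = leaf_models[tree.apply(x_new)[0]]
      print(round(scale * np.log(2) ** (1 / c), 1))    # predicted median duration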

  3. Gradient retention prediction of acid-base analytes in reversed phase liquid chromatography: a simplified approach for acetonitrile-water mobile phases.

    PubMed

    Andrés, Axel; Rosés, Martí; Bosch, Elisabeth

    2014-11-28

    In previous work, a two-parameter model to predict the chromatographic retention of ionizable analytes in gradient mode was proposed. However, the procedure required preliminary experimental work to obtain a suitable description of the pKa change with mobile phase composition. In the present study, this preliminary experimental work has been simplified: analyte pKa values are calculated through equations whose coefficients vary with the analyte's functional group. This new approach also required simplifications regarding the retention of the fully neutral and fully ionized species. After the simplifications were applied, new predictions were obtained and compared with the previously acquired experimental data. The simplified model gave good predictions while saving a significant amount of time and resources. Copyright © 2014 Elsevier B.V. All rights reserved.
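
    For background, the sketch below shows the standard retention relation for an ionizable (acidic) analyte that such predictions build on: the observed retention factor is the ionization-weighted average of the neutral and ionized species' retention, with pKa shifting with acetonitrile content. The linear pKa shift and all numeric values are hypothetical placeholders.

      def retention_factor(pH, acn_fraction, k_neutral=20.0, k_ionized=1.0,
                           pka_water=4.2, slope=1.5):
          pka = pka_water + slope * acn_fraction         # placeholder pKa shift with organic modifier
          frac_ionized = 1.0 / (1.0 + 10 ** (pka - pH))  # acid ionization fraction at this pH
          return (1 - frac_ionized) * k_neutral + frac_ionized * k_ionized

      print(round(retention_factor(pH=5.0, acn_fraction=0.3), 2))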

  4. Data Warehouse Design from HL7 Clinical Document Architecture Schema.

    PubMed

    Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L

    2015-01-01

    This paper proposes a semi-automatic approach to extract clinical information structured in an HL7 Clinical Document Architecture (CDA) document and transform it into a data warehouse dimensional model schema. It is based on a conceptual framework, published in a previous work, that maps the dimensional model primitives to CDA elements. Its feasibility is demonstrated through a case study based on the analysis of vital signs gathered during laboratory tests.
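
    A minimal sketch of the mapping idea, assuming a much-simplified CDA fragment and an illustrative star-schema layout (the element names follow CDA conventions, but the schema keys are hypothetical):

    ```python
    # Pull a coded observation out of a CDA-like fragment and emit a row for a
    # fact table keyed by patient, time, and code dimensions.
    import xml.etree.ElementTree as ET

    CDA_FRAGMENT = """
    <observation>
      <code code="8480-6" displayName="Systolic blood pressure"/>
      <effectiveTime value="20150101"/>
      <value value="120" unit="mm[Hg]"/>
    </observation>
    """

    def observation_to_fact(xml_text, patient_key):
        obs = ET.fromstring(xml_text)
        return {
            "patient_key": patient_key,                          # patient dimension
            "time_key": obs.find("effectiveTime").get("value"),  # time dimension
            "code_key": obs.find("code").get("code"),            # measure dimension
            "value": float(obs.find("value").get("value")),      # fact measure
            "unit": obs.find("value").get("unit"),
        }

    print(observation_to_fact(CDA_FRAGMENT, patient_key=42))
    ```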

  5. Hazards and Possibilities of Optical Breakdown Effects Below the Threshold for Shockwave and Bubble Formation

    DTIC Science & Technology

    2006-07-01

    …precision of the determination of Rmax, we established a refined method based on the model of bubble formation described above in section 3.6.1 and the…development can be modeled by hydrodynamic codes based on tabulated equation-of-state data. This has previously been demonstrated on ps optical breakdown…

  6. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability considers whether the structure of a model, together with a set of input-output relations, uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or not. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare, and contrast the three methods, they are applied to a set of mixed-effects models. As method development of structural identifiability techniques for mixed-effects models has received very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was previously not possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
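
    To illustrate the transfer-function reasoning behind the third method, consider a plain fixed-effects example (simpler than the mixed-effects case the paper treats, and my example rather than one from the paper): for the one-compartment model dx/dt = -k*x + u with y = x/V, the transfer function is H(s) = 1/(V*(s + k)), and its coefficients determine V and k uniquely, so the parameters are structurally identifiable.

    ```python
    # Identifiability check from transfer-function coefficients with sympy.
    import sympy as sp

    s, k, V = sp.symbols("s k V", positive=True)
    H = 1 / (V * (s + k))

    # Denominator polynomial coefficients in s
    num, den = sp.fraction(sp.together(H))
    den_coeffs = sp.Poly(den, s).all_coeffs()   # [V, V*k]
    print(den_coeffs)

    # Solve the coefficient equations for the parameters: a unique solution
    # in (V, k) means the model is structurally identifiable.
    c1, c0 = sp.symbols("c1 c0", positive=True)
    sol = sp.solve([sp.Eq(den_coeffs[0], c1), sp.Eq(den_coeffs[1], c0)],
                   [V, k], dict=True)
    print(sol)  # [{V: c1, k: c0/c1}] -> unique -> identifiable
    ```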

  7. Mathematical models for predicting human mobility in the context of infectious disease spread: introducing the impedance model.

    PubMed

    Sallah, Kankoé; Giorgi, Roch; Bengtsson, Linus; Lu, Xin; Wetter, Erik; Adrien, Paul; Rebaudet, Stanislas; Piarroux, Renaud; Gaudart, Jean

    2017-11-22

    Mathematical models of human mobility have demonstrated great potential for infectious disease epidemiology in contexts of data scarcity. While the commonly used gravity model involves parameter tuning and is thus difficult to implement without reference data, the more recent radiation model based on population densities is parameter-free, but biased. In this study we introduce the new impedance model, by analogy with electricity. Previous research has compared models on the basis of a few specific available spatial patterns; here, we use a systematic simulation-based approach to assess performance. Five hundred spatial patterns were generated using various area sizes and location coordinates, and model performance was evaluated on these patterns. For simulated data, the comparison measures were average root mean square error (aRMSE) and bias criteria. Modeling of the 2010 Haiti cholera epidemic with a basic susceptible-infected-recovered (SIR) framework allowed an empirical evaluation through the goodness-of-fit of the observed epidemic curve. The new, parameter-free impedance model outperformed previous models on simulated data according to the aRMSE and bias criteria, and achieved better performance with heterogeneous population densities and small destination populations. As a proof of concept, the basic compartmental SIR framework confirmed the results obtained with the impedance model in predicting the spread of cholera in Haiti in 2010. The proposed impedance model thus provides accurate estimations of human mobility, especially when the population distribution is highly heterogeneous, and can help to achieve more accurate predictions of disease spread in the context of an epidemic.
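
    The abstract does not give the impedance model's closed form, so as background the sketch below implements the parameter-free radiation model it is benchmarked against, together with the aRMSE comparison measure; coordinates and populations are synthetic.

    ```python
    # Radiation model: T[i, j] = outflow_i * m_i*m_j / ((m_i+s_ij)*(m_i+m_j+s_ij)),
    # where s_ij is the population inside the circle of radius d_ij centred on i,
    # excluding the source and destination populations.
    import numpy as np

    def radiation_flows(coords, pop, outflow):
        n = len(pop)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        T = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                inside = d[i] < d[i, j]        # j itself is never inside
                s = pop[inside].sum() - pop[i]  # exclude the source population
                T[i, j] = outflow[i] * pop[i] * pop[j] / (
                    (pop[i] + s) * (pop[i] + pop[j] + s))
        return T

    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 100, size=(20, 2))
    pop = rng.integers(1_000, 100_000, size=20).astype(float)
    T = radiation_flows(coords, pop, outflow=0.1 * pop)
    # Given observed flows T_obs, the aRMSE criterion would be:
    # np.sqrt(np.mean((T - T_obs) ** 2))
    ```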

  8. Teaching Experience: How to Make and Use PowerPoint-Based Interactive Simulations for Undergraduate IR Teaching

    ERIC Educational Resources Information Center

    Meibauer, Gustav; Aagaard Nøhr, Andreas

    2018-01-01

    This article is about designing and implementing PowerPoint-based interactive simulations for use in International Relations (IR) introductory undergraduate classes based on core pedagogical literature, models of human skill acquisition, and previous research on simulations in IR teaching. We argue that simulations can be usefully employed at the…

  9. Methods for estimating population density in data-limited areas: evaluating regression and tree-based models in Peru.

    PubMed

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies.

  10. Methods for Estimating Population Density in Data-Limited Areas: Evaluating Regression and Tree-Based Models in Peru

    PubMed Central

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657
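
    A compact sketch of the kind of comparison the two records above describe, with synthetic data standing in for area-level covariates: a nonlinear data-generating process is where tree ensembles typically beat a conventional regression baseline.

    ```python
    # Random forest vs. linear regression for predicting population density.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 500
    X = rng.normal(size=(n, 5))            # e.g. land cover, climate, nightlights
    y = np.exp(1.0 + X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.3, n))

    for name, model in [("linear", LinearRegression()),
                        ("random forest",
                         RandomForestRegressor(n_estimators=200, random_state=0))]:
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(f"{name}: mean CV R^2 = {r2:.2f}")
    ```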

  11. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    NASA Technical Reports Server (NTRS)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning of life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJC's with high base resistivity (greater than 10 ohm-cm).
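
    As background to the record above, which treats the tandem junction cell as an npn transistor, here is a minimal sketch of the standard Ebers-Moll equations (not the paper's extended photovoltaic terms, which add photogenerated components; all constants are illustrative):

    ```python
    # Classic Ebers-Moll emitter/collector currents for an npn transistor.
    import numpy as np

    VT = 0.02585  # thermal voltage at 300 K, volts

    def ebers_moll(v_be, v_bc, i_es=1e-14, i_cs=1e-14, alpha_f=0.99, alpha_r=0.5):
        i_f = i_es * (np.exp(v_be / VT) - 1.0)   # forward diode current
        i_r = i_cs * (np.exp(v_bc / VT) - 1.0)   # reverse diode current
        i_e = i_f - alpha_r * i_r
        i_c = alpha_f * i_f - i_r
        return i_e, i_c

    print(ebers_moll(0.6, -5.0))  # forward-active example
    ```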

  12. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This differs from previous modeling efforts, which focused on addressing uncertainty in physical parameters (e.g. soil porosity), whereas this work aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches (in which only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence levels of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.

  13. Electromigration model for the prediction of lifetime based on the failure unit statistics in aluminum metallization

    NASA Astrophysics Data System (ADS)

    Park, Jong Ho; Ahn, Byung Tae

    2003-01-01

    A failure model for electromigration, based on the "failure unit model", is presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines, but only qualitatively. In our model, the probability function of the failure unit is considered in both single-grain segments and polygrain segments, instead of in polygrain segments alone. Based on our model, we calculated MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. The advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
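
    The series/parallel failure-unit statistics can be illustrated with a small Monte Carlo sketch, under the assumption (mine, for illustration) of lognormal unit lifetimes: a line is a series chain of units, so its time to failure is the minimum over units.

    ```python
    # Line-level MTTF/DTTF from simulated failure-unit lifetimes.
    import numpy as np

    rng = np.random.default_rng(0)
    n_lines, units_per_line = 10_000, 50
    unit_ttf = rng.lognormal(mean=5.0, sigma=0.7, size=(n_lines, units_per_line))
    line_ttf = unit_ttf.min(axis=1)     # series system: first unit failure kills the line

    mttf = np.median(line_ttf)
    dttf = np.std(np.log(line_ttf))     # log-sigma of the line population
    print(f"MTTF = {mttf:.1f}, DTTF (log-sigma) = {dttf:.2f}")
    ```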

  14. The acute toxicity of major ion salts to Ceriodaphnia dubia: III. Mathematical models for mixture toxicity

    EPA Science Inventory

    Based on previous research on the acute toxicity of major ions (Na(+), K(+), Ca(2+), Mg(2+), Cl(-), SO4(2-), and HCO3(-)/CO3(2-)) to C. dubia, two mathematical models were developed for predicting the LC50 for any ion mixture, excluding those dominated by K toxicity. One model addresses a mechanism…

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricafort, Juliet

    A model was developed to determine the forces exerted by several flexor and extensor muscles of the human knee under static conditions. The following muscles were studied: the gastrocnemius, biceps femoris, semitendinosus, semimembranosus, and the quadriceps group. The tibia and fibula were each modeled as rigid bodies; muscles were modeled by their functional lines of action in space. Assumptions based on previous data were used to resolve the indeterminacy.
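
    The indeterminacy arises because there are more unknown muscle forces than equilibrium equations. A minimal sketch of one common resolution (a minimum-norm least-squares solution; the record's actual assumptions are not specified here, and the moment arms below are made-up numbers):

    ```python
    # One moment-balance equation about the knee, five unknown muscle forces.
    import numpy as np

    moment_arms = np.array([[0.035, 0.030, 0.028, 0.032, 0.045]])  # metres
    external_moment = np.array([30.0])                             # N*m

    # lstsq returns the minimum-norm solution of the underdetermined system.
    forces, *_ = np.linalg.lstsq(moment_arms, external_moment, rcond=None)
    print(forces)   # one admissible distribution of muscle forces
    ```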

  16. Effectiveness of Video Modeling Provided by Mothers in Teaching Play Skills to Children with Autism

    ERIC Educational Resources Information Center

    Besler, Fatma; Kurt, Onur

    2016-01-01

    Video modeling is an evidence-based practice that can be used to provide instruction to individuals with autism. Studies show that this instructional practice is effective in teaching many types of skills such as self-help skills, social skills, and academic skills. However, in previous studies, videos used in the video modeling process were…

  17. Hydrological time series modeling: A comparison between adaptive neuro-fuzzy, neural network and autoregressive techniques

    NASA Astrophysics Data System (ADS)

    Lohani, A. K.; Kumar, Rakesh; Singh, R. D.

    2012-06-01

    Time series modeling is necessary for the planning and management of reservoirs. More recently, soft computing techniques have been used in hydrological modeling and forecasting. In this study, the potential of artificial neural networks and neuro-fuzzy systems in monthly reservoir inflow forecasting is examined by developing and comparing monthly reservoir inflow prediction models based on autoregressive (AR) techniques, artificial neural networks (ANNs), and an adaptive neural-based fuzzy inference system (ANFIS). To account for the effect of monthly periodicity in the flow data, cyclic terms are also included in the ANN and ANFIS models. Working with time series flow data of the Sutlej River at Bhakra Dam, India, several ANN and adaptive neuro-fuzzy models are trained with different input vectors. To evaluate the performance of the selected ANN and ANFIS models, comparison is made with the AR models. The ANFIS model trained with an input vector including previous inflows and cyclic terms of monthly periodicity shows a significant improvement in forecast accuracy over the ANFIS models trained with input vectors considering only previous inflows. In all cases ANFIS gives more accurate forecasts than the AR and ANN models. The proposed ANFIS model coupled with cyclic terms thus provides a better representation of monthly inflows for the planning and operation of the reservoir.
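
    The input construction (previous inflows plus cyclic terms encoding monthly periodicity) is easy to sketch. Here a linear AR baseline stands in for the ANN/ANFIS models, and the inflow series is synthetic:

    ```python
    # Monthly inflow forecast from lagged inflows + sin/cos cyclic terms.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    months = np.arange(360)
    inflow = 100 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 360)

    lag1, lag2 = inflow[1:-1], inflow[:-2]
    m = months[2:]
    cyclic = np.column_stack([np.sin(2 * np.pi * m / 12), np.cos(2 * np.pi * m / 12)])
    X = np.column_stack([lag1, lag2, cyclic])   # previous inflows + cyclic terms
    y = inflow[2:]

    model = LinearRegression().fit(X[:300], y[:300])
    print("test R^2:", model.score(X[300:], y[300:]))
    ```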

  18. A Biologically Inspired Computational Model of Basal Ganglia in Action Selection.

    PubMed

    Baston, Chiara; Ursino, Mauro

    2015-01-01

    The basal ganglia (BG) are a subcortical structure implicated in action selection. The aim of this work is to present a new cognitive neuroscience model of the BG, which aspires to represent a parsimonious balance between simplicity and completeness. The model includes the three main pathways operating in the BG circuitry, that is, the direct (Go), indirect (NoGo), and hyperdirect pathways. The main original aspects, compared with previous models, are the use of a two-term Hebb rule to train synapses in the striatum, based exclusively on neuronal activity changes caused by dopamine peaks or dips, and the role of the cholinergic interneurons (themselves affected by dopamine) during learning. Several paradigmatic cases are illustrated: action selection in basal conditions, action selection in the presence of a strong conflict (where the role of the hyperdirect pathway emerges), synapse changes induced by phasic dopamine, and learning new actions based on a previous history of rewards and punishments. Finally, simulations show the model working under altered dopamine levels, to illustrate pathological cases (dopamine depletion in parkinsonian subjects or dopamine hypermedication). Due to its parsimonious approach, the model may represent a straightforward tool to analyze BG functionality in behavioral experiments.
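
    A toy sketch of the dopamine-gated Hebbian update described above, with opposite signs in the Go and NoGo pathways; the learning rate and the pathway sign convention are illustrative assumptions, not the paper's exact rule:

    ```python
    # Striatal Go/NoGo weight updates driven by dopamine peaks (+) and dips (-).
    import numpy as np

    def update_striatum(w_go, w_nogo, pre, post_go, post_nogo, dopamine, lr=0.05):
        """dopamine > 0 is a peak (reward), dopamine < 0 a dip (punishment)."""
        w_go += lr * dopamine * pre * post_go       # peaks strengthen Go
        w_nogo -= lr * dopamine * pre * post_nogo   # dips strengthen NoGo
        return np.clip(w_go, 0, 1), np.clip(w_nogo, 0, 1)

    w_go, w_nogo = 0.5, 0.5
    for reward in [1, 1, -1, 1]:                    # a short reward/punishment history
        w_go, w_nogo = update_striatum(w_go, w_nogo, pre=1.0,
                                       post_go=1.0, post_nogo=1.0, dopamine=reward)
    print(w_go, w_nogo)
    ```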

  19. The Priority Inversion Problem and Real-Time Symbolic Model Checking

    DTIC Science & Technology

    1993-04-23

    …real-time systems unpredictable in subtle ways. This makes it more difficult to implement and debug such systems. Our work discusses this problem and presents one possible solution. The solution is formalized and verified using temporal logic model checking techniques. In order to perform the verification, the BDD-based symbolic model checking algorithm given in previous works was extended to handle real-time properties using the bounded until operator. We believe that this algorithm, which is based on discrete time, is able to handle many real-time properties.

  20. Site selection model for new metro stations based on land use

    NASA Astrophysics Data System (ADS)

    Zhang, Nan; Chen, Xuewu

    2015-12-01

    Since the construction of a metro system generally lags behind the development of urban land use, the sites of metro stations should adapt to their surroundings, an issue rarely discussed in previous research on station layout. This paper proposes a new site selection model to find the best location for a metro station, establishing an indicator system based on land use and combining AHP with the entropy weight method to rank the candidate schemes. The feasibility and efficiency of this model have been validated by evaluating Nanjing Shengtai Road station and other potential sites.
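
    The entropy weight method mentioned above assigns higher weight to indicators that vary more across candidate sites. A self-contained sketch with a made-up site-by-indicator matrix:

    ```python
    # Entropy weights + weighted scoring of candidate metro station sites.
    import numpy as np

    X = np.array([[0.7, 0.4, 0.9],     # rows: candidate sites
                  [0.5, 0.8, 0.6],     # cols: land-use indicators
                  [0.9, 0.3, 0.7]])

    P = X / X.sum(axis=0)                            # normalise each indicator
    k = 1.0 / np.log(X.shape[0])
    entropy = -k * (P * np.log(P)).sum(axis=0)       # entropy per indicator
    weights = (1 - entropy) / (1 - entropy).sum()    # low entropy -> high weight
    scores = X @ weights                             # weighted site scores
    print(weights, scores.argmax())                  # best-ranked site index
    ```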

  1. Comprehensive European dietary exposure model (CEDEM) for food additives.

    PubMed

    Tennant, David R

    2016-05-01

    European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. The proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The approach can reproduce the estimates of food additive exposure provided in previous EFSA scientific opinions, which were based on the full European food consumption database.
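
    The deterministic core of such intake methods is a sum over food categories of consumption times additive use level, scaled to body weight. A toy sketch with invented numbers (not EFSA's categories or levels):

    ```python
    # Deterministic additive exposure: sum(consumption * use level) / body weight.
    FOODS = {                       # g/day consumed, mg additive per kg food
        "soft drinks": (300, 150),
        "desserts":    (80, 300),
        "sauces":      (25, 500),
    }

    def daily_exposure_mg_per_kg_bw(body_weight_kg=70):
        total_mg = sum(g / 1000 * level for g, level in FOODS.values())
        return total_mg / body_weight_kg

    print(daily_exposure_mg_per_kg_bw())   # mg additive / kg bw / day
    ```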

  2. AptRank: an adaptive PageRank model for protein function prediction on bi-relational graphs.

    PubMed

    Jiang, Biaobin; Kloster, Kyle; Gleich, David F; Gribskov, Michael

    2017-06-15

    Diffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood-based and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model. We first construct a bi-relational graph (Birg) model comprising both the protein-protein association network and the function-function hierarchical network. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is a direct application of traditional PageRank with fixed decay parameters, whereas AptRank utilizes an adaptive diffusion mechanism to improve on BirgRank. We evaluate the ability of both methods to predict protein function on yeast, fly and human protein datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design four different validation strategies: missing function prediction, de novo function prediction, guided function prediction and newly discovered function prediction, to comprehensively evaluate the predictability of all six methods. We find that both BirgRank and AptRank outperform the previous methods, especially in missing function prediction when using only 10% of the data for training. The MATLAB code is available at https://github.rcac.purdue.edu/mgribsko/aptrank. Contact: gribskov@purdue.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
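
    The diffusion step behind BirgRank can be sketched as personalized PageRank on a block two-layer adjacency matrix (protein-protein block, GO-hierarchy block, and annotation links). The tiny matrices below are made up for illustration:

    ```python
    # Personalized PageRank by power iteration on a two-layer graph.
    import numpy as np

    def pagerank(A, seed, alpha=0.85, iters=100):
        W = A / np.maximum(A.sum(axis=0, keepdims=True), 1e-12)  # column-normalise
        r = seed / seed.sum()
        for _ in range(iters):
            r = alpha * W @ r + (1 - alpha) * seed / seed.sum()
        return r

    # Block two-layer graph: 3 proteins + 2 GO terms.
    PPI = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)   # protein-protein
    GO  = np.array([[0, 1], [1, 0]], float)                    # function hierarchy
    ANN = np.array([[1, 0], [0, 1], [0, 0]], float)            # known annotations
    A = np.block([[PPI, ANN], [ANN.T, GO]])

    seed = np.array([1, 0, 0, 0, 0], float)                    # query protein
    scores = pagerank(A, seed)
    print(scores[3:])   # diffusion scores over GO terms for the query protein
    ```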

  3. Direct coal liquefaction baseline design and system analysis. Quarterly report, January--March 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-04-01

    The primary objective of the study is to develop a computer model for a baseline direct coal liquefaction design based on two-stage direct-coupled catalytic reactors. This primary objective is to be accomplished by completing the following: a baseline design based on previous DOE/PETC results from the Wilsonville pilot plant and other engineering evaluations; a cost estimate and economic analysis; a computer model incorporating the above two steps over a wide range of capacities and selected process alternatives; a comprehensive training program for DOE/PETC staff to understand and use the computer model; a thorough documentation of all underlying assumptions for baseline economics; and a user manual and training material which will facilitate updating of the model in the future.

  4. Direct coal liquefaction baseline design and system analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-04-01

    The primary objective of the study is to develop a computer model for a baseline direct coal liquefaction design based on two-stage direct-coupled catalytic reactors. This primary objective is to be accomplished by completing the following: a baseline design based on previous DOE/PETC results from the Wilsonville pilot plant and other engineering evaluations; a cost estimate and economic analysis; a computer model incorporating the above two steps over a wide range of capacities and selected process alternatives; a comprehensive training program for DOE/PETC staff to understand and use the computer model; a thorough documentation of all underlying assumptions for baseline economics; and a user manual and training material which will facilitate updating of the model in the future.

  5. Cognitive Components Underpinning the Development of Model-Based Learning

    PubMed Central

    Potter, Tracey C.S.; Bryce, Nessa V.; Hartley, Catherine A.

    2016-01-01

    Reinforcement learning theory distinguishes “model-free” learning, which fosters reflexive repetition of previously rewarded actions, from “model-based” learning, which recruits a mental model of the environment to flexibly select goal-directed actions. Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9–25, we examined whether the abilities to infer sequential regularities in the environment (“statistical learning”), maintain information in an active state (“working memory”) and integrate distant concepts to solve problems (“fluid reasoning”) predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning. PMID:27825732

  6. Complete Galilean-Invariant Lattice BGK Models for the Navier-Stokes Equation

    NASA Technical Reports Server (NTRS)

    Qian, Yue-Hong; Zhou, Ye

    1998-01-01

    Galilean invariance has been an important issue in lattice-based hydrodynamics models. Previous models concentrated on the nonlinear advection term. In this paper, we take into account the nonlinear response effect in a systematic way. Using the Chapman-Enskog expansion up to second order, complete Galilean invariant lattice BGK models in one dimension (theta = 3) and two dimensions (theta = 1) for the Navier-Stokes equation have been obtained.

  7. Development and validation of a new population-based simulation model of osteoarthritis in New Zealand.

    PubMed

    Wilson, R; Abbott, J H

    2018-04-01

    To describe the construction and preliminary validation of a new population-based microsimulation model developed to analyse the health and economic burden and cost-effectiveness of treatments for knee osteoarthritis (OA) in New Zealand (NZ). We developed the New Zealand Management of Osteoarthritis (NZ-MOA) model, a discrete-time state-transition microsimulation model of the natural history of radiographic knee OA. In this article, we report on the model structure, derivation of input data, validation of baseline model parameters against external data sources, and validation of model outputs by comparison of the predicted population health loss with previous estimates. The NZ-MOA model simulates both the structural progression of radiographic knee OA and the stochastic development of multiple disease symptoms. Input parameters were sourced from NZ population-based data where possible, and from international sources where NZ-specific data were not available. The predicted distributions of structural OA severity and health utility detriments associated with OA were externally validated against other sources of evidence, and uncertainty resulting from key input parameters was quantified. The resulting lifetime and current population health-loss burden was consistent with estimates of previous studies. The new NZ-MOA model provides reliable estimates of the health loss associated with knee OA in the NZ population. The model structure is suitable for analysis of the effects of a range of potential treatments, and will be used in future work to evaluate the cost-effectiveness of recommended interventions within the NZ healthcare system. Copyright © 2018 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
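
    A discrete-time state-transition microsimulation of the kind described above can be sketched compactly: each simulated person advances yearly through severity states according to transition probabilities. The transition matrix below is illustrative, not the NZ-MOA model's calibrated one:

    ```python
    # Yearly Markov microsimulation over radiographic OA severity states.
    import numpy as np

    states = ["none", "mild", "moderate", "severe"]
    P = np.array([[0.97, 0.03, 0.00, 0.00],
                  [0.00, 0.94, 0.06, 0.00],
                  [0.00, 0.00, 0.92, 0.08],
                  [0.00, 0.00, 0.00, 1.00]])   # rows sum to 1; "severe" absorbing

    rng = np.random.default_rng(0)
    n, years = 10_000, 40
    state = np.zeros(n, dtype=int)             # everyone starts without OA
    for _ in range(years):
        u = rng.random(n)
        cum = P[state].cumsum(axis=1)
        state = (u[:, None] > cum).sum(axis=1)  # categorical draw per person
    print({s: float(np.mean(state == i)) for i, s in enumerate(states)})
    ```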

  8. Modeling the impact of prostate edema on LDR brachytherapy: a Monte Carlo dosimetry study based on a 3D biphasic finite element biomechanical model.

    PubMed

    Mountris, K A; Bert, J; Noailly, J; Aguilera, A Rodriguez; Valeri, A; Pradier, O; Schick, U; Promayon, E; Ballester, M A Gonzalez; Troccaz, J; Visvikis, D

    2017-03-21

    Prostate volume changes due to edema occurrence during transperineal permanent brachytherapy should be taken into consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations. Therefore, patient-specific models need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using the biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic area tissues in the evolution of prostate edema. The model's computed deformations are incorporated in a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day1 and Day30 dosimetry results demonstrates the capability of the proposed model for patient-specific dosimetry improvements that take the edema dynamics into account. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated against previous findings. According to our results, for a prostate volume increase of 10-20% the Day30 urethra D10 dose metric is higher by 4.2%-10.5% compared to the Day1 value. Introducing the edema dynamics into Day30 dosimetry reveals a significant global dose overestimation in the conventional static Day30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy by accounting for post-implant dose alterations during the planning procedure.

  9. Real-time monitoring of high-gravity corn mash fermentation using in situ raman spectroscopy.

    PubMed

    Gray, Steven R; Peretti, Steven W; Lamb, H Henry

    2013-06-01

    In situ Raman spectroscopy was employed for real-time monitoring of simultaneous saccharification and fermentation (SSF) of corn mash by an industrial strain of Saccharomyces cerevisiae. An accurate univariate calibration model for ethanol was developed based on the very strong 883 cm(-1) C-C stretching band. Multivariate partial least squares (PLS) calibration models for total starch, dextrins, maltotriose, maltose, glucose, and ethanol were developed using data from eight batch fermentations and validated using predictions for a separate batch. The starch, ethanol, and dextrins models showed significantly improved predictions when the calibration data were divided into separate high- and low-concentration sets. Collinearity between the ethanol and starch models was avoided by excluding regions containing strong ethanol peaks from the starch model and, conversely, excluding regions containing strong saccharide peaks from the ethanol model. The two-set calibration models for starch (R(2) = 0.998, percent error = 2.5%) and ethanol (R(2) = 0.999, percent error = 2.1%) provide more accurate predictions than any previously published spectroscopic models. Glucose, maltose, and maltotriose are modeled with accuracy comparable to previous work on less complex fermentation processes. Our results demonstrate that Raman spectroscopy is capable of real-time in situ monitoring of a complex industrial biomass fermentation. To our knowledge, this is the first PLS-based chemometric modeling of corn mash fermentation under typical industrial conditions, and the first Raman-based monitoring of a fermentation process with glucose, oligosaccharides, and polysaccharides present. Copyright © 2013 Wiley Periodicals, Inc.
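
    The PLS calibration workflow is straightforward to sketch with scikit-learn; the spectra below are synthetic, with one channel standing in for an analyte band (not real Raman data):

    ```python
    # PLS regression mapping spectra to an analyte concentration.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(80, 600))                  # 80 spectra, 600 channels
    ethanol = 5.0 + 2.0 * spectra[:, 283] + rng.normal(0, 0.05, 80)

    pls = PLSRegression(n_components=5).fit(spectra[:60], ethanol[:60])
    pred = pls.predict(spectra[60:]).ravel()
    mape = np.mean(np.abs((pred - ethanol[60:]) / ethanol[60:])) * 100
    print(f"percent error ~ {mape:.1f}%")
    ```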

  10. Bayesian population analysis of a washin-washout physiologically based pharmacokinetic model for acetone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moerk, Anna-Karin, E-mail: anna-karin.mork@ki.s; Jonsson, Fredrik; Pharsight, a Certara company, St. Louis, MO

    2009-11-01

    The aim of this study was to derive improved estimates of population variability and uncertainty of physiologically based pharmacokinetic (PBPK) model parameters, especially of those related to the washin-washout behavior of polar volatile substances. This was done by optimizing a previously published washin-washout PBPK model for acetone in a Bayesian framework using Markov chain Monte Carlo simulation. The sensitivity of the model parameters was investigated by creating four different prior sets, where the uncertainty surrounding the population variability of the physiological model parameters was given values corresponding to coefficients of variation of 1%, 25%, 50%, and 100%, respectively. The PBPK model was calibrated to toxicokinetic data from two previous studies where 18 volunteers were exposed to 250-550 ppm of acetone at various levels of workload. The updated PBPK model provided a good description of the concentrations in arterial, venous, and exhaled air. The precision of most of the model parameter estimates was improved. New information was particularly gained on the population distribution of the parameters governing the washin-washout effect. The results presented herein provide a good starting point to estimate the target dose of acetone in the working and general populations for risk assessment purposes.

  11. Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex.

    PubMed

    Ulloa, Antonio; Horwitz, Barry

    2016-01-01

    A number of recent efforts have used large-scale, biologically realistic neural models to help understand the neural basis for the patterns of activity observed in both resting-state and task-related functional neuroimaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole-brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations representing primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were "non-task-specific" (NS) neurons that served as noise generators to "task-specific" neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome, with reciprocal connections between TVB nodes and our task-based modules. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated neural and fMRI activity equivalent to that of the original task-based models. Additionally, we found partial agreement between the functional connectivities of the hybrid LSNM/TVB model and the original LSNM. Our framework thus presents a way to embed task-based neural models into the TVB platform, enabling a better comparison between empirical and computational data, which in turn can lead to a better understanding of how interacting neural populations give rise to human cognitive behaviors.

  12. ALC: automated reduction of rule-based models

    PubMed Central

    Koschorreck, Markus; Gilles, Ernst Dieter

    2008-01-01

    Background: Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction, since the association of a few proteins can give rise to an enormous number of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results: ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion: ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705

  13. Crop weather models of corn and soybeans for Agrophysical Units (APU's) in Iowa using monthly meteorological predictors

    NASA Technical Reports Server (NTRS)

    Leduc, S. (Principal Investigator)

    1982-01-01

    Models based on multiple regression were developed to estimate corn and soybean yield from weather data for agrophysical units (APUs) in Iowa. The predictor variables are derived from monthly average temperature and monthly total precipitation data at meteorological stations in the cooperative network. The models are similar in form to the previous models developed for crop reporting districts (CRDs): the trends and derived variables were the same, and the approach to selecting the significant predictors was similar to that used in developing the CRD models. The APUs were selected to be more homogeneous with respect to crop production than the CRDs. The APU models are quite similar to the CRD models, with similar explained variation and numbers of predictor variables. The APU models are to be independently evaluated and compared to the previously evaluated CRD models; that comparison should indicate the preferred model area for this application, i.e., APU or CRD.

  14. A six-parameter Iwan model and its application

    NASA Astrophysics Data System (ADS)

    Li, Yikun; Hao, Zhiming

    2016-02-01

    The Iwan model is a practical tool to describe the constitutive behavior of joints. In this paper, a six-parameter Iwan model based on a truncated power-law distribution with two Dirac delta functions is proposed, which gives a more comprehensive description of joints than previous Iwan models. Its analytical expressions, including the backbone curve, unloading curves and energy dissipation, are deduced. Parameter identification procedures and the discretization method are also provided. A model application based on Segalman et al.'s experimental work with bolted joints is carried out. Simulation effects of different numbers of Jenkins elements are discussed. The results indicate that the six-parameter Iwan model can accurately reproduce the experimental phenomena of joints.
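
    As background to the discretization mentioned above: an Iwan model is a parallel bank of Jenkins elements (a linear spring in series with a Coulomb slider), and discretizing the slip-strength distribution gives the joint's hysteretic force. A minimal sketch with an illustrative uniform strength grid (not the paper's six-parameter distribution):

    ```python
    # Quasi-static force response of a parallel Jenkins-element bank.
    import numpy as np

    def iwan_force(u_history, k=1.0, slip_strengths=np.linspace(0.1, 2.0, 50)):
        x = np.zeros_like(slip_strengths)          # slider positions
        forces = []
        for u in u_history:
            f_el = k * (u - x)                     # elastic force in each element
            stuck = np.abs(f_el) <= slip_strengths
            # Sliding elements relocate so that |force| equals the slip strength.
            x = np.where(stuck, x, u - np.sign(f_el) * slip_strengths / k)
            forces.append(np.sum(k * (u - x)))
        return np.array(forces)

    u = np.concatenate([np.linspace(0, 3, 50), np.linspace(3, -3, 100)])
    print(iwan_force(u)[:5])   # loading branch of the hysteresis loop
    ```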

  15. An Examination of Learning Profiles in Physical Education

    ERIC Educational Resources Information Center

    Shen, Bo; Chen, Ang

    2007-01-01

    Using the model of domain learning as a theoretical framework, the study was designed to examine the extent to which learners' initial learning profiles based on previously acquired knowledge, learning strategy application, and interest-based motivation were distinctive in learning softball. Participants were 177 sixth-graders from three middle…

  16. Establishing Verbal Repertoires in Children with Autism Using Function-Based Video Modeling

    ERIC Educational Resources Information Center

    Plavnick, Joshua B.; Ferreri, Summer J.

    2011-01-01

    Previous research suggests that language-training procedures for children with autism might be enhanced following an assessment of conditions that evoke emerging verbal behavior. The present investigation examined a methodology to teach recognizable mands based on environmental variables known to evoke participants' idiosyncratic communicative…

  17. Learning from Avatars: Learning Assistants Practice Physics Pedagogy in a Classroom Simulator

    ERIC Educational Resources Information Center

    Chini, Jacquelyn J.; Straub, Carrie L.; Thomas, Kevin H.

    2016-01-01

    Undergraduate students are increasingly being used to support course transformations that incorporate research-based instructional strategies. While such students are typically selected based on strong content knowledge and possible interest in teaching, they often do not have previous pedagogical training. The current training models make use of…

  18. The nearly neutral and selection theories of molecular evolution under the fisher geometrical framework: substitution rate, population size, and complexity.

    PubMed

    Razeto-Barry, Pablo; Díaz, Javier; Vásquez, Rodrigo A

    2012-06-01

    The general theories of molecular evolution depend on relatively arbitrary assumptions about the relative distribution and rate of advantageous, deleterious, neutral, and nearly neutral mutations. The Fisher geometrical model (FGM) has been used to make distributions of mutations biologically interpretable. We explored an FGM-based molecular model to represent molecular evolutionary processes typically studied by nearly neutral and selection models, but in which distributions and relative rates of mutations with different selection coefficients are a consequence of biologically interpretable parameters, such as the average size of the phenotypic effect of mutations and the number of traits (complexity) of organisms. A variant of the FGM-based model that we called the static regime (SR) represents evolution as a nearly neutral process in which substitution rates are determined by a dynamic substitution process in which the population's phenotype remains around a suboptimum equilibrium fitness produced by a balance between slightly deleterious and slightly advantageous compensatory substitutions. As in previous nearly neutral models, the SR predicts a negative relationship between molecular evolutionary rate and population size; however, SR does not have the unrealistic properties of previous nearly neutral models such as the narrow window of selection strengths in which they work. In addition, the SR suggests that compensatory mutations cannot explain the high rate of fixations driven by positive selection currently found in DNA sequences, contrary to what has been previously suggested. We also developed a generalization of SR in which the optimum phenotype can change stochastically due to environmental or physiological shifts, which we called the variable regime (VR). VR models evolution as an interplay between adaptive processes and nearly neutral steady-state processes. When strong environmental fluctuations are incorporated, the process becomes a selection model in which evolutionary rate does not depend on population size, but is critically dependent on the complexity of organisms and mutation size. For SR as well as VR we found that key parameters of molecular evolution are linked by biological factors, and we showed that they cannot be fixed independently by arbitrary criteria, as has usually been assumed in previous molecular evolutionary models.

  19. The Nearly Neutral and Selection Theories of Molecular Evolution Under the Fisher Geometrical Framework: Substitution Rate, Population Size, and Complexity

    PubMed Central

    Razeto-Barry, Pablo; Díaz, Javier; Vásquez, Rodrigo A.

    2012-01-01

    The general theories of molecular evolution depend on relatively arbitrary assumptions about the relative distribution and rate of advantageous, deleterious, neutral, and nearly neutral mutations. The Fisher geometrical model (FGM) has been used to make distributions of mutations biologically interpretable. We explored an FGM-based molecular model to represent molecular evolutionary processes typically studied by nearly neutral and selection models, but in which distributions and relative rates of mutations with different selection coefficients are a consequence of biologically interpretable parameters, such as the average size of the phenotypic effect of mutations and the number of traits (complexity) of organisms. A variant of the FGM-based model that we called the static regime (SR) represents evolution as a nearly neutral process in which substitution rates are determined by a dynamic substitution process in which the population’s phenotype remains around a suboptimum equilibrium fitness produced by a balance between slightly deleterious and slightly advantageous compensatory substitutions. As in previous nearly neutral models, the SR predicts a negative relationship between molecular evolutionary rate and population size; however, SR does not have the unrealistic properties of previous nearly neutral models such as the narrow window of selection strengths in which they work. In addition, the SR suggests that compensatory mutations cannot explain the high rate of fixations driven by positive selection currently found in DNA sequences, contrary to what has been previously suggested. We also developed a generalization of SR in which the optimum phenotype can change stochastically due to environmental or physiological shifts, which we called the variable regime (VR). VR models evolution as an interplay between adaptive processes and nearly neutral steady-state processes. When strong environmental fluctuations are incorporated, the process becomes a selection model in which evolutionary rate does not depend on population size, but is critically dependent on the complexity of organisms and mutation size. For SR as well as VR we found that key parameters of molecular evolution are linked by biological factors, and we showed that they cannot be fixed independently by arbitrary criteria, as has usually been assumed in previous molecular evolutionary models. PMID:22426879
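
    The Fisher geometrical model machinery underlying the two records above is easy to simulate: mutations are random displacements in an n-trait phenotype space, and each mutation's selection coefficient is the induced change in a Gaussian fitness function around the optimum. All parameter values below are illustrative:

    ```python
    # Distribution of mutation selection coefficients under the FGM.
    import numpy as np

    def fgm_selection_coefficients(n_traits=10, mut_size=0.1, z0_dist=1.0,
                                   n_mut=100_000, seed=0):
        rng = np.random.default_rng(seed)
        z0 = np.zeros(n_traits)
        z0[0] = z0_dist                      # phenotype at distance z0_dist from optimum
        w = lambda z: np.exp(-0.5 * np.sum(z ** 2, axis=-1))  # Gaussian fitness
        dz = rng.normal(0, mut_size / np.sqrt(n_traits), size=(n_mut, n_traits))
        return w(z0 + dz) / w(z0) - 1.0      # selection coefficient of each mutation

    s = fgm_selection_coefficients()
    print("fraction advantageous:", (s > 0).mean())
    print("mean |s|:", np.abs(s).mean())
    ```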

  20. On the exact solvability of the anisotropic central spin model: An operator approach

    NASA Astrophysics Data System (ADS)

    Wu, Ning

    2018-07-01

    Using an operator approach based on a commutator scheme that has previously been applied to Richardson's reduced BCS model and the inhomogeneous Dicke model, we obtain general exact solvability requirements for an anisotropic central spin model with XXZ-type hyperfine coupling between the central spin and the spin bath, without any prior knowledge of the integrability of the model. We outline the basic steps of the operator approach, and pedagogically summarize them into two lemmas and two constraints. Through a step-by-step construction of the eigenproblem, we show that the condition g'_j^2 - g_j^2 = c naturally arises for the model to be exactly solvable, where c is a constant independent of the bath-spin index j, and {g_j} and {g'_j} are the longitudinal and transverse hyperfine interactions, respectively. The obtained conditions and the resulting Bethe ansatz equations are consistent with previous literature.

  1. Cluster based architecture and network maintenance protocol for medical priority aware cognitive radio based hospital.

    PubMed

    Al Mamoon, Ishtiak; Muzahidul Islam, A K M; Baharun, Sabariah; Ahmed, Ashir; Komaki, Shozo

    2016-08-01

    Due to the rapid growth of wireless medical devices in the near future, wireless healthcare services may face some inescapable issues such as medical spectrum scarcity, electromagnetic interference (EMI), bandwidth constraints, security, and the medical data communication model. To mitigate these issues, cognitive radio (CR) or opportunistic radio network enabled wireless technology is suitable for the upcoming wireless healthcare system. Recent research on CR-based healthcare has produced some developments on the EMI and spectrum problems, but investigations of system design and network models for a CR-enabled hospital are rare. Thus, this research designs a hierarchy-based hybrid network architecture and network maintenance protocols for a previously proposed CR hospital system, known as CogMed. In the previous study, the detailed architecture of CogMed and its maintenance protocols were not presented. The proposed architecture includes clustering concepts for cognitive base stations and non-medical devices. Two cluster head (CH) selector equations are formulated based on the priority of location, device type, device mobility rate, and number of accessible channels. In order to maintain the integrity of the proposed network model, node joining and node leaving protocols are also proposed. Finally, simulation results show that the proposed network maintenance time is very low for emergency medical devices (average maintenance period 9.5 ms) and that the re-clustering effects for non-medical devices with different mobility are also balanced.

  2. Temporal Variability of Cumulative Risk Assessment on Phthalates in Chinese Pregnant Women: Repeated Measurement Analysis.

    PubMed

    Gao, Hui; Zhu, Bei-Bei; Tao, Xing-Yong; Zhu, Yuan-Duo; Tao, Xu-Guang; Tao, Fang-Biao

    2018-06-05

    The assessment of the combined effects of multiple phthalate exposures at low levels is a newly developed concept that avoids underestimating the actual cumulative health risk. A previous study included 3455 Chinese pregnant women, each providing up to three urine samples (9529 in total), and characterized the concentrations of phthalate metabolites. In the present study, the data from the 9529 samples were reanalyzed to examine the cumulative risk assessment (CRA) with two models: (1) creatinine-based and (2) volume-based. Hazard index (HI) values for three phthalates (dibutyl phthalate, butyl benzyl phthalate, and di(2-ethylhexyl) phthalate) were calculated for the first, second, and third trimesters of pregnancy. In the creatinine-based model, 3.43%, 14.63%, and 17.28% of women showed an HI, based on the European Food Safety Authority tolerable daily intake, exceeding 1 in the first, second, and third trimesters of pregnancy, respectively. The intraclass correlation coefficient of HI was 0.49 (95% confidence interval: 0.46-0.53). Spearman correlations between the HI of the creatinine model and the ∑androgen disruptor (a potency-weighted approach) ranged from 0.824 to 0.984. In summary, this study suggests a considerable risk of cumulative exposure to phthalates throughout gestation in Chinese pregnant women. In addition, the moderate temporal reproducibility indicates that a single HI, estimated from the phthalate concentrations in a single spot urine sample, seems representative of the CRA across the whole pregnancy. Finally, the strong correlation between the HI of the creatinine model and the ∑androgen disruptor suggests that the creatinine-based model is more appropriate for evaluating the CRA.
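
    The hazard index itself is a simple sum of intake-to-reference ratios. A sketch with illustrative intakes (the TDI values follow the EFSA-style magnitudes cited in this literature, but should be treated as placeholders, not the study's inputs):

    ```python
    # HI = sum_i (estimated daily intake_i / TDI_i); HI > 1 flags cumulative risk.
    TDI_UG_PER_KG_DAY = {"DBP": 10.0, "BBzP": 500.0, "DEHP": 50.0}

    def hazard_index(daily_intake_ug_per_kg):
        return sum(daily_intake_ug_per_kg[p] / TDI_UG_PER_KG_DAY[p]
                   for p in TDI_UG_PER_KG_DAY)

    print(hazard_index({"DBP": 6.0, "BBzP": 30.0, "DEHP": 25.0}))  # -> 1.16 > 1
    ```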

  3. Nowcasting Intraseasonal Recreational Fishing Harvest with Internet Search Volume

    PubMed Central

    Carter, David W.; Crosson, Scott; Liese, Christopher

    2015-01-01

    Estimates of recreational fishing harvest are often unavailable until after a fishing season has ended. This lag in information complicates efforts to stay within the quota. The simplest way to monitor quota within the season is to use harvest information from the previous year. This works well when fishery conditions are stable, but is inaccurate when fishery conditions are changing. We develop regression-based models to “nowcast” intraseasonal recreational fishing harvest in the presence of changing fishery conditions. Our basic model accounts for seasonality, changes in the fishing season, and important events in the fishery. Our extended model uses Google Trends data on the internet search volume relevant to the fishery of interest. We demonstrate the model with the Gulf of Mexico red snapper fishery where the recreational sector has exceeded the quota nearly every year since 2007. Our results confirm that data for the previous year works well to predict intraseasonal harvest for a year (2012) where fishery conditions are consistent with historic patterns. However, for a year (2013) of unprecedented harvest and management activity our regression model using search volume for the term “red snapper season” generates intraseasonal nowcasts that are 27% more accurate than the basic model without the internet search information and 29% more accurate than the prediction based on the previous year. Reliable nowcasts of intraseasonal harvest could make in-season (or in-year) management feasible and increase the likelihood of staying within quota. Our nowcasting approach using internet search volume might have the potential to improve quota management in other fisheries where conditions change year-to-year. PMID:26348645
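
    The basic-versus-augmented regression comparison described above can be sketched with synthetic stand-ins for the harvest and search-volume series (the real study uses Google Trends data for "red snapper season"):

    ```python
    # Nowcasting harvest: seasonal baseline vs. baseline + search volume.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    weeks = np.arange(104)
    season = np.sin(2 * np.pi * weeks / 52)
    search = season + rng.normal(0, 0.2, 104)            # search tracks effort
    harvest = 50 + 30 * season + 15 * search + rng.normal(0, 3, 104)

    X_basic = season.reshape(-1, 1)
    X_search = np.column_stack([season, search])
    for name, X in [("basic", X_basic), ("with search volume", X_search)]:
        m = LinearRegression().fit(X[:52], harvest[:52])
        err = np.mean(np.abs(m.predict(X[52:]) - harvest[52:]))
        print(f"{name}: mean abs nowcast error = {err:.1f}")
    ```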

  4. A formal approach to the analysis of clinical computer-interpretable guideline modeling languages.

    PubMed

    Grando, M Adela; Glasspool, David; Fox, John

    2012-01-01

    To develop proof strategies to formally study the expressiveness of workflow-based languages, and to investigate their applicability to clinical computer-interpretable guideline (CIG) modeling languages. We propose two strategies for studying the expressiveness of workflow-based languages, based on a standard set of workflow patterns expressed as Petri nets (PNs) and on notions of congruence and bisimilarity from process calculus. Proof that a PN-based pattern P can be expressed in a language L can be carried out semi-automatically. Proof that a language L cannot provide the behavior specified by a PN-based pattern P requires proof by exhaustion based on an analysis of cases and cannot be performed automatically. The proof strategies are generic, but we exemplify their use with a particular CIG modeling language, PROforma. To illustrate the method we evaluate the expressiveness of PROforma against three standard workflow patterns and compare our results with a previous, similar but informal comparison. We show that the two proof strategies are effective in evaluating a CIG modeling language against standard workflow patterns. Using the proposed formal techniques we obtain different results from a comparable, previously published but less formal study. We discuss the utility of these analyses as the basis for principled extensions to CIG modeling languages. Additionally, we explain how the same proof strategies can be reused to prove the satisfaction of patterns expressed in the declarative language CIGDec. The proof strategies we propose are useful tools for analysing the expressiveness of CIG modeling languages. This study provides good evidence of the benefits of applying formal methods of proof over semi-formal ones. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. A Physics-Inspired Mechanistic Model of Migratory Movement Patterns in Birds.

    PubMed

    Revell, Christopher; Somveille, Marius

    2017-08-29

    In this paper, we introduce a mechanistic model of migratory movement patterns in birds, inspired by ideas and methods from physics. Previous studies have shed light on the factors influencing bird migration but have mainly relied on statistical correlative analysis of tracking data. Our novel method offers a bottom up explanation of population-level migratory movement patterns. It differs from previous mechanistic models of animal migration and enables predictions of pathways and destinations from a given starting location. We define an environmental potential landscape from environmental data and simulate bird movement within this landscape based on simple decision rules drawn from statistical mechanics. We explore the capacity of the model by qualitatively comparing simulation results to the non-breeding migration patterns of a seabird species, the Black-browed Albatross (Thalassarche melanophris). This minimal, two-parameter model was able to capture remarkably well the previously documented migration patterns of the Black-browed Albatross, with the best combination of parameter values conserved across multiple geographically separate populations. Our physics-inspired mechanistic model could be applied to other bird and highly-mobile species, improving our understanding of the relative importance of various factors driving migration and making predictions that could be useful for conservation.
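
    The statistical-mechanics decision rule described above can be sketched as a walker on an environmental potential landscape choosing moves with Boltzmann weights. The potential grid and the temperature-like parameter below are illustrative, not the paper's fitted values:

    ```python
    # Boltzmann-weighted moves on a potential landscape (toy migration walker).
    import numpy as np

    rng = np.random.default_rng(0)
    potential = rng.normal(size=(50, 50))          # stand-in environmental potential

    def step(pos, beta=2.0):
        """Pick among 4-neighbour moves with weights exp(-beta * U)."""
        moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
        cands = [((pos[0] + di) % 50, (pos[1] + dj) % 50) for di, dj in moves]
        w = np.array([np.exp(-beta * potential[c]) for c in cands])
        return cands[rng.choice(4, p=w / w.sum())]

    pos = (25, 25)
    track = [pos]
    for _ in range(100):
        pos = step(pos)
        track.append(pos)
    print(track[-5:])   # end of the simulated pathway
    ```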

  6. A Parametric Approach to Numerical Modeling of TKR Contact Forces

    PubMed Central

    Lundberg, Hannah J.; Foucher, Kharma C.; Wimmer, Markus A.

    2009-01-01

    In vivo knee contact forces are difficult to determine using numerical methods because there are more unknown forces than equilibrium equations available. We developed parametric methods for computing contact forces across the knee joint during the stance phase of level walking. Three-dimensional contact forces were calculated at two points of contact between the tibia and the femur, one on the lateral aspect of the tibial plateau and one on the medial side. Muscle activations were parametrically varied over their physiologic range, resulting in a solution space of contact forces. The obtained solution space was reasonably small, and the resulting force pattern compared well with a previous model from the literature that used kinematics and external kinetics from the same patient. Peak forces of the parametric model and the previous model were similar for the first half of the stance phase but differed for the second half. The previous model did not take into account the transverse external moment about the knee and could not calculate muscle activation levels. Ultimately, the parametric model will result in more accurate contact force inputs for total knee simulators, as current inputs are generally not based on kinematics and kinetics from TKR patients. PMID:19155015
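
    The parametric idea, stripped to its essentials, can be shown with a planar two-contact equilibrium: sample muscle activations over their range, solve force and moment balance for the two contact forces, and keep the physically valid solutions. All geometry and force values below are hypothetical placeholders; the real model is three-dimensional.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simplified planar equilibrium at two tibiofemoral contact points.
    d_med, d_lat = 0.025, 0.025        # moment arms about the joint centre [m]
    F_ext, M_ext = 2000.0, 15.0        # external axial force [N] and moment [N m]
    muscle_arms = np.array([0.04, -0.03])      # hypothetical muscle moment arms [m]
    muscle_fmax = np.array([1500.0, 1200.0])   # hypothetical max muscle forces [N]

    solutions = []
    for _ in range(5000):
        act = rng.uniform(0.0, 1.0, size=2)    # parametric activation sample
        fm = act * muscle_fmax
        # Force balance:  F_med + F_lat = F_ext + sum(fm)
        # Moment balance: d_med*F_med - d_lat*F_lat = M_ext - sum(arms*fm)
        A = np.array([[1.0, 1.0], [d_med, -d_lat]])
        b = np.array([F_ext + fm.sum(), M_ext - muscle_arms @ fm])
        F = np.linalg.solve(A, b)
        if (F >= 0).all():                     # keep compressive (valid) solutions
            solutions.append(F)

    solutions = np.array(solutions)
    print("medial force range [N]:", solutions[:, 0].min(), solutions[:, 0].max())
    ```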

  7. HIV Model Parameter Estimates from Interruption Trial Data including Drug Efficacy and Reservoir Dynamics

    PubMed Central

    Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan

    2012-01-01

    Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
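
    A minimal sketch of the estimation strategy: a Metropolis sampler fit to a toy single-phase viral decay curve. The real study used a multi-phase ODE model, patient data, and nonlinear least squares for initialisation; the model, noise level, and proposal scales here are illustrative assumptions only.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy model: log10 viral load declines linearly, logV(t) = logV0 - delta*t.
    t = np.linspace(0, 20, 40)
    true_logV0, true_delta = 5.0, 0.35
    y_obs = true_logV0 - true_delta * t + rng.normal(0, 0.2, size=t.size)

    def loglike(theta):
        logV0, delta = theta
        if delta <= 0:
            return -np.inf
        pred = logV0 - delta * t
        return -0.5 * np.sum((y_obs - pred) ** 2 / 0.2 ** 2)

    # Metropolis sampler initialised near a least-squares-style guess.
    theta = np.array([4.5, 0.2])
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(0, [0.05, 0.01])
        if np.log(rng.uniform()) < loglike(prop) - loglike(theta):
            theta = prop
        chain.append(theta.copy())

    chain = np.array(chain[5000:])            # discard burn-in
    print("posterior mean (logV0, delta):", chain.mean(axis=0))
    ```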

  8. Reproducing the nonlinear dynamic behavior of a structured beam with a generalized continuum model

    NASA Astrophysics Data System (ADS)

    Vila, J.; Fernández-Sáez, J.; Zaera, R.

    2018-04-01

    In this paper we study the coupled axial-transverse nonlinear vibrations of a class of one-dimensional structured solids by application of the so-called Inertia Gradient Nonlinear continuum model. To show the accuracy of this axiomatic model, previously proposed by the authors, its predictions are compared with numerical results from a previously defined finite discrete chain of lumped masses and springs, for several numbers of particles. A continualization of the discrete model equations based on Taylor series allowed us to set equivalent values of the mechanical properties in both the discrete and axiomatic continuum models. Contrary to the classical continuum model, the inertia gradient nonlinear continuum model used herein is able to capture scale effects, which arise for modes in which the wavelength is comparable to the characteristic distance of the structured solid. The main conclusion of the work is that the proposed generalized continuum model captures the scale effects in both linear and nonlinear regimes, adequately reproducing the behavior of the 1D nonlinear discrete model.
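
    For context, continualization by Taylor series generically proceeds as follows (a schematic illustration, not the authors' exact equations): the neighbour displacements of a lumped-mass chain with spacing a are expanded about x, so the discrete equation of motion acquires higher-gradient terms whose coefficients carry the internal length scale.

    ```latex
    u_{n \pm 1}(t) = u(x \pm a, t)
                   = u \pm a\,u_x + \tfrac{a^2}{2}\,u_{xx} \pm \tfrac{a^3}{6}\,u_{xxx}
                     + \tfrac{a^4}{24}\,u_{xxxx} + \cdots

    m\,\ddot{u}_n = k\,(u_{n+1} - 2u_n + u_{n-1})
    \;\;\longrightarrow\;\;
    m\,u_{tt} = k a^2 \left( u_{xx} + \tfrac{a^2}{12}\,u_{xxxx} + \cdots \right)
    ```

    The O(a^4) term is the lowest-order source of the wavelength-dependent (scale) effects that gradient-type continuum models retain and the classical continuum limit discards.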

  9. Toward a Better Quantitative Understanding of Polar Stratospheric Ozone Loss

    NASA Technical Reports Server (NTRS)

    Frieler, K.; Rex, M.; Salawitch, R. J.; Canty, T.; Streibel, M.; Stimpfle, R. M.; Pfeilsticker, K.; Dorf, M.; Weisenstein, D. K.; Godin-Beekmann, S.

    2006-01-01

    Previous studies have shown that observed large O3 loss rates in cold Arctic Januaries cannot be explained with current understanding of the loss processes, recommended reaction kinetics, and standard assumptions about total stratospheric chlorine and bromine. Studies based on data collected during recent field campaigns suggest faster rates of photolysis and thermal decomposition of ClOOCl and higher stratospheric bromine concentrations than previously assumed. We show that a model accounting for these kinetic changes and higher levels of BrO can largely resolve the January Arctic O3 loss problem, closely reproducing observed Arctic O3 loss while remaining consistent with observed levels of ClO and ClOOCl. The model also suggests that bromine-catalyzed O3 loss is more important relative to chlorine-catalyzed loss than previously thought.

  10. Modeling the impact of prostate edema on LDR brachytherapy: a Monte Carlo dosimetry study based on a 3D biphasic finite element biomechanical model

    NASA Astrophysics Data System (ADS)

    Mountris, K. A.; Bert, J.; Noailly, J.; Rodriguez Aguilera, A.; Valeri, A.; Pradier, O.; Schick, U.; Promayon, E.; Gonzalez Ballester, M. A.; Troccaz, J.; Visvikis, D.

    2017-03-01

    Prostate volume changes due to edema occurrence during transperineal permanent brachytherapy should be taken into consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations. Therefore, patient-specific models need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using the biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic area tissues in the evolution of prostate edema. The model’s computed deformations are incorporated in a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day1 and Day30 dosimetry results demonstrates the capability of the proposed model to improve patient-specific dosimetry by accounting for the edema dynamics. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated against previous findings. According to our results, for a prostate volume increase of 10-20% the Day30 urethra D10 dose metric is higher by 4.2%-10.5% compared to the Day1 value. Introducing the edema dynamics into Day30 dosimetry reveals a significant global dose overestimation in the conventional static Day30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy by accounting for post-implant dose alterations during the planning procedure.
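
    Edema resolution in the brachytherapy literature is commonly described as an exponential decay of the excess volume; a schematic form (with Δ the relative edema magnitude and τ the resolution time constant, both symbols assumed here purely for illustration) is:

    ```latex
    V(t) = V_0 \left( 1 + \Delta\, e^{-t/\tau} \right),
    \qquad V(0) = V_0 (1 + \Delta), \qquad V(t \to \infty) = V_0 .
    ```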

  11. An Anatomically Constrained Model for Path Integration in the Bee Brain.

    PubMed

    Stone, Thomas; Webb, Barbara; Adden, Andrea; Weddig, Nicolai Ben; Honkanen, Anna; Templin, Rachel; Wcislo, William; Scimeca, Luca; Warrant, Eric; Heinze, Stanley

    2017-10-23

    Path integration is a widespread navigational strategy in which directional changes and distance covered are continuously integrated on an outward journey, enabling a straight-line return to home. Bees use vision for this task-a celestial-cue-based visual compass and an optic-flow-based visual odometer-but the underlying neural integration mechanisms are unknown. Using intracellular electrophysiology, we show that polarized-light-based compass neurons and optic-flow-based speed-encoding neurons converge in the central complex of the bee brain, and through block-face electron microscopy, we identify potential integrator cells. Based on plausible output targets for these cells, we propose a complete circuit for path integration and steering in the central complex, with anatomically identified neurons suggested for each processing step. The resulting model circuit is thus fully constrained biologically and provides a functional interpretation for many previously unexplained architectural features of the central complex. Moreover, we show that the receptive fields of the newly discovered speed neurons can support path integration for the holonomic motion (i.e., a ground velocity that is not precisely aligned with body orientation) typical of bee flight, a feature not captured in any previously proposed model of path integration. In a broader context, the model circuit presented provides a general mechanism for producing steering signals by comparing current and desired headings-suggesting a more basic function for central complex connectivity, from which path integration may have evolved. Copyright © 2017 Elsevier Ltd. All rights reserved.
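
    The computational core of path integration is easy to state: accumulate the ground-velocity vector over the outward journey, and its negation is the home vector. The sketch below (hypothetical noise values, not the paper's neural circuit) also includes the holonomic aspect: a sideways velocity component that is not aligned with the heading.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    home_vector = np.zeros(2)
    pos = np.zeros(2)
    heading = 0.0

    for _ in range(1000):
        heading += rng.normal(0, 0.1)                  # noisy steering
        drift = rng.normal(0, 0.2)                     # lateral slip (holonomic motion)
        body_vel = np.array([1.0, drift])              # [forward, sideways] speed
        c, s = np.cos(heading), np.sin(heading)
        world_vel = np.array([c * body_vel[0] - s * body_vel[1],
                              s * body_vel[0] + c * body_vel[1]])
        pos += world_vel * 0.01                        # integrate ground velocity
        home_vector -= world_vel * 0.01                # accumulate the return vector

    print("outbound end:", pos, " home vector:", home_vector)
    # Steering home then means rotating until the heading matches
    # atan2(home_vector[1], home_vector[0]).
    ```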

  12. Enhancing an Instructional Design Model for Virtual Reality-Based Learning

    ERIC Educational Resources Information Center

    Chen, Chwen Jen; Teh, Chee Siong

    2013-01-01

    In order to effectively utilize the capabilities of virtual reality (VR) in supporting the desired learning outcomes, careful consideration in the design of instruction for VR learning is crucial. In line with this concern, previous work proposed an instructional design model that prescribes instructional methods to guide the design of VR-based…

  13. The Two-Semester Thesis Model: Emphasizing Research in Undergraduate Technical Communication Curricula

    ERIC Educational Resources Information Center

    Ford, Julie Dyke; Bracken, Jennifer L.; Wilson, Gregory D.

    2009-01-01

    This article addresses previous arguments that call for increased emphasis on research in technical communication programs. Focusing on the value of scholarly-based research at the undergraduate level, we present New Mexico Tech's thesis model as an example of helping students develop familiarity with research skills and methods. This two-semester…

  14. A Cash Management Model.

    ERIC Educational Resources Information Center

    Boyles, William W.

    1975-01-01

    In 1973, Ronald G. Lykins presented a model for cash management and analysed its benefits for Ohio University. This paper attempts to expand on that method by answering questions raised by the Lykins method through a series of simple algebraic formulas. Both methods are based on two premises: (1) all cash over which the business…

  15. An Instrument for Analyzing Arguments Produced in Modeling-Based Chemistry Lessons

    ERIC Educational Resources Information Center

    Mendonça, Paula Cristina Cardoso; Justi, Rosária

    2014-01-01

    Previous research on argumentation in science education has focused on the understanding of relationships between modeling and argumentation (an important topic that only recently has been addressed in a few empirical studies), and on the methodological difficulties related to the analysis of arguments produced in classrooms. Our study is related to…

  16. Using chaotic artificial neural networks to model memory in the brain

    NASA Astrophysics Data System (ADS)

    Aram, Zainab; Jafari, Sajad; Ma, Jun; Sprott, Julien C.; Zendehrouh, Sareh; Pham, Viet-Thanh

    2017-03-01

    In the current study, a novel model for human memory is proposed based on the chaotic dynamics of artificial neural networks. This new model explains a biological fact about memory which is not yet explained by any other model: There are theories that the brain normally works in a chaotic mode, while during attention it shows ordered behavior. This model uses the periodic windows observed in a previously proposed model for the brain to store and then recollect the information.
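
    The notion of a "periodic window" embedded in chaos can be demonstrated with the logistic map, the textbook example of this behaviour (this illustrates the dynamical concept only, not the authors' neural network):

    ```python
    def orbit(r, x0=0.2, burn=500, n=8):
        """Iterate the logistic map x -> r*x*(1-x); return n post-transient values."""
        x = x0
        for _ in range(burn):
            x = r * x * (1 - x)
        out = []
        for _ in range(n):
            x = r * x * (1 - x)
            out.append(round(x, 4))
        return out

    print("chaotic regime  (r=3.90):", orbit(3.90))
    print("periodic window (r=3.83):", orbit(3.83))  # settles onto a period-3 cycle
    ```

    At r = 3.90 the orbit wanders chaotically; at r = 3.83, inside the period-3 window, it locks onto a repeating 3-cycle — the kind of ordered regime such memory models use for storage and recall.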

  17. Interface tension in the improved Blume-Capel model

    NASA Astrophysics Data System (ADS)

    Hasenbusch, Martin

    2017-09-01

    We study interfaces with periodic boundary conditions in the low-temperature phase of the improved Blume-Capel model on the simple cubic lattice. The interface free energy is defined by the difference of the free energy of a system with antiperiodic boundary conditions in one of the directions and that of a system with periodic boundary conditions in all directions. It is obtained by integration of differences of the corresponding internal energies over the inverse temperature. These differences can be computed efficiently by using a variance-reduced estimator that is based on the exchange cluster algorithm. The interface tension is obtained from the interface free energy by using predictions based on effective interface models. By using our numerical results for the interface tension σ and the correlation length ξ obtained in previous work, we determine the universal amplitude ratios $R_{\mathrm{2nd},+} = \sigma_0 f_{\mathrm{2nd},+}^2 = 0.3863(6)$, $R_{\mathrm{2nd},-} = \sigma_0 f_{\mathrm{2nd},-}^2 = 0.1028(1)$, and $R_{\mathrm{exp},-} = \sigma_0 f_{\mathrm{exp},-}^2 = 0.1077(3)$. Our results are consistent with those obtained previously for the three-dimensional Ising model, confirming the universality hypothesis.

  18. Application of Support Vector Machine to Forex Monitoring

    NASA Astrophysics Data System (ADS)

    Kamruzzaman, Joarder; Sarker, Ruhul A.

    Previous studies have demonstrated superior performance of artificial neural network (ANN)-based forex forecasting models over traditional regression models. This paper applies support vector machines (SVMs) to build a forecasting model from historical data using six simple technical indicators, and presents a comparison with an ANN-based model trained by the scaled conjugate gradient (SCG) learning algorithm. The models are evaluated and compared on the basis of five commonly used performance metrics that measure closeness of prediction as well as correctness in directional change. Forecasting results for six different currencies against the Australian dollar reveal superior performance of the SVM model with a simple linear kernel over the ANN-SCG model in terms of all the evaluation metrics. The effect of SVM parameter selection on prediction performance is also investigated and analyzed.
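
    A rough sketch of this kind of setup, using scikit-learn's SVR with a linear kernel on simple trailing-window indicators. The data, the six indicators, and all hyperparameters are synthetic placeholders, not the paper's series or metrics:

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(4)

    # Synthetic exchange-rate series standing in for real forex data.
    rate = np.cumsum(rng.normal(0, 0.01, 600)) + 1.5

    def indicators(series, i):
        """Simple moving-average style indicators over trailing windows."""
        return [series[i] - series[i - w:i].mean() for w in (5, 10, 20, 30, 50, 60)]

    X = np.array([indicators(rate, i) for i in range(60, len(rate) - 1)])
    y = rate[61:] - rate[60:-1]               # next-step change (direction target)

    split = 400
    model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])

    hit = np.mean(np.sign(pred) == np.sign(y[split:]))
    print(f"directional accuracy: {hit:.2f}")
    ```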

  19. Implementation and Validation of a Laminar-to-Turbulent Transition Model in the Wind-US Code

    NASA Technical Reports Server (NTRS)

    Denissen, Nicholas A.; Yoder, Dennis A.; Georgiadis, Nicholas J.

    2008-01-01

    A bypass transition model has been implemented in the Wind-US Reynolds Averaged Navier-Stokes (RANS) solver. The model is based on the Shear Stress Transport (SST) turbulence model and was built starting from a previous SST-based transition model. Several modifications were made to enable (1) consistent solutions regardless of flow field initialization procedure and (2) fully turbulent flow beyond the transition region. This model is intended for flows where bypass transition, in which the transition process is dominated by large freestream disturbances, is the key transition mechanism as opposed to transition dictated by modal growth. Validation of the new transition model is performed for flows ranging from incompressible to hypersonic conditions.

  20. A closed-loop artificial pancreas using a proportional integral derivative with double phase lead controller based on a new nonlinear model of glucose metabolism.

    PubMed

    Abbes, Ilham Ben; Richard, Pierre-Yves; Lefebvre, Marie-Anne; Guilhem, Isabelle; Poirier, Jean-Yves

    2013-05-01

    Most closed-loop insulin delivery systems rely on model-based controllers to control the blood glucose (BG) level. Simple models of glucose metabolism, which allow easy design of the control law, are limited in their parametric identification from raw data. New control models, and controllers derived from them, are needed. A proportional-integral-derivative controller with a double phase lead was proposed. Its design was based on a linearization of a new nonlinear control model of the glucose-insulin system in type 1 diabetes mellitus (T1DM) patients, validated with the University of Virginia/Padova T1DM metabolic simulator. A 36 h scenario, including six unannounced meals, was tested in nine virtual adults. A database from a previous trial was used to compare the performance of our controller with previously published results. The scenario was repeated 25 times for each adult in order to take continuous glucose monitoring noise into account. The primary outcome was the time BG levels were in the target range (70-180 mg/dl). Blood glucose values were in the target range for 77% of the time, and below 50 mg/dl and above 250 mg/dl for 0.8% and 0.3% of the time, respectively. The low blood glucose index and high blood glucose index were 1.65 and 3.33, respectively. The linear controller presented, based on the linearization of a new, easily identifiable nonlinear model, achieves good glucose control with low exposure to hypoglycemia and hyperglycemia. © 2013 Diabetes Technology Society.
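
    To make the controller family concrete, here is a generic discrete PID followed by a Tustin-discretized first-order lead compensator. This is a sketch of the structure only: the paper uses a double (two-stage) phase lead, and none of the gains or tunings below are the published ones.

    ```python
    # Discrete PID + one phase-lead stage (1 + a*s)/(1 + b*s), a > b.
    class PIDLead:
        def __init__(self, kp, ki, kd, a, b, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.a, self.b = a, b
            self.i = 0.0
            self.prev_e = self.prev_u = self.prev_v = 0.0

        def update(self, setpoint, measurement):
            e = setpoint - measurement
            self.i += e * self.dt
            d = (e - self.prev_e) / self.dt
            v = self.kp * e + self.ki * self.i + self.kd * d   # raw PID action
            # Tustin discretization of the lead compensator.
            c = 2 * self.b / self.dt
            u = ((2 * self.a / self.dt + 1) * v
                 + (1 - 2 * self.a / self.dt) * self.prev_v
                 - (1 - c) * self.prev_u) / (1 + c)
            self.prev_e, self.prev_u, self.prev_v = e, u, v
            return u

    ctrl = PIDLead(kp=0.05, ki=0.001, kd=0.2, a=20.0, b=5.0, dt=5.0)
    u = ctrl.update(setpoint=120.0, measurement=180.0)   # BG in mg/dl
    print("insulin command (arbitrary units):", u)
    ```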

  1. Scaling in situ cosmogenic nuclide production rates using analytical approximations to atmospheric cosmic-ray fluxes

    NASA Astrophysics Data System (ADS)

    Lifton, Nathaniel; Sato, Tatsuhiko; Dunai, Tibor J.

    2014-01-01

    Several models have been proposed for scaling in situ cosmogenic nuclide production rates from the relatively few sites where they have been measured to other sites of interest. Two main types of models are recognized: (1) those based on data from nuclear disintegrations in photographic emulsions combined with various neutron detectors, and (2) those based largely on neutron monitor data. However, stubborn discrepancies between these model types have led to frequent confusion when calculating surface exposure ages from production rates derived from the models. To help resolve these discrepancies and identify the sources of potential biases in each model, we have developed a new scaling model based on analytical approximations to modeled fluxes of the main atmospheric cosmic-ray particles responsible for in situ cosmogenic nuclide production. Both the analytical formulations and the Monte Carlo model fluxes on which they are based agree well with measured atmospheric fluxes of neutrons, protons, and muons, indicating they can serve as a robust estimate of the atmospheric cosmic-ray flux based on first principles. We are also using updated records for quantifying temporal and spatial variability in geomagnetic and solar modulation effects on the fluxes. A key advantage of this new model (herein termed LSD) over previous Monte Carlo models of cosmogenic nuclide production is that it allows for faster estimation of scaling factors based on time-varying geomagnetic and solar inputs. Comparing scaling predictions derived from the LSD model with those of previously published models suggests that potential sources of bias in the latter can be largely attributed to two factors: different energy responses of the secondary neutron detectors used in developing the models, and different geomagnetic parameterizations. Given that the LSD model generates flux spectra for each cosmic-ray particle of interest, it is also relatively straightforward to generate nuclide-specific scaling factors based on recently updated neutron and proton excitation functions (the probability of nuclide production in a given nuclear reaction as a function of energy) for commonly measured in situ cosmogenic nuclides. Such scaling factors reflect the influence of the energy distribution of the flux folded with the relevant excitation functions. The resulting scaling factors indicate that 3He shows the strongest positive deviation from the flux-based scaling, while 14C exhibits a negative deviation. These results are consistent with a recent Monte Carlo-based study using a different cosmic-ray physics code package but the same excitation functions.
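
    The "flux folded with excitation function" step is a single integral, P ∝ ∫ φ(E) σ(E) dE, evaluated per nuclide and per site; a nuclide-specific scaling factor is the ratio of two such folds. A toy version with synthetic spectra (the shapes and constants below are placeholders, not published values):

    ```python
    import numpy as np

    E = np.logspace(0, 4, 500)                     # energy grid [MeV]
    sigma = 40e-27 * (1 - np.exp(-E / 30.0))       # toy excitation function [cm^2]
    phi_sl  = E ** -1.7                            # toy sea-level flux spectrum shape
    phi_alt = 5.0 * E ** -1.6                      # toy high-altitude spectrum (harder)

    def fold(phi):
        """Trapezoidal integral of phi(E) * sigma(E) over the energy grid."""
        f = phi * sigma
        return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))

    S = fold(phi_alt) / fold(phi_sl)
    print("nuclide-specific scaling factor:", round(S, 3))
    ```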

  2. Reacting Chemistry Based Burn Model for Explosive Hydrocodes

    NASA Astrophysics Data System (ADS)

    Schwaab, Matthew; Greendyke, Robert; Steward, Bryan

    2017-06-01

    Currently, in hydrocodes designed to simulate explosive material undergoing shock-induced ignition, the state of the art is to use one of numerous reaction burn-rate models. These burn models are designed to estimate the bulk chemical reaction rate. Unfortunately, these models are largely based on empirical data and must be recalibrated for every new material being simulated. We propose that the use of an equilibrium Arrhenius-rate reacting chemistry model in place of these empirically derived burn models will improve the accuracy of these computational codes. Such models have been successfully used in codes simulating the flow physics around hypersonic vehicles. A reacting chemistry model of this form was developed for the cyclic nitramine RDX by the Naval Research Laboratory (NRL). Initial implementation of this chemistry-based burn model has been conducted on the Air Force Research Laboratory's MPEXS multi-phase continuum hydrocode. In its present form, the burn rate is based on the destruction rate of RDX from NRL's chemistry model. Early results using the chemistry-based burn model show promise in capturing deflagration-to-detonation features more accurately in continuum hydrocodes than previously achieved using empirically derived burn models.
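
    Conceptually, the replacement is an Arrhenius rate law integrated alongside the hydrodynamics. A standalone single-step toy version (the pre-exponential factor and activation energy below are hypothetical, not NRL's RDX mechanism):

    ```python
    import numpy as np

    # Single-step Arrhenius burn: dF/dt = A * exp(-Ea / (R*T)) * (1 - F).
    A_pre = 1.0e12        # pre-exponential factor [1/s] (hypothetical)
    Ea = 1.6e5            # activation energy [J/mol] (hypothetical)
    R = 8.314             # gas constant [J/(mol K)]

    def burn_history(T, dt=1e-9, t_end=2e-6):
        F, t, out = 0.0, 0.0, []
        k = A_pre * np.exp(-Ea / (R * T))
        while t < t_end and F < 0.999:
            F += k * (1.0 - F) * dt          # explicit Euler update
            t += dt
            out.append((t, F))
        return out

    for T in (900.0, 1100.0, 1300.0):        # shock-heated temperatures [K]
        t_last, F_last = burn_history(T)[-1]
        print(f"T = {T:.0f} K: reacted fraction {F_last:.3f} after {t_last*1e6:.2f} us")
    ```

    The appeal over empirical burn models is visible even in the toy: sensitivity to temperature enters through chemistry (Ea), not through per-material calibration constants.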

  3. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    PubMed

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
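
    scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior gives a convenient, if batch-mode, feel for how a DPMM infers the number of trajectory clusters automatically (the paper's algorithm is incremental and time-sensitive; the feature vectors below are simplified placeholders):

    ```python
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(5)

    # Each trajectory summarised by a fixed-length feature vector
    # (here: start point and end point).
    traj_a = rng.normal([0, 0, 5, 5], 0.3, size=(40, 4))     # one motion pattern
    traj_b = rng.normal([5, 0, 0, 5], 0.3, size=(40, 4))     # another pattern
    X = np.vstack([traj_a, traj_b])

    # Truncated Dirichlet-process mixture: given more components than needed,
    # it prunes the surplus, so the cluster count emerges from the data.
    dpmm = BayesianGaussianMixture(
        n_components=10,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        random_state=0,
    ).fit(X)

    active = np.sum(dpmm.weights_ > 0.05)
    print("effective clusters:", active)     # expected: 2
    ```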

  4. Analytical halo model of galactic conformity

    NASA Astrophysics Data System (ADS)

    Pahwa, Isha; Paranjape, Aseem

    2017-09-01

    We present a fully analytical halo model of colour-dependent clustering that incorporates the effects of galactic conformity in a halo occupation distribution framework. The model, based on our previous numerical work, describes conformity through a correlation between the colour of a galaxy and the concentration of its parent halo, leading to a correlation between central and satellite galaxy colours at fixed halo mass. The strength of the correlation is set by a tunable 'group quenching efficiency', and the model can separately describe group-level correlations between galaxy colour (1-halo conformity) and large-scale correlations induced by assembly bias (2-halo conformity). We validate our analytical results using clustering measurements in mock galaxy catalogues, finding that the model is accurate at the 10-20 per cent level for a wide range of luminosities and length-scales. We apply the formalism to interpret the colour-dependent clustering of galaxies in the Sloan Digital Sky Survey (SDSS). We find good overall agreement between the data and a model that has 1-halo conformity at a level consistent with previous results based on an SDSS group catalogue, although the clustering data require satellites to be redder than suggested by the group catalogue. Within our modelling uncertainties, however, we do not find strong evidence of 2-halo conformity driven by assembly bias in SDSS clustering.

  5. Prediction of frozen food properties during freezing using product composition.

    PubMed

    Boonsupthip, W; Heldman, D R

    2007-06-01

    Frozen water fraction (FWF), as a function of temperature, is an important parameter for use in the design of food freezing processes. An FWF-prediction model, based on the concentrations and molecular weights of specific product components, has been developed. Published food composition data were used to determine the identity and composition of key components. The model proposed in this investigation was verified using published experimental FWF data and initial freezing temperature data, and by comparison with outputs from previously published models. It was found that the specific food components with significant influence on the freezing temperature depression of food products were low-molecular-weight water-soluble compounds with molality of 50 micromol per 100 g food or higher. Based on an analysis of 200 high-moisture food products, nearly 45% of the experimental initial freezing temperature data were within an absolute difference (AD) of +/- 0.15 degrees C and standard error (SE) of +/- 0.65 degrees C when compared to values predicted by the proposed model. The predicted relationship between temperature and FWF for all analyzed food products provided close agreement with experimental data (+/- 0.06 SE). The proposed model provided similar prediction capability for high- and intermediate-moisture food products. In addition, the proposed model provided statistically better prediction of initial freezing temperature and FWF than previously published models.
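
    The physics behind such models can be sketched in the ideal-solution limit: effective solute molality sets the initial freezing point, and progressive freeze-concentration yields the classical FWF(T) ≈ 1 − Tf/T approximation. This is a textbook simplification, not the published model:

    ```python
    # Ideal-solution sketch of freezing behaviour (illustrative only).
    KF_WATER = 1.86    # cryoscopic constant of water [K kg / mol]

    def initial_freezing_point(molality):
        """Freezing point depression from total effective solute molality."""
        return -KF_WATER * molality

    def frozen_water_fraction(T, Tf):
        """As ice forms, solutes concentrate in the remaining water, so the
        unfrozen fraction scales like Tf / T (temperatures in deg C, below 0)."""
        if T >= Tf:
            return 0.0
        return 1.0 - Tf / T

    # Example: juice-like product with ~0.6 mol effective solutes per kg water.
    Tf = initial_freezing_point(0.6)
    for T in (-2, -5, -10, -20):
        print(f"T = {T:>4} C   FWF = {frozen_water_fraction(T, Tf):.2f}")
    ```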

  6. [Consumer's psychological processes of hoarding and avoidant purchasing after the Tohoku earthquake].

    PubMed

    Ohtomo, Shoji; Hirose, Yukio

    2014-02-01

    This study examined the psychological processes that determined consumers' hoarding and avoidant purchasing behaviors after the Tohoku earthquake, within a dual-process model. The model hypothesized that both intentional motivation, based on reflective decisions, and reactive motivation, based on non-reflective decisions, predicted the behaviors. This study assumed that attitude, subjective norm, and descriptive norm in relation to hoarding and avoidant purchasing were determinants of the motivations. Residents in the Tokyo metropolitan area (n = 667) completed internet longitudinal surveys at three time points (April, June, and November 2011). The results indicated that intentional and reactive motivation determined avoidant purchasing behaviors in June; only intentional motivation determined the behaviors in November. Attitude was a main determinant of the motivations at each time point. Moreover, previous behaviors predicted future behaviors. In conclusion, purchasing behaviors were intentional rather than reactive. Furthermore, attitude and previous behaviors were important determinants in the dual-process model. Attitudes and behaviors formed in April continued to strengthen subsequent purchasing decisions.

  7. Generalized image contrast enhancement technique based on Heinemann contrast discrimination model

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Nodine, Calvin F.

    1994-03-01

    This paper presents a generalized image contrast enhancement technique that equalizes perceived brightness based on the Heinemann contrast discrimination model. The algorithm modifies and improves upon the previous study by Mokrane in two respects: the existence of a unique solution is mathematically proven, and the parameterization is easily tunable. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed-luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions with seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-scale distribution of the image and can be uniquely determined once the previous three are given. Tests were carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. We show that the generalized algorithm provides better contrast enhancement than histogram equalization; in fact, the histogram equalization technique is a special case of the proposed mapping.

  8. Exploring tree species colonization potentials using a spatially explicit simulation model: implications for four oaks under climate change

    Treesearch

    Anantha M. Prasad; Judith D. Gardiner; Louis R. Iverson; Stephen N. Matthews; Matthew Peters

    2013-01-01

    Climate change impacts tree species differentially by exerting unique pressures and altering their suitable habitats. We previously predicted these changes in suitable habitat for current and future climates using a species habitat model (DISTRIB) in the eastern United States. Based on the accuracy of the model, the species assemblages should eventually reflect the new...

  9. Roles of University Support for International Students in the United States: Analysis of a Systematic Model of University Identification, University Support, and Psychological Well-Being

    ERIC Educational Resources Information Center

    Cho, Jaehee; Yu, Hongsik

    2015-01-01

    Unlike previous research on international students' social support, this current study applied the concept of organizational support to university contexts, examining the effects of university support. Mainly based on the social identity/self-categorization stress model, this study developed and tested a path model composed of four key…

  10. iTree-Hydro: Snow hydrology update for the urban forest hydrology model

    Treesearch

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2011-01-01

    This article presents snow hydrology updates made to iTree-Hydro, previously called the Urban Forest Effects—Hydrology model. iTree-Hydro Version 1 was a warm climate model developed by the USDA Forest Service to provide a process-based planning tool with robust water quantity and quality predictions given data limitations common to most urban areas. Cold climate...

  11. Computational State Space Models for Activity and Intention Recognition. A Feasibility Study

    PubMed Central

    Krüger, Frank; Nyolt, Martin; Yordanova, Kristina; Hein, Albert; Kirste, Thomas

    2014-01-01

    Background Computational state space models (CSSMs) enable the knowledge-based construction of Bayesian filters for recognizing intentions and reconstructing activities of human protagonists in application domains such as smart environments, assisted living, or security. Computational, i.e., algorithmic, representations allow the construction of increasingly complex human behaviour models. However, the symbolic models used in CSSMs potentially suffer from combinatorial explosion, rendering inference intractable outside of the limited experimental settings investigated in present research. The objective of this study was to obtain data on the feasibility of CSSM-based inference in domains of realistic complexity. Methods A typical instrumental activity of daily living was used as a trial scenario. As the primary sensor modality, wearable inertial measurement units were employed. The results achievable by CSSM methods were evaluated by comparison with those obtained from established training-based methods (hidden Markov models, HMMs) using Wilcoxon signed rank tests. The influence of modeling factors on CSSM performance was analyzed via repeated measures analysis of variance. Results The symbolic domain model was found to have more than states, exceeding the complexity of models considered in previous research by at least three orders of magnitude. Nevertheless, if factors and procedures governing the inference process were suitably chosen, CSSMs outperformed HMMs. Specifically, inference methods used in previous studies (particle filters) were found to perform substantially worse than a marginal filtering procedure. Conclusions Our results suggest that the combinatorial explosion caused by rich CSSM models does not inevitably lead to intractable inference or inferior performance. This means that the potential benefits of CSSM models (knowledge-based model construction, model reusability, reduced need for training data) are available without a performance penalty. However, our results also show that research on CSSMs needs to consider sufficiently complex domains in order to understand the effects of design decisions such as the choice of heuristics or inference procedure on performance. PMID:25372138

  12. Structure of the SnO2(110)-(4×1) Surface

    NASA Astrophysics Data System (ADS)

    Merte, Lindsay R.; Jørgensen, Mathias S.; Pussi, Katariina; Gustafson, Johan; Shipilin, Mikhail; Schaefer, Andreas; Zhang, Chu; Rawle, Jonathan; Nicklin, Chris; Thornton, Geoff; Lindsay, Robert; Hammer, Bjørk; Lundgren, Edvin

    2017-09-01

    Using surface x-ray diffraction (SXRD), quantitative low-energy electron diffraction (LEED), and density-functional theory (DFT) calculations, we have determined the structure of the (4×1) reconstruction formed by sputtering and annealing of the SnO2(110) surface. We find that the reconstruction consists of an ordered arrangement of Sn3O3 clusters bound atop the bulk-terminated SnO2(110) surface. The model was found by application of a DFT-based evolutionary algorithm with surface compositions based on SXRD, and shows excellent agreement with LEED and with previously published scanning tunneling microscopy measurements. The model proposed previously, consisting of in-plane oxygen vacancies, is thus shown to be incorrect, and our result suggests instead that Sn(II) species in interstitial positions are the more relevant features of reduced SnO2(110) surfaces.

  13. Development of an Agent-Based Model to Investigate the Impact of HIV Self-Testing Programs on Men Who Have Sex With Men in Atlanta and Seattle.

    PubMed

    Luo, Wei; Katz, David A; Hamilton, Deven T; McKenney, Jennie; Jenness, Samuel M; Goodreau, Steven M; Stekler, Joanne D; Rosenberg, Eli S; Sullivan, Patrick S; Cassels, Susan

    2018-06-29

    In the United States HIV epidemic, men who have sex with men (MSM) remain the most profoundly affected group. Prevention science is increasingly being organized around HIV testing as a launch point into an HIV prevention continuum for MSM who are not living with HIV and into an HIV care continuum for MSM who are living with HIV. Increasing the HIV testing frequency among MSM might decrease future HIV infections by linking men who are living with HIV to antiretroviral care, resulting in viral suppression. Distributing HIV self-test (HIVST) kits is a strategy aimed at increasing HIV testing. Our previous modeling work suggests that the impact of HIV self-tests on transmission dynamics will depend not only on the frequency of tests and testers' behaviors but also on the epidemiological and testing characteristics of the population. The objective of our study was to develop an agent-based model to inform public health strategies for promoting safe and effective HIV self-tests to decrease the HIV incidence among MSM in Atlanta, GA, and Seattle, WA, cities representing profoundly different epidemiological settings. We adapted and extended a network- and agent-based stochastic simulation model of HIV transmission dynamics that was developed and parameterized to investigate racial disparities in HIV prevalence among MSM in Atlanta. The extension comprised several activities: adding a new set of model parameters for Seattle MSM; adding new parameters for tester types (i.e., regular, risk-based, opportunistic-only, or never testers); adding parameters for simplified pre-exposure prophylaxis uptake following negative results for HIV tests; and developing a conceptual framework for the ways in which the provision of HIV self-tests might change testing behaviors. We derived city-specific parameters from previous cohort and cross-sectional studies on MSM in Atlanta and Seattle. Each simulated population comprised 10,000 MSM, with target HIV prevalences of 28% and 11% in Atlanta and Seattle, respectively. Previous studies provided sufficient data to estimate the model parameters representing nuanced HIV testing patterns and HIV self-test distribution. We calibrated the models to simulate the epidemics representing Atlanta and Seattle, including matching the expected stable HIV prevalence. The revised model facilitated the estimation of changes in 10-year HIV incidence based on counterfactual scenarios of HIV self-test distribution strategies and their impact on testing behaviors. We demonstrated that the extension of an existing agent-based HIV transmission model was sufficient to simulate the HIV epidemics among MSM in Atlanta and Seattle, to accommodate a more nuanced depiction of HIV testing behaviors than previous models, and to serve as a platform to investigate how HIV self-tests might impact testing and HIV transmission patterns among MSM in Atlanta and Seattle. In our future studies, we will use the model to test how different HIV self-test distribution strategies might affect HIV incidence among MSM. ©Wei Luo, David A Katz, Deven T Hamilton, Jennie McKenney, Samuel M Jenness, Steven M Goodreau, Joanne D Stekler, Eli S Rosenberg, Patrick S Sullivan, Susan Cassels. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 29.06.2018.

  14. Simulation of Blast Loading on an Ultrastructurally-based Computational Model of the Ocular Lens

    DTIC Science & Technology

    2016-12-01

    organelles. Additionally, the cell membranes demonstrated the classic ball-and-socket loops. For the SEM images, they were placed in two fixatives and mounted...considered (fibrous network and matrix); both components are modelled using a hyperelastic framework, and the resulting constitutive model is embedded in a...within the framework of hyperelasticity). Full details on the linearization procedures that were adopted in these previous models or the convergence

  15. A high speed model-based approach for wavefront sensorless adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing

    2018-02-01

    To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The approach exploits the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method is capable of effectively correcting a modal aberration by applying only a single perturbation to the deformable mirror (one correction per perturbation); the correction is reconstructed via singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO corrections under various random and dynamic aberrations were implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which achieves one aberration correction only after applying N perturbations to the deformable mirror (one correction per N perturbations).

  16. Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models

    PubMed Central

    2017-01-01

    We describe a fully data driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder–decoder architecture that consists of two recurrent neural networks, which has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis. PMID:29104927
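
    A compact sketch of the encoder-decoder architecture described above, written in PyTorch with a GRU encoder and decoder and teacher forcing on a synthetic token batch. The vocabulary, sizes, and data are placeholders; the published model was trained on 50,000 patent reactions and is substantially larger than this skeleton.

    ```python
    import torch
    import torch.nn as nn

    # Toy vocabulary: integer ids stand in for SMILES tokens.
    PAD, SOS, EOS = 0, 1, 2
    vocab_size, hidden = 32, 64

    class Seq2Seq(nn.Module):
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, hidden, padding_idx=PAD)
            self.encoder = nn.GRU(hidden, hidden, batch_first=True)
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, src, tgt):
            # Encode the product sequence into a context state.
            _, h = self.encoder(self.emb(src))
            # Decode the reactant sequence conditioned on it (teacher forcing).
            dec_out, _ = self.decoder(self.emb(tgt), h)
            return self.out(dec_out)

    model = Seq2Seq()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

    # One synthetic training step: batch of (product, reactant) token ids.
    src = torch.randint(3, vocab_size, (8, 20))
    tgt = torch.randint(3, vocab_size, (8, 22))
    opt.zero_grad()
    logits = model(src, tgt[:, :-1])
    loss = loss_fn(logits.reshape(-1, vocab_size), tgt[:, 1:].reshape(-1))
    loss.backward()
    opt.step()
    print("step loss:", loss.item())
    ```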

  17. Predicting recovery criteria for threatened and endangered plant species on the basis of past abundances and biological traits.

    PubMed

    Neel, Maile C; Che-Castaldo, Judy P

    2013-04-01

    Recovery plans for species listed under the U.S. Endangered Species Act are required to specify measurable criteria that can be used to determine when the species can be delisted. For the 642 listed endangered and threatened plant species that have recovery plans, we applied recursive partitioning methods to test whether the number of individuals or populations required for delisting can be predicted on the basis of distributional and biological traits, previous abundance at multiple time steps, or a combination of traits and previous abundances. We also tested listing status (threatened or endangered) and the year the recovery plan was written as predictors of recovery criteria. We analyzed separately recovery criteria that were stated as number of populations and as number of individuals (population-based and individual-based criteria, respectively). Previous abundances alone were relatively good predictors of population-based recovery criteria. Fewer populations, but a greater proportion of historically known populations, were required to delist species that had few populations at listing compared with species that had more populations at listing. Previous abundances were also good predictors of individual-based delisting criteria when models included both abundances and traits. The physiographic division in which the species occur was also a good predictor of individual-based criteria. Our results suggest managers are relying on previous abundances and patterns of decline as guidelines for setting recovery criteria. This may be justifiable in that previous abundances inform managers of the effects of both intrinsic traits and extrinsic threats that interact and determine extinction risk. © 2013 Society for Conservation Biology.
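
    Recursive partitioning is the regression-tree family of methods; a synthetic stand-in with scikit-learn shows the shape of the analysis (all variables and the assumed relationship below are fabricated for illustration, not the recovery-plan data):

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(6)

    # Synthetic stand-in: predict the number of populations required for
    # delisting from abundance at listing and two biological traits.
    n = 300
    pops_at_listing = rng.integers(1, 60, n)
    range_size = rng.uniform(1, 1000, n)            # hypothetical trait
    longevity = rng.integers(1, 50, n)              # hypothetical trait
    # Assume (for illustration) criteria track abundance rather than traits.
    criterion = np.maximum(3, (0.8 * pops_at_listing).astype(int)) + rng.integers(0, 3, n)

    X = np.column_stack([pops_at_listing, range_size, longevity])
    tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, criterion)

    # The partitioning exposes which predictors drive the splits.
    print(dict(zip(["pops_at_listing", "range_size", "longevity"],
                   tree.feature_importances_.round(2))))
    ```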

  18. Optimal quality control of bakers' yeast fed-batch culture using population dynamics.

    PubMed

    Dairaku, K; Izumoto, E; Morikawa, H; Shioya, S; Takamatsu, T

    1982-12-01

    An optimal quality control policy for the overall specific growth rate of bakers' yeast, which maximizes fermentative activity in bread making, was obtained by direct search based on a previously proposed mathematical model. That model describes the age distribution of bakers' yeast, which bears an essential relationship to fermentative ability in bread making; it is a simple aging model with two periods, nonbudding and budding. Based on the result obtained by direct search, quality control of the bakers' yeast fed-batch culture was performed and confirmed to be experimentally valid.

  19. Genome-Scale, Constraint-Based Modeling of Nitrogen Oxide Fluxes during Coculture of Nitrosomonas europaea and Nitrobacter winogradskyi

    PubMed Central

    Giguere, Andrew T.; Murthy, Ganti S.; Bottomley, Peter J.; Sayavedra-Soto, Luis A.

    2018-01-01

    Nitrification, the aerobic oxidation of ammonia to nitrate via nitrite, emits nitrogen (N) oxide gases (NO, NO2, and N2O), which are potentially hazardous compounds that contribute to global warming. To better understand the dynamics of nitrification-derived N oxide production, we conducted culturing experiments and used an integrative genome-scale, constraint-based approach to model N oxide gas sources and sinks during complete nitrification in an aerobic coculture of two model nitrifying bacteria, the ammonia-oxidizing bacterium Nitrosomonas europaea and the nitrite-oxidizing bacterium Nitrobacter winogradskyi. The model includes biotic genome-scale metabolic models (iFC578 and iFC579) for each nitrifier and abiotic N oxide reactions. Modeling suggested both biotic and abiotic reactions are important sources and sinks of N oxides, particularly under microaerobic conditions predicted to occur in coculture. In particular, integrative modeling suggested that previous models might have underestimated gross NO production during nitrification due to not taking into account its rapid oxidation in both aqueous and gas phases. The integrative model may be found at https://github.com/chaplenf/microBiome-v2.1. IMPORTANCE Modern agriculture is sustained by application of inorganic nitrogen (N) fertilizer in the form of ammonium (NH4+). Up to 60% of NH4+-based fertilizer can be lost through leaching of nitrifier-derived nitrate (NO3−), and through the emission of N oxide gases (i.e., nitric oxide [NO], N dioxide [NO2], and nitrous oxide [N2O] gases), the latter being a potent greenhouse gas. Our approach to modeling of nitrification suggests that both biotic and abiotic mechanisms function as important sources and sinks of N oxides during microaerobic conditions and that previous models might have underestimated gross NO production during nitrification. PMID:29577088

  20. Genome-Scale, Constraint-Based Modeling of Nitrogen Oxide Fluxes during Coculture of Nitrosomonas europaea and Nitrobacter winogradskyi.

    PubMed

    Mellbye, Brett L; Giguere, Andrew T; Murthy, Ganti S; Bottomley, Peter J; Sayavedra-Soto, Luis A; Chaplen, Frank W R

    2018-01-01

    Nitrification, the aerobic oxidation of ammonia to nitrate via nitrite, emits nitrogen (N) oxide gases (NO, NO2, and N2O), which are potentially hazardous compounds that contribute to global warming. To better understand the dynamics of nitrification-derived N oxide production, we conducted culturing experiments and used an integrative genome-scale, constraint-based approach to model N oxide gas sources and sinks during complete nitrification in an aerobic coculture of two model nitrifying bacteria, the ammonia-oxidizing bacterium Nitrosomonas europaea and the nitrite-oxidizing bacterium Nitrobacter winogradskyi. The model includes biotic genome-scale metabolic models (iFC578 and iFC579) for each nitrifier and abiotic N oxide reactions. Modeling suggested both biotic and abiotic reactions are important sources and sinks of N oxides, particularly under microaerobic conditions predicted to occur in coculture. In particular, integrative modeling suggested that previous models might have underestimated gross NO production during nitrification due to not taking into account its rapid oxidation in both aqueous and gas phases. The integrative model may be found at https://github.com/chaplenf/microBiome-v2.1. IMPORTANCE Modern agriculture is sustained by application of inorganic nitrogen (N) fertilizer in the form of ammonium (NH4+). Up to 60% of NH4+-based fertilizer can be lost through leaching of nitrifier-derived nitrate (NO3−), and through the emission of N oxide gases (i.e., nitric oxide [NO], N dioxide [NO2], and nitrous oxide [N2O] gases), the latter being a potent greenhouse gas. Our approach to modeling of nitrification suggests that both biotic and abiotic mechanisms function as important sources and sinks of N oxides during microaerobic conditions and that previous models might have underestimated gross NO production during nitrification.
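
    One of the abiotic sinks mentioned above, NO autoxidation, illustrates why gross NO production can be underestimated: the termolecular reaction 2NO + O2 → 2NO2 removes NO at a rate proportional to [NO]²[O2]. A toy integration (the rate constant is only an order-of-magnitude, literature-scale value, and the concentrations are arbitrary illustrative choices):

    ```python
    # Abiotic oxidation 2 NO + O2 -> 2 NO2, rate r = k [NO]^2 [O2].
    k = 2.0e6          # [M^-2 s^-1], order-of-magnitude aqueous autoxidation scale
    NO, O2, NO2 = 1.0e-6, 2.5e-4, 0.0   # initial concentrations [M]

    dt = 0.1
    for _ in range(int(3600 / dt)):     # integrate one hour, explicit Euler
        r = k * NO**2 * O2
        NO -= 2 * r * dt
        O2 -= r * dt
        NO2 += 2 * r * dt

    print(f"after 1 h: NO = {NO:.3e} M, NO2 = {NO2:.3e} M")
    ```

    Because the rate is quadratic in [NO], the sink self-accelerates wherever NO accumulates, so measured net NO can sit well below the gross biotic production.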

  1. A cosmology-independent calibration of type Ia supernovae data

    NASA Astrophysics Data System (ADS)

    Hauret, C.; Magain, P.; Biernaux, J.

    2018-06-01

    Recently, the common methodology used to transform type Ia supernovae (SNe Ia) into genuine standard candles has come under criticism. Indeed, it assumes a particular cosmological model (namely, flat ΛCDM) to calibrate the standardisation correction parameters, i.e. the dependence of the supernova peak absolute magnitude on its colour, post-maximum decline rate, and host galaxy mass. As a result, this assumption could make the data compliant with the assumed cosmology and thus nullify all work previously conducted on model comparison. In this work, we examine the viability of these hypotheses by developing a cosmology-independent approach to standardise SNe Ia data from the recent JLA compilation. Our resulting corrections turn out to be very close to the ΛCDM-based corrections. Therefore, even if a ΛCDM-based calibration is questionable from a theoretical point of view, the potential compliance of SNe Ia data does not occur in practice for the JLA compilation. Previous works of model comparison based on these data do not have to be called into question. However, as this cosmology-independent standardisation method has the same degree of complexity as the model-dependent one, it is worth using in future works, especially if smaller samples are considered, such as the superluminous type Ic supernovae.
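
    The standardisation corrections in question are commonly written in the Tripp form (standard SALT2/JLA notation; the host-mass step enters through M_B):

    ```latex
    \mu = m^{*}_{B} - \left( M_{B} - \alpha\, x_{1} + \beta\, c \right)
    ```

    Here x_1 is the light-curve stretch (post-maximum decline rate), c the colour, and α, β, M_B the correction parameters usually calibrated inside a flat ΛCDM fit — the step this work replaces with a cosmology-independent calibration.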

  2. Investigation of micromixing by acoustically oscillated sharp-edges

    PubMed Central

    Nama, Nitesh; Huang, Po-Hsun; Huang, Tony Jun; Costanzo, Francesco

    2016-01-01

    Recently, acoustically oscillated sharp-edges have been utilized to achieve rapid and homogeneous mixing in microchannels. Here, we present a numerical model to investigate acoustic mixing inside a sharp-edge-based micromixer in the presence of a background flow. We extend our previously reported numerical model to include the mixing phenomena by using perturbation analysis and the Generalized Lagrangian Mean (GLM) theory in conjunction with the convection-diffusion equation. We divide the flow variables into zeroth-order, first-order, and second-order variables. This results in three sets of equations representing the background flow, acoustic response, and the time-averaged streaming flow, respectively. These equations are then solved successively to obtain the mean Lagrangian velocity which is combined with the convection-diffusion equation to predict the concentration profile. We validate our numerical model via a comparison of the numerical results with the experimentally obtained values of the mixing index for different flow rates. Further, we employ our model to study the effect of the applied input power and the background flow on the mixing performance of the sharp-edge-based micromixer. We also suggest potential design changes to the previously reported sharp-edge-based micromixer to improve its performance. Finally, we investigate the generation of a tunable concentration gradient by a linear arrangement of the sharp-edge structures inside the microchannel. PMID:27158292
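
    Schematically, the perturbation hierarchy referred to above splits the fields by order and feeds the Lagrangian mean velocity into the transport problem (a generic form of such expansions, not the paper's exact equations):

    ```latex
    (\mathbf{u}, p) = (\mathbf{u}_0, p_0) + (\mathbf{u}_1, p_1) + (\mathbf{u}_2, p_2),
    \qquad
    \bar{\mathbf{u}}^{L} = \langle \mathbf{u}_2 \rangle
    + \Big\langle \Big( {\textstyle\int} \mathbf{u}_1\, dt \cdot \nabla \Big)\, \mathbf{u}_1 \Big\rangle
    ```

    Order 0 is the background flow, order 1 the harmonic acoustic response, and the time-averaged order 2 the streaming; the second term in the Lagrangian mean velocity is the Stokes drift, and it is this mean velocity that advects the concentration field in the convection-diffusion equation.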

  3. Aircraft applications of fault detection and isolation techniques

    NASA Astrophysics Data System (ADS)

    Marcos Esteban, Andres

    In this thesis the problems of fault detection & isolation and fault tolerant systems are studied from the perspective of LTI frequency-domain, model-based techniques. Emphasis is placed on the applicability of these LTI techniques to nonlinear models, especially to aerospace systems. Two applications of H∞ LTI fault diagnosis are given using an open-loop (no controller) design approach: one for the longitudinal motion of a Boeing 747-100/200 aircraft, the other for a turbofan jet engine. An algorithm formalizing a robust identification approach based on model validation ideas is also given and applied to the previous jet engine. A general linear fractional transformation formulation is given in terms of the Youla and Dual Youla parameterizations for the integrated (control and diagnosis filter) approach. This formulation provides better insight into the trade-off between the control and the diagnosis objectives. It also provides the basic groundwork towards the development of nested schemes for the integrated approach. These nested structures allow iterative improvements on the control/filter Youla parameters based on successive identification of the system uncertainty (as given by the Dual Youla parameter). The thesis concludes with an application of H∞ LTI techniques to the integrated design for the longitudinal motion of the previous Boeing 747-100/200 model.

  4. Detection of food intake from swallowing sequences by supervised and unsupervised methods.

    PubMed

    Lopez-Meyer, Paulo; Makeyev, Oleksandr; Schuckers, Stephanie; Melanson, Edward L; Neuman, Michael R; Sazonov, Edward

    2010-08-01

    Studies of food intake and ingestive behavior in free-living conditions most often rely on self-reporting-based methods that can be highly inaccurate. Methods of Monitoring of Ingestive Behavior (MIB) rely on objective measures derived from chewing and swallowing sequences and thus can be used for unbiased study of food intake under free-living conditions. Our previous study demonstrated accurate detection of food intake in simple models relying on observation of both chewing and swallowing. This article investigates methods that achieve comparable accuracy of food intake detection using only the time series of swallows, thus eliminating the need for the chewing sensor. The classification is performed for each individual swallow rather than for the previously used time slices and thus will lead to higher accuracy in mass prediction models relying on counts of swallows. Performance of a group model based on a supervised method (SVM) is compared to performance of individual models based on an unsupervised method (K-means), with results indicating better performance of the unsupervised, self-adapting method. Overall, the results demonstrate that highly accurate detection of intake of foods with substantially different physical properties is possible by an unsupervised system that relies on the information provided by swallowing alone.

  5. Detection of Food Intake from Swallowing Sequences by Supervised and Unsupervised Methods

    PubMed Central

    Lopez-Meyer, Paulo; Makeyev, Oleksandr; Schuckers, Stephanie; Melanson, Edward L.; Neuman, Michael R.; Sazonov, Edward

    2010-01-01

    Studies of food intake and ingestive behavior in free-living conditions most often rely on self-reporting-based methods that can be highly inaccurate. Methods of Monitoring of Ingestive Behavior (MIB) rely on objective measures derived from chewing and swallowing sequences and thus can be used for unbiased study of food intake under free-living conditions. Our previous study demonstrated accurate detection of food intake in simple models relying on observation of both chewing and swallowing. This article investigates methods that achieve comparable accuracy of food intake detection using only the time series of swallows, thus eliminating the need for the chewing sensor. The classification is performed for each individual swallow rather than for the previously used time slices and thus will lead to higher accuracy in mass prediction models relying on counts of swallows. Performance of a group model based on a supervised method (SVM) is compared to performance of individual models based on an unsupervised method (K-means), with results indicating better performance of the unsupervised, self-adapting method. Overall, the results demonstrate that highly accurate detection of intake of foods with substantially different physical properties is possible by an unsupervised system that relies on the information provided by swallowing alone. PMID:20352335
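
    A minimal per-individual, unsupervised detector in the spirit of the K-means approach described in the two records above: cluster per-swallow features into two groups and label the higher-swallowing-rate cluster as food intake. The features and their distributions below are hypothetical placeholders:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)

    # Per-swallow features (hypothetical): instantaneous swallowing rate and
    # time since the previous swallow. Food intake raises the rate markedly.
    eating = np.column_stack([rng.normal(6.0, 1.0, 120),     # swallows/min
                              rng.normal(10.0, 2.0, 120)])   # s since last swallow
    not_eating = np.column_stack([rng.normal(1.0, 0.4, 300),
                                  rng.normal(60.0, 15.0, 300)])
    X = np.vstack([eating, not_eating])

    # Unsupervised, self-adapting model: two clusters, no labelled training data.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    # The cluster with the higher mean swallowing rate is labelled "intake".
    intake_cluster = np.argmax(km.cluster_centers_[:, 0])
    print("swallows flagged as intake:", np.sum(km.labels_ == intake_cluster))
    ```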

  6. Investigation of micromixing by acoustically oscillated sharp-edges.

    PubMed

    Nama, Nitesh; Huang, Po-Hsun; Huang, Tony Jun; Costanzo, Francesco

    2016-03-01

    Recently, acoustically oscillated sharp-edges have been utilized to achieve rapid and homogeneous mixing in microchannels. Here, we present a numerical model to investigate acoustic mixing inside a sharp-edge-based micromixer in the presence of a background flow. We extend our previously reported numerical model to include the mixing phenomena by using perturbation analysis and the Generalized Lagrangian Mean (GLM) theory in conjunction with the convection-diffusion equation. We divide the flow variables into zeroth-order, first-order, and second-order variables. This results in three sets of equations representing the background flow, acoustic response, and the time-averaged streaming flow, respectively. These equations are then solved successively to obtain the mean Lagrangian velocity which is combined with the convection-diffusion equation to predict the concentration profile. We validate our numerical model via a comparison of the numerical results with the experimentally obtained values of the mixing index for different flow rates. Further, we employ our model to study the effect of the applied input power and the background flow on the mixing performance of the sharp-edge-based micromixer. We also suggest potential design changes to the previously reported sharp-edge-based micromixer to improve its performance. Finally, we investigate the generation of a tunable concentration gradient by a linear arrangement of the sharp-edge structures inside the microchannel.

  7. Dualities in CHL-models

    NASA Astrophysics Data System (ADS)

    Persson, Daniel; Volpato, Roberto

    2018-04-01

    We define a very general class of CHL-models associated with any string theory S (bosonic or supersymmetric) compactified on an internal CFT C × T^d. We take the orbifold by a pair (g, δ), where g is a (possibly non-geometric) symmetry of C and δ is a translation along T^d. We analyze the T-dualities of these models and show that in general they contain Atkin–Lehner type symmetries. This generalizes our previous work on N = 4 CHL-models based on heterotic string theory on T^6 or type II on K3 × T^2, as well as the 'monstrous' CHL-models based on a compactification of heterotic string theory on the Frenkel–Lepowsky–Meurman CFT V♮.

  8. Analysis of Predominance of Sexual Reproduction and Quadruplicity of Bases by Computer Simulation

    NASA Astrophysics Data System (ADS)

    Dasgupta, Subinay

    We have presented elsewhere a model for computer simulation of a colony of individuals reproducing sexually, by meiotic parthenogenesis and by cloning. Our algorithm takes into account food and space restriction, and attacks of some diseases. Each individual is characterized by a string of L "base" units, each of which can be of four types (quaternary model) or two types (binary model). Our previous report was for the case of L=12 (quaternary model) and L=24 (binary model) and contained the result that the fluctuation of population was the lowest for sexual reproduction with four types of base units. The present communication reports that the same conclusion also holds for L=10 (quaternary model) and L=20 (binary model), and for L=8 (quaternary model) and L=16 (binary model). This model, however, suffers from the drawback that it does not show the effect of aging. A modification of the model was attempted to remove this drawback, but the results were not encouraging.

  9. Nursing theory and concept development: a theoretical model of clinical nurses' intentions to stay in their current positions.

    PubMed

    Cowden, Tracy L; Cummings, Greta G

    2012-07-01

    We describe a theoretical model of staff nurses' intentions to stay in their current positions. The global nursing shortage and high nursing turnover rate demand evidence-based retention strategies. Inconsistent study outcomes indicate a need for testable theoretical models of intent to stay that build on previously published models, are reflective of current empirical research and identify causal relationships between model concepts. The model was informed by two systematic reviews of electronic databases covering English-language articles published between 1985 and 2011. This complex, testable model expands on previous models and includes nurses' affective and cognitive responses to work and their effects on nurses' intent to stay. The concepts of desire to stay, job satisfaction, joy at work, and moral distress are included in the model to capture the emotional response of nurses to their work environments. The influence of leadership is integrated within the model. A causal understanding of clinical nurses' intent to stay and the effects of leadership on the development of that intention will facilitate the development of effective retention strategies internationally. Testing theoretical models is necessary to confirm previous research outcomes and to identify plausible sequences of the development of behavioral intentions. Increased understanding of the causal influences on nurses' intent to stay should lead to strategies that may result in higher retention rates and numbers of nurses willing to work in the health sector. © 2012 Blackwell Publishing Ltd.

  10. Formability prediction for AHSS materials using damage models

    NASA Astrophysics Data System (ADS)

    Amaral, R.; Santos, Abel D.; José, César de Sá; Miranda, Sara

    2017-05-01

    Advanced high strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, the use of these materials, characterized by a high strength-to-weight ratio, stiffness and high work hardening at early stages of plastic deformation, has imposed many challenges in the sheet metal industry, mainly their low formability and different behaviour compared to traditional steels, which can make it a challenging task both to obtain a successful component and to predict material behaviour and fracture limits by numerical simulation. Although numerical prediction of critical strains in sheet metal forming processes is still very often based on the classic forming limit diagrams, alternative approaches can use damage models, which are based on stress states to predict failure during the forming process and can be classified as empirical, physics-based and phenomenological models. In the present paper a comparative analysis of different ductile damage models is carried out, in order to numerically evaluate two isotropic coupled damage models, proposed by Johnson-Cook and by Gurson-Tvergaard-Needleman (GTN), corresponding respectively to the first two groups of the previous classification. Finite element analysis is used with these damage mechanics approaches and the results are compared with experimental Nakajima tests, making it possible to evaluate and validate their ability to predict damage and formability limits.
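
    For reference, the Johnson-Cook damage model named above is commonly written as follows; this is the standard textbook form with material constants D1-D5, not the calibration used in the paper.

```latex
% Johnson-Cook fracture criterion (common textbook form; D_1..D_5 are
% material constants). The failure strain depends on stress triaxiality,
% strain rate and temperature:
\[
  \varepsilon_f \;=\; \bigl(D_1 + D_2\, e^{\,D_3 \sigma^*}\bigr)
                 \bigl(1 + D_4 \ln \dot{\varepsilon}^{\,*}\bigr)
                 \bigl(1 + D_5\, T^{*}\bigr),
  \qquad \sigma^{*} = \frac{\sigma_m}{\bar{\sigma}} .
\]
% Damage accumulates over plastic strain increments, with failure
% predicted when D reaches unity:
\[
  D = \sum \frac{\Delta \bar{\varepsilon}_p}{\varepsilon_f}, \qquad D = 1 .
\]
```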

  11. Establishing a Multi-scale Stream Gaging Network in the Whitewater River Basin, Kansas, USA

    USGS Publications Warehouse

    Clayton, J.A.; Kean, J.W.

    2010-01-01

    Investigating the routing of streamflow through a large drainage basin requires the determination of discharge at numerous locations in the channel network. Establishing a dense network of stream gages using conventional methods is both cost-prohibitive and functionally impractical for many research projects. We employ herein a previously tested, fluid-mechanically based model for generating rating curves to establish a stream gaging network in the Whitewater River basin in south-central Kansas. The model was developed for the type of channels typically found in this watershed, meaning that it is designed to handle deep, narrow geomorphically stable channels with irregular planforms, and can model overbank flow over a vegetated floodplain. We applied the model to ten previously ungaged stream reaches in the basin, ranging from third- to sixth-order channels. At each site, detailed field measurements of the channel and floodplain morphology, bed and bank roughness, and vegetation characteristics were used to quantify the roughness for a range of flow stages, from low flow to overbank flooding. Rating curves that relate stage to discharge were developed for all ten sites. Both fieldwork and modeling were completed in less than 2 years during an anomalously dry period in the region, which underscores an advantage of using theoretically based (as opposed to empirically based) discharge estimation techniques. © 2010 Springer Science+Business Media B.V.
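
    The study's rating-curve model is fluid-mechanically detailed (irregular planforms, vegetated floodplains). Purely as an illustration of how a theoretically based rating curve maps stage to discharge from geometry and roughness, here is a much simpler Manning's-equation sketch for a trapezoidal section; all dimensions and roughness values are hypothetical, and this is not the model used in the paper.

```python
# Illustrative only: a stage-discharge rating curve for a simple
# trapezoidal section via Manning's equation. The study's model is far
# more sophisticated; this sketch just shows how geometry plus roughness
# map stage to discharge without any direct flow measurement.
import numpy as np

def manning_discharge(stage, bottom_width=5.0, side_slope=1.5,
                      n=0.035, bed_slope=0.001):
    """Discharge Q (m^3/s) for a trapezoidal channel at a given stage (m)."""
    area = stage * (bottom_width + side_slope * stage)
    wetted_perimeter = bottom_width + 2 * stage * np.sqrt(1 + side_slope**2)
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * np.sqrt(bed_slope)

stages = np.linspace(0.1, 3.0, 30)
rating = [(h, manning_discharge(h)) for h in stages]
for h, q in rating[::10]:
    print(f"stage {h:4.2f} m -> Q {q:7.2f} m^3/s")
```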

  12. Continuous Indoor Positioning Fusing WiFi, Smartphone Sensors and Landmarks

    PubMed Central

    Deng, Zhi-An; Wang, Guofeng; Qin, Danyang; Na, Zhenyu; Cui, Yang; Chen, Juan

    2016-01-01

    To exploit the complementary strengths of WiFi positioning, pedestrian dead reckoning (PDR), and landmarks, we propose a novel fusion approach based on an extended Kalman filter (EKF). For WiFi positioning, unlike previous fusion approaches setting measurement noise parameters empirically, we deploy a kernel density estimation-based model to adaptively measure the related measurement noise statistics. Furthermore, a trusted area of WiFi positioning defined by the fusion results of the previous step and WiFi signal outlier detection are exploited to reduce computational cost and improve WiFi positioning accuracy. For PDR, we integrate a gyroscope, an accelerometer, and a magnetometer to determine the user heading based on another EKF model. To reduce the accumulation error of PDR and enable continuous indoor positioning, not only the positioning results but also the heading estimations are recalibrated by indoor landmarks. Experimental results in a realistic indoor environment show that the proposed fusion approach achieves a substantial improvement in positioning accuracy over individual positioning approaches, including PDR and WiFi positioning. PMID:27608019

  13. Continuous Indoor Positioning Fusing WiFi, Smartphone Sensors and Landmarks.

    PubMed

    Deng, Zhi-An; Wang, Guofeng; Qin, Danyang; Na, Zhenyu; Cui, Yang; Chen, Juan

    2016-09-05

    To exploit the complementary strengths of WiFi positioning, pedestrian dead reckoning (PDR), and landmarks, we propose a novel fusion approach based on an extended Kalman filter (EKF). For WiFi positioning, unlike previous fusion approaches setting measurement noise parameters empirically, we deploy a kernel density estimation-based model to adaptively measure the related measurement noise statistics. Furthermore, a trusted area of WiFi positioning defined by the fusion results of the previous step and WiFi signal outlier detection are exploited to reduce computational cost and improve WiFi positioning accuracy. For PDR, we integrate a gyroscope, an accelerometer, and a magnetometer to determine the user heading based on another EKF model. To reduce the accumulation error of PDR and enable continuous indoor positioning, not only the positioning results but also the heading estimations are recalibrated by indoor landmarks. Experimental results in a realistic indoor environment show that the proposed fusion approach achieves a substantial improvement in positioning accuracy over individual positioning approaches, including PDR and WiFi positioning.
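
    The core of the fusion described above can be sketched as a Kalman-style predict/update cycle in which a PDR step drives the prediction and a WiFi fix drives the correction. This is a minimal linear sketch, not the paper's EKF: the adaptive KDE-based measurement noise and the landmark recalibration are omitted, and all noise values are assumptions.

```python
# Minimal sketch of the fusion idea: a PDR step drives the prediction and
# a WiFi fix drives the update. The paper's EKF additionally adapts the
# WiFi measurement noise R via kernel density estimation; here R is fixed.
import numpy as np

x = np.array([0.0, 0.0])          # position estimate (m)
P = np.eye(2) * 1.0               # estimate covariance
Q = np.eye(2) * 0.05              # PDR process noise (assumed)
R = np.eye(2) * 4.0               # WiFi measurement noise (would be adaptive)
H = np.eye(2)                     # WiFi measures position directly

def fuse_step(x, P, step_vector, wifi_fix):
    # Predict: dead-reckon with the PDR step (step length + heading).
    x_pred = x + step_vector
    P_pred = P + Q
    # Update: correct with the WiFi position fix.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (wifi_fix - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = fuse_step(x, P, step_vector=np.array([0.7, 0.1]),
                 wifi_fix=np.array([0.9, 0.0]))
print(x, np.diag(P))
```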

  14. Development of global sea ice 6.0 CICE configuration for the Met Office global coupled model

    DOE PAGES

    Rae, J. G. L.; Hewitt, H. T.; Keen, A. B.; ...

    2015-03-05

    The new sea ice configuration GSI6.0, used in the Met Office global coupled configuration GC2.0, is described and the sea ice extent, thickness and volume are compared with the previous configuration and with observationally-based datasets. In the Arctic, the sea ice is thicker in all seasons than in the previous configuration, and there is now better agreement of the modelled concentration and extent with the HadISST dataset. In the Antarctic, a warm bias in the ocean model has been exacerbated at the higher resolution of GC2.0, leading to a large reduction in ice extent and volume; further work is required to rectify this in future configurations.

  15. Towards an international taxonomy of integrated primary care: a Delphi consensus approach.

    PubMed

    Valentijn, Pim P; Vrijhoef, Hubertus J M; Ruwaard, Dirk; Boesveld, Inge; Arends, Rosa Y; Bruijnzeels, Marc A

    2015-05-22

    Developing integrated service models in a primary care setting is considered an essential strategy for establishing a sustainable and affordable health care system. The Rainbow Model of Integrated Care (RMIC) describes the theoretical foundations of integrated primary care. The aim of this study is to refine the RMIC by developing a consensus-based taxonomy of key features. First, the appropriateness of previously identified key features was retested by conducting an international Delphi study that was built on the results of a previous national Delphi study. Second, categorisation of the features among the RMIC integrated care domains was assessed in a second international Delphi study. Finally, a taxonomy was constructed by the researchers based on the results of the three Delphi studies. The final taxonomy consists of 21 key features distributed over eight integration domains which are organised into three main categories: scope (person-focused vs. population-based), type (clinical, professional, organisational and system) and enablers (functional vs. normative) of an integrated primary care service model. The taxonomy provides a crucial differentiation that clarifies and supports implementation, policy formulation and research regarding the organisation of integrated primary care. Further research is needed to develop instruments based on the taxonomy that can reveal the realm of integrated primary care in practice.

  16. Detection of no-model input-output pairs in closed-loop systems.

    PubMed

    Potts, Alain Segundo; Alvarado, Christiam Segundo Morales; Garcia, Claudio

    2017-11-01

    The detection of no-model input-output (IO) pairs is important because it can speed up the multivariable system identification process, since all pairs with null transfer functions are discarded beforehand, and it can also improve the quality of the identified model, thus improving the performance of model-based controllers. In the available literature, the methods focus only on the open-loop case, where there is no effect of the controller forcing the main diagonal of the transfer matrix to one and all other terms to zero. In this paper, a modification of a previous method able to detect no-model IO pairs in open-loop systems is presented, adapted to perform this duty in closed-loop systems. Tests are performed using the traditional methods and the proposed one to show its effectiveness. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  17. [Optimization of the parameters of microcirculatory structural adaptation model based on improved quantum-behaved particle swarm optimization algorithm].

    PubMed

    Pan, Qing; Yao, Jialiang; Wang, Ruofan; Cao, Ping; Ning, Gangmin; Fang, Luping

    2017-08-01

    The vessels in the microcirculation keep adjusting their structure to meet the functional requirements of the different tissues. A previously developed theoretical model can reproduce the process of vascular structural adaptation to help the study of microcirculatory physiology. However, until now, such a model has lacked appropriate methods for setting its parameters, which has limited its further application. This study proposed an improved quantum-behaved particle swarm optimization (QPSO) algorithm for setting the parameter values in this model. The optimization was performed on a real mesenteric microvascular network of the rat. The results showed that the improved QPSO was superior to the standard particle swarm optimization, the standard QPSO and the previously reported Downhill algorithm. We conclude that the improved QPSO leads to a better agreement between mathematical simulation and animal experiment, rendering the model more reliable in future physiological studies.
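
    For readers unfamiliar with QPSO, the sketch below implements the standard quantum-behaved particle swarm update (local attractor, mean best position, logarithmic jump) minimizing a toy objective. The paper's improved variant and its microcirculatory objective function are not reproduced; all parameters here are illustrative.

```python
# Minimal QPSO sketch (standard algorithm form, not the paper's improved
# variant). In the study the objective would measure the mismatch between
# simulated and experimentally observed microvascular quantities.
import numpy as np

def qpso(objective, dim=4, n_particles=20, iters=200, beta=0.75, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in pbest])
    for _ in range(iters):
        gbest = pbest[np.argmin(pbest_val)]
        mbest = pbest.mean(axis=0)                 # mean best position
        phi = rng.uniform(size=(n_particles, dim))
        u = rng.uniform(size=(n_particles, dim))
        local = phi * pbest + (1 - phi) * gbest    # local attractor
        sign = np.where(rng.uniform(size=(n_particles, dim)) < 0.5, -1, 1)
        x = local + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        val = np.array([objective(p) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
    return pbest[np.argmin(pbest_val)], pbest_val.min()

best, best_val = qpso(lambda p: np.sum(p**2))      # toy sphere function
print(best, best_val)
```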

  18. Unified constitutive material models for nonlinear finite-element structural analysis. [gas turbine engine blades and vanes

    NASA Technical Reports Server (NTRS)

    Kaufman, A.; Laflen, J. H.; Lindholm, U. S.

    1985-01-01

    Unified constitutive material models were developed for structural analyses of aircraft gas turbine engine components with particular application to isotropic materials used for high-pressure stage turbine blades and vanes. Forms or combinations of models independently proposed by Bodner and Walker were considered. These theories combine time-dependent and time-independent aspects of inelasticity into a continuous spectrum of behavior. This is in sharp contrast to previous classical approaches that partition inelastic strain into uncoupled plastic and creep components. Predicted stress-strain responses from these models were evaluated against monotonic and cyclic test results for uniaxial specimens of two cast nickel-base alloys, B1900+Hf and Rene' 80. Previously obtained tension-torsion test results for Hastelloy X alloy were used to evaluate multiaxial stress-strain cycle predictions. The unified models, as well as appropriate algorithms for integrating the constitutive equations, were implemented in finite-element computer codes.
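
    The contrast drawn above between unified and classical partitioned formulations can be summarized in one line; the notation below is generic (a back stress and a drag stress as internal state variables), not the specific Bodner or Walker forms.

```latex
% Classical approach: inelastic strain rate is partitioned into uncoupled
% plastic and creep parts. Unified theories use a single inelastic term:
\[
  \dot{\varepsilon} = \dot{\varepsilon}_e + \dot{\varepsilon}_p + \dot{\varepsilon}_c
  \quad\text{(classical, partitioned)}
  \qquad\longrightarrow\qquad
  \dot{\varepsilon} = \dot{\varepsilon}_e + \dot{\varepsilon}_{in}
  \quad\text{(unified)},
\]
% with the inelastic rate given by a rate-dependent flow law driven by
% internal state variables, e.g. a back stress \Omega and drag stress K:
\[
  \dot{\varepsilon}_{in} = f\!\left(\sigma - \Omega,\; K,\; T\right),
\]
% so "creep" and "plasticity" emerge as limits of one continuous
% viscoplastic mechanism rather than as separate, uncoupled components.
```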

  19. Reliable Communication Models in Interdependent Critical Infrastructure Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkeun; Chinthavali, Supriya; Shankar, Mallikarjun

    Modern critical infrastructure networks are becoming increasingly interdependent, where failures in one network may cascade to other dependent networks, causing severe widespread national-scale failures. A number of previous efforts have been made to analyze the resiliency and robustness of interdependent networks based on different models. However, the communication network, which plays an important role in today's infrastructures to detect and handle failures, has attracted little attention in interdependency studies, and no previous models have captured enough practical features of critical infrastructure networks. In this paper, we study the interdependencies between communication networks and other kinds of critical infrastructure networks with an aim to identify vulnerable components and design resilient communication networks. We propose several interdependency models that systematically capture various features and dynamics of failures spreading in critical infrastructure networks. We also discuss several research challenges in building reliable communication solutions to handle failures in these models.

  20. Research on Capturing of Customer Requirements Based on Innovation Theory

    NASA Astrophysics Data System (ADS)

    Junwu, Ding; Dongtao, Yang; Zhenqiang, Bao

    To exactly and effectively capture customer requirements information, a new customer requirements capturing modeling method was proposed. Based on the analysis of function requirement models of previous products and the application of technology system evolution laws of the Theory of Innovative Problem Solving (TRIZ), the customer requirements could be evolved from existing product designs, through modifying the functional requirement unit and confirming the direction of evolution design. Finally, a case study was provided to illustrate the feasibility of the proposed approach.

  1. Work stress, role conflict, social support, and psychological burnout among teachers.

    PubMed

    Burke, R J; Greenglass, E

    1993-10-01

    This study examined a research model developed to understand psychological burnout among school-based educators. Data were collected from 833 school-based educators using questionnaires completed anonymously. Four groups of predictor variables identified in previous research were considered: individual demographic and situational variables, work stressors, role conflict, and social support. Some support for the model was found. Work stressors were strong predictors of psychological burnout. Individual demographic characteristics, role conflict, and social support had little effect on psychological burnout.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    Previously, the SURFplus reactive burn model was calibrated for the TATB-based explosive PBX 9502. The calibration was based on fitting Pop plot data, the failure diameter and the limiting detonation speed, and curvature effect data for small curvature. The model failure diameter is determined utilizing 2-D simulations of an unconfined rate stick to find the minimum diameter for which a detonation wave propagates. Here we examine the effect of mesh resolution on an unconfined rate stick with a diameter (10 mm) slightly greater than the measured failure diameter (8 to 9 mm).

  3. Estimation of Foot Plantar Center of Pressure Trajectories with Low-Cost Instrumented Insoles Using an Individual-Specific Nonlinear Model.

    PubMed

    Hu, Xinyao; Zhao, Jun; Peng, Dongsheng; Sun, Zhenglong; Qu, Xingda

    2018-02-01

    Postural control is a complex skill based on the interaction of dynamic sensorimotor processes, and can be challenging for people with deficits in sensory functions. The foot plantar center of pressure (COP) has often been used for quantitative assessment of postural control. Previously, the foot plantar COP was mainly measured by force plates or complicated and expensive insole-based measurement systems. Although some low-cost instrumented insoles have been developed, their ability to accurately estimate the foot plantar COP trajectory was not robust. In this study, a novel individual-specific nonlinear model was proposed to estimate the foot plantar COP trajectories with an instrumented insole based on low-cost force sensitive resistors (FSRs). The model coefficients were determined by a least square error approximation algorithm. Model validation was carried out by comparing the estimated COP data with the reference data in a variety of postural control assessment tasks. We also compared our data with the COP trajectories estimated by the previously well accepted weighted mean approach. Comparing with the reference measurements, the average root mean square errors of the COP trajectories of both feet were 2.23 mm (±0.64) (left foot) and 2.72 mm (±0.83) (right foot) along the medial-lateral direction, and 9.17 mm (±1.98) (left foot) and 11.19 mm (±2.98) (right foot) along the anterior-posterior direction. The results are superior to those reported in previous relevant studies, and demonstrate that our proposed approach can be used for accurate foot plantar COP trajectory estimation. This study could provide an inexpensive solution to fall risk assessment in home settings or community healthcare center for the elderly. It has the potential to help prevent future falls in the elderly.

  4. Estimation of Foot Plantar Center of Pressure Trajectories with Low-Cost Instrumented Insoles Using an Individual-Specific Nonlinear Model

    PubMed Central

    Hu, Xinyao; Zhao, Jun; Peng, Dongsheng

    2018-01-01

    Postural control is a complex skill based on the interaction of dynamic sensorimotor processes, and can be challenging for people with deficits in sensory functions. The foot plantar center of pressure (COP) has often been used for quantitative assessment of postural control. Previously, the foot plantar COP was mainly measured by force plates or complicated and expensive insole-based measurement systems. Although some low-cost instrumented insoles have been developed, their ability to accurately estimate the foot plantar COP trajectory was not robust. In this study, a novel individual-specific nonlinear model was proposed to estimate the foot plantar COP trajectories with an instrumented insole based on low-cost force sensitive resistors (FSRs). The model coefficients were determined by a least square error approximation algorithm. Model validation was carried out by comparing the estimated COP data with the reference data in a variety of postural control assessment tasks. We also compared our data with the COP trajectories estimated by the previously well accepted weighted mean approach. Comparing with the reference measurements, the average root mean square errors of the COP trajectories of both feet were 2.23 mm (±0.64) (left foot) and 2.72 mm (±0.83) (right foot) along the medial–lateral direction, and 9.17 mm (±1.98) (left foot) and 11.19 mm (±2.98) (right foot) along the anterior–posterior direction. The results are superior to those reported in previous relevant studies, and demonstrate that our proposed approach can be used for accurate foot plantar COP trajectory estimation. This study could provide an inexpensive solution to fall risk assessment in home settings or community healthcare center for the elderly. It has the potential to help prevent future falls in the elderly. PMID:29389857
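
    The estimation idea above can be sketched as follows: a weighted-mean baseline computes COP from fixed sensor coordinates, while an individual-specific model is fitted by least squares against reference COP data. The quadratic feature expansion below is a stand-in assumption; the paper's exact nonlinear model form is not reproduced, and the sensor layout and data are synthetic.

```python
# Sketch: estimating COP from a few FSR readings. The weighted-mean
# baseline uses fixed sensor coordinates; the individual-specific model is
# approximated here by a least-squares polynomial fit. Layout is synthetic.
import numpy as np

sensor_xy = np.array([[20, 150], [60, 150], [40, 80], [40, 10]], float)  # mm

def cop_weighted_mean(forces):
    w = forces / forces.sum()
    return w @ sensor_xy

# Individual-specific model: COP ~ Phi(forces) @ beta, fitted per subject
# against a reference system (e.g., a force plate).
def design(F):                       # quadratic feature expansion (assumed)
    return np.hstack([F, F**2, np.ones((len(F), 1))])

rng = np.random.default_rng(1)
F_train = rng.uniform(0, 10, (300, 4))                  # FSR readings
cop_ref = np.array([cop_weighted_mean(f) for f in F_train])
cop_ref += 0.002 * F_train[:, :2]**2                    # nonlinear distortion
beta, *_ = np.linalg.lstsq(design(F_train), cop_ref, rcond=None)

f_new = rng.uniform(0, 10, (1, 4))
print("weighted mean:", cop_weighted_mean(f_new[0]))
print("fitted model :", design(f_new) @ beta)
```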

  5. Temporal patterns and a disease forecasting model of dengue hemorrhagic fever in Jakarta based on 10 years of surveillance data.

    PubMed

    Sitepu, Monika S; Kaewkungwal, Jaranit; Luplerdlop, Nathanej; Soonthornworasiri, Ngamphol; Silawan, Tassanee; Poungsombat, Supawadee; Lawpoolsri, Saranath

    2013-03-01

    This study aimed to describe the temporal patterns of dengue transmission in Jakarta from 2001 to 2010, using data from the national surveillance system. The Box-Jenkins forecasting technique was used to develop a seasonal autoregressive integrated moving average (SARIMA) model for the study period and subsequently applied to forecast DHF incidence in 2011 in Jakarta Utara, Jakarta Pusat, Jakarta Barat, and the municipalities of Jakarta Province. Dengue incidence in 2011, based on the forecasting model, was predicted to increase from the previous year.
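
    A minimal version of the Box-Jenkins workflow described above, using the SARIMAX implementation in statsmodels on synthetic monthly counts; the (1,0,1)x(1,1,1,12) order is illustrative, not the order identified in the study.

```python
# Sketch of the Box-Jenkins workflow: fit a SARIMA model on monthly DHF
# incidence and forecast the next 12 months. Order and data are synthetic
# stand-ins, not the study's surveillance series.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
months = pd.date_range("2001-01", periods=120, freq="MS")      # 2001-2010
seasonal = 50 + 30 * np.sin(2 * np.pi * months.month / 12)
incidence = pd.Series(seasonal + rng.normal(0, 5, 120), index=months)

model = SARIMAX(incidence, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)
forecast_2011 = fit.forecast(steps=12)     # monthly DHF forecast for 2011
print(forecast_2011.round(1))
```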

  6. Mathematical Analysis for Non-reciprocal-interaction-based Model of Collective Behavior

    NASA Astrophysics Data System (ADS)

    Kano, Takeshi; Osuka, Koichi; Kawakatsu, Toshihiro; Ishiguro, Akio

    2017-12-01

    In many natural and social systems, collective behaviors emerge as a consequence of non-reciprocal interaction between their constituents. As a first step towards understanding the core principle that underlies these phenomena, we previously proposed a minimal model of collective behavior based on non-reciprocal interactions by drawing inspiration from friendship formation in human society, and demonstrated via simulations that various non-trivial patterns emerge by changing parameters. In this study, we perform a mathematical analysis of the proposed model for small system sizes. The analysis elucidates the mechanism of the transition between several of the patterns.

  7. Fracture control of ground water flow and water chemistry in a rock aquitard.

    PubMed

    Eaton, Timothy T; Anderson, Mary P; Bradbury, Kenneth R

    2007-01-01

    There are few studies on the hydrogeology of sedimentary rock aquitards although they are important controls in regional ground water flow systems. We formulate and test a three-dimensional (3D) conceptual model of ground water flow and hydrochemistry in a fractured sedimentary rock aquitard to show that flow dynamics within the aquitard are more complex than previously believed. Similar conceptual models, based on regional observations and recently emerging principles of mechanical stratigraphy in heterogeneous sedimentary rocks, have previously been applied only to aquifers, but we show that they are potentially applicable to aquitards. The major elements of this conceptual model, which is based on detailed information from two sites in the Maquoketa Formation in southeastern Wisconsin, include orders of magnitude contrast between hydraulic diffusivity (K/S(s)) of fractured zones and relatively intact aquitard rock matrix, laterally extensive bedding-plane fracture zones extending over distances of over 10 km, very low vertical hydraulic conductivity of thick shale-rich intervals of the aquitard, and a vertical hydraulic head profile controlled by a lateral boundary at the aquitard subcrop, where numerous surface water bodies dominate the shallow aquifer system. Results from a 3D numerical flow model based on this conceptual model are consistent with field observations, which did not fit the typical conceptual model of strictly vertical flow through an aquitard. The 3D flow through an aquitard has implications for predicting ground water flow and for planning and protecting water supplies.

  8. Fracture control of ground water flow and water chemistry in a rock aquitard

    USGS Publications Warehouse

    Eaton, T.T.; Anderson, M.P.; Bradbury, K.R.

    2007-01-01

    There are few studies on the hydrogeology of sedimentary rock aquitards although they are important controls in regional ground water flow systems. We formulate and test a three-dimensional (3D) conceptual model of ground water flow and hydrochemistry in a fractured sedimentary rock aquitard to show that flow dynamics within the aquitard are more complex than previously believed. Similar conceptual models, based on regional observations and recently emerging principles of mechanical stratigraphy in heterogeneous sedimentary rocks, have previously been applied only to aquifers, but we show that they are potentially applicable to aquitards. The major elements of this conceptual model, which is based on detailed information from two sites in the Maquoketa Formation in southeastern Wisconsin, include orders of magnitude contrast between hydraulic diffusivity (K/Ss) of fractured zones and relatively intact aquitard rock matrix, laterally extensive bedding-plane fracture zones extending over distances of over 10 km, very low vertical hydraulic conductivity of thick shale-rich intervals of the aquitard, and a vertical hydraulic head profile controlled by a lateral boundary at the aquitard subcrop, where numerous surface water bodies dominate the shallow aquifer system. Results from a 3D numerical flow model based on this conceptual model are consistent with field observations, which did not fit the typical conceptual model of strictly vertical flow through an aquitard. The 3D flow through an aquitard has implications for predicting ground water flow and for planning and protecting water supplies. © 2007 National Ground Water Association.

  9. Base drag prediction on missile configurations

    NASA Technical Reports Server (NTRS)

    Moore, F. G.; Hymer, T.; Wilcox, F.

    1993-01-01

    New wind tunnel data have been taken, and a new empirical model has been developed for predicting base drag on missile configurations. The new wind tunnel data were taken at NASA-Langley in the Unitary Wind Tunnel at Mach numbers from 2.0 to 4.5, angles of attack to 16 deg, fin control deflections up to 20 deg, fin thickness/chord of 0.05 to 0.15, and fin locations from 'flush with the base' to two chord-lengths upstream of the base. The empirical model uses these data along with previous wind tunnel data, estimating base drag as a function of all these variables as well as boat-tail and power-on/power-off effects. The new model yields improved accuracy, compared to wind tunnel data. The new model also is more robust due to inclusion of additional variables. On the other hand, additional wind tunnel data are needed to validate or modify the current empirical model in areas where data are not available.

  10. Shipborne LF-VLF oceanic lightning observations and modeling

    NASA Astrophysics Data System (ADS)

    Zoghzoghy, F. G.; Cohen, M. B.; Said, R. K.; Lehtinen, N. G.; Inan, U. S.

    2015-10-01

    Approximately 90% of natural lightning occurs over land, but recent observations, using Global Lightning Detection (GLD360) geolocation peak current estimates and satellite optical data, suggested that cloud-to-ground flashes are on average stronger over the ocean. We present initial statistics from a novel experiment using a Low Frequency (LF) magnetic field receiver system installed aboard the National Oceanic and Atmospheric Administration (NOAA) Ronald H. Brown research vessel that allowed the detection of impulsive radio emissions from deep-oceanic discharges at short distances. Thousands of LF waveforms were recorded, facilitating the comparison of oceanic waveforms to their land counterparts. A computationally efficient electromagnetic radiation model that accounts for propagation over lossy and curved ground is constructed and compared with previously published models. We include the effects of Earth curvature on LF ground wave propagation and quantify the effects of channel-base current risetime, channel-base current falltime, and return stroke speed on the radiated LF waveforms observed at a given distance. We compare simulation results to data and conclude that previously reported larger GLD360 peak current estimates over the ocean are unlikely to fully result from differences in channel-base current risetime, falltime, or return stroke speed between ocean and land flashes.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savy, J.

    New design and evaluation guidelines for Department of Energy facilities subjected to natural phenomena hazards are being finalized. Although still in draft form at this time, the document describing those guidelines should be considered an update of previously available guidelines. The recommendations in the guidelines document mentioned above, simply referred to as "the guidelines" hereafter, are based on the best information available at the time of its development. In particular, the seismic hazard model for the Princeton site was based on a study performed in 1981 for Lawrence Livermore National Laboratory (LLNL), which relied heavily on the results of the NRC's Systematic Evaluation Program and was based on a methodology and data sets developed in 1977 and 1978. Considerable advances have been made in the last ten years in the domain of seismic hazard modeling. Thus, it is recommended to update the estimate of the seismic hazard at the DOE sites whenever possible. The major differences between previous estimates and the ones proposed in this study for the PPPL are in the modeling of the strong ground motion at the site, and in the treatment of the total uncertainty in the estimates to include knowledge uncertainty, random uncertainty, and expert opinion diversity as well. 28 refs.

  12. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection

    PubMed Central

    Vesperini, Fabio; Schuller, Björn

    2017-01-01

    In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long Short-Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. There is no evidence of studies focused on comparing previous efforts to automatically recognize novel events from audio signals and giving a broad, in-depth evaluation of recurrent neural network (RNN)-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature and provide insight through extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% average F-measure over the three databases. PMID:28182121
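
    The reconstruction-error principle described above can be sketched with a small LSTM autoencoder in tensorflow.keras; real systems would use auditory spectral features and a tuned threshold, both of which are synthetic stand-ins here.

```python
# Sketch of the reconstruction-error idea with an LSTM autoencoder.
# A synthetic sine corpus stands in for auditory spectral features, and
# the detection threshold is an ad hoc quantile of training errors.
import numpy as np
from tensorflow.keras import layers, models

T, D = 20, 8                                  # frames per window, feature dim
rng = np.random.default_rng(0)
t = np.linspace(0, 1, T)
normal = np.stack([np.outer(np.sin(2 * np.pi * (3 + rng.normal(0, .1)) * t),
                            np.ones(D)) for _ in range(200)])

enc_in = layers.Input(shape=(T, D))
h = layers.LSTM(16)(enc_in)                   # encode window to 16-d state
h = layers.RepeatVector(T)(h)                 # replicate for the decoder
dec = layers.LSTM(16, return_sequences=True)(h)
out = layers.TimeDistributed(layers.Dense(D))(dec)
ae = models.Model(enc_in, out)
ae.compile(optimizer="adam", loss="mse")
ae.fit(normal, normal, epochs=5, batch_size=32, verbose=0)

def novelty_score(x):                         # per-window reconstruction error
    return np.mean((ae.predict(x[None], verbose=0) - x) ** 2)

threshold = np.quantile([novelty_score(x) for x in normal[:50]], 0.99)
test = rng.normal(0, 1, (T, D))               # a clearly novel window
print("novel" if novelty_score(test) > threshold else "normal")
```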

  13. Predictive Virtual Infection Modeling of Fungal Immune Evasion in Human Whole Blood.

    PubMed

    Prauße, Maria T E; Lehnert, Teresa; Timme, Sandra; Hünniger, Kerstin; Leonhardt, Ines; Kurzai, Oliver; Figge, Marc Thilo

    2018-01-01

    Bloodstream infections by the human-pathogenic fungi Candida albicans and Candida glabrata increasingly occur in hospitalized patients and are associated with high mortality rates. The early immune response against these fungi in human blood comprises a concerted action of humoral and cellular components of the innate immune system. Upon entering the blood, the majority of fungal cells will be eliminated by innate immune cells, i.e., neutrophils and monocytes. However, recent studies identified a population of fungal cells that can evade the immune response and may thereby disseminate and cause organ infection, which is frequently observed during candidemia. In this study, we investigate the so far unresolved mechanism of fungal immune evasion in human whole blood by testing hypotheses with the help of mathematical modeling. We use a previously established state-based virtual infection model for whole-blood infection with C. albicans to quantify the immune response and to identify the fungal immune-evasion mechanism. While this process was assumed to be spontaneous in the previous model, we now hypothesize that the immune-evasion process is mediated by host factors and incorporate such a mechanism in the model. In particular, we propose, based on previous studies, that the fungal immune-evasion mechanism could arise through modification of the fungal surface by as yet unknown proteins that are assumed to be secreted by activated neutrophils. To validate or reject any of the immune-evasion mechanisms, we compared the simulation of both immune-evasion models for different infection scenarios, i.e., infection of whole blood with either C. albicans or C. glabrata under non-neutropenic and neutropenic conditions. We found that under non-neutropenic conditions, both immune-evasion models fit the experimental data from whole-blood infection with C. albicans and C. glabrata. However, differences between the immune-evasion models could be observed for the infection outcome under neutropenic conditions with respect to the distribution of fungal cells across the immune cells. Based on these predictions, we suggested specific experimental studies that might allow for the validation or rejection of the proposed immune-evasion mechanism.

  14. Predictive Virtual Infection Modeling of Fungal Immune Evasion in Human Whole Blood

    PubMed Central

    Prauße, Maria T. E.; Lehnert, Teresa; Timme, Sandra; Hünniger, Kerstin; Leonhardt, Ines; Kurzai, Oliver; Figge, Marc Thilo

    2018-01-01

    Bloodstream infections by the human-pathogenic fungi Candida albicans and Candida glabrata increasingly occur in hospitalized patients and are associated with high mortality rates. The early immune response against these fungi in human blood comprises a concerted action of humoral and cellular components of the innate immune system. Upon entering the blood, the majority of fungal cells will be eliminated by innate immune cells, i.e., neutrophils and monocytes. However, recent studies identified a population of fungal cells that can evade the immune response and may thereby disseminate and cause organ infection, which is frequently observed during candidemia. In this study, we investigate the so far unresolved mechanism of fungal immune evasion in human whole blood by testing hypotheses with the help of mathematical modeling. We use a previously established state-based virtual infection model for whole-blood infection with C. albicans to quantify the immune response and to identify the fungal immune-evasion mechanism. While this process was assumed to be spontaneous in the previous model, we now hypothesize that the immune-evasion process is mediated by host factors and incorporate such a mechanism in the model. In particular, we propose, based on previous studies, that the fungal immune-evasion mechanism could arise through modification of the fungal surface by as yet unknown proteins that are assumed to be secreted by activated neutrophils. To validate or reject any of the immune-evasion mechanisms, we compared the simulation of both immune-evasion models for different infection scenarios, i.e., infection of whole blood with either C. albicans or C. glabrata under non-neutropenic and neutropenic conditions. We found that under non-neutropenic conditions, both immune-evasion models fit the experimental data from whole-blood infection with C. albicans and C. glabrata. However, differences between the immune-evasion models could be observed for the infection outcome under neutropenic conditions with respect to the distribution of fungal cells across the immune cells. Based on these predictions, we suggested specific experimental studies that might allow for the validation or rejection of the proposed immune-evasion mechanism. PMID:29619027

  15. Efficient model checking of network authentication protocol based on SPIN

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-hua; Zhang, Da-fang; Miao, Li; Zhao, Dan

    2013-03-01

    Model checking is a very useful technique for verifying network authentication protocols. In order to improve the efficiency of modeling and verifying these protocols with model checking technology, this paper first proposes a universal formal description method for such protocols. Combined with the model checker SPIN, the method can conveniently verify protocol properties. Through several simplifying modeling strategies, we can model several protocols efficiently and reduce the state space of the model. Compared with the previous literature, this paper achieves a higher degree of automation and better verification efficiency. Finally, based on the method described in the paper, we model and verify the Privacy and Key Management (PKM) authentication protocol. The experimental results show that the model checking method is effective and is applicable to other authentication protocols.

  16. Reconsidering the use of rankings in the valuation of health states: a model for estimating cardinal values from ordinal data

    PubMed Central

    Salomon, Joshua A

    2003-01-01

    Background In survey studies on health-state valuations, ordinal ranking exercises often are used as precursors to other elicitation methods such as the time trade-off (TTO) or standard gamble, but the ranking data have not been used in deriving cardinal valuations. This study reconsiders the role of ordinal ranks in valuing health and introduces a new approach to estimate interval-scaled valuations based on aggregate ranking data. Methods Analyses were undertaken on data from a previously published general population survey study in the United Kingdom that included rankings and TTO values for hypothetical states described using the EQ-5D classification system. The EQ-5D includes five domains (mobility, self-care, usual activities, pain/discomfort and anxiety/depression) with three possible levels on each. Rank data were analysed using a random utility model, operationalized through conditional logit regression. In the statistical model, probabilities of observed rankings were related to the latent utilities of different health states, modeled as a linear function of EQ-5D domain scores, as in previously reported EQ-5D valuation functions. Predicted valuations based on the conditional logit model were compared to observed TTO values for the 42 states in the study and to predictions based on a model estimated directly from the TTO values. Models were evaluated using the intraclass correlation coefficient (ICC) between predictions and mean observations, and the root mean squared error of predictions at the individual level. Results Agreement between predicted valuations from the rank model and observed TTO values was very high, with an ICC of 0.97, only marginally lower than for predictions based on the model estimated directly from TTO values (ICC = 0.99). Individual-level errors were also comparable in the two models, with root mean squared errors of 0.503 and 0.496 for the rank-based and TTO-based predictions, respectively. Conclusions Modeling health-state valuations based on ordinal ranks can provide results that are similar to those obtained from more widely analyzed valuation techniques such as the TTO. The information content in aggregate ranking data is not currently exploited to full advantage. The possibility of estimating cardinal valuations from ordinal ranks could also simplify future data collection dramatically and facilitate wider empirical study of health-state valuations in diverse settings and population groups. PMID:14687419
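
    One concrete way to turn aggregate rankings into cardinal values, consistent with the random utility setup described above, is a rank-ordered ("exploded") logit: the probability of an observed ranking factorizes into successive conditional-logit choices among the not-yet-ranked states. The sketch below fits such a model by maximum likelihood on synthetic data; the design matrix is a stand-in for EQ-5D domain scores, not the study's data.

```python
# Sketch of rank-based estimation: a rank-ordered (exploded) logit fitted
# by maximum likelihood. Rankings are sampled from a Plackett-Luce model
# with known weights so recovery can be checked; all data are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_states, n_feat = 8, 5
X = rng.integers(0, 3, (n_states, n_feat)).astype(float)   # domain levels
true_beta = -np.array([0.3, 0.2, 0.25, 0.4, 0.3])

def sample_ranking(beta):            # Gumbel noise + sort = Plackett-Luce
    v = X @ beta + rng.gumbel(size=n_states)
    return np.argsort(-v)            # best state first

rankings = [sample_ranking(true_beta) for _ in range(300)]

def neg_loglik(beta):
    v = X @ beta
    ll = 0.0
    for r in rankings:
        for k in range(n_states - 1):
            remaining = r[k:]        # states not yet ranked
            ll += v[r[k]] - np.log(np.exp(v[remaining]).sum())
    return -ll

beta_hat = minimize(neg_loglik, np.zeros(n_feat)).x
print(np.round(beta_hat, 2))         # interval-scaled weights from ranks only
```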

  17. Children's Solution Processes in Elementary Arithmetic Problems: Analysis and Improvement. Report No. 19.

    ERIC Educational Resources Information Center

    De Corte, Erik; Verschaffel, Lieven

    Design and results of an investigation attempting to analyze and improve children's solution processes in elementary addition and subtraction problems are described. As background for the study, a conceptual model was developed based on previous research. One dimension of the model relates to the characteristics of the tasks (numerical versus word…

  18. Using satellite-based estimates of evapotranspiration and groundwater changes to determine anthropogenic water fluxes in land surface models

    USDA-ARS?s Scientific Manuscript database

    Irrigation is a widely used water management practice that is often poorly parameterized in land surface and climate models. Previous studies have addressed this issue via use of irrigation area, applied water inventory data, or soil moisture content. These approaches have a variety of drawbacks i...

  19. Neurogenesis Interferes with the Retrieval of Remote Memories: Forgetting in Neurocomputational Terms

    ERIC Educational Resources Information Center

    Weisz, Victoria I.; Argibay, Pablo F.

    2012-01-01

    In contrast to models and theories that relate adult neurogenesis with the processes of learning and memory, almost no solid hypotheses have been formulated that involve a possible neurocomputational influence of adult neurogenesis on forgetting. Based on data from a previous study that implemented a simple but complete model of the main…

  20. Tutoring at a Distance: Modelling as a Tool to Control Chaos

    ERIC Educational Resources Information Center

    Bertin, Jean-Claude; Narcy-Combes, Jean-Paul

    2012-01-01

    This article builds on a previous article published in 2007, which aimed at clarifying the concept of tutoring. Based on a new epistemological stance (emergentism) the authors will here show how the various components of the computer-assisted language learning situation form a complex chaotic system. They advocate that modelling is a way of…

  1. Measurement Equivalence of Teachers' Sense of Efficacy Scale Using Latent Growth Methods

    ERIC Educational Resources Information Center

    Basokçu, T. Oguz; Ögretmen, T.

    2016-01-01

    This study is based on the application of latent growth modeling, a class of structural equation models, to real data. The Teachers' Sense of Efficacy Scale (TSES), which was previously adapted into Turkish, was administered to 200 preservice teachers three times at different time intervals, and study data were collected. Measurement equivalence…

  2. Flexible Electronics-Based Transformers for Extreme Environments

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco B.; Stoica, Adrian; Ingham, Michel; Thakur, Anubhav

    2015-01-01

    This paper provides a survey of the use of modular multifunctional systems, called Flexible Transformers, to facilitate the exploration of extreme and previously inaccessible environments. A novel dynamics and control model of a modular algorithm for assembly, folding, and unfolding of these innovative structural systems is also described, together with the control model and the simulation results.

  3. Wildfire risk and housing prices: a case study from Colorado Springs.

    Treesearch

    G.H. Donovan; P.A. Champ; D.T. Butry

    2007-01-01

    Unlike other natural hazards such as floods, hurricanes, and earthquakes, wildfire risk has not previously been examined using a hedonic property value model. In this article, we estimate a hedonic model based on parcel-level wildfire risk ratings from Colorado Springs. We found that providing homeowners with specific information about the wildfire risk rating of their...
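
    A hedonic property model of the kind described reduces, in its simplest form, to a regression of log price on structural attributes plus the risk rating. The sketch below is a synthetic illustration only; variable names, coefficients and data are hypothetical, not the study's dataset or specification.

```python
# Sketch of a hedonic price regression with a wildfire-risk covariate:
# log(price) on structural attributes plus a parcel-level risk rating.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
sqft = rng.uniform(1000, 4000, n)
age = rng.uniform(0, 50, n)
risk = rng.integers(1, 6, n).astype(float)     # 1 (low) .. 5 (high) rating
log_price = (11 + 0.0004 * sqft - 0.004 * age
             - 0.03 * risk + rng.normal(0, 0.1, n))

X = sm.add_constant(np.column_stack([sqft, age, risk]))
fit = sm.OLS(log_price, X).fit()
# The coefficient on risk reads as an approximate percent price discount
# per step of the wildfire risk rating, holding other attributes fixed.
print(fit.params.round(4))
```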

  4. Changes in Personality Disorder Traits Following 2 Years of Treatment in a Secure Therapeutic Community Milieu

    ERIC Educational Resources Information Center

    Morrissey, Catrin; Taylor, Jon

    2014-01-01

    Therapeutic community treatment models have not previously been applied to forensic patients with mild intellectual disabilities (IDs) with a comorbid diagnosis of personality disorder. Thirteen patients with mild IDs were allocated to a unit within a high secure psychiatric service operating a model of treatment based on the principles and…

  5. Validation of a Cognitive Diagnostic Model across Multiple Forms of a Reading Comprehension Assessment

    ERIC Educational Resources Information Center

    Clark, Amy K.

    2013-01-01

    The present study sought to fit a cognitive diagnostic model (CDM) across multiple forms of a passage-based reading comprehension assessment using the attribute hierarchy method. Previous research on CDMs for reading comprehension assessments served as a basis for the attributes in the hierarchy. The two attribute hierarchies were fit to data from…

  6. Integrated Formal Analysis of Timed-Triggered Ethernet

    NASA Technical Reports Server (NTRS)

    Dutertre, Bruno; Shankar, Nstarajan; Owre, Sam

    2012-01-01

    We present new results related to the verification of the Timed-Triggered Ethernet (TTE) clock synchronization protocol. This work extends previous verification of TTE based on model checking. We identify a suboptimal design choice in a compression function used in clock synchronization, and propose an improvement. We compare the original design and the improved definition using the SAL model checker.

  7. Career Goals in Young Adults: Personal Resources, Goal Appraisals, Attitudes, and Goal Management Strategies

    ERIC Educational Resources Information Center

    Haratsis, Jessica M.; Hood, Michelle; Creed, Peter A.

    2015-01-01

    We tested a model based on the dual-process framework that assessed the relationships among personal resources, career goal appraisals, career attitudes, and career goal management, which have not been previously assessed together. The model (tested on a sample of 486 young adults: 74% female, M_age = 22 years) proposed that personal…

  8. Collisional spreading of Enceladus’ neutral cloud

    NASA Astrophysics Data System (ADS)

    Cassidy, T. A.; Johnson, R. E.

    2010-10-01

    We describe a direct simulation Monte Carlo (DSMC) model of Enceladus' neutral cloud and compare its results to observations of OH and O orbiting Saturn. The OH and O are observed far from Enceladus (at 3.95 R_S), as far out as 25 R_S for O. Previous DSMC models attributed this breadth primarily to ion/neutral scattering (including charge exchange) and molecular dissociation. However, the newly reported O observations and a reinterpretation of the OH observations (Melin, H., Shemansky, D.E., Liu, X. [2009] Planet. Space Sci., 57, 1743-1753) showed that the cloud is broader than previously thought. We conclude that the addition of neutral/neutral scattering (Farmer, A.J. [2009] Icarus, 202, 280-286), which was underestimated by previous models, brings the model results in line with the new observations. Neutral/neutral collisions primarily happen in the densest part of the cloud, near Enceladus' orbit, but contribute to the spreading by pumping up orbital eccentricity. Based on the cloud model presented here, Enceladus may be the ultimate source of oxygen for the upper atmospheres of Titan and Saturn. We also predict that large quantities of OH, O and H2O bombard Saturn's icy satellites.

  9. Terminal Area Productivity Airport Wind Analysis and Chicago O'Hare Model Description

    NASA Technical Reports Server (NTRS)

    Hemm, Robert; Shapiro, Gerald

    1998-01-01

    This paper describes two results from a continuing effort to provide accurate cost-benefit analyses of the NASA Terminal Area Productivity (TAP) program technologies. Previous tasks have developed airport capacity and delay models and completed preliminary cost benefit estimates for TAP technologies at 10 U.S. airports. This task covers two improvements to the capacity and delay models. The first improvement is the completion of a detailed model set for the Chicago O'Hare (ORD) airport. Previous analyses used a more general model to estimate the benefits for ORD. This paper contains a description of the model details with results corresponding to current conditions. The second improvement is the development of specific wind speed and direction criteria for use in the delay models to predict when the Aircraft Vortex Spacing System (AVOSS) will allow use of reduced landing separations. This paper includes a description of the criteria and an estimate of AVOSS utility for 10 airports based on analysis of 35 years of weather data.

  10. Cortical circuitry implementing graphical models.

    PubMed

    Litvak, Shai; Ullman, Shimon

    2009-11-01

    In this letter, we develop and simulate a large-scale network of spiking neurons that approximates the inference computations performed by graphical models. Unlike previous related schemes, which used sum and product operations in either the log or linear domains, the current model uses an inference scheme based on the sum and maximization operations in the log domain. Simulations show that using these operations, a large-scale circuit, which combines populations of spiking neurons as basic building blocks, is capable of finding close approximations to the full mathematical computations performed by graphical models within a few hundred milliseconds. The circuit is general in the sense that it can be wired for any graph structure, it supports multistate variables, and it uses standard leaky integrate-and-fire neuronal units. Following previous work, which proposed relations between graphical models and the large-scale cortical anatomy, we focus on the cortical microcircuitry and propose how anatomical and physiological aspects of the local circuitry may map onto elements of the graphical model implementation. We discuss in particular the roles of three major types of inhibitory neurons (small fast-spiking basket cells, large layer 2/3 basket cells, and double-bouquet neurons), subpopulations of strongly interconnected neurons with their unique connectivity patterns in different cortical layers, and the possible role of minicolumns in the realization of the population-based maximum operation.
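
    The log-domain sum and maximization operations referred to above correspond to max-sum (Viterbi-style) message passing. The sketch below runs that computation on a small chain-structured graphical model with arbitrary example potentials; it illustrates the mathematics the circuit approximates, not the spiking-network implementation itself.

```python
# Max-sum message passing (MAP inference) on a chain MRF in the log
# domain: messages are formed by sums of log potentials followed by a
# maximization, the two operations named in the abstract.
import numpy as np

n_vars, n_states = 4, 3
rng = np.random.default_rng(0)
unary = rng.normal(0, 1, (n_vars, n_states))                   # log node pots
pairwise = rng.normal(0, 1, (n_vars - 1, n_states, n_states))  # log edge pots

# Forward pass: msg[s] is the best score of any prefix ending in state s.
msg = unary[0]
back = []
for i in range(n_vars - 1):
    scores = msg[:, None] + pairwise[i] + unary[i + 1]  # sum in log domain
    back.append(scores.argmax(axis=0))                  # max operation
    msg = scores.max(axis=0)

# Backtrack the MAP assignment from the last variable to the first.
map_state = [int(msg.argmax())]
for bp in reversed(back):
    map_state.append(int(bp[map_state[-1]]))
map_state.reverse()
print("MAP assignment:", map_state)
```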

  11. Modelling eye movements in a categorical search task

    PubMed Central

    Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris

    2013-01-01

    We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720

  12. A Biologically Inspired Computational Model of Basal Ganglia in Action Selection

    PubMed Central

    Baston, Chiara

    2015-01-01

    The basal ganglia (BG) are a subcortical structure implicated in action selection. The aim of this work is to present a new cognitive neuroscience model of the BG, which aspires to represent a parsimonious balance between simplicity and completeness. The model includes the 3 main pathways operating in the BG circuitry, that is, the direct (Go), indirect (NoGo), and hyperdirect pathways. The main original aspects, compared with previous models, are the use of a two-term Hebb rule to train synapses in the striatum, based exclusively on neuronal activity changes caused by dopamine peaks or dips, and the role of the cholinergic interneurons (affected by dopamine themselves) during learning. Some examples are displayed, concerning a few paradigmatic cases: action selection in basal conditions, action selection in the presence of a strong conflict (where the role of the hyperdirect pathway emerges), synapse changes induced by phasic dopamine, and learning new actions based on a previous history of rewards and punishments. Finally, some simulations show the model working under conditions of altered dopamine levels, to illustrate pathological cases (dopamine depletion in parkinsonian subjects or dopamine hypermedication). Due to its parsimonious approach, the model may represent a straightforward tool to analyze BG functionality in behavioral experiments. PMID:26640481
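
    A loose reading of the two-term Hebb rule described above (my interpretation, not the paper's code): the postsynaptic factor is the activity change produced by a dopamine peak or dip, so weights grow for rewarded stimulus-action pairings and shrink for punished ones. Gains, rates and the dopamine effect on activity below are all illustrative assumptions.

```python
# Loose sketch of a dopamine-gated two-term Hebb rule: the weight change
# is (learning rate) x (presynaptic activity) x (postsynaptic activity
# change caused by the dopamine peak or dip). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
w_go = rng.uniform(0, 0.1, 10)      # cortex -> striatal "Go" synapses
lr = 0.05

def striatal_update(w, pre, dopamine):
    post_base = w @ pre
    # Assumed D1-like effect: a dopamine peak excites the Go unit,
    # a dip inhibits it, relative to its baseline activity.
    post_da = post_base * (1 + dopamine)
    delta_post = post_da - post_base            # activity change from DA
    return np.clip(w + lr * delta_post * pre, 0, 1)   # two-term Hebb rule

stimulus = rng.uniform(0, 1, 10)
w_go = striatal_update(w_go, stimulus, dopamine=+0.8)   # rewarded action
w_go = striatal_update(w_go, stimulus, dopamine=-0.8)   # punished action
print(w_go.round(3))
```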

  13. Descriptive Modeling of the Dynamical Systems and Determination of Feedback Homeostasis at Different Levels of Life Organization.

    PubMed

    Zholtkevych, G N; Nosov, K V; Bespalov, Yu G; Rak, L I; Abhishek, M; Vysotskaya, E V

    2018-05-24

    State-of-the-art research into life's organization confronts the need to investigate a number of interacting components, their properties, and the conditions of sustainable behaviour within a natural system. In biology, ecology and the life sciences, the performance of such a stable system is usually related to homeostasis, a property of the system to actively regulate its state within certain allowable limits. In our previous work, we proposed a deterministic model for systems' homeostasis. The model was based on dynamical systems theory and pairwise relationships of competition, amensalism and antagonism taken from theoretical biology and ecology. The present paper, however, adds a different dimension to our previous results based on the same model. Here, we introduce the influence of inter-component relationships in a system, wherein each impact is characterized by its direction (neutral, positive, or negative) as well as its (absolute) value, or strength. This makes the model stochastic, which, in our opinion, is more consistent with real-world elements affected by various random factors. The case study includes two examples from the areas of hydrobiology and medicine. The models acquired for these cases enabled us to propose a convincing explanation for the corresponding phenomena identified in different types of natural systems.

  14. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    PubMed Central

    Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.

    2015-01-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
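
    Operationally, the VM recipe amounts to fitting one regularized linear model per voxel and scoring it by the variance it predicts in withheld data. A toy sketch on synthetic data (ridge regression is our assumed regularizer; the study's exact fitting procedure may differ):

      import numpy as np
      from sklearn.linear_model import RidgeCV

      rng = np.random.default_rng(1)
      X = rng.normal(size=(1386, 50))    # feature-space encoding of each scene image
      Y = X @ rng.normal(size=(50, 10)) + rng.normal(size=(1386, 10))  # toy voxels

      train, test = slice(0, 1000), slice(1000, None)
      r2 = np.empty(Y.shape[1])
      for v in range(Y.shape[1]):        # one encoding model per voxel
          model = RidgeCV(alphas=np.logspace(-2, 4, 10)).fit(X[train], Y[train, v])
          resid = Y[test, v] - model.predict(X[test])
          r2[v] = 1 - resid.var() / Y[test, v].var()   # variance explained, held out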

  15. Cable testing for Fermilab's high field magnets using small racetrack coils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feher, S.; Ambrosio, G.; Andreev, N.

    As part of the High Field Magnet program at Fermilab, simple magnets have been designed utilizing small racetrack coils, based on the sound mechanical structure and bladder technique developed by LBNL. Two of these magnets have been built in order to test Nb{sub 3}Sn cables used in cos-theta dipole models. The powder-in-tube strand based cable exhibited excellent performance. It reached its critical current limit within 14 quenches. Modified jelly roll strand based cable performance was limited by magnetic instabilities at low fields, as in previously tested dipole models that used similar cable.

  16. The Development of the Speaker Independent ARM Continuous Speech Recognition System

    DTIC Science & Technology

    1992-01-01

    spoken airborne reconnaissance reports using a speech recognition system based on phoneme-level hidden Markov models (HMMs). Previous versions of the ARM... will involve automatic selection from multiple model sets, corresponding to different speaker types, and that the most rudimentary partition of a... The vocabulary size for the ARM task is 497 words. These words are related to the phoneme-level symbols corresponding to the models in the model set

  17. Development of the Mathematical Model for Ingot Quality Forecasting with Consideration of Thermal and Physical Characteristics of Mould Powder

    NASA Astrophysics Data System (ADS)

    Anisimov, K. N.; Loginov, A. M.; Gusev, M. P.; Zarubin, S. V.; Nikonov, S. V.; Krasnov, A. V.

    2017-12-01

    This paper presents the results of physical modelling of the mould powder skull in the gap between an ingot and the mould. Based on the results obtained from this and previous works, the mathematical model of mould powder behaviour in the gap and its influence on formation of surface defects was developed. The results of modelling satisfactorily conform to the industrial data on ingot surface defects.

  18. A Theoretical Analysis of Physical Properties of Aqueous Trehalose with Borax

    NASA Astrophysics Data System (ADS)

    Sahara; Aniya, Masaru

    2013-07-01

    The temperature and composition dependence of the viscosity of aqueous trehalose and aqueous trehalose-borax mixtures has been investigated by means of the Bond Strength-Coordination Number Fluctuation (BSCNF) model. The result indicates that the variation in the fragility of the system is very small in the composition range analyzed. The values of the material parameters determined are consistent with those of the trehalose-water-lithium iodide system, which were analyzed in a previous study. Based on the analysis of the obtained parameters of the BSCNF model, the physical interpretation of the WLF parameters reported in a previous study is reconfirmed.

  19. An event-version-based spatio-temporal modeling approach and its application in the cadastral management

    NASA Astrophysics Data System (ADS)

    Li, Yangdong; Han, Zhen; Liao, Zhongping

    2009-10-01

    Spatiality, temporality, legality, accuracy and continuality are characteristic of cadastral information, and cadastral management demands that cadastral data be accurate, integrated and updated in a timely manner. An effective GIS-based management system is therefore well suited to managing cadastral data, which are characterized by spatiality and temporality. However, because no sound spatio-temporal data models have been adopted, the spatio-temporal characteristics of cadastral data are not well expressed in existing cadastral management systems. An event-version-based spatio-temporal modeling approach is first proposed from the perspective of events and versions. With its help, an event-version-based spatio-temporal cadastral data model is then built to represent spatio-temporal cadastral data. Finally, this model is used in the design and implementation of a spatio-temporal cadastral management system. The application of the system shows that the event-version-based spatio-temporal data model is well suited to the representation and organization of cadastral data.

  20. A Bayesian compound stochastic process for modeling nonstationary and nonhomogeneous sequence evolution.

    PubMed

    Blanquart, Samuel; Lartillot, Nicolas

    2006-11-01

    Variations of nucleotidic composition affect phylogenetic inference conducted under stationary models of evolution. In particular, they may cause unrelated taxa sharing similar base composition to be grouped together in the resulting phylogeny. To address this problem, we developed a nonstationary and nonhomogeneous model accounting for compositional biases. Unlike previous nonstationary models, which are branchwise, that is, assume that base composition only changes at the nodes of the tree, in our model the process of compositional drift is totally uncoupled from the speciation events. In addition, the total number of events of compositional drift distributed across the tree is directly inferred from the data. We implemented the method in a Bayesian framework, relying on Markov chain Monte Carlo algorithms, and applied it to several nucleotidic data sets. In most cases, the stationarity assumption was rejected in favor of our nonstationary model. In addition, we show that our method is able to resolve a well-known artifact. By Bayes factor evaluation, we compared our model with 2 previously developed nonstationary models. We show that the coupling between speciations and compositional shifts inherent to branchwise models may lead to an overparameterization, resulting in a lesser fit. In some cases, this leads to incorrect conclusions concerning the nature of the compositional biases. In contrast, our compound model more flexibly adapts its effective number of parameters to the data sets under investigation. Altogether, our results show that accounting for nonstationary sequence evolution may require more elaborate and more flexible models than those currently used.

  1. Improving Distributed Diagnosis Through Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2011-01-01

    Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.

  2. Estimating and modeling the cure fraction in population-based cancer survival analysis.

    PubMed

    Lambert, Paul C; Thompson, John R; Weston, Claire L; Dickman, Paul W

    2007-07-01

    In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure to monitor trends in survival of curable disease. There are 2 main types of cure fraction model, the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture cure fraction model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the 2 types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model.
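
    In standard notation (a sketch; the article's exact parameterization may differ), the two model classes with background survival S*(t) can be written in LaTeX as:

      % Mixture cure fraction model, with background survival S^*(t):
      S(t) = S^*(t)\,\bigl[\pi + (1 - \pi)\,S_u(t)\bigr]
      % Non-mixture cure fraction model:
      S(t) = S^*(t)\,\pi^{F(t)}

    Here \pi is the cure fraction, S_u(t) the survival function of the uncured group, and F(t) a proper distribution function; both models approach S^*(t)\,\pi as t grows, which is the sense in which the excess mortality of the diseased group vanishes for the cured.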

  3. Statistical prediction of September Arctic Sea Ice minimum based on stable teleconnections with global climate and oceanic patterns

    NASA Astrophysics Data System (ADS)

    Ionita, M.; Grosfeld, K.; Scholz, P.; Lohmann, G.

    2016-12-01

    Sea ice in both Polar Regions is an important indicator of the expression of global climate change and its polar amplification. Consequently, there is broad interest in information on sea ice, its coverage, variability and long-term change. Knowledge of sea ice requires high-quality data on ice extent, thickness and dynamics, but its predictability depends on various climate parameters and conditions. In order to provide insight into the potential development of a monthly/seasonal signal, we developed a robust statistical model based on ocean heat content, sea surface temperature and atmospheric variables to calculate an estimate of the September minimum sea ice extent for every year. Although previous statistical attempts at monthly/seasonal forecasts of the September sea ice minimum have shown relatively limited skill, here it is shown that more than 97% (r = 0.98) of the September sea ice extent can be predicted three months in advance using the previous months' conditions via a multiple linear regression model based on global sea surface temperature (SST), mean sea level pressure (SLP), air temperature at 850 hPa (TT850), surface winds and sea ice extent persistence. The statistical model is based on the identification of regions with stable teleconnections between the predictors (climatological parameters) and the predictand (here, sea ice extent). The results based on our statistical model contribute to the sea ice prediction network for the sea ice outlook report (https://www.arcus.org/sipn) and could provide a tool for identifying relevant regions and climate parameters that are important for sea ice development in the Arctic and for detecting sensitive and critical regions in global coupled climate models with a focus on sea ice formation.
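
    The forecasting step itself is ordinary multiple linear regression on region-averaged predictors. A self-contained sketch with synthetic stand-ins (the real predictors are the SST, SLP, TT850, surface wind, and persistence series from the stable-teleconnection regions):

      import numpy as np

      rng = np.random.default_rng(2)
      X = rng.normal(size=(38, 5))       # years x predictors (stand-in anomalies)
      beta_true = np.array([-0.8, 0.3, -0.5, 0.2, 0.9])
      y = X @ beta_true + 0.1 * rng.normal(size=38)  # September extent anomaly

      A = np.column_stack([np.ones(len(y)), X])      # design matrix with intercept
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares
      r = np.corrcoef(A @ coef, y)[0, 1]             # in-sample skill of the fit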

  4. Haplotype-based approach to known MS-associated regions increases the amount of explained risk

    PubMed Central

    Khankhanian, Pouya; Gourraud, Pierre-Antoine; Lizee, Antoine; Goodin, Douglas S

    2015-01-01

    Genome-wide association studies (GWAS), using single nucleotide polymorphisms (SNPs), have yielded 110 non-human leucocyte antigen genomic regions that are associated with multiple sclerosis (MS). Despite this large number of associations, however, only 28% of MS-heritability can currently be explained. Here we compare the use of multi-SNP-haplotypes to the use of single-SNPs as alternative methods to describe MS genetic risk. SNP-haplotypes (of various lengths from 1 up to 15 contiguous SNPs) were constructed at each of the 110 previously identified, MS-associated, genomic regions. Even after correcting for the larger number of statistical comparisons made when using the haplotype method, in 32 of the regions the SNP-haplotype based model was markedly more significant than the single-SNP based model. By contrast, in no region was the single-SNP based model similarly more significant than the SNP-haplotype based model. Moreover, when we included the 932 MS-associated SNP-haplotypes (that we identified from 102 regions) as independent variables in a logistic linear model, the amount of MS-heritability, as assessed by Nagelkerke's R-squared, was 38%, considerably better than the 29% obtained by using only single-SNPs. This study demonstrates that SNP-haplotypes can be used to fine-map the genetic associations within regions of interest previously identified by single-SNP GWAS. Moreover, the amount of the MS genetic risk explained by the SNP-haplotype associations in the 110 MS-associated genomic regions was considerably greater when using SNP-haplotypes than when using single-SNPs. Also, the use of SNP-haplotypes can lead to the discovery of new regions of interest, which have not been identified by a single-SNP GWAS. PMID:26185143
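
    The headline comparison rests on Nagelkerke's R-squared for a fitted logistic model; a sketch of that computation on synthetic indicator data follows (all data and names are stand-ins, and sklearn's default penalty is an assumption):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def nagelkerke_r2(X, y):
          """Cox-Snell R^2 from the fitted and null log-likelihoods, rescaled
          by its maximum so the result lies in [0, 1] (Nagelkerke)."""
          n = len(y)
          p1 = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
          ll1 = np.sum(y * np.log(p1) + (1 - y) * np.log(1 - p1))
          p0 = y.mean()
          ll0 = n * (p0 * np.log(p0) + (1 - p0) * np.log(1 - p0))
          r2_cs = 1 - np.exp(2 * (ll0 - ll1) / n)
          return r2_cs / (1 - np.exp(2 * ll0 / n))

      rng = np.random.default_rng(3)
      X = rng.integers(0, 2, size=(500, 8)).astype(float)  # stand-in haplotype indicators
      y = (rng.random(500) < 1 / (1 + np.exp(-(X[:, 0] - 0.5)))).astype(int)
      print(nagelkerke_r2(X, y))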

  5. An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression

    PubMed Central

    Weiss, Brandi A.; Dardick, William

    2015-01-01

    This article introduces an entropy-based measure of data–model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify the quality of classification and separation of group membership. Entropy complements preexisting measures of data–model fit and provides unique information not contained in other measures. Hypothetical data scenarios, an applied example, and Monte Carlo simulation results are used to demonstrate the application of entropy in logistic regression. Entropy should be used in conjunction with other measures of data–model fit to assess how well logistic regression models classify cases into observed categories. PMID:29795897
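
    A sketch of an entropy-based fuzziness index in this spirit (the article's exact statistic may be defined differently): average the per-case binary entropy of the fitted probabilities and rescale so that 1 means perfectly confident classification and 0 means maximal fuzziness.

      import numpy as np

      def classification_certainty(p):
          """1 - E[H(p)]/log(2) for fitted probabilities p from a logistic model;
          1 = every case classified with certainty, 0 = all probabilities near 0.5."""
          p = np.clip(p, 1e-12, 1 - 1e-12)
          h = -(p * np.log(p) + (1 - p) * np.log(1 - p))  # per-case binary entropy
          return 1 - h.mean() / np.log(2)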

  6. An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression.

    PubMed

    Weiss, Brandi A; Dardick, William

    2016-12-01

    This article introduces an entropy-based measure of data-model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify the quality of classification and separation of group membership. Entropy complements preexisting measures of data-model fit and provides unique information not contained in other measures. Hypothetical data scenarios, an applied example, and Monte Carlo simulation results are used to demonstrate the application of entropy in logistic regression. Entropy should be used in conjunction with other measures of data-model fit to assess how well logistic regression models classify cases into observed categories.

  7. Combining a popularity-productivity stochastic block model with a discriminative-content model for general structure detection.

    PubMed

    Chai, Bian-fang; Yu, Jian; Jia, Cai-Yan; Yang, Tian-bao; Jiang, Ya-wen

    2013-07-01

    Latent community discovery that combines links and contents of a text-associated network has drawn more attention with the advance of social media. Most of the previous studies aim at detecting densely connected communities and are not able to identify general structures, e.g., bipartite structure. Several variants based on the stochastic block model are more flexible for exploring general structures by introducing link probabilities between communities. However, these variants cannot identify the degree distributions of real networks due to a lack of modeling of the differences among nodes, and they are not suitable for discovering communities in text-associated networks because they ignore the contents of nodes. In this paper, we propose a popularity-productivity stochastic block (PPSB) model by introducing two random variables, popularity and productivity, to model the differences among nodes in receiving links and producing links, respectively. This model has the flexibility of existing stochastic block models in discovering general community structures and inherits the richness of previous models that also exploit popularity and productivity in modeling the real scale-free networks with power law degree distributions. To incorporate the contents in text-associated networks, we propose a combined model which combines the PPSB model with a discriminative model that models the community memberships of nodes by their contents. We then develop expectation-maximization (EM) algorithms to infer the parameters in the two models. Experiments on synthetic and real networks have demonstrated that the proposed models can yield better performances than previous models, especially on networks with general structures.

  8. Combining a popularity-productivity stochastic block model with a discriminative-content model for general structure detection

    NASA Astrophysics Data System (ADS)

    Chai, Bian-fang; Yu, Jian; Jia, Cai-yan; Yang, Tian-bao; Jiang, Ya-wen

    2013-07-01

    Latent community discovery that combines links and contents of a text-associated network has drawn more attention with the advance of social media. Most of the previous studies aim at detecting densely connected communities and are not able to identify general structures, e.g., bipartite structure. Several variants based on the stochastic block model are more flexible for exploring general structures by introducing link probabilities between communities. However, these variants cannot identify the degree distributions of real networks due to a lack of modeling of the differences among nodes, and they are not suitable for discovering communities in text-associated networks because they ignore the contents of nodes. In this paper, we propose a popularity-productivity stochastic block (PPSB) model by introducing two random variables, popularity and productivity, to model the differences among nodes in receiving links and producing links, respectively. This model has the flexibility of existing stochastic block models in discovering general community structures and inherits the richness of previous models that also exploit popularity and productivity in modeling the real scale-free networks with power law degree distributions. To incorporate the contents in text-associated networks, we propose a combined model which combines the PPSB model with a discriminative model that models the community memberships of nodes by their contents. We then develop expectation-maximization (EM) algorithms to infer the parameters in the two models. Experiments on synthetic and real networks have demonstrated that the proposed models can yield better performances than previous models, especially on networks with general structures.

  9. Validation of a Previously Developed Geospatial Model That Predicts the Prevalence of Listeria monocytogenes in New York State Produce Fields

    PubMed Central

    Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin

    2015-01-01

    Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. PMID:26590280

  10. Development, Testing, and Validation of a Model-Based Tool to Predict Operator Responses in Unexpected Workload Transitions

    NASA Technical Reports Server (NTRS)

    Sebok, Angelia; Wickens, Christopher; Sargent, Robert

    2015-01-01

    One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience before being able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of the existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.

  11. A Variable-Instar Climate-Driven Individual Beetle-Based Phenology Model for the Invasive Asian Longhorned Beetle (Coleoptera: Cerambycidae).

    PubMed

    Trotter, R Talbot; Keena, Melody A

    2016-12-01

    Efforts to manage and eradicate invasive species can benefit from an improved understanding of the physiology, biology, and behavior of the target species, and ongoing efforts to eradicate the Asian longhorned beetle (Anoplophora glabripennis Motschulsky) highlight the roles this information may play. Here, we present a climate-driven phenology model for A. glabripennis that provides simulated life-tables for populations of individual beetles under variable climatic conditions, taking into account the variable number of instars beetles may undergo as larvae. Phenology parameters in the model are based on a synthesis of published data and studies of A. glabripennis, and the model output was evaluated using a laboratory-reared population maintained under varying temperatures mimicking those typical of Central Park in New York City. The model was stable under variations in population size, simulation length, and the Julian dates used to initiate individual beetles within the population. Comparison of model results with previously published field-based phenology studies in native and invasive populations indicates that both this new phenology model and the previously published heating-degree-day model show good agreement in predicting the beginning of the flight season for adults. However, the phenology model described here avoids underpredicting the cumulative emergence of adults through the season, in addition to providing tables of life stages and estimates of voltinism for local populations. This information can play a key role in evaluating risk by predicting the potential for population growth, and may facilitate the optimization of management and eradication efforts. Published by Oxford University Press on behalf of the Entomological Society of America 2016. This work is written by US Government employees and is in the public domain in the US.
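
    For contrast, the heating-degree-day baseline mentioned above fits in a few lines: accumulate degree-days above a development threshold and flag adult emergence when a required total is reached. The threshold and requirement below are illustrative placeholders, not the published A. glabripennis values.

      def degree_day_emergence(daily_mean_temps_c, base_c=10.0, required_dd=800.0):
          """Return the first day index at which accumulated degree-days above
          base_c reach required_dd, or None if emergence is never predicted."""
          total = 0.0
          for day, t in enumerate(daily_mean_temps_c):
              total += max(0.0, t - base_c)   # only heat above the threshold counts
              if total >= required_dd:
                  return day
          return None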

  12. Sample classification for improved performance of PLS models applied to the quality control of deep-frying oils of different botanic origins analyzed using ATR-FTIR spectroscopy.

    PubMed

    Kuligowski, Julia; Carrión, David; Quintás, Guillermo; Garrigues, Salvador; de la Guardia, Miguel

    2011-01-01

    The selection of an appropriate calibration set is a critical step in multivariate method development. In this work, the effect of using different calibration sets, based on a previous classification of unknown samples, on partial least squares (PLS) regression model performance is discussed. As an example, attenuated total reflection (ATR) mid-infrared spectra of deep-fried vegetable oil samples from three botanical origins (olive, sunflower, and corn oil), with increasing polymerized triacylglyceride (PTG) content induced by a deep-frying process, were employed. The use of a one-class-classifier partial least squares-discriminant analysis (PLS-DA) and a rooted binary directed acyclic graph tree provided accurate oil classification. Oil samples fried without foodstuff could be classified correctly, independent of their PTG content. However, class separation of oil samples fried with foodstuff was less evident. The combined use of double-cross model validation with permutation testing was used to validate the obtained PLS-DA classification models, confirming the results. To assess the usefulness of selecting an appropriate PLS calibration set, the PTG content was determined by calculating a PLS model based on the previously selected classes. In comparison to a PLS model calculated using a pooled calibration set containing samples from all classes, the root mean square error of prediction could be improved significantly using PLS models based on the calibration sets selected using PLS-DA, ranging between 1.06 and 2.91% (w/w).
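
    A compact sketch of the two-stage recipe on synthetic data: a PLS-DA step assigns each spectrum a botanical origin, and a class-specific PLS calibration then predicts PTG. sklearn is an assumed toolchain, and the simple argmax assignment below stands in for the study's one-class classifiers and double-cross validation.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(4)
      X = rng.normal(size=(90, 200))            # stand-in ATR-FTIR spectra
      origin = np.repeat([0, 1, 2], 30)         # olive / sunflower / corn
      ptg = rng.uniform(0, 25, size=90)         # stand-in PTG content (% w/w)

      # Stage 1: PLS-DA -- regress one-hot class labels on spectra, assign by argmax.
      plsda = PLSRegression(n_components=5).fit(X, np.eye(3)[origin])
      assigned = plsda.predict(X).argmax(axis=1)

      # Stage 2: a separate PLS calibration of PTG for each assigned class.
      models = {c: PLSRegression(n_components=2).fit(X[assigned == c], ptg[assigned == c])
                for c in np.unique(assigned)}
      pred = np.array([models[c].predict(x[None]).ravel()[0] for c, x in zip(assigned, X)])
      rmsep = np.sqrt(np.mean((pred - ptg) ** 2))  # report on withheld data in practice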

  13. RESIDUAL RISK ASSESSMENT: HALOGENATED SOLVENTS

    EPA Science Inventory

    This source category, previously subjected to a technology-based standard, will be examined to determine if health or ecological risks are significant enough to warrant further regulation for Halogenated Solvent Degreasing Facilities. These assessments utilize existing models and d...

  14. The dynamics and control of large flexible space structures, 6

    NASA Technical Reports Server (NTRS)

    Bainum, P. M.

    1983-01-01

    The controls analysis, based on a truncated finite element model of the 122-m Hoop/Column Antenna System, focuses on an analysis of controllability as well as the synthesis of control laws. Graph-theoretic techniques are employed to consider controllability for different combinations of numbers and locations of actuators. Control law synthesis is based on an application of linear regulator theory as well as pole placement techniques. Placement of an actuator on the hoop can result in a noticeable improvement in the transient characteristics. The problem of orientation and shape control of an orbiting flexible beam, previously examined, is now extended to include the influence of solar radiation environmental forces. For extremely flexible thin structures, modification of control laws may be required, and techniques for accomplishing this are explained. Effects of environmental torques are also included in previously developed models of orbiting flexible thin platforms.

  15. Implementation and modification of a three-dimensional radiation stress formulation for surf zone and rip-current applications

    USGS Publications Warehouse

    Kumar, N.; Voulgaris, G.; Warner, John C.

    2011-01-01

    Regional Ocean Modeling System (ROMS v 3.0), a three-dimensional numerical ocean model, was previously enhanced for shallow water applications by including wave-induced radiation stress forcing provided through coupling to wave propagation models (SWAN, REF/DIF). This enhancement made it suitable for surf zone applications as demonstrated using examples of obliquely incident waves on a planar beach and rip current formation in longshore bar trough morphology (Haas and Warner, 2009). In this contribution, we present an update to the coupled model which implements a wave roller model and also a modified method of the radiation stress term based on Mellor (2008, 2011a,b, in press) that includes a vertical distribution which better simulates non-conservative (i.e., wave breaking) processes and appears to be more appropriate for sigma coordinates in very shallow waters where wave breaking conditions dominate. The improvements of the modified model are shown through simulations of several cases that include: (a) obliquely incident spectral waves on a planar beach; (b) obliquely incident spectral waves on a natural barred beach (DUCK'94 experiment); (c) alongshore variable offshore wave forcing on a planar beach; (d) alongshore varying bathymetry with constant offshore wave forcing; and (e) nearshore barred morphology with rip-channels. Quantitative and qualitative comparisons to previous analytical, numerical, laboratory studies and field measurements show that the modified model replicates surf zone recirculation patterns (onshore drift at the surface and undertow at the bottom) more accurately than previous formulations based on radiation stress (Haas and Warner, 2009). The results of the model and test cases are further explored for identifying the forces operating in rip current development and the potential implication for sediment transport and rip channel development. Also, model analysis showed that rip current strength is higher when waves approach at angles of 5° to 10° in comparison to normally incident waves. © 2011 Elsevier B.V.

  16. Effects of life-state on detectability in a demographic study of the terrestrial orchid Cleistes bifaria

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.

    2003-01-01

    1. Most plant demographic studies follow marked individuals in permanent plots. Plots tend to be small, so detectability is assumed to be one for every individual. However, detectability could be affected by factors such as plant traits, time, space, observer, previous detection, biotic interactions, and especially by life-state. 2. We used a double-observer survey and closed population capture-recapture modelling to estimate state-specific detectability of the orchid Cleistes bifaria in a long-term study plot of 41.2 m². Based on AICc model selection, detectability was different for each life-state and for tagged vs. previously untagged plants. There were no differences in detectability between the two observers. 3. Detectability estimates (SE) for one-leaf vegetative, two-leaf vegetative, and flowering/fruiting states correlated with mean size of these states and were 0.76 (0.05), 0.92 (0.06), and 1 (0.00), respectively, for previously tagged plants, and 0.84 (0.08), 0.75 (0.22), and 0 (0.00), respectively, for previously untagged plants. (We had insufficient data to obtain a satisfactory estimate of previously untagged flowering plants). 4. Our estimates are for a medium-sized plant in a small and intensively surveyed plot. It is possible that detectability is even lower for larger plots and smaller plants or smaller life-states (e.g. seedlings) and that detectabilities < 1 are widespread in plant demographic studies. 5. State-dependent detectabilities are especially worrying since they will lead to a size- or state-biased sample from the study plot. Failure to incorporate detectability into demographic estimation methods introduces a bias into most estimates of population parameters such as fecundity, recruitment, mortality, and transition rates between life-states. We illustrate this by a simple example using a matrix model, where a hypothetical population was stable but, due to imperfect detection, wrongly projected to be declining at a rate of 8% per year. 6. Almost all plant demographic studies are based on models for discrete states. State and size are important predictors both for demographic rates and detectability. We suggest that even in studies based on small plots, state- or size-specific detectability should be estimated at least at some point to avoid biased inference about the dynamics of the population sampled.
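
    Point 5 can be made concrete with a two-state projection matrix: thinning every rate by the detectability of the destination state deflates the dominant eigenvalue, so a stable population appears to decline. The numbers below are illustrative, not the study's.

      import numpy as np

      # Hypothetical stable population (dominant eigenvalue = 1.0):
      # state 0 = vegetative, state 1 = flowering/fruiting.
      A_true = np.array([[0.00, 1.60],    # recruitment into the vegetative state
                         [0.40, 0.36]])   # transition to / persistence in flowering
      lam_true = max(abs(np.linalg.eigvals(A_true)))   # 1.00

      # Surviving plants that go undetected are scored as dead, so every rate
      # into state i is multiplied by the detectability p_i of that state.
      p = np.array([0.76, 1.00])
      A_obs = np.diag(p) @ A_true
      lam_obs = max(abs(np.linalg.eigvals(A_obs)))     # ~0.90: apparent decline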

  17. An Example-Based Brain MRI Simulation Framework.

    PubMed

    He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L

    2015-02-21

    The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, due to the lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from the hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of the MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on a statistical model of the training data. Results show that the example-based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hobbs, Michael L.

    We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework, without the need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA’s new phase change model. This memo briefly describes the model, implementation, and validation.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCulloch, M; Polan, D; Feng, M

    Purpose: Previous studies have shown that radiotherapy treatment for liver metastases causes marked liver hypertrophy in areas receiving low dose and atrophy/fibrosis in areas receiving high dose. The purpose of this work is to develop and evaluate a biomechanical model-based dose-response model to describe these liver responses to SBRT. Methods: In this retrospective study, a biomechanical model-based deformable registration algorithm, Morfeus, was expanded to include dose-based boundary conditions. Liver and tumor volumes were contoured on the planning images and on CT/MR images three months post-RT and converted to finite element models. A thermal-expansion-based relationship correlating the delivered dose and volume response was generated from 22 patients previously treated. This coefficient, combined with the planned dose, was applied as an additional boundary condition to describe the volumetric response of the liver in an additional cohort of metastatic liver patients treated with SBRT. The accuracy of the model was evaluated based on overall volumetric liver comparisons and the target registration error (TRE), using the average deviations in positions of identified vascular bifurcations on each set of registered images, with a target accuracy of the 2.5 mm isotropic dose grid (vector dimension 4.3 mm). Results: The thermal expansion coefficient models the volumetric change of the liver to within 3%. Morfeus with dose-expansion boundary conditions achieved a TRE of 5.7 ± 2.8 mm, compared to 11.2 ± 3.7 mm using rigid registration and 8.9 ± 0.28 mm using Morfeus with only spatial boundary conditions. Conclusion: A biomechanical model has been developed to describe the volumetric and spatial response of the liver to SBRT. This work will enable improved correlation of functional imaging with delivered dose and the mapping of the delivered dose from one treatment onto the planning images for a subsequent treatment, and will further provide information to assist with the biological characterization of patients’ response to radiation.

  20. Hadron-nucleus interactions at high energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiu, C.B.; He, Z.; Tow, D.M.

    1982-06-01

    A simple space-time description of high-energy hadron-nucleus interactions is presented. The model is based on the DTU (dual topological unitarization)-parton-model description of soft multiparticle production in hadron-hadron interactions. The essentially parameter-free model agrees well with the general features of high-energy data for hadron-nucleus interactions; in particular, this DTU-parton model has a natural explanation for an approximate nu-bar universality. The extension to high-energy nucleus-nucleus interactions is presented. We also compare and contrast this model with several previously proposed models.

  1. Hadron-nucleus interactions at high energies

    NASA Astrophysics Data System (ADS)

    Chiu, Charles B.; He, Zuoxiu; Tow, Don M.

    1982-06-01

    A simple space-time description of high-energy hadron-nucleus interactions is presented. The model is based on the DTU (dual topological unitarization) -parton-model description of soft multiparticle production in hadron-hadron interactions. The essentially parameter-free model agrees well with the general features of high-energy data for hadron-nucleus interactions; in particular, this DTU-parton model has a natural explanation for an approximate ν¯ universality. The extension to high-energy nucleus-nucleus interactions is presented. We also compare and contrast this model with several previously proposed models.

  2. A verification and errors analysis of the model for object positioning based on binocular stereo vision for airport surface surveillance

    NASA Astrophysics Data System (ADS)

    Wang, Huan-huan; Wang, Jian; Liu, Feng; Cao, Hai-juan; Wang, Xiang-jun

    2014-12-01

    A test environment is established to obtain experimental data for verifying the positioning model derived previously from the pinhole imaging model and the theory of binocular stereo vision measurement. The model requires that the optical axes of the two cameras meet at one point, which is defined as the origin of the world coordinate system, thus simplifying and optimizing the positioning model. The experimental data are processed, and tables and charts are given comparing object positions measured with the positioning model against DGPS measurements (accuracy of 10 centimeters) used as the reference. Error sources of the visual measurement model are analyzed, and the effects of errors in camera and system parameters on the accuracy of the positioning model are probed, based on error transfer and synthesis rules. It is concluded that the measurement accuracy of surface surveillance based on binocular stereo vision is better than that of surface movement radars, ADS-B (Automatic Dependent Surveillance-Broadcast) and MLAT (Multilateration).
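
    The geometric core of the positioning model, recovering a world point from two converging viewing rays, can be sketched with midpoint triangulation. The notation is ours, and the camera geometry is chosen so that the optical axes meet at the world origin, as the model requires.

      import numpy as np

      def triangulate(c1, d1, c2, d2):
          """Midpoint triangulation: the point closest to both rays, each ray
          given by a camera centre c and a unit direction d."""
          # Solve for ray parameters (s, t) minimising |c1 + s*d1 - (c2 + t*d2)|^2.
          A = np.array([[d1 @ d1, -(d1 @ d2)],
                        [d1 @ d2, -(d2 @ d2)]])
          b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
          s, t = np.linalg.solve(A, b)
          return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

      # Two cameras whose optical axes meet at the world origin.
      c1, c2 = np.array([-10.0, 0.0, 5.0]), np.array([10.0, 0.0, 5.0])
      axis1, axis2 = -c1 / np.linalg.norm(c1), -c2 / np.linalg.norm(c2)
      # Rays through each principal point lie along the axes, so the
      # triangulated point should be (approximately) the origin.
      print(triangulate(c1, axis1, c2, axis2))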

  3. Virtual-optical information security system based on public key infrastructure

    NASA Astrophysics Data System (ADS)

    Peng, Xiang; Zhang, Peng; Cai, Lilong; Niu, Hanben

    2005-01-01

    A virtual-optics-based encryption model with the aid of a public key infrastructure (PKI) is presented in this paper. The proposed model employs a hybrid architecture in which our previously published encryption method based on a virtual-optics scheme (VOS) is used to encipher and decipher data, while an asymmetric algorithm, for example RSA, is applied for enciphering and deciphering the session key(s). The whole information security model runs under the framework of the international standard ITU-T X.509 PKI, which is based on public-key cryptography and digital signatures. This PKI-based VOS security approach has additional features such as confidentiality, authentication, and integrity for the purpose of data encryption in a networked environment. Numerical experiments prove the effectiveness of the method. The security of the proposed model is briefly analyzed by examining some possible attacks from the viewpoint of cryptanalysis.

  4. Developing an Educational Computer Game for Migratory Bird Identification Based on a Two-Tier Test Approach

    ERIC Educational Resources Information Center

    Chu, Hui-Chun; Chang, Shao-Chen

    2014-01-01

    Although educational computer games have been recognized as being a promising approach, previous studies have indicated that, without supportive models, students might only show temporary interest during the game-based learning process, and their learning performance is often not as good as expected. Therefore, in this paper, a two-tier test…

  5. Can You Skype Me Now? Developing Teachers' Classroom Management Practices through Virtual Coaching

    ERIC Educational Resources Information Center

    Rock, Marcia L.; Schoenfeld, Naomi; Zigmond, Naomi; Gable, Robert A.; Gregg, Madeleine; Ploessl, Donna M.; Salter, Ashley

    2013-01-01

    In this article, situated within the context of a larger ongoing study on the efficacy of Web-based virtual coaching, these authors describe a virtual coaching model for maximizing pre- and in-service teachers' effective use of evidence-based classroom management practices. They also provide a brief summary of previous results obtained…

  6. Applying Corpus-Based Findings to Form-Focused Instruction: The Case of Reported Speech

    ERIC Educational Resources Information Center

    Barbieri, Federica; Eckhardt, Suzanne E. B.

    2007-01-01

    Arguing that the introduction of corpus linguistics in teaching materials and the language classroom should be informed by theories and principles of SLA, this paper presents a case study illustrating how corpus-based findings on reported speech can be integrated into a form-focused model of instruction. After overviewing previous work which…

  7. Desirable properties of wood for sustainable development in the twenty-first century

    Treesearch

    Kenneth E. Skog; Theodore H. Wegner; Ted Bilek; Charles H. Michler

    2015-01-01

    We previously identified desirable properties for wood based on current market-based trends for commercial uses (Wegner et al. 2010). World business models increasingly incorporate the concept of social responsibility and the tenets of sustainable development. Sustainable development is needed to support an estimated 9 billion people by 2050 within the carrying...

  8. Characterizing the Use of Research-Community Partnerships in Studies of Evidence-Based Interventions in Children's Community Services

    ERIC Educational Resources Information Center

    Frazee-Brookman, Lauren; Stahmer, Aubyn; Stadnick, Nicole; Chlebowski, Colby; Herschel, Amy; Garland, Ann F.

    2015-01-01

    This study characterized the use of research community partnerships (RCPs) to tailor evidence-based intervention, training, and implementation models for delivery across different childhood problems and service contexts using a survey completed by project principal investigators and community partners. To build on previous RCP research and to…

  9. A Randomized Controlled Trial Validating the Impact of the LASER Model of Science Education on Student Achievement and Teacher Instruction

    ERIC Educational Resources Information Center

    Kaldon, Carolyn R.; Zoblotsky, Todd A.

    2014-01-01

    Previous research has linked inquiry-based science instruction (i.e., science instruction that engages students in doing science rather than just learning about science) with greater gains in student learning than text-book based methods (Vanosdall, Klentschy, Hedges & Weisbaum, 2007; Banilower, 2007; Ferguson 2009; Bredderman, 1983;…

  10. Modeling of thermal storage systems in MILP distributed energy resource models

    DOE PAGES

    Steen, David; Stadler, Michael; Cardoso, Gonçalo; ...

    2014-08-04

    Thermal energy storage (TES) and distributed generation technologies, such as combined heat and power (CHP) or photovoltaics (PV), can be used to reduce energy costs and decrease CO2 emissions from buildings by shifting energy consumption to times with less emissions and/or lower energy prices. To determine the feasibility of investing in TES in combination with other distributed energy resources (DER), mixed integer linear programming (MILP) can be used. Such a MILP model is the well-established Distributed Energy Resources Customer Adoption Model (DER-CAM); however, it currently uses only a simplified TES model to guarantee linearity and short run-times, with loss calculations based only on the energy contained in the storage. This paper presents a new DER-CAM TES model that allows improved tracking of losses based on ambient and storage temperatures, and compares results with the previous version. A multi-layer TES model is introduced that retains linearity and avoids creating an endogenous optimization problem. The improved model increases the accuracy of the estimated storage losses and enables the use of heat pumps for low-temperature storage charging. Ultimately, results indicate that the previous model overestimates the attractiveness of TES investments for cases without the possibility of investing in heat pumps and underestimates it for some locations when heat pumps are allowed. Despite a variation in optimal technology selection between the two models, the objective function value stays quite stable, illustrating the complexity of optimal DER sizing problems in buildings and microgrids.
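
    The modeling change can be illustrated with a single-layer energy balance in which standing losses scale with the storage-to-ambient temperature difference (UA times delta-T) rather than with stored energy alone; this keeps the loss term linear in the decision variables and hence MILP-compatible. The parameter values below are placeholders, not DER-CAM inputs.

      def tes_step(T_store, T_amb, q_charge_kw, q_discharge_kw, dt_h=1.0,
                   ua_kw_per_k=0.02, cap_kwh_per_k=5.0):
          """Advance the storage temperature by one time step; standing losses
          are ua_kw_per_k * (T_store - T_amb), linear in the temperatures."""
          q_loss_kw = ua_kw_per_k * (T_store - T_amb)
          dT = (q_charge_kw - q_discharge_kw - q_loss_kw) * dt_h / cap_kwh_per_k
          return T_store + dT

      T = 60.0
      for hour in range(24):          # an idle tank relaxing toward 20 C ambient
          T = tes_step(T, 20.0, 0.0, 0.0)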

  11. Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex

    PubMed Central

    Ulloa, Antonio; Horwitz, Barry

    2016-01-01

    A number of recent efforts have used large-scale, biologically realistic, neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations that represent, primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were “non-task-specific” (NS) neurons that served as noise generators to “task-specific” neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated equivalent neural and fMRI activity to that of the original task-based models. Additionally, we found partial agreement between the functional connectivities using the hybrid LSNM/TVB model and the original LSNM. Our framework thus presents a way to embed task-based neural models into the TVB platform, enabling a better comparison between empirical and computational data, which in turn can lead to a better understanding of how interacting neural populations give rise to human cognitive behaviors. PMID:27536235

  12. Simulating floods in the Amazon River Basin: Impacts of new river geomorphic and dynamic flow parameterizations

    NASA Astrophysics Data System (ADS)

    Coe, M. T.; Costa, M. H.; Howard, E. A.

    2006-12-01

    In this paper we analyze the hydrology of the Amazon River system for the latter half of the 20th century with our recently completed model of terrestrial hydrology (Terrestrial Hydrology Model with Biogeochemistry, THMB). We evaluate the simulated hydrology of the Central Amazon basin against limited observations of river discharge, floodplain inundation, and water height and analyze the spatial and temporal variability of the hydrology for the period 1939-1998. We compare the simulated discharge and floodplain inundated area to the simulations by Coe et al., 2002 using a previous version of this model. The new model simulates the discharge and flooded area in better agreement with the observations than the previous model. The coefficient of correlation between the simulated and observed discharge for the more than 27,000 monthly observations of discharge at 120 sites throughout the Brazilian Amazon is 0.9874, compared to 0.9744 for the previous model. The coefficient of correlation between the simulated monthly flooded area and the satellite-based estimates by Sippel et al., 1998 exceeds 0.7 for 8 of the 12 mainstem reaches. The seasonal and inter-annual variability of the water height and the river slope compares favorably to the satellite altimetric measurements of height reported by Birkett et al., 2002.

  13. Mechanistic model to predict colostrum intake based on deuterium oxide dilution technique data and impact of gestation and prefarrowing diets on piglet intake and sow yield of colostrum.

    PubMed

    Theil, P K; Flummer, C; Hurley, W L; Kristensen, N B; Labouriau, R L; Sørensen, M T

    2014-12-01

    The aims of the present study were to quantify colostrum intake (CI) of piglets using the D2O dilution technique, to develop a mechanistic model to predict CI, to compare these data with CI predicted by a previous empirical predictive model developed for bottle-fed piglets, and to study how composition of diets fed to gestating sows affected piglet CI, sow colostrum yield (CY), and colostrum composition. In total, 240 piglets from 40 litters were enriched with D2O. The CI measured by D2O from birth until 24 h after the birth of the first-born piglet was on average 443 g (SD 151). Based on measured CI, a mechanistic model to predict CI was developed using piglet characteristics (24-h weight gain [WG; g], BW at birth [BWB; kg], and duration of CI [D; min]): CI (g) = -106 + 2.26 WG + 200 BWB + 0.111 D - 1,414 WG/D + 0.0182 WG/BWB (R^2 = 0.944). This model was used to predict the CI for all colostrum-suckling piglets within the 40 litters (n=500, mean=437 g, SD=153 g) and was compared with the CI predicted by a previous empirical predictive model (mean=305 g, SD=140 g). The previous empirical model underestimated the CI by 30% compared with that obtained by the new mechanistic model. The sows were fed 1 of 4 gestation diets (n=10 per diet) based on different fiber sources (low fiber [17%] or potato pulp, pectin residue, or sugarbeet pulp [32 to 40%]) from mating until d 108 of gestation. From d 108 of gestation until parturition, sows were fed 1 of 5 prefarrowing diets (n=8 per diet) varying in supplemented fat (3% animal fat, 8% coconut oil, 8% sunflower oil, 8% fish oil, or 4% fish oil+4% octanoic acid). Sows fed diets with pectin residue or sugarbeet pulp during gestation produced colostrum with lower protein, fat, DM, and energy concentrations and higher lactose concentrations, and their piglets had greater CI as compared with sows fed potato pulp or the low-fiber diet (P<0.05), and sows fed pectin residue had a greater CY than potato pulp-fed sows (P<0.05). Prefarrowing diets affected neither CI nor CY, but the prefarrowing diet with coconut oil decreased lactose and increased DM concentrations of colostrum compared with other prefarrowing diets (P<0.05). In conclusion, the new mechanistic predictive model for CI suggests that the previous empirical predictive model underestimates CI of sow-reared piglets by 30%. It was also concluded that nutrition of sows during gestation affected CY and colostrum composition.
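
    The reported prediction equation translates directly into code; a sketch with units as stated (WG in g, BWB in kg, D in min), and an arbitrary but plausible example piglet:

      def colostrum_intake_g(wg_g, bwb_kg, d_min):
          """Mechanistic CI prediction (g) from 24-h weight gain, birth weight,
          and duration of colostrum intake, per the equation in the abstract."""
          return (-106 + 2.26 * wg_g + 200 * bwb_kg + 0.111 * d_min
                  - 1414 * wg_g / d_min + 0.0182 * wg_g / bwb_kg)

      print(colostrum_intake_g(wg_g=100, bwb_kg=1.4, d_min=1440))  # ~463 g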

  14. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance-theorem-based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.

  15. Force Modelling in Orthogonal Cutting Considering Flank Wear Effect

    NASA Astrophysics Data System (ADS)

    Rathod, Kanti Bhikhubhai; Lalwani, Devdas I.

    2017-05-01

    In the present work, an attempt has been made to provide a predictive cutting force model for orthogonal cutting by combining two different force models: a force model for a perfectly sharp tool extended to include the effect of edge radius, and a force model for a worn tool. The first model is based on Oxley's predictive machining theory for orthogonal cutting; because Oxley's model assumes a perfectly sharp tool, the effect of cutting edge radius (hone radius) is added and an improved model is presented. The second force model accounts for the worn tool (flank wear) and is based on the model proposed by Waldorf. Further, the developed combined force model is also used to predict flank wear width using an inverse approach. The performance of the combined total force model is compared with previously published results for AISI 1045 and AISI 4142 materials, and reasonably good agreement is found.

  16. Nonlinear calculations of the time evolution of black hole accretion disks

    NASA Technical Reports Server (NTRS)

    Luo, C.

    1994-01-01

    Based on previous work on black hole accretion disks, I continue to explore the disk dynamics using the finite difference method to solve the highly nonlinear problem of time-dependent alpha disk equations. Here a radially zoned model is used to develop a computational scheme that accommodates functional dependence of the viscosity parameter alpha on the disk scale height and/or surface density. This work builds on the author's previous work on the steady disk structure and the linear analysis of disk dynamics, with the aim of applying it to X-ray emissions from black hole candidates (i.e., multiple-state spectra, instabilities, QPOs, etc.).

  17. RESIDUAL RISK ASSESSMENTS - RESIDUAL RISK ...

    EPA Pesticide Factsheets

    This source category (Coke Ovens), previously subject to a technology-based standard, will be examined to determine whether health or ecological risks are significant enough to warrant further regulation. These assessments utilize existing models and databases to examine the multi-media and multi-pollutant impacts of air toxics emissions on human health and the environment. Details on the assessment process and methodologies can be found in EPA's Residual Risk Report to Congress issued in March of 1999 (see web site). The goal is to assess the health risks posed by air toxics emissions from Coke Ovens and to determine whether the previously established control technology standards are adequately protecting public health.

  18. RESEARCH: An Ecoregional Approach to the Economic Valuation of Land- and Water-Based Recreation in the United States

    PubMed

    Bhat; Bergstrom; Teasley; Bowker; Cordell

    1998-01-01

    This paper describes a framework for estimating the economic value of outdoor recreation across different ecoregions. Ten ecoregions in the continental United States were defined based on similarly functioning ecosystem characteristics. The individual travel cost method was employed to estimate recreation demand functions for activities such as motor boating and waterskiing, developed and primitive camping, coldwater fishing, sightseeing and pleasure driving, and big game hunting for each ecoregion. While our ecoregional approach differs conceptually from previous work, our results appear consistent with previous travel cost method valuation studies. KEY WORDS: Recreation; Ecoregion; Travel cost method; Truncated Poisson model

  19. Previous experience in manned space flight: A survey of human factors lessons learned

    NASA Technical Reports Server (NTRS)

    Chandlee, George O.; Woolford, Barbara

    1993-01-01

    Previous experience in manned space flight programs can be used to compile a database of human factors lessons learned, for the purpose of developing aids for the future design of inhabited spacecraft. The objectives are to gather information from relevant sources, to develop a taxonomy of human factors data, and to produce a database that can be used in the future by those involved in the design of manned spacecraft operations. A study is currently underway at the Johnson Space Center with the objective of compiling, classifying, and summarizing relevant human factors data bearing on the lessons learned from previous manned space flights. The research reported here defines data sources and collection methods, and proposes a classification for human factors data that may serve as a model for other human factors disciplines.

  20. Development of the global sea ice 6.0 CICE configuration for the Met Office global coupled model

    DOE PAGES

    Rae, J. G. L.; Hewitt, H. T.; Keen, A. B.; ...

    2015-07-24

    The new sea ice configuration GSI6.0, used in the Met Office global coupled configuration GC2.0, is described, and the sea ice extent, thickness and volume are compared with the previous configuration and with observationally based data sets. In the Arctic, the sea ice is thicker in all seasons than in the previous configuration, and there is now better agreement of the modelled concentration and extent with the HadISST data set. In the Antarctic, however, a warm bias in the ocean model has been exacerbated at the higher resolution of GC2.0, leading to a large reduction in ice extent and volume; further work is required to rectify this in future configurations.

  2. Models and theories of prescribing decisions: A review and suggested a new model.

    PubMed

    Murshid, Mohsen Ali; Mohaidin, Zurina

    2017-01-01

    To date, research on physicians' prescribing decisions lacks sound theoretical foundations. Drug prescribing by doctors is a complex phenomenon influenced by various factors, and most existing studies in the area of drug prescription explain physicians' decision-making via an exploratory rather than a theoretical approach. Therefore, this review attempts to propose a conceptual model that explains the theoretical linkages between marketing efforts, the patient, the pharmacist, and the physician's decision to prescribe drugs. The paper follows an inclusive review approach and applies previous theoretical models of prescribing behaviour to identify the relational factors. More specifically, it identifies and draws on several valuable perspectives, such as persuasion theory (the elaboration likelihood model), the stimulus-response marketing model, agency theory, the theory of planned behaviour, and social power theory, in developing an innovative conceptual paradigm. Based on the combination of existing methods and previous models, this paper suggests a new conceptual model of the physician's decision-making process. This model has the potential for use in further research.

  3. Petri net modeling of high-order genetic systems using grammatical evolution.

    PubMed

    Moore, Jason H; Hahn, Lance W

    2003-11-01

    Understanding how DNA sequence variations impact human health through a hierarchy of biochemical and physiological systems is expected to improve the diagnosis, prevention, and treatment of common, complex human diseases. We have previously developed a hierarchical dynamic systems approach based on Petri nets for generating biochemical network models that are consistent with genetic models of disease susceptibility. This modeling approach uses an evolutionary computation approach called grammatical evolution as a search strategy for optimal Petri net models. We have previously demonstrated that this approach routinely identifies biochemical network models that are consistent with a variety of genetic models in which disease susceptibility is determined by nonlinear interactions between two DNA sequence variations. In the present study, we evaluate whether the Petri net approach is capable of identifying biochemical networks that are consistent with disease susceptibility due to higher order nonlinear interactions between three DNA sequence variations. The results indicate that our model-building approach is capable of routinely identifying good, but not perfect, Petri net models. Ideas for improving the algorithm for this high-dimensional problem are presented.

  4. Testing the Use of Implicit Solvent in the Molecular Dynamics Modelling of DNA Flexibility

    NASA Astrophysics Data System (ADS)

    Mitchell, J.; Harris, S.

    DNA flexibility controls packaging, looping and in some cases sequence specific protein binding. Molecular dynamics simulations carried out with a computationally efficient implicit solvent model are potentially a powerful tool for studying larger DNA molecules than can be currently simulated when water and counterions are represented explicitly. In this work we compare DNA flexibility at the base pair step level modelled using an implicit solvent model to that previously determined from explicit solvent simulations and database analysis. Although much of the sequence dependent behaviour is preserved in implicit solvent, the DNA is considerably more flexible when the approximate model is used. In addition we test the ability of the implicit solvent to model stress induced DNA disruptions by simulating a series of DNA minicircle topoisomers which vary in size and superhelical density. When compared with previously run explicit solvent simulations, we find that while the levels of DNA denaturation are similar using both computational methodologies, the specific structural form of the disruptions is different.

  5. A digital waveguide-based approach for Clavinet modeling and synthesis

    NASA Astrophysics Data System (ADS)

    Gabrielli, Leonardo; Välimäki, Vesa; Penttinen, Henri; Squartini, Stefano; Bilbao, Stefan

    2013-12-01

    The Clavinet is an electromechanical musical instrument produced in the mid-twentieth century. As is the case for other vintage instruments, it is subject to aging and requires great effort to be maintained or restored. This paper reports analyses conducted on a Hohner Clavinet D6 and proposes a computational model to faithfully reproduce the Clavinet sound in real time, from tone generation to the emulation of the electronic components. The string excitation signal model is physically inspired and represents a cheap solution in terms of both computational resources and especially memory requirements (compared, e.g., to sample playback systems). Pickup and amplifier models have been implemented which enhance the natural character of the sound with respect to previous work. The model has been implemented on a real-time software platform, Pure Data, and is capable of 10-voice polyphony with low latency on an embedded device. Finally, results of subjective listening tests conducted with the current model are compared to those of previous tests, showing slightly improved results.
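
    The abstract does not give the authors' algorithm in detail; as background, here is a minimal digital-waveguide string sketch in the Karplus-Strong style, which illustrates the delay-line-plus-loss-filter idea that such models build on (all names and parameter values are illustrative, not the paper's implementation):

    ```python
    from collections import deque
    import numpy as np

    def waveguide_string(f0: float, fs: int = 44100, dur: float = 1.0,
                         damping: float = 0.996) -> np.ndarray:
        """Generic plucked-string tone: delay line plus averaging loss filter."""
        n = int(fs / f0)                              # delay-line length sets the pitch
        delay = deque(np.random.uniform(-1, 1, n), maxlen=n)  # noise-burst excitation
        out = np.empty(int(fs * dur))
        for i in range(out.size):
            out[i] = delay[0]
            # two-point average acts as a simple loop filter; appending to a
            # full deque drops delay[0], which closes the feedback loop
            delay.append(damping * 0.5 * (delay[0] + delay[1]))
        return out

    tone = waveguide_string(220.0)  # an A3-ish pluck at 44.1 kHz
    ```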

  6. Transient Mobility on Submonolayer Island Growth: An Exploration of Asymptotic Effects in Modeling

    NASA Astrophysics Data System (ADS)

    Morales-Cifuentes, Josue; Einstein, Theodore L.; Pimpinelli, Alberto

    In studies of epitaxial growth, modeling of the smallest stable cluster (i+1 monomers, with i the critical nucleus size) is paramount to understanding growth dynamics. Our previous work tackled submonolayer growth by modeling the effect of ballistic "hot-precursor" monomers on diffusive dynamics. Different scaling regimes and energies were predicted, with initial confirmation obtained by applying the model to para-hexaphenyl submonolayer studies. Here, lingering questions about the applicability and behavior of the model are addressed. First, we show how an asymptotic approximation based on the growth exponent α (island density scaling as N ∝ F^α with deposition flux F) allows the model to be fitted robustly to experimental data; second, we answer questions about non-monotonicity by exploring the behavior of the growth exponent across realizable parameter spaces; third, we revisit our previous para-hexaphenyl work and examine the relevant physical parameters, namely the speed of the hot monomers. We conclude with an exploration of how the new asymptotic approximation can be used to strengthen the application of our model to other physical systems.

  7. The joint effect of mesoscale and microscale roughness on perceived gloss.

    PubMed

    Qi, Lin; Chantler, Mike J; Siebert, J Paul; Dong, Junyu

    2015-10-01

    Computer-simulated stimuli provide a flexible method for creating artificial scenes in the study of visual perception of material surface properties. Previous work based on this approach reported that the properties of surface roughness and glossiness are mutually interdependent, so that perception of one affects perception of the other; in that case roughness was limited to a surface property termed bumpiness. This paper reports a study of how perceived gloss varies with two model parameters related to surface roughness in computer simulations: the mesoscale roughness parameter in a surface geometry model and the microscale roughness parameter in a surface reflectance model. We used a real-world environment map to provide complex illumination and a physically based path tracer to render the stimuli. Eight observers took part in a 2AFC experiment, and the results were tested against conjoint measurement models. We found that although both roughness parameters significantly affect perceived gloss, the additive model does not adequately describe their mutually interactive and nonlinear influence, which is at variance with previous findings. We investigated five image properties used to quantify specular highlights, and found that perceived gloss is well predicted using a linear model. Our findings provide computational support for the 'statistical appearance models' proposed recently for material perception.

  8. Finite element method (FEM) model of the mechanical stress on phospholipid membranes from shock waves produced in nanosecond electric pulses (nsEP)

    NASA Astrophysics Data System (ADS)

    Barnes, Ronald; Roth, Caleb C.; Shadaram, Mehdi; Beier, Hope; Ibey, Bennett L.

    2015-03-01

    The underlying mechanism(s) responsible for nanoporation of phospholipid membranes by nanosecond pulsed electric fields (nsEP) remain unknown. The passage of a high electric field through a conductive medium creates two primary contributing factors that may induce poration: the electric field interaction at the membrane, and the shockwave produced by electrostriction of a polar submersion medium exposed to an electric field. Previous work has focused on the electric field interaction at the cell membrane, through models such as the transport lattice method. Our objective is to model the interaction between the cell membrane and the shock wave induced by the density perturbation that forms at the rising edge of a high-voltage pulse in a polar liquid and propagates away from the electrode toward the cell membrane. Utilizing previously reported cell membrane mechanical parameters and nsEP-generated shockwave parameters, an acoustic shock wave model based on the Helmholtz equation for sound pressure was developed and coupled to a cell membrane model by finite-element modeling in COMSOL. The acoustic structure interaction model was developed to illustrate the harmonic membrane displacements and stresses resulting from the shockwave-membrane interaction, based on Hooke's law. Poration is predicted by utilizing membrane mechanical breakdown parameters, including cortical stress limits and hydrostatic pressure gradients.

  9. TSARINA: A Computer Model for Assessing Conventional and Chemical Attacks on Airbases

    DTIC Science & Technology

    1990-09-01

    IV, and has been updated to FORTRAN 77; it has been adapted to various computer systems, as was the widely used AIDA model and the previous versions of...conventional and chemical attacks on sortie generation. In the first version of TSARINA [1, 2], several key additions were made to the AIDA model so that (1...various on-base resources, in addition to the estimates of hits and facility damage that are generated by the original AIDA model. The second version

  10. Neuroanatomical basis for recognition primed decision making.

    PubMed

    Hudson, Darren

    2013-01-01

    Effective decision making under time constraints is often overlooked in medical decision making. The recognition primed decision making (RPDM) model was developed by Gary Klein; it builds on previously recognized situations to develop a satisfactory solution to the current problem. Bayes' theorem is the most popular decision-making model in medicine but is limited by the need for adequate time to consider all probabilities. Unlike other decision-making models, RPDM has a potential neurobiological basis. This model has significant implications for health informatics and medical education.

  11. Accurate modeling of the hose instability in plasma wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Mehrling, T. J.; Benedetti, C.; Schroeder, C. B.; Martinez de la Ossa, A.; Osterhoff, J.; Esarey, E.; Leemans, W. P.

    2018-05-01

    Hosing is a major challenge for the applicability of plasma wakefield accelerators, and its modeling is therefore of fundamental importance to facilitate future stable and compact plasma-based particle accelerators. In this contribution, we present a new model for the evolution of the plasma centroid, which enables the accurate investigation of the hose instability in the nonlinear blowout regime. It paves the way for more precise and comprehensive studies of hosing, e.g., with drive and witness beams, which were not possible with previous models.

  12. Modeling the mechanical properties of DNA nanostructures.

    PubMed

    Arbona, Jean Michel; Aimé, Jean-Pierre; Elezgaray, Juan

    2012-11-01

    We discuss generalizations of a previously published coarse-grained description [Mergell et al., Phys. Rev. E 68, 021911 (2003)] of double stranded DNA (dsDNA). The model is defined at the base-pair level and includes the electrostatic repulsion between neighbor helices. We show that the model reproduces mechanical and elastic properties of several DNA nanostructures (DNA origamis). We also show that electrostatic interactions are necessary to reproduce atomic force microscopy measurements on planar DNA origamis.

  13. Ultrafiltration membrane reactors for enzymatic resolution of amino acids: design model and optimization.

    PubMed

    Bódalo, A.; Gómez, J. L.; Gómez, E.; Bastida, J.; Máximo, M. F.; Montiel, M. C.

    2001-03-08

    In this paper the possibility of continuous resolution of DL-phenylalanine, catalyzed by L-aminoacylase in an ultrafiltration membrane reactor (UFMR), is presented. A simple design model, based on previous kinetic studies, is shown to be capable of describing the behavior of the experimental system. The model has been used to determine the optimal experimental conditions for carrying out the asymmetric hydrolysis of N-acetyl-DL-phenylalanine.

  14. The Influence of Atmosphere-Ocean Interaction on MJO Development and Propagation

    DTIC Science & Technology

    2014-09-30

    evaluate modeling results and process studies. The field phase of this project is associated with DYNAMO, which is the US contribution to the...influence on ocean temperature 4. Extended run for DYNAMO with high vertical resolution NCOM RESULTS Summary of project results The work funded...model experiments of the November 2011 MJO – the strongest MJO episode observed during DYNAMO. The previous conceptual model that was based on TOGA

  15. Estimation of Dynamic Friction Process of the Akatani Landslide Based on the Waveform Inversion and Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mangeney, A.; Moretti, L.; Matsushi, Y.

    2014-12-01

    Understanding physical parameters such as friction coefficients, velocity changes, and dynamic history is an important issue for assessing and managing the risks posed by deep-seated catastrophic landslides. Previously, landslide motion has been inferred qualitatively from topographic changes caused by the event, and occasionally from eyewitness reports, but these conventional approaches cannot evaluate source processes and dynamic parameters. In this study, we use broadband seismic recordings to trace the dynamic process of the deep-seated Akatani landslide that occurred on the Kii Peninsula, Japan, one of the best-recorded large slope failures. Building on previous waveform inversions and on precise topographic surveys taken before and after the event, we performed numerical simulations using the SHALTOP numerical model (Mangeney et al., 2007). This model describes homogeneous continuous granular flows on 3D topography based on a depth-averaged thin-layer approximation. We assume a Coulomb friction law with a constant friction coefficient, i.e., the friction is independent of the sliding velocity. We varied the friction coefficient in the simulation so that the resulting force acting on the surface agrees with the single force estimated from the seismic waveform inversion. For the east-west component of the force history, band-pass filtered between 10 and 100 s, the simulation with friction coefficient 0.27 agrees best with the result of the seismic waveform inversion. Although the amplitudes differ slightly, the phases are coherent for the three main pulses, which is evidence that the point-source approximation works reasonably well for this particular event. The friction coefficient during sliding was estimated to be 0.38 from the seismic waveform inversion performed in a previous study with a sliding-block model (Yamada et al., 2013), whereas the friction coefficient estimated from the numerical simulation was about 0.27. This discrepancy may be due to the digital elevation model, or to other forces included in the model such as pressure gradients and centrifugal acceleration. A quantitative interpretation of this difference requires further investigation.
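
    As a toy illustration of the constant-coefficient Coulomb friction law assumed above (not the 3-D SHALTOP model), a one-dimensional sliding-block sketch with illustrative slope values:

    ```python
    import numpy as np

    # One-dimensional Coulomb sliding block: with a constant friction
    # coefficient, the along-slope acceleration is g*(sin(theta) - mu*cos(theta)).
    # Slope angle and time step are illustrative; mu = 0.27 is the value
    # inferred from the simulations described above.

    g, mu, theta_deg = 9.81, 0.27, 25.0
    theta = np.radians(theta_deg)
    dt, v, s = 0.1, 0.0, 0.0

    for step in range(200):                            # 20 s of motion
        a = g * (np.sin(theta) - mu * np.cos(theta))   # constant here
        if v <= 0.0 and a <= 0.0:
            a = 0.0                                    # block stays at rest
        v += a * dt
        s += v * dt

    print(v, s)   # final velocity and runout distance (roughly 0.5*a*t**2)
    ```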

  16. Use of A-Train Aerosol Observations to Constrain Direct Aerosol Radiative Effects (DARE): Comparisons with AeroCom Models and Uncertainty Assessments

    NASA Technical Reports Server (NTRS)

    Redemann, J.; Shinozuka, Y.; Kacenelenbogen, M.; Segal-Rozenhaimer, M.; LeBlanc, S.; Vaughan, M.; Stier, P.; Schutgens, N.

    2017-01-01

    We describe a technique for combining multiple A-Train aerosol data sets, namely MODIS spectral AOD (aerosol optical depth), OMI AAOD (absorption aerosol optical depth), and CALIOP aerosol backscatter retrievals (hereafter referred to as MOC retrievals), to estimate full spectral sets of aerosol radiative properties and ultimately to calculate the 3-D distribution of direct aerosol radiative effects (DARE). We present MOC results using almost two years of data collected in 2007 and 2008, and show comparisons of the aerosol radiative property estimates with collocated AERONET retrievals. Use of the MODIS Collection 6 AOD data derived with the dark-target and deep-blue algorithms has extended the coverage of the MOC retrievals towards higher latitudes. The MOC aerosol retrievals agree better with AERONET in terms of single scattering albedo (SSA) at 441 nm than SSA calculated from OMI and MODIS data alone, indicating that CALIOP aerosol backscatter data contain information on aerosol absorption. We compare the spatio-temporal distribution of the MOC retrievals and MOC-based calculations of seasonal clear-sky DARE to values derived from four models that participated in the Phase II AeroCom model intercomparison initiative. Overall, the MOC-based calculations of clear-sky DARE at TOA over land are smaller (less negative) than previous model or observational estimates, owing to the inclusion of more absorbing aerosol retrievals over brighter surfaces that were not previously available for observationally based estimates of DARE. MOC-based DARE estimates at the surface over land, and total (land and ocean) DARE estimates at TOA, lie between previous model and observational results. Comparisons of seasonal aerosol properties with AeroCom Phase II results show generally good agreement; the best agreement with TOA forcing results is found with GMI-MerraV3. We discuss sampling issues that affect the comparisons and the major challenges in extending our clear-sky DARE results to all-sky conditions. We present estimates of clear-sky and all-sky DARE and show uncertainties that stem from the assumptions in the spatial extrapolation, the accuracy of aerosol and cloud properties, the diurnal evolution of these properties, and the radiative transfer calculations.

  17. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  18. Integrating Adaptability into Special Operations Forces Intermediate Level Education

    DTIC Science & Technology

    2010-10-01

    This model is based on the Experiential Learning Theory (ELT), which states that learning occurs by the transfer of experience into knowledge (Kolb ...Report 529. Arlington, VA. Kolb, D.A., Boyatzis, R.E., & Mainemelis, C. (2000). Experiential Learning Theory: Previous research and new dimensions. In...adaptive thinking materials. Integrating this information will provide some continuity among concepts for instruction. Experiential Learning Model

  19. Factors that Influence the Perceived Advantages and Relevance of Facebook as a Learning Tool: An Extension of the UTAUT

    ERIC Educational Resources Information Center

    Escobar-Rodríguez, Tomás; Carvajal-Trujillo, Elena; Monge-Lozano, Pedro

    2014-01-01

    Social media technologies are becoming a fundamental component of education. This study extends the Unified Theory of Acceptance and Use of Technology (UTAUT) to identify factors that influence the perceived advantages and relevance of Facebook as a learning tool. The proposed model is based on previous models of UTAUT. Constructs from previous…

  20. Comparing models of the combined-stimulation advantage for speech recognition.

    PubMed

    Micheyl, Christophe; Oxenham, Andrew J

    2012-05-01

    The "combined-stimulation advantage" refers to an improvement in speech recognition when cochlear-implant or vocoded stimulation is supplemented by low-frequency acoustic information. Previous studies have been interpreted as evidence for "super-additive" or "synergistic" effects in the combination of low-frequency and electric or vocoded speech information by human listeners. However, this conclusion was based on predictions of performance obtained using a suboptimal high-threshold model of information combination. The present study shows that a different model, based on Gaussian signal detection theory, can predict surprisingly large combined-stimulation advantages, even when performance with either information source alone is close to chance, without involving any synergistic interaction. A reanalysis of published data using this model reveals that previous results, which have been interpreted as evidence for super-additive effects in perception of combined speech stimuli, are actually consistent with a more parsimonious explanation, according to which the combined-stimulation advantage reflects an optimal combination of two independent sources of information. The present results do not rule out the possible existence of synergistic effects in combined stimulation; however, they emphasize the possibility that the combined-stimulation advantages observed in some studies can be explained simply by non-interactive combination of two information sources.

  1. Validation of a predictive model that identifies patients at high risk of developing febrile neutropaenia following chemotherapy for breast cancer.

    PubMed

    Jenkins, P; Scaife, J; Freeman, S

    2012-07-01

    We have previously developed a predictive model that identifies patients at increased risk of febrile neutropaenia (FN) following chemotherapy, based on pretreatment haematological indices. This study was designed to validate our earlier findings in a separate cohort of patients undergoing more myelosuppressive chemotherapy supported by growth factors. We conducted a retrospective analysis of 263 patients who had been treated with adjuvant docetaxel, adriamycin and cyclophosphamide (TAC) chemotherapy for breast cancer. All patients received prophylactic pegfilgrastim and the majority also received prophylactic antibiotics. Thirty-one patients (12%) developed FN. Using our previous model, patients in the highest risk group (pretreatment absolute neutrophil count ≤3.1 × 10⁹/l and absolute lymphocyte count ≤1.5 × 10⁹/l) comprised 8% of the total population and had a 33% risk of developing FN. Compared with the rest of the cohort, this group had a 3.4-fold increased risk of developing FN (P=0.001) and a 5.2-fold increased risk of cycle 1 FN (P<0.001). A simple model based on the pretreatment differential white blood cell count can be applied to pegfilgrastim-supported patients to identify those at higher risk of FN.
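
    A minimal sketch of applying the reported risk grouping (the thresholds come from the abstract; the function itself is our illustration, not the authors' code):

    ```python
    # Highest-risk group per the abstract: pretreatment ANC <= 3.1 and
    # ALC <= 1.5, both in units of 10^9 cells/l.

    def high_fn_risk(anc: float, alc: float) -> bool:
        """True if pretreatment counts fall in the highest-risk group."""
        return anc <= 3.1 and alc <= 1.5

    print(high_fn_risk(2.8, 1.2))   # True: ~33% FN risk reported for this group
    ```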

  2. A Cyclic-Plasticity-Based Mechanistic Approach for Fatigue Evaluation of 316 Stainless Steel Under Arbitrary Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barua, Bipul; Mohanty, Subhasish; Listwan, Joseph T.

    In this paper, a cyclic-plasticity-based, fully mechanistic fatigue modeling approach is presented. It is based on the time-dependent stress-strain evolution of the material over the entire fatigue life, rather than just on the end-of-life information typically used in empirical S-N-curve-based fatigue evaluation approaches. Previously, we presented constant-amplitude fatigue-test-based material models for 316 SS base metal, 508 LAS base metal, and 316 SS-316 SS weld, which are used in nuclear reactor components such as pressure vessels, nozzles, and surge-line pipes. However, we found that models based on constant-amplitude fatigue data have limitations in capturing the stress-strain evolution under arbitrary fatigue loading. To address this limitation, we present a more advanced approach that can be used to model the cyclic stress-strain evolution and fatigue life not only under constant-amplitude but also under arbitrary (random/variable) fatigue loading. The related material model and analytical model results are presented for 316 SS base metal. Two methodologies for tracking the material parameters at a given time/cycle (based either on time/cycle or on accumulated plastic strain energy) are discussed, and the associated analytical model results are presented. From the material model and analytical cyclic plasticity model results, it is found that the proposed cyclic plasticity model can predict all the important stages of material behavior during the entire fatigue life of the specimens with more than 90% accuracy.

  4. On the usefulness of gradient information in multi-objective deformable image registration using a B-spline-based dual-dynamic transformation model: comparison of three optimization algorithms

    NASA Astrophysics Data System (ADS)

    Pirpinia, Kleopatra; Bosman, Peter A. N.; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja

    2015-03-01

    The use of gradient information is well known to be highly useful in single-objective optimization-based image registration methods. However, its usefulness has not yet been investigated for deformable image registration from a multi-objective optimization perspective. To this end, within a previously introduced multi-objective optimization framework, we use a smooth B-spline-based dual-dynamic transformation model that allows us to derive gradient information analytically, while still being able to account for large deformations. Within the multi-objective framework, we previously employed a powerful evolutionary algorithm (EA) that computes and advances multiple outcomes at once, resulting in a set of solutions (a so-called Pareto front) that represents efficient trade-offs between the objectives. With the addition of the B-spline-based transformation model, we studied the usefulness of gradient information in multi-objective deformable image registration using three different optimization algorithms: the (gradient-less) EA, a gradient-only algorithm, and a hybridization of the two. We evaluated the algorithms by registering highly deformed images: 2D MRI slices of the breast in prone and supine positions. Results demonstrate that gradient-based multi-objective optimization significantly speeds up the initial stages of optimization. However, given sufficient computational resources, better results could still be obtained with the EA. Ultimately, the hybrid EA found the best overall approximation of the optimal Pareto front, further indicating that adding gradient-based optimization to multi-objective optimization-based deformable image registration can indeed be beneficial.

  5. A Bayesian model averaging method for improving SMT phrase table

    NASA Astrophysics Data System (ADS)

    Duan, Nan

    2013-03-01

    Previous methods for improving translation quality by employing multiple SMT models usually operate as a second-pass decision procedure over hypotheses from multiple systems, using extra features rather than exploiting the features of existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates the probability feature values of the translation model in use based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built in two different ways. We also introduce novel probability variance features into the log-linear models for further improvements. Our approach can be developed independently and integrated directly into a current SMT pipeline. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decodings.
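
    A rough sketch of the general idea, as we read it, of updating a phrase-table feature from auxiliary models (the interpolation weights and the missing-entry handling are our assumptions, not the paper's exact TMG formulation):

    ```python
    # Generalize a phrase-table probability feature by mixing the main model
    # with auxiliary models; weights below are illustrative.

    def generalize_feature(phrase_pair, main, auxiliaries, weights):
        """main/auxiliaries: dicts mapping (src, tgt) -> P(tgt|src)."""
        tables = [main] + list(auxiliaries)
        # Treat a phrase pair missing from a model as probability 0 there.
        probs = [t.get(phrase_pair, 0.0) for t in tables]
        return sum(w * p for w, p in zip(weights, probs))

    main = {("le chat", "the cat"): 0.6}
    aux = [{("le chat", "the cat"): 0.4}]
    print(generalize_feature(("le chat", "the cat"), main, aux, [0.7, 0.3]))  # 0.54
    ```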

  6. Mathematical modeling of ethanol production in solid-state fermentation based on solid medium's dry weight variation.

    PubMed

    Mazaheri, Davood; Shojaosadati, Seyed Abbas; Zamir, Seyed Morteza; Mousavi, Seyyed Mohammad

    2018-04-21

    In this work, mathematical modeling of ethanol production in solid-state fermentation (SSF) has been carried out based on the variation in the dry weight of the solid medium. This method was previously used for mathematical modeling of enzyme production; however, the model had to be modified to predict the production of a volatile compound like ethanol. The experimental results of bioethanol production from a mixture of carob pods and wheat bran by Zymomonas mobilis in SSF were used for model validation. Exponential and logistic kinetic models were used to model the growth of the microorganism. In both cases, the model predictions matched the experimental results well during the exponential growth phase, indicating the good ability of the solid-medium weight-variation method to model volatile product formation in solid-state fermentation. In addition, better predictions were obtained with the logistic model.
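
    As an illustration of the kinetic structure described (not the authors' code), here is a logistic-growth model with growth-associated ethanol formation; all parameter values are hypothetical:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Logistic biomass growth with growth-associated ethanol production.
    # mu_max, x_max, and the yield Y are placeholder values for demonstration.

    mu_max, x_max, Y = 0.25, 20.0, 0.45   # 1/h, g biomass, g ethanol / g biomass

    def rhs(t, y):
        x, p = y
        dx = mu_max * x * (1.0 - x / x_max)   # logistic growth
        dp = Y * dx                           # product tied to growth
        return [dx, dp]

    sol = solve_ivp(rhs, (0.0, 48.0), [0.5, 0.0])
    print(sol.y[:, -1])   # biomass and ethanol after 48 h
    ```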

  7. Nitrogen feedbacks increase future terrestrial ecosystem carbon uptake in an individual-based dynamic vegetation model

    NASA Astrophysics Data System (ADS)

    Wårlind, D.; Smith, B.; Hickler, T.; Arneth, A.

    2014-11-01

    Recently a considerable amount of effort has been put into quantifying how interactions of the carbon and nitrogen cycle affect future terrestrial carbon sinks. Dynamic vegetation models, representing the nitrogen cycle with varying degree of complexity, have shown diverging constraints of nitrogen dynamics on future carbon sequestration. In this study, we use LPJ-GUESS, a dynamic vegetation model employing a detailed individual- and patch-based representation of vegetation dynamics, to evaluate how population dynamics and resource competition between plant functional types, combined with nitrogen dynamics, have influenced the terrestrial carbon storage in the past and to investigate how terrestrial carbon and nitrogen dynamics might change in the future (1850 to 2100; one representative "business-as-usual" climate scenario). Single-factor model experiments of CO2 fertilisation and climate change show generally similar directions of the responses of C-N interactions, compared to the C-only version of the model as documented in previous studies using other global models. Under an RCP 8.5 scenario, nitrogen limitation suppresses potential CO2 fertilisation, reducing the cumulative net ecosystem carbon uptake between 1850 and 2100 by 61%, and soil warming-induced increase in nitrogen mineralisation reduces terrestrial carbon loss by 31%. When environmental changes are considered conjointly, carbon sequestration is limited by nitrogen dynamics up to the present. However, during the 21st century, nitrogen dynamics induce a net increase in carbon sequestration, resulting in an overall larger carbon uptake of 17% over the full period. This contrasts with previous results with other global models that have shown an 8 to 37% decrease in carbon uptake relative to modern baseline conditions. Implications for the plausibility of earlier projections of future terrestrial C dynamics based on C-only models are discussed.

  8. Deformable complex network for refining low-resolution X-ray structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chong; Wang, Qinghua; Ma, Jianpeng, E-mail: jpma@bcm.edu

    2015-10-27

    A new refinement algorithm called the deformable complex network, which combines a novel angular network-based restraint with a deformable elastic network model in the target function, has been developed to aid structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models based on lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it consistently leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structure determination.

  9. Adaptive two-regime method: Application to front propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Martin, E-mail: martin.robinson@maths.ox.ac.uk; Erban, Radek, E-mail: erban@maths.ox.ac.uk; Flegg, Mark, E-mail: mark.flegg@monash.edu

    2014-03-28

    The Adaptive Two-Regime Method (ATRM) is developed for hybrid (multiscale) stochastic simulation of reaction-diffusion problems. It efficiently couples detailed Brownian dynamics simulations with coarser lattice-based models. The ATRM is a generalization of the previously developed Two-Regime Method [Flegg et al., J. R. Soc., Interface 9, 859 (2012)] to multiscale problems which require a dynamic selection of regions where detailed Brownian dynamics simulation is used. Typical applications include front propagation or spatio-temporal oscillations. In this paper, the ATRM is used for an in-depth study of front propagation in a stochastic reaction-diffusion system which has its mean-field model given in terms of the Fisher equation [R. Fisher, Ann. Eugen. 7, 355 (1937)]. It exhibits a travelling reaction front which is sensitive to stochastic fluctuations at the leading edge of the wavefront. Previous studies into stochastic effects on the Fisher wave propagation speed have focused on lattice-based models, but there has been limited progress using off-lattice (Brownian dynamics) models, which suffer due to their high computational cost, particularly at the high molecular numbers that are necessary to approach the Fisher mean-field model. By modelling only the wavefront itself with the off-lattice model, it is shown that the ATRM leads to the same Fisher wave results as purely off-lattice models, but at a fraction of the computational cost. The error analysis of the ATRM is also presented for a morphogen gradient model.
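
    For reference, the mean-field Fisher equation mentioned above can be integrated with a simple explicit finite-difference scheme; a minimal sketch with illustrative parameters follows (the ATRM itself is a stochastic hybrid method, which this does not reproduce):

    ```python
    import numpy as np

    # Explicit finite differences for the Fisher equation
    #   du/dt = D * d2u/dx2 + r * u * (1 - u).
    D, r = 1.0, 1.0
    dx, dt = 0.5, 0.05                 # dt < dx**2 / (2*D) for stability
    x = np.arange(0.0, 200.0, dx)
    u = np.where(x < 10.0, 1.0, 0.0)   # step profile seeds the front

    for _ in range(1400):              # integrate to t = 70
        lap = np.zeros_like(u)
        lap[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2
        u += dt * (D * lap + r * u * (1.0 - u))
        u[0], u[-1] = 1.0, 0.0         # pinned ends: invaded / uninvaded

    # The front advances at roughly the Fisher speed c = 2*sqrt(r*D) = 2,
    # so the half-height point should sit near x ~ 10 + 2*70.
    print(x[np.argmin(np.abs(u - 0.5))])
    ```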

  10. Mathematical modeling improves EC50 estimations from classical dose-response curves.

    PubMed

    Nyman, Elin; Lindgren, Isa; Lövfors, William; Lundengård, Karin; Cervin, Ida; Sjöström, Theresia Arbring; Altimiras, Jordi; Cedersund, Gunnar

    2015-03-01

    The β-adrenergic response is impaired in failing hearts. When studying β-adrenergic function in vitro, the half-maximal effective concentration (EC50) is an important measure of ligand response. We previously measured the in vitro contraction force response of chicken heart tissue to increasing concentrations of adrenaline, and observed a decreasing response at high concentrations. The classical interpretation of such data is to assume a maximal response before the decrease, and to fit a sigmoid curve to the remaining data to determine EC50. Instead, we have applied a mathematical modeling approach to interpret the full dose-response curve in a new way. The developed model predicts a non-steady state caused by the short resting time between increasing concentrations of agonist, which affects the dose-response characterization. Therefore, an improved estimate of EC50 may be calculated using steady-state simulations of the model. The model-based estimation of EC50 is further refined using additional time-resolved data to decrease the uncertainty of the prediction. The resulting model-based EC50 (180-525 nM) is higher than the classically interpreted EC50 (46-191 nM). Mathematical modeling thus makes it possible to re-interpret previously obtained datasets and to make accurate estimates of EC50 even when steady-state measurements are not experimentally feasible. The mathematical models described here have been submitted to the JWS Online Cellular Systems Modelling Database and may be accessed at http://jjj.bio.vu.nl/database/nyman.
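
    The "classical" EC50 estimate that the authors compare against comes from fitting a sigmoid to the dose-response data; a minimal sketch with synthetic data (the Hill form and all values are illustrative, not the paper's):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Classical sigmoid (Hill) fit to the rising part of a dose-response curve.

    def hill(conc, bottom, top, ec50, n):
        return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

    conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)  # nM, synthetic
    resp = np.array([2, 5, 14, 38, 70, 88, 95], dtype=float)      # % of max

    popt, _ = curve_fit(hill, conc, resp, p0=[0.0, 100.0, 50.0, 1.0])
    print(f"EC50 ~ {popt[2]:.0f} nM")
    ```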

  11. Community-aware task allocation for social networked multiagent systems.

    PubMed

    Wang, Wanyuan; Jiang, Yichuan

    2014-09-01

    In this paper, we propose a novel community-aware task allocation model for social networked multiagent systems (SN-MASs), where each agent's cooperation domain is constrained to its community and each agent can negotiate only with its intracommunity member agents. Under such community-aware scenarios, we prove that it remains NP-hard to maximize the system's overall profit. To solve this problem effectively, we present a heuristic algorithm composed of three phases: 1) task selection: select the desirable task to be allocated preferentially; 2) allocation to community: allocate the selected task to communities based on a significant-task-first heuristic; and 3) allocation to agent: negotiate resources for the selected task based on a nonoverlap-agent-first and breadth-first resource negotiation mechanism. Theoretical analyses and experiments validate the advantages of the presented heuristic algorithm and community-aware task allocation model: 1) the heuristic algorithm performs very close to the benchmark exponential brute-force optimal algorithm and the network-flow-based greedy algorithm in terms of overall system profit in small-scale applications, and in large-scale applications it achieves approximately the same overall system profit while significantly reducing the computational load compared with the greedy algorithm; 2) the community-aware task allocation model reduces the system communication cost compared with the previous global-aware task allocation model, and greatly improves the overall system profit compared with the previous local neighbor-aware task allocation model.

  12. Investigation of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams

    NASA Technical Reports Server (NTRS)

    Davis, Brian A.

    2005-01-01

    Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical model. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. Excellent agreement is achieved between the predicted and measured results, thereby quantitatively validating the numerical tool.

  13. The stability of gadolinium-based contrast agents in human serum: A reanalysis of literature data and association with clinical outcomes.

    PubMed

    Prybylski, John P; Semelka, Richard C; Jay, Michael

    2017-05-01

    To reanalyze literature data on gadolinium (Gd)-based contrast agents (GBCAs) in plasma with a kinetic model of dissociation, providing a comprehensive assessment of equilibrium conditions for linear GBCAs. Data for the release of Gd from GBCAs in human serum were extracted from a previous report in the literature and fit to a kinetic dissociation/association model. The conditional stabilities (logK_cond) and percent intact over time were calculated using the model rate constants. The correlations between clinical outcomes and logK_cond or other stability indices were determined. The release curves for Omniscan® (gadodiamide), OptiMARK® (gadoversetamide), Magnevist® and Multihance® were extracted, and all fit well to the kinetic model. The logK_cond values calculated from the rate constants were on the order of ~4-6 and were not significantly altered by excess ligand or phosphate. A stability constant based on the amount intact at the initial elimination half-life of GBCAs in plasma correlated well with outcomes observed in patients. Estimation of the kinetic constants for GBCA dissociation/association revealed that their stability in physiological fluid is much lower than previous approaches would suggest, which correlates well with deposition and pharmacokinetic observations of GBCAs in human patients.
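
    A minimal sketch of the kind of dissociation/association kinetics described (the rate constants and concentrations are hypothetical, not the fitted values; the conditional constant here would be ka/kd):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # GdL <-> Gd + L with dissociation rate kd and re-association rate ka;
    # free ligand is taken equal to free Gd in this simplified sketch.

    kd, ka = 1e-6, 1e-1          # 1/s and 1/(M*s), hypothetical values
    c0 = 1e-3                    # initial intact-chelate concentration (M)

    def rhs(t, y):
        gdl, gd = y
        flux = -kd * gdl + ka * gd * gd
        return [flux, -flux]

    t_end = 14 * 24 * 3600.0     # two weeks, in seconds
    sol = solve_ivp(rhs, (0.0, t_end), [c0, 0.0], rtol=1e-8, atol=1e-12)
    print(f"intact after 2 weeks: {100 * sol.y[0, -1] / c0:.1f}%")
    ```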

  14. Analysis of Dynamic Fracture Compliance Based on Poroelastic Theory - Part II: Results of Numerical and Experimental Tests

    NASA Astrophysics Data System (ADS)

    Wang, Ding; Ding, Pin-bo; Ba, Jing

    2018-03-01

    In Part I, a dynamic fracture compliance model (DFCM) was derived based on poroelastic theory. The normal compliance of fractures is frequency-dependent and closely associated with the connectivity of the porous medium. In this paper, we first compare the DFCM with previous fractured-media theories in the literature over the full frequency range. Furthermore, experimental tests are performed on synthetic rock specimens, and the DFCM is compared with the experimental data in the ultrasonic frequency band. Compared with specimens used in previous works, the water-saturated synthetic rock specimens have mineral compositions and pore structures closer to those of natural reservoir rocks. The fracture/pore geometrical and physical parameters can be controlled to replicate approximately those of natural rocks. P- and S-wave anisotropy characteristics for different fracture and pore properties are calculated, and the numerical results are compared with the experimental data. Although the measurement frequency is relatively high, the DFCM results are appropriate for explaining the experimental data: the characteristic frequency of fluid pressure equilibration calculated from the specimen parameters is not substantially less than the measurement frequency. In the dynamic fracture model, wave-induced fluid flow is an important factor in the fracture-wave interaction process, which distinguishes this model from those valid only at the high-frequency limit, for instance, Hudson's unrelaxed model.

  15. Model-based registration of multi-rigid-body for augmented reality

    NASA Astrophysics Data System (ADS)

    Ikeda, Sei; Hori, Hajime; Imura, Masataka; Manabe, Yoshitsugu; Chihara, Kunihiro

    2009-02-01

    Geometric registration between a virtual object and the real space is the most basic problem in augmented reality. Model-based tracking methods allow us to estimate the three-dimensional (3-D) position and orientation of a real object by using a textured 3-D model instead of a visual marker. However, it is difficult to apply existing model-based tracking methods to objects that have movable parts, such as the display of a mobile phone, because these methods assume a single rigid-body model. In this research, we propose a novel model-based registration method for multi-rigid-body objects. For each frame, the 3-D models of each rigid part of the object are first rendered according to the motion and transformation estimated from the previous frame. Second, control points are determined by detecting the edges of the rendered image and sampling pixels on these edges. Motion and transformation are then simultaneously calculated from the distances between the edges and the control points. The validity of the proposed method is demonstrated through experiments using synthetic videos.

  16. A new physically-based model considering antecedent rainfall for shallow landslides

    NASA Astrophysics Data System (ADS)

    Luo, Yu; He, Siming

    2017-04-01

    Rainfall is the most significant factor triggering landslides, especially shallow landslides. In previous studies, rainfall intensity and duration have been incorporated into physically based models to determine the occurrence of rainfall-induced landslides, but antecedent rainfall has seldom been considered. In this study, antecedent rainfall is taken into account in deriving a new physically based model for predicting areas prone to shallow landslides at the basin scale. The hillslope hydrology model is constructed from Rosso's equation for seepage flow, which accounts for antecedent rainfall. Infinite-slope stability theory is then used to construct the slope stability model (see the sketch below). Finally, the model is applied to the Baisha River basin of Chengdu, Sichuan, China, and the results are compared with those from a model that neglects antecedent rainfall. The results show that the model is simple but capable of representing antecedent rainfall in the triggering mechanism of shallow landslides. Antecedent rainfall has a marked effect on shallow landslides, so its influence cannot be ignored in shallow landslide hazard assessment.
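
    A minimal sketch of the infinite-slope factor-of-safety calculation that such models build on, with antecedent rainfall entering through the pore pressure term (symbols and values are generic assumptions, not the paper's calibration):

    ```python
    import numpy as np

    # Infinite-slope stability: FS < 1 flags potential shallow failure.

    def factor_of_safety(c_eff, phi_deg, gamma, z, theta_deg, pore_pressure):
        """c_eff: effective cohesion (kPa); phi_deg: friction angle (deg);
        gamma: soil unit weight (kN/m^3); z: failure depth (m);
        theta_deg: slope angle (deg); pore_pressure: u on the slip
        surface (kPa), raised by antecedent rainfall."""
        theta, phi = np.radians(theta_deg), np.radians(phi_deg)
        normal_stress = gamma * z * np.cos(theta) ** 2
        shear_stress = gamma * z * np.sin(theta) * np.cos(theta)
        return (c_eff + (normal_stress - pore_pressure) * np.tan(phi)) / shear_stress

    # Antecedent rainfall enters through the pore pressure term:
    print(factor_of_safety(5.0, 30.0, 18.0, 1.5, 35.0, pore_pressure=0.0))  # ~1.22
    print(factor_of_safety(5.0, 30.0, 18.0, 1.5, 35.0, pore_pressure=8.0))  # ~0.85
    ```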

  17. GEO Collisional Risk Assessment Based on Analysis of NASA-WISE Data and Modeling

    NASA Astrophysics Data System (ADS)

    Howard, S.; Murray-Krezan, J.; Dao, P.; Surka, D.

    From December 2009 through 2011 the NASA Wide-Field Infrared Survey Explorer (WISE) gathered radiometrically exquisite measurements of debris in near Earth orbits, substantially augmenting the current catalog of known debris. The WISE GEO-belt debris population adds approximately 2,000 previously uncataloged objects. This paper describes characterization of the WISE GEO-belt orbital debris population in terms of location, epoch, and size. The WISE GEO-belt debris population characteristics are compared with the publicly available U.S. catalog and previous descriptions of the GEO-belt debris population. We found that our results differ from previously published debris distributions, suggesting the need for updates to collision probability models and a better measurement-based understanding of the debris population. Previous studies of collisional rate in GEO invoke the presence of a large number of debris in the regime of sizes too small to track, i.e., not in the catalog, but large enough to cause significant damage and fragmentation in a collision. A common approach is to estimate that population of small debris by assuming that it is dominated by fragments and therefore should follow trends observed in fragmentation events or laboratory fragmentation tests. In other words, the population of debris can be extrapolated from trackable sizes to small sizes using an empirically determined trend of population as a function of size. We use new information from the analysis of WISE IR measurements to propose an updated relationship. Our trend is an improvement because we expect an IR emissive signature to be a more reliable indicator of physical size. Based on the revised relationship, we re-estimate the total collisional rate in the GEO belt, including projected uncatalogued debris and applying a conjunction assessment technique. Through modeling, we evaluate the hot spots near the geopotential wells and the effects of fragmentation in the GEO graveyard on collisions with GEO objects.
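
    The extrapolation step described above can be illustrated with a minimal sketch (ours, not the authors'): fit a power law to cumulative counts of trackable objects and extend it below the tracking threshold. The counts and coefficients below are placeholders, not the WISE-derived values.

    ```python
    import numpy as np

    # Cumulative counts N(>d) of cataloged objects at trackable sizes (placeholder data).
    d_m = np.array([1.0, 2.0, 5.0, 10.0])          # object diameter (m)
    n_gt = np.array([1500.0, 600.0, 150.0, 40.0])  # objects larger than d

    # Fit log10 N = intercept + slope * log10 d, i.e. N(>d) follows a power law.
    slope, intercept = np.polyfit(np.log10(d_m), np.log10(n_gt), 1)

    # Extrapolate to sizes too small to track but large enough to damage.
    d_small = 0.1  # 10 cm
    n_small = 10 ** (intercept + slope * np.log10(d_small))
    print(f"estimated objects larger than {d_small} m: {n_small:.0f}")
    ```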

  18. A Linearized Model for Flicker and Contrast Thresholds at Various Retinal Illuminances

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Watson, Andrew

    2015-01-01

    We previously proposed a flicker visibility metric for bright displays, based on psychophysical data collected at a high mean luminance. Here we extend the metric to other mean luminances. This extension relies on a linear relation between log sensitivity and critical fusion frequency, and a linear relation between critical fusion frequency and log retinal illuminance. Consistent with our previous metric, the extended flicker visibility metric is measured in just-noticeable differences (JNDs).
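
    A minimal sketch of the two chained linear relations the extension relies on; the coefficients here are illustrative placeholders, not the fitted values from the paper.

    ```python
    import numpy as np

    def critical_fusion_frequency(log_troland, a=10.0, b=12.0):
        # CFF assumed linear in log retinal illuminance
        # (a in Hz, b in Hz per log unit; placeholder coefficients).
        return a + b * log_troland

    def log_sensitivity(cff_hz, c=-0.05, d=2.0):
        # Log sensitivity assumed linear in critical fusion frequency.
        return c * cff_hz + d

    # Chain the two relations to carry the metric to a new mean luminance.
    cff = critical_fusion_frequency(log_troland=np.log10(500.0))
    print(cff, log_sensitivity(cff))
    ```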

  19. Intervertebral disc biomechanical analysis using the finite element modeling based on medical images.

    PubMed

    Li, Haiyun; Wang, Zheng

    2006-01-01

    In this paper, a 3D geometric model of the lumbar intervertebral discs is presented, integrating anatomical structure derived from spine CT and MRI data. Based on the geometric model, a 3D finite element model of an L1-L2 segment was created. Loads simulating pressure from above were applied to the FEM, while a boundary condition describing the relative L1-L2 displacement was imposed on the FEM to account for 3D physiological states. The simulation illustrates the stress and strain distribution and the deformation of the spine. The method has two characteristics compared with previous studies: first, the finite element model of the lumbar spine is based on data derived directly from medical images such as CT and MRI; second, the results of the analysis are more accurate than those based on generic geometric parameters. The FEM provides a promising tool for clinical diagnosis and for optimizing individual therapy in intervertebral disc herniation.

  20. Modelling and simulation of a pervaporation process using tubular module for production of anhydrous ethanol

    NASA Astrophysics Data System (ADS)

    Hieu, Nguyen Huu

    2017-09-01

    Pervaporation is a potential process for the final step of ethanol biofuel production. In this study, a mathematical model was developed based on the resistance-in-series model, and a simulation was carried out in the specialized simulation software COMSOL Multiphysics to describe a tubular pervaporation module with membranes for the dehydration of ethanol solution. The membrane permeances, operating conditions, and feed conditions used in the simulation were taken from experimental data previously reported in the literature. The simulated temperature and density profiles of pure water and the ethanol-water mixture were then validated against existing published data.
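
    A minimal sketch of the resistance-in-series idea on which the model is built (illustrative values, not the study's parameters): the partial-pressure driving force divided by the sum of boundary-layer and membrane resistances gives the local permeation flux.

    ```python
    def permeation_flux(p_feed, p_permeate, r_boundary, r_membrane):
        """Resistance-in-series flux: J = driving force / total resistance.

        p_feed, p_permeate  partial pressures of the permeating species (Pa)
        r_boundary          liquid boundary-layer resistance (Pa*s*m^2/mol)
        r_membrane          membrane resistance (Pa*s*m^2/mol)
        """
        return (p_feed - p_permeate) / (r_boundary + r_membrane)

    # Water flux through one tube segment (illustrative numbers).
    print(permeation_flux(p_feed=7.0e3, p_permeate=0.5e3,
                          r_boundary=2.0e5, r_membrane=8.0e5))
    ```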

  1. A refined 'standard' thermal model for asteroids based on observations of 1 Ceres and 2 Pallas

    NASA Technical Reports Server (NTRS)

    Lebofsky, Larry A.; Sykes, Mark V.; Tedesco, Edward F.; Veeder, Glenn J.; Matson, Dennis L.

    1986-01-01

    An analysis of ground-based thermal IR observations of 1 Ceres and 2 Pallas in light of their recently determined occultation diameters and small amplitude light curves has yielded a new value for the IR beaming parameter employed in the standard asteroid thermal emission model which is significantly lower than the previous one. When applied to the reduction of thermal IR observations of other asteroids, this new value is expected to yield model diameters closer to actual values. The present formulation incorporates the IAU magnitude convention for asteroids that employs zero-phase magnitudes, including the opposition effect.
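
    For reference, a minimal sketch of the two textbook relations such a model rests on: the subsolar temperature containing the IR beaming parameter, and the diameter-albedo-magnitude relation. This is the standard form, not the authors' refined code; the eta value and inputs below are placeholders.

    ```python
    import math

    SOLAR_CONST = 1361.0   # W/m^2 at 1 au
    SIGMA = 5.670e-8       # Stefan-Boltzmann constant (W/m^2/K^4)

    def subsolar_temperature(r_au, bond_albedo, eta, emissivity=0.9):
        """Subsolar temperature of the standard thermal model;
        eta is the IR beaming parameter refined by studies such as this one."""
        s = SOLAR_CONST / r_au ** 2
        return ((1.0 - bond_albedo) * s / (eta * emissivity * SIGMA)) ** 0.25

    def diameter_km(h_mag, p_v):
        """Diameter from absolute magnitude H and geometric albedo p_v."""
        return 1329.0 / math.sqrt(p_v) * 10.0 ** (-h_mag / 5.0)

    print(subsolar_temperature(r_au=2.77, bond_albedo=0.03, eta=0.76))
    print(diameter_km(h_mag=3.34, p_v=0.09))  # roughly Ceres-like inputs
    ```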

  2. Ink Wash Painting Style Rendering With Physically-based Ink Dispersion Model

    NASA Astrophysics Data System (ADS)

    Wang, Yifan; Li, Weiran; Zhu, Qing

    2018-04-01

    This paper presents a real-time rendering method, based on the GPU programmable pipeline, for rendering 3D scenes in ink wash painting style. The method is divided into three main parts: first, render the ink properties of the 3D model by calculating its vertex curvature; then, cache the ink properties to a paper structure and simulate the dispersion of ink with a dispersion model defined with reference to the theory of porous media; finally, convert the ink properties to pixel color information and render it to the screen. This method achieves better visual quality than previous methods.

  3. Finite area combustor theoretical rocket performance

    NASA Technical Reports Server (NTRS)

    Gordon, Sanford; Mcbride, Bonnie J.

    1988-01-01

    Prior to this report, the computer program of NASA SP-273 and NASA TM-86885 could calculate theoretical rocket performance based only on the assumption of an infinite area combustion chamber (IAC). An option was added to this program that now also permits the calculation of rocket performance based on the assumption of a finite area combustion chamber (FAC). In the FAC model, the combustion process in the cylindrical chamber is assumed to be adiabatic but nonisentropic. This results in a stagnation pressure drop from the injector face to the end of the chamber and a lower calculated performance for the FAC model than for the IAC model.

  4. Early Prediction of Intensive Care Unit-Acquired Weakness: A Multicenter External Validation Study.

    PubMed

    Witteveen, Esther; Wieske, Luuk; Sommers, Juultje; Spijkstra, Jan-Jaap; de Waard, Monique C; Endeman, Henrik; Rijkenberg, Saskia; de Ruijter, Wouter; Sleeswijk, Mengalvio; Verhamme, Camiel; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke

    2018-01-01

    An early diagnosis of intensive care unit-acquired weakness (ICU-AW) is often not possible due to impaired consciousness. To avoid a diagnostic delay, we previously developed a prediction model, based on single-center data from 212 patients (development cohort), to predict ICU-AW at 2 days after ICU admission. The objective of this study was to investigate the external validity of the original prediction model in a new, multicenter cohort and, if necessary, to update the model. Newly admitted ICU patients who were mechanically ventilated at 48 hours after ICU admission were included. Predictors were prospectively recorded, and the outcome ICU-AW was defined by an average Medical Research Council score <4. In the validation cohort, consisting of 349 patients, we analyzed performance of the original prediction model by assessment of calibration and discrimination. Additionally, we updated the model in this validation cohort. Finally, we evaluated a new prediction model based on all patients of the development and validation cohort. Of 349 analyzed patients in the validation cohort, 190 (54%) developed ICU-AW. Both model calibration and discrimination of the original model were poor in the validation cohort. The area under the receiver operating characteristics curve (AUC-ROC) was 0.60 (95% confidence interval [CI]: 0.54-0.66). Model updating methods improved calibration but not discrimination. The new prediction model, based on all patients of the development and validation cohort (total of 536 patients) had a fair discrimination, AUC-ROC: 0.70 (95% CI: 0.66-0.75). The previously developed prediction model for ICU-AW showed poor performance in a new independent multicenter validation cohort. Model updating methods improved calibration but not discrimination. The newly derived prediction model showed fair discrimination. This indicates that early prediction of ICU-AW is still challenging and needs further attention.

  5. A spherical harmonics intensity model for 3D segmentation and 3D shape analysis of heterochromatin foci.

    PubMed

    Eck, Simon; Wörz, Stefan; Müller-Ott, Katharina; Hahn, Matthias; Biesdorf, Andreas; Schotta, Gunnar; Rippe, Karsten; Rohr, Karl

    2016-08-01

    The genome is partitioned into regions of euchromatin and heterochromatin. The organization of heterochromatin is important for the regulation of cellular processes such as chromosome segregation and gene silencing, and their misregulation is linked to cancer and other diseases. We present a model-based approach for automatic 3D segmentation and 3D shape analysis of heterochromatin foci from 3D confocal light microscopy images. Our approach employs a novel 3D intensity model based on spherical harmonics, which analytically describes the shape and intensities of the foci. The model parameters are determined by fitting the model to the image intensities using least-squares minimization. To characterize the 3D shape of the foci, we exploit the computed spherical harmonics coefficients and determine a shape descriptor. We applied our approach to 3D synthetic image data as well as real 3D static and real 3D time-lapse microscopy images, and compared the performance with that of previous approaches. It turned out that our approach yields accurate 3D segmentation results and performs better than previous approaches. We also show that our approach can be used for quantifying 3D shape differences of heterochromatin foci. Copyright © 2016 Elsevier B.V. All rights reserved.
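
    A minimal sketch of the rotation-invariant shape-descriptor step, under the common convention of summing coefficient energy per spherical-harmonic degree; the coefficient layout is an assumption for illustration, not the paper's implementation.

    ```python
    import numpy as np

    def sh_energy_descriptor(coeffs):
        """Rotation-invariant descriptor from spherical-harmonics coefficients.

        coeffs: dict mapping degree l -> complex array of the 2l+1
                coefficients c_{l,-l}..c_{l,l} obtained from the model fit.
        Returns one energy value per degree; ratios of energies
        characterize how far a focus deviates from a sphere (l = 0).
        """
        return np.array([np.sum(np.abs(c) ** 2) for l, c in sorted(coeffs.items())])

    # Illustrative coefficients: a mostly spherical focus with slight elongation.
    coeffs = {0: np.array([5.0 + 0j]),
              1: np.zeros(3, dtype=complex),
              2: np.array([0, 0.3, 0, 0.3, 0], dtype=complex)}
    print(sh_energy_descriptor(coeffs))
    ```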

  6. Beyond pain: modeling decision-making deficits in chronic pain

    PubMed Central

    Hess, Leonardo Emanuel; Haimovici, Ariel; Muñoz, Miguel Angel; Montoya, Pedro

    2014-01-01

    Risky decision-making seems to be markedly disrupted in patients with chronic pain, probably due to the high cost that pain and negative mood impose on executive control functions. Patients’ behavioral performance on decision-making tasks such as the Iowa Gambling Task (IGT) is characterized by selecting cards more frequently from disadvantageous than from advantageous decks, and by switching often between competing responses in comparison with healthy controls (HCs). In the present study, we developed a simple heuristic model to simulate individuals’ choice behavior by varying the level of decision randomness and the importance given to gains and losses. The findings revealed that the model was able to differentiate the behavioral performance of patients with chronic pain and HCs at the group, as well as at the individual level. The best fit of the model in patients with chronic pain was obtained when decisions were not based on previous choices and when gains were considered more relevant than losses. By contrast, the best account of the available data in HCs was obtained when decisions were based on previous experiences and losses loomed larger than gains. In conclusion, our model seems to provide useful information to measure each individual participant extensively, and to deal with the data on a participant-by-participant basis. PMID:25136301
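
    A minimal sketch of a heuristic of this general kind (an illustration, not the authors' exact model): each deck's running value is a weighted combination of experienced gains and losses, and a softmax temperature sets decision randomness. The payoff structure below is a toy stand-in for the IGT.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_igt(w_gain, w_loss, temperature, n_trials=100):
        """Deck value = running mean of (w_gain*gain - w_loss*loss);
        softmax over values, with 'temperature' controlling randomness."""
        values = np.zeros(4)   # running value of each deck
        counts = np.ones(4)
        choices = []
        for _ in range(n_trials):
            p = np.exp(values / temperature)
            p /= p.sum()
            deck = rng.choice(4, p=p)
            gain = rng.uniform(50, 100)                           # toy payoffs
            loss = rng.uniform(0, 250) if deck < 2 else rng.uniform(0, 50)
            outcome = w_gain * gain - w_loss * loss
            counts[deck] += 1
            values[deck] += (outcome - values[deck]) / counts[deck]
            choices.append(deck)
        return np.array(choices)

    # Gains weighted over losses plus high randomness mimic the patient profile.
    print(np.bincount(simulate_igt(w_gain=1.0, w_loss=0.4, temperature=50.0),
                      minlength=4))
    ```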

  7. WRF-Chem Simulations of Lightning-NOx Production and Transport in Oklahoma and Colorado Thunderstorms Observed During DC3

    NASA Technical Reports Server (NTRS)

    Cummings, Kristin A.; Pickering, Kenneth E.; Barth, M.; Bela, M.; Li, Y.; Allen, D.; Bruning, E.; MacGorman, D.; Rutledge, S.; Basarab, B.

    2016-01-01

    The focus of this analysis is on lightning-generated nitrogen oxides (LNOx) and their distribution for two thunderstorms observed during the Deep Convective Clouds and Chemistry (DC3) field campaign in May-June 2012. The Weather Research and Forecasting Chemistry (WRF-Chem) model is used to perform cloud-resolved simulations for the May 29-30 Oklahoma severe convection, which contained one supercell, and the June 6-7 Colorado squall line. Aircraft and ground-based observations (e.g., trace gases, lightning and radar) collected during DC3 are used in comparisons against the model-simulated lightning flashes generated by the flash rate parameterization schemes (FRPSs) incorporated into the model, as well as the model-simulated LNOx predicted in the anvil outflow. Newly generated FRPSs based on DC3 radar observations and Lightning Mapping Array data are implemented in the model, along with previously developed schemes from the literature. The results of these analyses will also be compared between storms to investigate which FRPSs were most appropriate for the two types of convection and to examine the variation in the LNOx production. The simulated LNOx results from WRF-Chem will also be compared against other previously studied mid-latitude thunderstorms.
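
    For orientation, a minimal sketch of one widely used FRPS from the literature, the Price and Rind (1992) cloud-top-height scheme; the DC3-derived schemes in this study are analogous fits to radar and Lightning Mapping Array data, with different predictors and coefficients.

    ```python
    def flash_rate_price_rind(cloud_top_km, continental=True):
        """Total lightning flash rate (flashes/min) from convective
        cloud-top height, after Price and Rind (1992)."""
        if continental:
            return 3.44e-5 * cloud_top_km ** 4.9
        return 6.4e-4 * cloud_top_km ** 1.73

    # A 14-km continental storm top.
    print(flash_rate_price_rind(14.0))  # on the order of 14 flashes/min
    ```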

  8. Beyond pain: modeling decision-making deficits in chronic pain.

    PubMed

    Hess, Leonardo Emanuel; Haimovici, Ariel; Muñoz, Miguel Angel; Montoya, Pedro

    2014-01-01

    Risky decision-making seems to be markedly disrupted in patients with chronic pain, probably due to the high cost that pain and negative mood impose on executive control functions. Patients' behavioral performance on decision-making tasks such as the Iowa Gambling Task (IGT) is characterized by selecting cards more frequently from disadvantageous than from advantageous decks, and by switching often between competing responses in comparison with healthy controls (HCs). In the present study, we developed a simple heuristic model to simulate individuals' choice behavior by varying the level of decision randomness and the importance given to gains and losses. The findings revealed that the model was able to differentiate the behavioral performance of patients with chronic pain and HCs at the group, as well as at the individual level. The best fit of the model in patients with chronic pain was obtained when decisions were not based on previous choices and when gains were considered more relevant than losses. By contrast, the best account of the available data in HCs was obtained when decisions were based on previous experiences and losses loomed larger than gains. In conclusion, our model seems to provide useful information to measure each individual participant extensively, and to deal with the data on a participant-by-participant basis.

  9. Validity of strong lensing statistics for constraints on the galaxy evolution model

    NASA Astrophysics Data System (ADS)

    Matsumoto, Akiko; Futamase, Toshifumi

    2008-02-01

    We examine the usefulness of strong lensing statistics to constrain the evolution of the number density of lensing galaxies, adopting the values of the cosmological parameters determined by recent Wilkinson Microwave Anisotropy Probe observations. For this purpose, we employ the lens-redshift test proposed by Kochanek and constrain the parameters of two evolution models: a simple power-law model characterized by the power-law indexes νn and νv, and the evolution model of Mitchell et al. based on the cold dark matter structure formation scenario. We use the well-defined lens sample from the Sloan Digital Sky Survey (SDSS), which is similar in size to the samples used in previous studies. Furthermore, we adopt the velocity dispersion function of early-type galaxies based on SDSS DR1 and DR5. The indexes of the power-law model turn out to be consistent with previous studies; thus our results indicate mild evolution in the number and velocity dispersion of early-type galaxies out to z = 1. However, we found that the values for p and q used by Mitchell et al. are inconsistent with the presently available observational data. A more complete sample is necessary to draw a more realistic determination of these parameters.

  10. An optimized data fusion method and its application to improve lateral boundary conditions in winter for Pearl River Delta regional PM2.5 modeling, China

    NASA Astrophysics Data System (ADS)

    Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran

    2018-05-01

    Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. The optimized approach was applied to correct LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region as a case study. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of the magnitude and temporal variance. Correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.
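
    A minimal sketch of the bias-kriging step under stated assumptions: krige the model-minus-observation bias at monitor locations onto the grid, then subtract it from the gridded field. The `pykrige` package is one common choice here; the variogram settings, coordinates, and values are all placeholders, not the study's configuration.

    ```python
    import numpy as np
    from pykrige.ok import OrdinaryKriging

    # Monitor locations, with model-minus-observation PM2.5 bias (placeholders).
    x_obs = np.array([113.2, 113.6, 114.0, 113.4])   # longitude
    y_obs = np.array([22.5, 23.0, 22.8, 23.3])       # latitude
    bias = np.array([8.0, 5.5, -2.0, 4.0])           # ug/m^3

    # Krige the spatial bias field onto the model grid.
    ok = OrdinaryKriging(x_obs, y_obs, bias, variogram_model="spherical")
    grid_x = np.linspace(112.5, 114.5, 20)
    grid_y = np.linspace(22.0, 23.5, 15)
    bias_grid, _variance = ok.execute("grid", grid_x, grid_y)

    # Fused field: gridded model output corrected by the kriged bias.
    model_grid = np.full((15, 20), 40.0)             # placeholder PM2.5 field
    fused = model_grid - bias_grid
    print(fused.shape)
    ```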

  11. Object-Oriented Modeling of an Energy Harvesting System Based on Thermoelectric Generators

    NASA Astrophysics Data System (ADS)

    Nesarajah, Marco; Frey, Georg

    This paper deals with the modeling of an energy harvesting system based on thermoelectric generators (TEG), and the validation of the model by means of a test bench. TEGs can improve the overall energy efficiency of energy systems, e.g., combustion engines or heating systems, by using the remaining waste heat to generate electrical power. Previously, a component-oriented model of the TEG itself was developed in the Modelica® language. With this model, any TEG can be described and simulated given the material properties and physical dimensions. This model was then extended with the surrounding components into a complete model of a thermoelectric energy harvesting system. In addition to the TEG, the model contains the cooling system, the heat source, and the power electronics. To validate the simulation model, a test bench was built and installed on an oil-fired household heating system. The paper reports results of the measurements and discusses the validity of the developed simulation models. Furthermore, the efficiency of the proposed energy harvesting system is derived, and possible improvements based on design variations tested in the simulation model are proposed.
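
    A minimal sketch of the basic TEG electrical relation that any such component model contains (a generic illustration in Python, not the Modelica model itself): the Seebeck voltage across a temperature difference drives a load through the module's internal resistance.

    ```python
    def teg_output_power(alpha, delta_t, r_internal, r_load):
        """Electrical power delivered to the load by a TEG.

        alpha       effective Seebeck coefficient of the module (V/K)
        delta_t     hot-side minus cold-side temperature (K)
        r_internal  module internal resistance (ohm)
        r_load      load resistance (ohm)
        """
        v_oc = alpha * delta_t                  # open-circuit voltage
        current = v_oc / (r_internal + r_load)
        return current ** 2 * r_load

    # Maximum power transfer occurs at r_load == r_internal.
    print(teg_output_power(alpha=0.05, delta_t=80.0, r_internal=2.0, r_load=2.0))
    ```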

  12. Blind prediction of noncanonical RNA structure at atomic accuracy.

    PubMed

    Watkins, Andrew M; Geniesse, Caleb; Kladwang, Wipapat; Zakrevsky, Paul; Jaeger, Luc; Das, Rhiju

    2018-05-01

    Prediction of RNA structure from nucleotide sequence remains an unsolved grand challenge of biochemistry and requires distinct concepts from protein structure prediction. Despite extensive algorithmic development in recent years, modeling of noncanonical base pairs of new RNA structural motifs has not been achieved in blind challenges. We report a stepwise Monte Carlo (SWM) method with a unique add-and-delete move set that enables predictions of noncanonical base pairs of complex RNA structures. A benchmark of 82 diverse motifs establishes the method's general ability to recover noncanonical pairs ab initio, including multistrand motifs that have been refractory to prior approaches. In a blind challenge, SWM models predicted nucleotide-resolution chemical mapping and compensatory mutagenesis experiments for three in vitro selected tetraloop/receptors with previously unsolved structures (C7.2, C7.10, and R1). As a final test, SWM blindly and correctly predicted all noncanonical pairs of a Zika virus double pseudoknot during a recent community-wide RNA-Puzzle. Stepwise structure formation, as encoded in the SWM method, enables modeling of noncanonical RNA structure in a variety of previously intractable problems.

  13. Motion-adaptive model-assisted compatible coding with spatiotemporal scalability

    NASA Astrophysics Data System (ADS)

    Lee, JaeBeom; Eleftheriadis, Alexandros

    1997-01-01

    We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye, in terms of space and time, in moving images while taking object motion into account. Previous STMAC was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.

  14. Colonic stem cell data are consistent with the immortal model of stem cell division under non-random strand segregation.

    PubMed

    Walters, K

    2009-06-01

    Colonic stem cells are thought to reside towards the base of crypts of the colon, but their numbers and proliferation mechanisms are not well characterized. A defining property of stem cells is that they are able to divide asymmetrically, but it is not known whether they always divide asymmetrically (immortal model) or whether there are occasional symmetrical divisions (stochastic model). By measuring the diversity of methylation patterns in colon crypt samples, a recent study found evidence in favour of the stochastic model, assuming random segregation of stem cell DNA strands during cell division. Here, the effect of preferential segregation of the template strand, consistent with the 'immortal strand hypothesis', is considered, and its effect on the conclusions of previously published results is explored. For a sample of crypts, it is shown how, under the immortal model, to calculate the mean and variance of the number of unique methylation patterns allowing for non-random strand segregation, and how to compare them with those observed. The calculated mean and variance are consistent with an immortal model that incorporates non-random strand segregation for a range of stem cell numbers and levels of preferential strand segregation. Allowing for preferential strand segregation considerably alters previously published conclusions relating to stem cell numbers and turnover mechanisms. Evidence in favour of the stochastic model may not be as strong as previously thought.

  15. Comparison of Models for Bubonic Plague Reveals Unique Pathogen Adaptations to the Dermis.

    PubMed

    Gonzalez, Rodrigo J; Weening, Eric H; Lane, M Chelsea; Miller, Virginia L

    2015-07-01

    Vector-borne pathogens are inoculated into the skin of mammals, most likely in the dermis. Despite this, subcutaneous (s.c.) models of infection are broadly used in many fields, including Yersinia pestis pathogenesis. We expand on a previous report in which we implemented intradermal (i.d.) inoculations to study bacterial dissemination during bubonic plague, and compare this model with an s.c. model. We found that i.d. inoculations result in faster kinetics of infection and that bacterial dose influenced mouse survival after i.d. but not s.c. inoculation. Moreover, a deletion mutant of rovA, previously shown to be moderately attenuated in the s.c. model, was severely attenuated in the i.d. model. Lastly, based on previous observations that a population bottleneck from the skin to lymph nodes occurs after i.d., but not after s.c., inoculations, we used the latter model as a strategy to identify an additional bottleneck in bacterial dissemination from lymph nodes to the bloodstream. Our data indicate that the more biologically relevant i.d. model of bubonic plague differs significantly from the s.c. model in multiple aspects of infection. These findings reveal adaptations of Y. pestis to the dermis and how these adaptations can define the progression of disease. They also emphasize the importance of using a relevant route of infection when addressing host-pathogen interactions. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  16. Analytical modeling of a sandwiched plate piezoelectric transformer-based acoustic-electric transmission channel.

    PubMed

    Lawry, Tristan J; Wilt, Kyle R; Scarton, Henry A; Saulnier, Gary J

    2012-11-01

    The linear propagation of electromagnetic and dilatational waves through a sandwiched plate piezoelectric transformer (SPPT)-based acoustic-electric transmission channel is modeled using the transfer matrix method with mixed-domain two-port ABCD parameters. This SPPT structure is of great interest because it has been explored in recent years as a mechanism for wireless transmission of electrical signals through solid metallic barriers using ultrasound. The model we present is developed to allow for accurate channel performance prediction while greatly reducing the computational complexity associated with 2- and 3-dimensional finite element analysis. As a result, the model primarily considers 1-dimensional wave propagation; however, approximate solutions for higher-dimensional phenomena (e.g., diffraction in the SPPT's metallic core layer) are also incorporated. The model is then assessed by comparing it to the measured wideband frequency response of a physical SPPT-based channel from our previous work. Very strong agreement between the modeled and measured data is observed, confirming the accuracy and utility of the presented model.
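
    A minimal sketch of the transfer-matrix bookkeeping (generic two-port form, not the paper's mixed-domain parameter set): each layer contributes a two-port ABCD matrix, and the channel response comes from their ordered product. The propagation constants and impedances below are placeholders.

    ```python
    import numpy as np

    def abcd_cascade(*matrices):
        """Ordered product of two-port ABCD matrices, input side first."""
        total = np.eye(2, dtype=complex)
        for m in matrices:
            total = total @ m
        return total

    def transmission_line_abcd(gamma, length, z0):
        """ABCD matrix of a uniform (acoustic or electrical) line segment."""
        gl = gamma * length
        return np.array([[np.cosh(gl), z0 * np.sinh(gl)],
                         [np.sinh(gl) / z0, np.cosh(gl)]], dtype=complex)

    # Piezo layer -> metallic core -> piezo layer, as a three-segment cascade.
    chain = abcd_cascade(
        transmission_line_abcd(2j, 0.002, 30.0),
        transmission_line_abcd(1j, 0.010, 45.0),
        transmission_line_abcd(2j, 0.002, 30.0),
    )
    print(chain)
    ```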

  17. An Experimental and Computational Analysis of Primary Cilia Deflection Under Fluid Flow

    PubMed Central

    Downs, Matthew E.; Nguyen, An M.; Herzog, Florian A.; Hoey, David A.; Jacobs, Christopher R.

    2013-01-01

    In this work we have developed a novel model of the deflection of primary cilia experiencing fluid flow accounting for phenomena not previously considered. Specifically, we developed a large rotation formulation that accounts for rotation at the base of the cilium, the initial shape of the cilium and fluid drag at high deflection angles. We utilized this model to analyze full three dimensional datasets of primary cilia deflecting under fluid flow acquired with high-speed confocal microscopy. We found a wide variety of previously unreported bending shapes and behaviors. We also analyzed post-flow relaxation patterns. Results from our combined experimental and theoretical approach suggest that the average flexural rigidity of primary cilia might be higher than previously reported (Schwartz et al. 1997). In addition our findings indicate the mechanics of primary cilia are richly varied and mechanisms may exist to alter their mechanical behavior. PMID:22452422

  18. Unconventional Constraints on Nitrogen Chemistry using DC3 Observations and Trajectory-based Chemical Modeling

    NASA Astrophysics Data System (ADS)

    Shu, Q.; Henderson, B. H.

    2017-12-01

    Chemical transport models underestimate nitrogen dioxide observations in the upper troposphere (UT). Previous research in the UT succeeded in combining model predictions with field campaign measurements to demonstrate that the nitric acid formation rate (HO + NO2 → HNO3 (R1)) is overestimated by 22% (Henderson et al., 2012). A subsequent publication (Seltzer et al., 2015) demonstrated that this single chemical constraint alters ozone and aerosol formation/composition. This work attempts to replicate previous chemical constraints with newer observations and a different modeling framework. We apply the previously successful constraint framework to Deep Convective Clouds and Chemistry (DC3), a more recent field campaign in which simulated nitrogen imbalances still exist. Freshly convected air parcels identified in the DC3 dataset serve as initial coordinates for Lagrangian trajectories. Along each trajectory, we simulate the air parcel's chemical state. Samples along the trajectories form ensembles that represent possible realizations of UT air parcels. We then apply Bayesian inference to constrain nitrogen chemistry and compare results to the existing literature. We anticipate that the results will confirm the overestimation of the HNO3 formation rate found in previous work and provide further constraints on other nitrogen reaction rate coefficients that affect the terminal products of NOx. We will particularly focus on organic nitrate chemistry that the laboratory literature has yet to fully address. The results will provide useful insights into nitrogen chemistry that affects climate and human health.

  19. Multiple solutions and numerical analysis to the dynamic and stationary models coupling a delayed energy balance model involving latent heat and discontinuous albedo with a deep ocean.

    PubMed

    Díaz, J I; Hidalgo, A; Tello, L

    2014-10-08

    We study a climatologically important interaction of two of the main components of the geophysical system by adding an energy balance model for the averaged atmospheric temperature as a dynamic boundary condition to a diagnostic ocean model having an additional spatial dimension. In this work, we give deeper insight than previous papers in the literature, mainly with respect to the 1990 pioneering model by Watts and Morantine. We take into consideration the latent heat for the two-phase ocean as well as a possible delayed term. Non-uniqueness for the initial boundary value problem, uniqueness under a non-degeneracy condition and the existence of multiple stationary solutions are proved here. These multiplicity results suggest that an S-shaped bifurcation diagram should be expected to occur in this class of models generalizing previous energy balance models. The numerical method applied to the model is based on a finite volume scheme with nonlinear weighted essentially non-oscillatory reconstruction and Runge-Kutta total variation diminishing for time integration.

  20. The phenotypic equilibrium of cancer cells: From average-level stability to path-wise convergence.

    PubMed

    Niu, Yuanling; Wang, Yue; Zhou, Da

    2015-12-07

    The phenotypic equilibrium, i.e., a heterogeneous population of cancer cells tending to a fixed equilibrium of phenotypic proportions, has received much attention in cancer biology very recently. In the previous literature, theoretical models were used to predict the experimental phenomena of the phenotypic equilibrium, which were often explained through different concepts of stability of the models. Here we present a stochastic multi-phenotype branching model by integrating the conventional cellular hierarchy with phenotypic plasticity mechanisms of cancer cells. Based on our model, it is shown that: (i) our model can serve as a framework to unify the previous models for the phenotypic equilibrium, and it thereby harmonizes the different kinds of average-level stabilities proposed in these models; and (ii) path-wise convergence of our model provides a deeper understanding of the phenotypic equilibrium from a stochastic point of view. That is, the emergence of the phenotypic equilibrium is rooted in the stochastic nature of (almost) every sample path; the average-level stability simply follows from it by averaging stochastic samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
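
    A minimal sketch of the average-level behavior described above (an illustration, not the paper's branching process): with fixed switching probabilities, the mean-field phenotype proportions converge to the same equilibrium from any starting mix. The transition matrix is a placeholder.

    ```python
    import numpy as np

    # Per-generation phenotype transition probabilities (rows sum to 1; placeholders).
    P = np.array([[0.90, 0.08, 0.02],
                  [0.05, 0.90, 0.05],
                  [0.02, 0.08, 0.90]])

    for start in (np.array([1.0, 0.0, 0.0]), np.array([0.1, 0.1, 0.8])):
        x = start
        for _ in range(200):
            x = x @ P           # mean-field update of phenotype proportions
        print(np.round(x, 3))   # both runs land on the same equilibrium
    ```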

  1. Multiple solutions and numerical analysis to the dynamic and stationary models coupling a delayed energy balance model involving latent heat and discontinuous albedo with a deep ocean

    PubMed Central

    Díaz, J. I.; Hidalgo, A.; Tello, L.

    2014-01-01

    We study a climatologically important interaction of two of the main components of the geophysical system by adding an energy balance model for the averaged atmospheric temperature as dynamic boundary condition to a diagnostic ocean model having an additional spatial dimension. In this work, we give deeper insight than previous papers in the literature, mainly with respect to the 1990 pioneering model by Watts and Morantine. We are taking into consideration the latent heat for the two phase ocean as well as a possible delayed term. Non-uniqueness for the initial boundary value problem, uniqueness under a non-degeneracy condition and the existence of multiple stationary solutions are proved here. These multiplicity results suggest that an S-shaped bifurcation diagram should be expected to occur in this class of models generalizing previous energy balance models. The numerical method applied to the model is based on a finite volume scheme with nonlinear weighted essentially non-oscillatory reconstruction and Runge–Kutta total variation diminishing for time integration. PMID:25294969

  2. A MELD-based model to determine risk of mortality among patients with acute variceal bleeding.

    PubMed

    Reverter, Enric; Tandon, Puneeta; Augustin, Salvador; Turon, Fanny; Casu, Stefania; Bastiampillai, Ravin; Keough, Adam; Llop, Elba; González, Antonio; Seijo, Susana; Berzigotti, Annalisa; Ma, Mang; Genescà, Joan; Bosch, Jaume; García-Pagán, Joan Carles; Abraldes, Juan G

    2014-02-01

    Patients with cirrhosis with acute variceal bleeding (AVB) have high mortality rates (15%-20%). Previously described models are seldom used to determine prognoses of these patients, partially because they have not been validated externally and because they include subjective variables, such as bleeding during endoscopy and Child-Pugh score, which are evaluated inconsistently. We aimed to improve determination of risk for patients with AVB. We analyzed data collected from 178 patients with cirrhosis (Child-Pugh scores of A, B, and C: 15%, 57%, and 28%, respectively) and esophageal AVB who received standard therapy from 2007 through 2010. We tested the performance (discrimination and calibration) of previously described models, including the model for end-stage liver disease (MELD), and developed a new MELD calibration to predict the mortality of patients within 6 weeks of presentation with AVB. MELD-based predictions were validated in cohorts of patients from Canada (n = 240) and Spain (n = 221). Among study subjects, the 6-week mortality rate was 16%. MELD was the best model in terms of discrimination; it was recalibrated to predict the 6-week mortality rate with logistic regression (logit = -5.312 + 0.207 × MELD; bootstrapped R² = 0.3295). MELD values of 19 or greater predicted 20% or greater mortality, whereas MELD scores less than 11 predicted less than 5% mortality. The model performed well for patients from Canada at all risk levels. In the Spanish validation set, in which all patients were treated with banding ligation, MELD predictions were accurate up to the 20% risk threshold. We developed a MELD-based model that accurately predicts mortality among patients with AVB, based on objective variables available at admission. This model could be useful to evaluate the efficacy of new therapies and stratify patients in randomized trials. Copyright © 2014 AGA Institute. Published by Elsevier Inc. All rights reserved.
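
    The recalibrated model reported in the abstract maps directly to a one-line probability calculation; a minimal sketch using the published coefficients:

    ```python
    import math

    def mortality_6wk(meld):
        """Predicted 6-week mortality after AVB from the recalibrated model:
        logit(p) = -5.312 + 0.207 * MELD (coefficients as reported above)."""
        logit = -5.312 + 0.207 * meld
        return 1.0 / (1.0 + math.exp(-logit))

    for meld in (11, 19, 30):
        print(meld, round(mortality_6wk(meld), 3))
    # MELD 19 gives roughly 0.20, and MELD below 11 stays under 0.05,
    # matching the thresholds quoted in the abstract.
    ```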

  3. Investigating a continuous shear strain function for depth-dependent properties of native and tissue engineering cartilage using pixel-size data.

    PubMed

    Motavalli, Mostafa; Whitney, G Adam; Dennis, James E; Mansour, Joseph M

    2013-12-01

    A previously developed novel imaging technique for determining the depth-dependent properties of cartilage in simple shear is implemented. Shear displacement is determined from images of deformed lines photobleached on a sample, and shear strain is obtained from the derivative of the displacement. We investigated the feasibility of an alternative systematic approach to numerical differentiation for computing the shear strain that is based on fitting a continuous function to the shear displacement. Three models for a continuous shear displacement function are evaluated: polynomials, cubic splines, and non-parametric locally weighted scatter plot curves. Four independent approaches are then applied to identify the best-fit model and the accuracy of the first derivative. One approach is based on the Akaike Information Criterion and the Bayesian Information Criterion. The second is based on a method developed to smooth and differentiate digitized data from human motion. The third method is based on photobleaching a predefined circular area with a specific radius. Finally, we integrate the shear strain and compare it with the total shear deflection of the sample measured experimentally. Results show that 6th and 7th order polynomials are the best models for the shear displacement and its first derivative. In addition, failure of tissue-engineered cartilage, consistent with previous results, demonstrates the qualitative value of this imaging approach. © 2013 Elsevier Ltd. All rights reserved.
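
    A minimal sketch of the winning approach (with synthetic data standing in for the study's measurements): fit a 6th-order polynomial to the photobleached-line displacement profile and differentiate it analytically to get shear strain, then check the integral against the total deflection.

    ```python
    import numpy as np

    # Depth through the sample and measured shear displacement (synthetic data).
    depth = np.linspace(0.0, 1.0, 50)                    # normalized depth
    disp = 0.1 * depth ** 2 + 0.02 * np.sin(6 * depth)   # displacement profile
    disp += np.random.default_rng(1).normal(0, 1e-3, depth.size)  # imaging noise

    # 6th-order polynomial fit, as identified best by the model-selection tests.
    coeffs = np.polyfit(depth, disp, 6)
    strain = np.polyval(np.polyder(coeffs), depth)       # strain = d(disp)/d(depth)

    # Consistency check used in the paper: integrating strain over depth
    # should recover the total shear deflection of the sample.
    integral = np.sum((strain[1:] + strain[:-1]) / 2 * np.diff(depth))
    print(integral, disp[-1] - disp[0])
    ```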

  4. Application of Molecular Interaction Volume Model for Phase Equilibrium of Sn-Based Binary System in Vacuum Distillation

    NASA Astrophysics Data System (ADS)

    Kong, Lingxin; Yang, Bin; Xu, Baoqiang; Li, Yifu

    2014-09-01

    Based on the molecular interaction volume model (MIVM), the activities of the components of Sn-Sb, Sb-Bi, Sn-Zn, Sn-Cu, and Sn-Ag alloys were predicted. The predicted values are in good agreement with the experimental data, which indicates that the MIVM offers good stability and reliability owing to its sound physical basis. A significant advantage of the MIVM lies in its ability to predict the thermodynamic properties of liquid alloys using only two parameters. The phase equilibria of Sn-Sb and Sn-Bi alloys were calculated from the properties of the pure components and the activity coefficients, which indicates that Sn-Sb and Sn-Bi alloys can be separated thoroughly by vacuum distillation. This study extends previous investigations and provides an effective and convenient model on which to base refining simulations for Sn-based alloys.

  5. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  6. A dynamic model of reasoning and memory.

    PubMed

    Hawkins, Guy E; Hayes, Brett K; Heit, Evan

    2016-02-01

    Previous models of category-based induction have neglected how the process of induction unfolds over time. We conceive of induction as a dynamic process and provide the first fine-grained examination of the distribution of response times observed in inductive reasoning. We used these data to develop and empirically test the first major quantitative modeling scheme that simultaneously accounts for inductive decisions and their time course. The model assumes that knowledge of similarity relations among novel test probes and items stored in memory drive an accumulation-to-bound sequential sampling process: Test probes with high similarity to studied exemplars are more likely to trigger a generalization response, and more rapidly, than items with low exemplar similarity. We contrast data and model predictions for inductive decisions with a recognition memory task using a common stimulus set. Hierarchical Bayesian analyses across 2 experiments demonstrated that inductive reasoning and recognition memory primarily differ in the threshold to trigger a decision: Observers required less evidence to make a property generalization judgment (induction) than an identity statement about a previously studied item (recognition). Experiment 1 and a condition emphasizing decision speed in Experiment 2 also found evidence that inductive decisions use lower quality similarity-based information than recognition. The findings suggest that induction might represent a less cautious form of recognition. We conclude that sequential sampling models grounded in exemplar-based similarity, combined with hierarchical Bayesian analysis, provide a more fine-grained and informative analysis of the processes involved in inductive reasoning than is possible solely through examination of choice data. PsycINFO Database Record (c) 2016 APA, all rights reserved.
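
    A minimal sketch of the accumulation-to-bound idea in the model (a generic random-walk illustration, not the fitted architecture): similarity-driven drift accumulates until one of two bounds is crossed, and a lower threshold yields faster, less cautious decisions, which is the induction-versus-recognition contrast reported above.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def accumulate(similarity, threshold, noise=1.0, dt=0.01, max_t=10.0):
        """Evidence accumulation with drift set by exemplar similarity
        (positive favors 'generalize/old', negative favors 'reject/new')."""
        x, t = 0.0, 0.0
        while abs(x) < threshold and t < max_t:
            x += similarity * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return ("yes" if x > 0 else "no"), round(t, 2)

    # Induction modeled as a lower threshold than recognition: faster, less cautious.
    print(accumulate(similarity=0.8, threshold=1.0))   # induction-like
    print(accumulate(similarity=0.8, threshold=2.0))   # recognition-like
    ```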

  7. The assessment of the performance of covariance-based structural equation modeling and partial least square path modeling

    NASA Astrophysics Data System (ADS)

    Aimran, Ahmad Nazim; Ahmad, Sabri; Afthanorhan, Asyraf; Awang, Zainudin

    2017-05-01

    Structural equation modeling (SEM) is a second-generation statistical analysis technique developed for analyzing the inter-relationships among multiple variables in a model. Previous studies have shown that there seems to be at least an implicit agreement about the factors that should drive the choice between covariance-based structural equation modeling (CB-SEM) and partial least square path modeling (PLS-PM). PLS-PM appears to be the method preferred by previous scholars because of its less stringent assumptions and the desire to avoid the perceived difficulties of CB-SEM. Alongside this issue has been an increasing debate among researchers on the use of CB-SEM and PLS-PM in studies. The present study assesses the performance of CB-SEM and PLS-PM in a confirmatory setting, and the findings contribute to the body of knowledge on SEM. Maximum likelihood (ML) was chosen as the estimator for CB-SEM and was expected to be more powerful than PLS-PM. Based on a balanced experimental design, multivariate normal data with specified population parameters and sample sizes were generated using Pro-Active Monte Carlo simulation, and the data were analyzed using AMOS for CB-SEM and SmartPLS for PLS-PM. The Comparative Bias Index (CBI), construct relationships, average variance extracted (AVE), composite reliability (CR), and the Fornell-Larcker criterion were used to study the consequences of each estimator. The findings conclude that CB-SEM performed notably better than PLS-PM in estimation for large sample sizes (100 and above), particularly in terms of estimation accuracy and consistency.

  8. Update on Small Modular Reactors Dynamic System Modeling Tool: Web Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hale, Richard Edward; Cetiner, Sacit M.; Fugate, David L.

    Previous reports focused on the development of component and system models as well as end-to-end system models using Modelica and Dymola for two advanced reactor architectures: (1) Advanced Liquid Metal Reactor and (2) fluoride high-temperature reactor (FHR). The focus of this report is the release of the first beta version of the web-based application for model use and collaboration, as well as an update on the FHR model. The web-based application allows novice users to configure end-to-end system models from preconfigured choices to investigate the instrumentation and controls implications of these designs and allows for the collaborative development of individual component models that can be benchmarked against test systems for potential inclusion in the model library. A description of this application is provided along with examples of its use and a listing and discussion of all the models that currently exist in the library.

  9. Of goals and habits: age-related and individual differences in goal-directed decision-making.

    PubMed

    Eppinger, Ben; Walter, Maik; Heekeren, Hauke R; Li, Shu-Chen

    2013-01-01

    In this study we investigated age-related and individual differences in habitual (model-free) and goal-directed (model-based) decision-making. Specifically, we were interested in three questions. First, does age affect the balance between model-based and model-free decision mechanisms? Second, are these age-related changes due to age differences in working memory (WM) capacity? Third, can model-based behavior be affected by manipulating the distinctiveness of the reward value of choice options? To answer these questions we used a two-stage Markov decision task in combination with computational modeling to dissociate model-based and model-free decision mechanisms. To affect model-based behavior in this task we manipulated the distinctiveness of the reward probabilities of choice options. The results show age-related deficits in model-based decision-making, which are particularly pronounced if unexpected reward indicates the need for a shift in decision strategy. In this situation younger adults explore the task structure, whereas older adults show perseverative behavior. Consistent with previous findings, these results indicate that older adults have deficits in the representation and updating of expected reward value. We also observed substantial individual differences in model-based behavior. In younger adults high WM capacity is associated with greater model-based behavior and this effect is further elevated when reward probabilities are more distinct. However, in older adults we found no effect of WM capacity. Moreover, age differences in model-based behavior remained statistically significant, even after controlling for WM capacity. Thus, factors other than decline in WM, such as deficits in the integration of expected reward value into strategic decisions, may contribute to the observed impairments in model-based behavior in older adults.
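
    A minimal sketch of the standard way such two-stage task data are modeled (a generic hybrid form, not the authors' exact code): first-stage choice values mix model-free and model-based estimates with a weight w, which is the quantity reported to vary with age and WM capacity. All numbers below are placeholders.

    ```python
    import numpy as np

    def first_stage_values(q_mf, q_mb, w):
        """Hybrid valuation for the two first-stage options:
        w = 1 -> purely model-based, w = 0 -> purely model-free."""
        return w * q_mb + (1.0 - w) * q_mf

    def model_based_values(transitions, q_stage2):
        """Q_MB: expected value of each first-stage action given the
        transition matrix and the best second-stage action values."""
        best_stage2 = q_stage2.max(axis=1)   # best action in each second-stage state
        return transitions @ best_stage2

    transitions = np.array([[0.7, 0.3],      # action 0 -> states A/B
                            [0.3, 0.7]])     # action 1 -> states A/B
    q_stage2 = np.array([[0.5, 0.8],         # state A action values
                         [0.2, 0.4]])        # state B action values
    q_mf = np.array([0.6, 0.5])              # cached model-free values

    for w in (0.2, 0.8):                     # habitual vs goal-directed mix
        print(w, first_stage_values(q_mf, model_based_values(transitions, q_stage2), w))
    ```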

  10. Of goals and habits: age-related and individual differences in goal-directed decision-making

    PubMed Central

    Eppinger, Ben; Walter, Maik; Heekeren, Hauke R.; Li, Shu-Chen

    2013-01-01

    In this study we investigated age-related and individual differences in habitual (model-free) and goal-directed (model-based) decision-making. Specifically, we were interested in three questions. First, does age affect the balance between model-based and model-free decision mechanisms? Second, are these age-related changes due to age differences in working memory (WM) capacity? Third, can model-based behavior be affected by manipulating the distinctiveness of the reward value of choice options? To answer these questions we used a two-stage Markov decision task in combination with computational modeling to dissociate model-based and model-free decision mechanisms. To affect model-based behavior in this task we manipulated the distinctiveness of the reward probabilities of choice options. The results show age-related deficits in model-based decision-making, which are particularly pronounced if unexpected reward indicates the need for a shift in decision strategy. In this situation younger adults explore the task structure, whereas older adults show perseverative behavior. Consistent with previous findings, these results indicate that older adults have deficits in the representation and updating of expected reward value. We also observed substantial individual differences in model-based behavior. In younger adults high WM capacity is associated with greater model-based behavior and this effect is further elevated when reward probabilities are more distinct. However, in older adults we found no effect of WM capacity. Moreover, age differences in model-based behavior remained statistically significant, even after controlling for WM capacity. Thus, factors other than decline in WM, such as deficits in the integration of expected reward value into strategic decisions, may contribute to the observed impairments in model-based behavior in older adults. PMID:24399925

  11. A revised dislocation model of interseismic deformation of the Cascadia subduction zone

    USGS Publications Warehouse

    Wang, Kelin; Wells, Ray E.; Mazzotti, Stephane; Hyndman, Roy D.; Sagiya, Takeshi

    2003-01-01

    CAS3D‐2, a new three‐dimensional (3‐D) dislocation model, is developed to model interseismic deformation rates at the Cascadia subduction zone. The model is considered a snapshot description of the deformation field that changes with time. The effect of northward secular motion of the central and southern Cascadia forearc sliver is subtracted to obtain the effective convergence between the subducting plate and the forearc. Horizontal deformation data, including strain rates and surface velocities from Global Positioning System (GPS) measurements, provide primary geodetic constraints, but uplift rate data from tide gauges and leveling also provide important validations for the model. A locked zone, based on the results of previous thermal models constrained by heat flow observations, is located entirely offshore beneath the continental slope. Similar to previous dislocation models, an effective zone of downdip transition from locking to full slip is used, but the slip deficit rate is assumed to decrease exponentially with downdip distance. The exponential function resolves the problem of overpredicting coastal GPS velocities and underpredicting inland velocities by previous models that used a linear downdip transition. A wide effective transition zone (ETZ) partially accounts for stress relaxation in the mantle wedge that cannot be simulated by the elastic model. The pattern of coseismic deformation is expected to be different from that of interseismic deformation at present, 300 years after the last great subduction earthquake. The downdip transition from full rupture to no slip should take place over a much narrower zone.
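
    A minimal sketch of the downdip parameterization described above (illustrative constants, not the CAS3D-2 inputs): full slip deficit over the locked zone, decaying exponentially with downdip distance across the effective transition zone.

    ```python
    import numpy as np

    def slip_deficit_rate(x_km, v_plate_mm_yr, locked_width_km, decay_km):
        """Back-slip rate versus downdip distance x from the trench:
        full coupling over the locked zone, exponential decay beyond it
        (the linear-taper alternative overpredicts coastal velocities)."""
        x = np.asarray(x_km, dtype=float)
        return np.where(
            x <= locked_width_km,
            v_plate_mm_yr,
            v_plate_mm_yr * np.exp(-(x - locked_width_km) / decay_km),
        )

    x = np.linspace(0, 300, 7)
    print(slip_deficit_rate(x, v_plate_mm_yr=40.0,
                            locked_width_km=60.0, decay_km=60.0))
    ```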

  12. North Atlantic observations sharpen meridional overturning projections

    NASA Astrophysics Data System (ADS)

    Olson, R.; An, S.-I.; Fan, Y.; Evans, J. P.; Caesar, L.

    2018-06-01

    Atlantic Meridional Overturning Circulation (AMOC) projections are uncertain due to both model errors and internal climate variability. An AMOC slowdown projected by many climate models is likely to have considerable effects on many aspects of global and North Atlantic climate. Previous studies making probabilistic AMOC projections have broken new ground. However, they do not drift-correct or cross-validate the projections, do not fully account for internal variability, consider only a limited subset of models, and ignore the skill of models at representing the temporal North Atlantic dynamics. We improve on previous work by applying Bayesian Model Averaging to weight 13 Coupled Model Intercomparison Project phase 5 models by their skill at modeling the AMOC strength and its temporal dynamics, as approximated by the northern North-Atlantic temperature-based AMOC Index. We make drift-corrected projections accounting for structural model errors and for internal variability. Cross-validation experiments give approximately correct empirical coverage probabilities, which validates our method. Our results present more evidence that the AMOC has likely already started slowing down. While weighting considerably moderates and sharpens our projections, our results are at the low end of previously published estimates. We project mean AMOC changes between the periods 1960-1999 and 2060-2099 of -4.0 Sv and -6.8 Sv for the RCP4.5 and RCP8.5 emissions scenarios, respectively. The corresponding average 90% credible intervals for our weighted experiments are [-7.2, -1.2] and [-10.5, -3.7] Sv, respectively, for the two scenarios.
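
    A minimal sketch of the weighting step (a schematic Gaussian-skill version, not the paper's full statistical model): each model's weight reflects how well its AMOC index matches the observed index, and the projection is the weighted mixture. The numbers are placeholders, not CMIP5 output.

    ```python
    import numpy as np

    def bma_weights(model_index, observed_index, sigma):
        """Weights proportional to a Gaussian likelihood of the observed
        AMOC-index value under each model; sigma absorbs obs and model error."""
        ll = -0.5 * ((model_index - observed_index) / sigma) ** 2
        w = np.exp(ll - ll.max())      # subtract max for numerical stability
        return w / w.sum()

    # Placeholder numbers: modeled AMOC index and projected change (Sv).
    model_index = np.array([-0.1, 0.4, 0.1, -0.3, 0.2])
    projected_change = np.array([-3.5, -8.0, -5.5, -2.0, -6.5])

    w = bma_weights(model_index, observed_index=0.15, sigma=0.2)
    print(np.round(w, 3), (w * projected_change).sum())
    ```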

  13. Manipulative interplay of two adozelesin molecules with d(ATTAAT)₂achieving ligand-stacked Watson-Crick and Hoogsteen base-paired duplex adducts.

    PubMed

    Hopton, Suzanne R; Thompson, Andrew S

    2011-05-17

    Previous structural studies of the cyclopropapyrroloindole (CPI) antitumor antibiotics have shown that these ligands bind covalently edge-on into the minor groove of double-stranded DNA. Reversible covalent modification of the DNA via N3 of adenine occurs in a sequence-specific fashion. Early nuclear magnetic resonance and molecular modeling studies with both mono- and bis-alkylating ligands indicated that the ligands fit tightly within the minor groove, causing little distortion of the helix. In this study, we propose a new binding model for several of the CPI-based analogues, in which the aromatic secondary rings form π-stacked complexes within the minor groove. One of the adducts, formed with adozelesin and the d(ATTAAT)₂ sequence, also demonstrates the ability of these ligands to manipulate the DNA of the binding site, resulting in a Hoogsteen base-paired adduct. Although this type of base pairing has been previously observed with the bisfunctional CPI analogue bizelesin, this is the first time that such an observation has been made with a monoalkylating nondimeric analogue. Together, these results provide a new model for the design of CPI-based antitumor antibiotics, which also has a significant bearing on other structurally related and structurally unrelated minor groove-binding ligands. They indicate the dynamic nature of ligand-DNA interactions, demonstrating both DNA conformational flexibility and the ability of two DNA-bound ligands to interact to form stable covalent modified complexes.

  14. Experimental evidence of symmetry-breaking supercritical transition in pipe flow of shear-thinning fluids

    NASA Astrophysics Data System (ADS)

    Wen, Chaofan; Poole, Robert J.; Willis, Ashley P.; Dennis, David J. C.

    2017-03-01

    Experimental results reveal that the asymmetric flow of shear-thinning fluid through a cylindrical pipe, which was previously associated with the laminar-turbulent transition process, appears to have the characteristics of a nonhysteretic, supercritical instability of the laminar base state. Contrary to what was previously believed, classical transition is found to be responsible for returning symmetry to the flow. An absence of evidence of the instability in simulations (either linear or nonlinear) suggests that an element of physics is lacking in the commonly used rheological model for inelastic shear-thinning fluids. These unexpected discoveries raise new questions regarding the stability of these practically important fluids and how they can be successfully modeled.

  15. Multiscale modelling and analysis of collective decision making in swarm robotics.

    PubMed

    Vigelius, Matthias; Meyer, Bernd; Pascoe, Geoffrey

    2014-01-01

    We present a unified approach to describing certain types of collective decision making in swarm robotics that bridges from a microscopic individual-based description to aggregate properties. Our approach encompasses robot swarm experiments, microscopic and probabilistic macroscopic-discrete simulations as well as an analytic mathematical model. Following up on previous work, we identify the symmetry parameter, a measure of the progress of the swarm towards a decision, as a fundamental integrated swarm property and formulate its time evolution as a continuous-time Markov process. Contrary to previous work, which justified this approach only empirically and a posteriori, we justify it from first principles and derive hard limits on the parameter regime in which it is applicable.
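
    The continuous-time Markov formulation lends itself to a compact event-driven simulation. The sketch below is a hypothetical Gillespie-style illustration: robots switch between two options at state-dependent rates (the majority-biased rate law is an assumption chosen for illustration, not the paper's derived model), and the symmetry parameter tracks the swarm's progress towards a decision.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 100          # swarm size
n_a = 50         # robots currently committed to option A
t, t_end = 0.0, 50.0
trajectory = [(t, (2 * n_a - N) / N)]  # symmetry parameter s in [-1, 1]

while t < t_end and 0 < n_a < N:
    # Assumed recruitment-style rates: a robot switches option at a rate
    # proportional to the squared fraction of the opposing group, biasing
    # the swarm toward the current majority.
    rate_to_a = (N - n_a) * (n_a / N) ** 2
    rate_to_b = n_a * ((N - n_a) / N) ** 2
    total = rate_to_a + rate_to_b
    t += rng.exponential(1.0 / total)   # waiting time to next switch event
    if rng.random() < rate_to_a / total:
        n_a += 1
    else:
        n_a -= 1
    trajectory.append((t, (2 * n_a - N) / N))

print(f"final symmetry parameter after {t:.1f}s: {trajectory[-1][1]:+.2f}")
```

    A symmetry parameter of ±1 corresponds to full consensus; the Markov-process view makes the time evolution of this single aggregate quantity analytically tractable, which is the approach the abstract describes justifying from first principles.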

  16. Information security system based on virtual-optics imaging methodology and public key infrastructure

    NASA Astrophysics Data System (ADS)

    Peng, Xiang; Zhang, Peng; Cai, Lilong

    In this paper, we present a virtual-optics-based information security system model with the aid of public-key-infrastructure (PKI) techniques. The proposed model employs a hybrid architecture in which our previously published encryption algorithm based on virtual-optics imaging methodology (VOIM) enciphers and deciphers the data, while an asymmetric algorithm, for example RSA, enciphers and deciphers the session key(s). For an asymmetric system, given an encryption key, it is computationally infeasible to determine the decryption key, and vice versa. The whole information security model runs under the framework of PKI, which is based on public-key cryptography and digital signatures. This PKI-based VOIM security approach provides additional features such as confidentiality, authentication, and integrity for data encryption in a networked environment.
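
    The hybrid architecture follows a standard pattern: a symmetric cipher protects the bulk data, while the asymmetric algorithm protects only the short session key. A minimal sketch using Python's `cryptography` package is shown below; Fernet stands in for the paper's VOIM cipher purely for illustration, which is an assumption, not the paper's implementation.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Receiver's asymmetric key pair (the PKI would certify the public key).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt bulk data with a fresh symmetric session key...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"sensitive image data")

# ...then encrypt only the session key with the receiver's RSA public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
encrypted_session_key = public_key.encrypt(session_key, oaep)

# Receiver: recover the session key with the RSA private key, then
# decrypt the bulk data with the symmetric cipher.
recovered_key = private_key.decrypt(encrypted_session_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"sensitive image data"
```

    Encrypting only the session key asymmetrically keeps the slow public-key operation constant-cost regardless of data size, which is why hybrid schemes like the one described are the norm in networked settings.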

  17. Can metric-based approaches really improve multi-model climate projections? A perfect model framework applied to summer temperature change in France.

    NASA Astrophysics Data System (ADS)

    Boé, Julien; Terray, Laurent

    2014-05-01

    Ensemble approaches for climate change projections have become ubiquitous. Because of large model-to-model variations and a general lack of rationale for choosing one climate model over others, it is widely accepted that future climate change and its impacts should not be estimated from a single climate model. As a default approach, the multi-model ensemble mean (MMEM) is generally considered to provide the best estimate of climate change signals. The MMEM approach rests on the implicit hypothesis that all models provide equally credible projections of future climate change. This hypothesis is unlikely to be true, and ideally one would want to give more weight to more realistic models. A major issue with this alternative approach lies in assessing the relative credibility of future climate projections from different climate models, as they can only be evaluated against present-day observations: which present-day metric(s) should be used to decide which models are "good" and which are "bad" in the future climate? Once a supposedly informative metric has been found, other issues arise. What is the best statistical method to combine multiple model results while taking into account their relative credibility as measured by a given metric? How can one be sure in the end that the metric-based estimate of future climate change is not in fact less realistic than the MMEM? It is impossible to provide strict answers to those questions in the climate change context. Yet, in this presentation, we propose a methodological approach based on a perfect model framework that could bring some useful elements of answer to the questions previously mentioned. The basic idea is to take a random climate model in the ensemble and treat it as if it were the truth (the results of this model, in both past and future climate, are called "synthetic observations"). Then, all the other members of the multi-model ensemble are used, through a metric-based approach, to derive a posterior estimate of climate change based on the synthetic observation of the metric. Finally, the posterior estimate can be compared to the synthetic observation of future climate change to evaluate the skill of the method. The main objective of this presentation is to describe and apply this perfect model framework to test different methodological issues associated with non-uniform model weighting and similar metric-based approaches. The methodology presented is general but will be applied to the specific case of summer temperature change in France, for which previous works have suggested potentially useful metrics associated with soil-atmosphere and cloud-temperature interactions. The relative performances of different simple statistical approaches to combining multiple model results based on metrics will be tested. The impact of ensemble size, observational errors, internal variability, and model similarity will be characterized. The potential improvement of metric-based approaches over the MMEM in terms of errors and uncertainties will be quantified.
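
    The perfect model framework translates directly into a leave-one-out loop. The sketch below is a hypothetical illustration on synthetic data: each model in turn serves as "truth", the remaining models are weighted by how closely their present-day metric matches the synthetic observation (the Gaussian kernel and all numbers are assumed choices, not the presentation's method), and the weighted estimate is scored against the MMEM.

```python
import numpy as np

rng = np.random.default_rng(2)
n_models = 20

# Assumed: a present-day metric (e.g., a soil-atmosphere coupling index)
# partially correlated with each model's future summer warming (degC).
metric = rng.normal(0.0, 1.0, n_models)
future_change = 3.0 + 0.8 * metric + rng.normal(0.0, 0.5, n_models)

kernel_width = 0.5  # assumed tolerance when comparing metric values
err_weighted, err_mmem = [], []

for truth in range(n_models):
    others = np.delete(np.arange(n_models), truth)
    # Weight the remaining models by closeness to the synthetic observation.
    w = np.exp(-0.5 * ((metric[others] - metric[truth]) / kernel_width) ** 2)
    w /= w.sum()
    estimate = np.sum(w * future_change[others])
    err_weighted.append(abs(estimate - future_change[truth]))
    err_mmem.append(abs(future_change[others].mean() - future_change[truth]))

print(f"mean error, metric-weighted: {np.mean(err_weighted):.2f} degC")
print(f"mean error, MMEM:            {np.mean(err_mmem):.2f} degC")
```

    Because the "truth" here is a model with a known future, the loop gives an objective score for the weighting scheme, which is exactly the leverage the perfect model framework provides over evaluation against real observations.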

  18. The importance of expressing antimicrobial agents on water basis in growth/no growth interface models: a case study for Zygosaccharomyces bailii.

    PubMed

    Dang, T D T; Vermeulen, A; Mertens, L; Geeraerd, A H; Van Impe, J F; Devlieghere, F

    2011-01-31

    In a previous study on Zygosaccharomyces bailii, three growth/no growth models were developed, predicting the growth probability of the yeast at different conditions typical for acidified foods (Dang, T.D.T., Mertens, L., Vermeulen, A., Geeraerd, A.H., Van Impe, J.F., Debevere, J., Devlieghere, F., 2010. Modeling the growth/no growth boundary of Z. bailii in acidic conditions: A contribution to the alternative method to preserve foods without using chemical preservatives. International Journal of Food Microbiology 137, 1-12). In these broth-based models, the variables were pH, water activity and acetic acid, with the acetic acid concentration expressed in volume % on the total culture medium (i.e., broth). As a continuation of the previous study, validation experiments were performed for 15 selected combinations of intrinsic factors to assess the performance of the model at 22°C (60 days) in a real food product (ketchup). Although the majority of experimental results were consistent, some remarkable deviations between prediction and validation were observed, e.g., Z. bailii growth occurred in conditions where almost no growth had been predicted. A thorough investigation revealed that the difference between the two ways of expressing acetic acid concentration (i.e., on a broth basis and on a water basis) is rather significant, particularly for media containing high amounts of dry matter. Consequently, the use of broth-based concentrations in the models was not appropriate. Three models with acetic acid concentration expressed on a water basis were established, and their predictions matched the validation results well, confirming a "systematic error" in the broth-based models. In practice, quantities of antimicrobial agents are often calculated based on the water content of food products. Hence, to assure reliable predictions and facilitate the application of models (developed from lab media with high dry matter contents), it is important to express antimicrobial agents' concentrations on a common basis: the water content. A review of other published growth/no growth models in the literature confirms this finding: the stress factors' concentrations in these models are likewise expressed on a broth basis. Copyright © 2010 Elsevier B.V. All rights reserved.
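
    The distinction at the heart of this study is a simple unit conversion. The sketch below illustrates why it matters for products with high dry matter; the volume-fraction relation and the example numbers are assumptions for illustration, not values from the study.

```python
# Re-express a concentration given in volume % of the total medium (broth
# basis) as volume % of the water content only (water basis). The simple
# division by the water fraction is an illustrative approximation.

def broth_to_water_basis(conc_broth_pct: float, water_fraction: float) -> float:
    """Water-basis concentration (vol %) from a broth-basis value (vol %)
    and the water fraction of the medium (0-1)."""
    return conc_broth_pct / water_fraction

# In a dilute broth the two bases nearly coincide; in a high-dry-matter
# product like ketchup the effective (water-basis) concentration is higher.
print(broth_to_water_basis(1.0, 1.00))  # 1.00 % - dilute lab broth
print(broth_to_water_basis(1.0, 0.70))  # ~1.43 % - high-dry-matter product
```

    The same broth-basis dose therefore exerts a noticeably different antimicrobial stress depending on the water content, which is the "systematic error" the water-basis models correct.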

  19. Elucidating the effects of river fluctuation on microbial removal during riverbank filtration

    NASA Astrophysics Data System (ADS)

    Derx, J.; Sommer, R.; Farnleitner, A. H.; Blaschke, A. P.

    2010-12-01

    The transfer of microbial pathogens from surface or waste water can have adverse effects on groundwater quality at riverbank filtration sites. Previous studies on groundwater protection in sandy unconfined aquifers focusing on virus transport and health-based water quality targets, such as those done in the Netherlands, revealed larger protection zones than zones limited by 60 days of groundwater travel time. The 60 days of travel time are the design criterion in Austria for drinking water protection. However, in gravel aquifers, microbial transport processes differ significantly from those in sandy aquifers. Preferential flow and aquifer heterogeneities dominate microbial transport in sandy gravels and gravel aquifers. Microbial mass transfer and dual domain transport models were used previously to reproduce these effects. Furthermore, microbial transport has mainly been studied in the field during steady-state groundwater flow situations. Hence, previous microbial transport models have seldom accounted for transient groundwater flow conditions. These dynamic flow conditions could have immense effects on the fate of microorganisms because of the variations in flow velocities, which dominate microbial transport. In the current study, we used a variably saturated, three-dimensional groundwater flow and transport model coupled to a hydrodynamic surface water model at a riverbank filtration site. With this model, we estimated the required groundwater protection zones based on 8 log10 viral reductions and compared them to the 60-day travel time zones. The 8 log10 removal steps were based on a preliminary microbial risk assessment scheme for enteroviruses at the riverbank infiltration sites. The groundwater protection zones were estimated for a set of well withdrawal rates, river fluctuation ranges and frequencies, river gradients and bank slopes. The river flow dynamics and the morphology of the riverbed and banks are potentially important factors affecting microbial transport processes during riverbank filtration that were previously not accounted for. Acknowledgments: We would like to thank the Austrian Science Funds FWF for financial support as part of the Doctoral program DK-plus W1219-N22 on Water Resource Systems and the Vienna Waterworks (MA31) as part of the GWRS-Vienna project. We would also like to thank the MA39 (IFUM) for helping at the preliminary risk assessment.
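
    The comparison between removal-based and travel-time-based protection zones can be illustrated with a back-of-the-envelope calculation, assuming first-order (log-linear) virus removal; the removal rates below are invented for illustration and are not values from the study.

```python
# Travel time needed to achieve a target log10 virus reduction under
# first-order removal, compared against the fixed 60-day criterion.
# Removal rates are illustrative assumptions for two aquifer types.

def travel_time_for_removal(log10_target: float, removal_rate: float) -> float:
    """Days of travel time for `log10_target` reduction, assuming removal
    proceeds at `removal_rate` log10 units per day."""
    return log10_target / removal_rate

for aquifer, rate in [("sandy aquifer", 0.20), ("gravel aquifer", 0.05)]:
    days = travel_time_for_removal(8.0, rate)
    verdict = "within" if days <= 60 else "exceeds"
    print(f"{aquifer}: {days:.0f} days needed, {verdict} the 60-day zone")
```

    The point of the comparison is that where removal rates are low, as in heterogeneous gravel aquifers with preferential flow, an 8 log10 target can demand far more than 60 days of travel time, so the fixed criterion may under-protect.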

  20. A Direct Method to Extract Transient Sub-Gap Density of State (DOS) Based on Dual Gate Pulse Spectroscopy

    NASA Astrophysics Data System (ADS)

    Dai, Mingzhi; Khan, Karim; Zhang, Shengnan; Jiang, Kemin; Zhang, Xingye; Wang, Weiliang; Liang, Lingyan; Cao, Hongtao; Wang, Pengjun; Wang, Peng; Miao, Lijing; Qin, Haiming; Jiang, Jun; Xue, Lixin; Chu, Junhao

    2016-06-01

    Sub-gap density of states (DOS) is a key parameter affecting the electrical characteristics of semiconductor-material-based transistors in integrated circuits. Previous spectroscopic methodologies for DOS extraction include static methods, temperature-dependent spectroscopy and photonic spectroscopy. However, they can involve many assumptions and calculations, or introduce temperature or optical perturbations into the intrinsic DOS distribution across the bandgap of the materials. A direct and simpler method is developed to extract the DOS distribution from amorphous oxide-based thin-film transistors (TFTs) based on dual gate pulse spectroscopy (GPS), introducing fewer extrinsic factors such as temperature and less laborious numerical analysis than conventional methods. From this direct measurement, the sub-gap DOS distribution shows a peak value at the band-gap edge, on the order of 10¹⁷-10²¹/(cm³·eV), which is consistent with previous results. The results can be described with a model involving both Gaussian and exponential components. This tool is useful as a diagnostic for the electrical properties of oxide materials, and this study will benefit the modeling and improvement of their electrical properties and thus broaden their applications.
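
    The model mentioned at the end of the abstract, with Gaussian and exponential components, has a standard functional form: an exponential band tail plus a Gaussian deep-state distribution. The sketch below evaluates such a model with assumed, illustrative parameter values, not fitted values from the measurement.

```python
import numpy as np

# Sub-gap DOS as exponential band tail + Gaussian deep states. All
# parameter values are illustrative assumptions.

def subgap_dos(E, N_tail=1e20, E_tail=0.1, N_deep=1e17, E_deep=1.5, sigma=0.2):
    """Sub-gap DOS in states/(cm^3 eV); E is the energy below the
    conduction band edge in eV."""
    tail = N_tail * np.exp(-E / E_tail)                         # band tail
    deep = N_deep * np.exp(-0.5 * ((E - E_deep) / sigma) ** 2)  # deep states
    return tail + deep

energies = np.linspace(0.0, 2.0, 5)
for E, g in zip(energies, subgap_dos(energies)):
    print(f"E = {E:.1f} eV: DOS = {g:.2e} /(cm^3 eV)")
```

    The exponential term dominates near the band edge, reproducing the peak at the edge noted in the measurement, while the Gaussian captures deep states further into the gap.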
