Sample records for process-based models suggest

  1. Model-Based Reasoning in Humans Becomes Automatic with Training.

    PubMed

    Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J

    2015-09-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load, a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.
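
As a rough illustration of the model-free/model-based distinction this abstract builds on, the following toy sketch (a hypothetical reversal task, not the authors' paradigm) shows why cached, model-free values are fast but inflexible, while planning over a learnt world model adapts immediately:

```python
# Toy reversal task (hypothetical; not the authors' task): two actions lead
# deterministically to two states with different rewards. When the rewards
# reverse, a model-based agent that plans over the known transitions adapts
# immediately, while a model-free agent keeps choosing from stale cached values.
TRANSITIONS = {"a": "s1", "b": "s2"}    # action -> outcome state
rewards = {"s1": 1.0, "s2": 0.0}

# Model-free: cached action values updated by temporal-difference learning.
q = {"a": 0.0, "b": 0.0}
alpha = 0.5
for _ in range(20):
    act = max(q, key=q.get)             # greedy choice on cached values
    r = rewards[TRANSITIONS[act]]
    q[act] += alpha * (r - q[act])

# Model-based: evaluate each action by planning through the world model.
def plan(current_rewards):
    return max(TRANSITIONS, key=lambda a: current_rewards[TRANSITIONS[a]])

rewards = {"s1": 0.0, "s2": 1.0}        # reward contingencies reverse
model_based_choice = plan(rewards)      # adapts instantly to the reversal
model_free_choice = max(q, key=q.get)   # still the stale habit
```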

  2. Alterations in choice behavior by manipulations of world model.

    PubMed

    Green, C S; Benson, C; Kersten, D; Schrater, P

    2010-09-14

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching" (a consistent example of suboptimal choice behavior seen in humans) occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect, beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.
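
For context, probability matching is suboptimal under a stationary Bernoulli reward process, and that expected-reward gap is the baseline against which the authors' "matching can be rational" argument is set. A minimal simulation of the gap (illustrative probabilities, not the authors' task):

```python
import random

random.seed(1)

P_LEFT = 0.7  # true probability that the "left" option pays off (illustrative)

def simulate(strategy, n=10000):
    """Fraction of rewarded choices for a learner that knows the outcome
    probabilities and either maximizes or probability-matches."""
    hits = 0
    for _ in range(n):
        if strategy == "max":
            choice = "left"  # always pick the better option
        else:
            # probability matching: choose each side at its payoff rate
            choice = "left" if random.random() < P_LEFT else "right"
        rewarded_side = "left" if random.random() < P_LEFT else "right"
        hits += choice == rewarded_side
    return hits / n

max_rate = simulate("max")      # approaches P_LEFT = 0.70
match_rate = simulate("match")  # approaches 0.7**2 + 0.3**2 = 0.58
```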

  3. Alterations in choice behavior by manipulations of world model

    PubMed Central

    Green, C. S.; Benson, C.; Kersten, D.; Schrater, P.

    2010-01-01

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) “probability matching”—a consistent example of suboptimal choice behavior seen in humans—occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning. PMID:20805507

  4. Simple Process-Based Simulators for Generating Spatial Patterns of Habitat Loss and Fragmentation: A Review and Introduction to the G-RaFFe Model

    PubMed Central

    Pe'er, Guy; Zurita, Gustavo A.; Schober, Lucia; Bellocq, Maria I.; Strer, Maximilian; Müller, Michael; Pütz, Sandro

    2013-01-01

    Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns, including realistic ones typifying landscapes fragmented by anthropogenic activities. The model “G-RaFFe” generates roads and fields to reproduce the processes by which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables fields to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis that considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land uses, the latter reflecting land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining a better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature. PMID:23724108

  5. Simple process-based simulators for generating spatial patterns of habitat loss and fragmentation: a review and introduction to the G-RaFFe model.

    PubMed

    Pe'er, Guy; Zurita, Gustavo A; Schober, Lucia; Bellocq, Maria I; Strer, Maximilian; Müller, Michael; Pütz, Sandro

    2013-01-01

    Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns, including realistic ones typifying landscapes fragmented by anthropogenic activities. The model "G-RaFFe" generates roads and fields to reproduce the processes by which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables fields to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis that considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land uses, the latter reflecting land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining a better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature.

  6. Striatal and Hippocampal Entropy and Recognition Signals in Category Learning: Simultaneous Processes Revealed by Model-Based fMRI

    PubMed Central

    Davis, Tyler; Love, Bradley C.; Preston, Alison R.

    2012-01-01

    Category learning is a complex phenomenon that engages multiple cognitive processes, many of which occur simultaneously and unfold dynamically over time. For example, as people encounter objects in the world, they simultaneously engage processes to determine their fit with current knowledge structures, gather new information about the objects, and adjust their representations to support behavior in future encounters. Many techniques that are available to understand the neural basis of category learning assume that the multiple processes that subserve it can be neatly separated between different trials of an experiment. Model-based functional magnetic resonance imaging offers a promising tool to separate multiple, simultaneously occurring processes and bring the analysis of neuroimaging data more in line with category learning’s dynamic and multifaceted nature. We use model-based imaging to explore the neural basis of recognition and entropy signals in the medial temporal lobe and striatum that are engaged while participants learn to categorize novel stimuli. Consistent with theories suggesting a role for the anterior hippocampus and ventral striatum in motivated learning in response to uncertainty, we find that activation in both regions correlates with a model-based measure of entropy. Simultaneously, separate subregions of the hippocampus and striatum exhibit activation correlated with a model-based recognition strength measure. Our results suggest that model-based analyses are exceptionally useful for extracting information about cognitive processes from neuroimaging data. Models provide a basis for identifying the multiple neural processes that contribute to behavior, and neuroimaging data can provide a powerful test bed for constraining and testing model predictions. PMID:22746951

  7. Edgar Schein's Process versus Content Consultation Models.

    ERIC Educational Resources Information Center

    Rockwood, Gary F.

    1993-01-01

    Describes Schein's three models of consultation based on assumptions inherent in different helping styles: purchase of expertise and doctor-patient models, which focus on content of organization problems; and process consultation model, which focuses on how organizational problems are solved. Notes that Schein has suggested that consultants begin…

  8. A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Zhao, Haocen; Ye, Zhifeng

    2017-08-01

    Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability requirements. This paper gives an overview of a typical aero-engine control system and the development process for the key mechatronic devices used. Several essential aspects of modeling and simulation in this process are investigated. Considering the limitations of a single theoretic model, a feature-based co-modeling methodology is suggested to satisfy the design requirements and compensate for the diversity of component sub-models for these devices. As an example, a stepper-motor-controlled Fuel Metering Unit (FMU) is modeled in view of the components' physical features using two different software tools. An interface is suggested to integrate the single-discipline models into a synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model of the FMU has high accuracy and is clearly superior to a single model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology proves to be an effective technical measure in the development process of the device.

  9. A stylistic classification of Russian-language texts based on the random walk model

    NASA Astrophysics Data System (ADS)

    Kramarenko, A. A.; Nekrasov, K. A.; Filimonov, V. V.; Zhivoderov, A. A.; Amieva, A. A.

    2017-09-01

    A formal approach to text analysis based on the random walk model is suggested. The frequencies and reciprocal positions of the vowel letters are matched to a process of quasi-particle migration. A statistically significant difference in the migration parameters is found between texts of different functional styles, demonstrating that texts can be classified with the suggested method. Five groups of texts are singled out that can be distinguished from one another by the parameters of the quasi-particle migration process.
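
The abstract does not specify which migration parameters are used; as a loose sketch of the general idea, one could treat successive vowel positions in a text as the steps of a walker and use simple step statistics as stylistic features:

```python
# Hypothetical sketch: vowel positions as a walker's path. Mean and variance
# of the step lengths serve as stand-in "migration parameters"; the paper's
# actual statistics are not given in the abstract.
RUSSIAN_VOWELS = set("аеёиоуыэюя")

def vowel_step_stats(text):
    positions = [i for i, ch in enumerate(text.lower()) if ch in RUSSIAN_VOWELS]
    steps = [b - a for a, b in zip(positions, positions[1:])]
    mean = sum(steps) / len(steps)
    var = sum((s - mean) ** 2 for s in steps) / len(steps)
    return mean, var

mean, var = vowel_step_stats("съешь же ещё этих мягких французских булок")
```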

  10. Cognitive control predicts use of model-based reinforcement learning.

    PubMed

    Otto, A Ross; Skatova, Anya; Madlon-Kay, Seth; Daw, Nathaniel D

    2015-02-01

    Accounts of decision-making and its neural substrates have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental work suggests that this classic distinction between behaviorally and neurally dissociable systems for habitual and goal-directed (or, more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning (RL), called model-free and model-based RL, but the cognitive or computational processes by which one system may dominate the other in the control of behavior are a matter of ongoing investigation. To elucidate this question, we leverage the theoretical framework of cognitive control, demonstrating that individual differences in the utilization of goal-related contextual information (in the service of overcoming habitual, stimulus-driven responses) in established cognitive control paradigms predict model-based behavior in a separate, sequential choice task. The behavioral correspondence between cognitive control and model-based RL compellingly suggests that a common set of processes may underpin the two behaviors. In particular, computational mechanisms originally proposed to underlie controlled behavior may be applicable to understanding the interactions between model-based and model-free choice behavior.

  11. A Model of the Creative Process Based on Quantum Physics and Vedic Science.

    ERIC Educational Resources Information Center

    Rose, Laura Hall

    1988-01-01

    Using tenets from Vedic science and quantum physics, this model of the creative process suggests that the unified field of creation is pure consciousness, and that the development of the creative process within individuals mirrors the creative process within the universe. Rational and supra-rational creative thinking techniques are also described.…

  12. A Classification Model and an Open E-Learning System Based on Intuitionistic Fuzzy Sets for Instructional Design Concepts

    ERIC Educational Resources Information Center

    Güyer, Tolga; Aydogdu, Seyhmus

    2016-01-01

    This study suggests a classification model and an e-learning system based on this model for all instructional theories, approaches, models, strategies, methods, and technics being used in the process of instructional design that constitutes a direct or indirect resource for educational technology based on the theory of intuitionistic fuzzy sets…

  13. A dual-process perspective on fluency-based aesthetics: the pleasure-interest model of aesthetic liking.

    PubMed

    Graf, Laura K M; Landwehr, Jan R

    2015-11-01

    In this article, we develop an account of how aesthetic preferences can be formed as a result of two hierarchical, fluency-based processes. Our model suggests that processing performed immediately upon encountering an aesthetic object is stimulus driven, and aesthetic preferences that accrue from this processing reflect aesthetic evaluations of pleasure or displeasure. When sufficient processing motivation is provided by a perceiver's need for cognitive enrichment and/or the stimulus' processing affordance, elaborate perceiver-driven processing can emerge, which gives rise to fluency-based aesthetic evaluations of interest, boredom, or confusion. Because the positive outcomes in our model are pleasure and interest, we call it the Pleasure-Interest Model of Aesthetic Liking (PIA Model). Theoretically, this model integrates a dual-process perspective and ideas from lay epistemology into processing fluency theory, and it provides a parsimonious framework to embed and unite a wealth of aesthetic phenomena, including contradictory preference patterns for easy versus difficult-to-process aesthetic stimuli. © 2015 by the Society for Personality and Social Psychology, Inc.

  14. Devil is in the details: Using logic models to investigate program process.

    PubMed

    Peyton, David J; Scicchitano, Michael

    2017-12-01

    Theory-based logic models are commonly developed as part of requirements for grant funding. As a tool to communicate complex social programs, theory-based logic models are an effective form of visual communication. However, after initial development, theory-based logic models are often abandoned and remain in their initial form despite changes in the program process. This paper examines the potential benefits of committing time and resources to revising the initial theory-driven logic model and developing detailed logic models that describe key activities, to accurately reflect the program and assist in effective program management. The authors use a funded special-education teacher preparation program to exemplify the utility of drill-down logic models. The paper concludes with lessons learned from the iterative revision process and suggests how the process can lead to more flexible and calibrated program management. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Processing speed enhances model-based over model-free reinforcement learning in the presence of high working memory functioning

    PubMed Central

    Schad, Daniel J.; Jünger, Elisabeth; Sebold, Miriam; Garbusow, Maria; Bernhardt, Nadine; Javadi, Amir-Homayoun; Zimmermann, Ulrich S.; Smolka, Michael N.; Heinz, Andreas; Rapp, Michael A.; Huys, Quentin J. M.

    2014-01-01

    Theories of decision-making and its neural substrates have long assumed the existence of two distinct and competing valuation systems, variously described as goal-directed vs. habitual, or, more recently and based on statistical arguments, as model-free vs. model-based reinforcement-learning. Though both have been shown to control choices, the cognitive abilities associated with these systems are under ongoing investigation. Here we examine the link to cognitive abilities, and find that individual differences in processing speed covary with a shift from model-free to model-based choice control in the presence of above-average working memory function. This suggests shared cognitive and neural processes; provides a bridge between literatures on intelligence and valuation; and may guide the development of process models of different valuation components. Furthermore, it provides a rationale for individual differences in the tendency to deploy valuation systems, which may be important for understanding the manifold neuropsychiatric diseases associated with malfunctions of valuation. PMID:25566131

  16. Reason and reaction: the utility of a dual-focus, dual-processing perspective on promotion and prevention of adolescent health risk behaviour.

    PubMed

    Gibbons, Frederick X; Houlihan, Amy E; Gerrard, Meg

    2009-05-01

    A brief overview of theories of health behaviour that are based on the expectancy-value perspective is presented. This approach maintains that health behaviours are the result of a deliberative decision-making process that involves consideration of behavioural options along with anticipated outcomes associated with those options. It is argued that this perspective is effective at explaining and predicting many types of health behaviour, including health-promoting actions (e.g. UV protection, condom use, smoking cessation), but less effective at predicting risky health behaviours, such as unprotected, casual sex, drunk driving or binge drinking. These are behaviours that are less reasoned or premeditated - especially among adolescents. An argument is made for incorporating elements of dual-processing theories in an effort to improve the 'utility' of these models. Specifically, it is suggested that adolescent health behaviour involves both analytic and heuristic processing. Both types of processing are incorporated in the prototype-willingness (prototype) model, which is described in some detail. Studies of health behaviour based on the expectancy-value perspective (e.g. theory of reasoned action) are reviewed, along with studies based on the prototype model. These two sets of studies together suggest that the dual-processing perspective, in general, and the prototype model, in particular, add to the predictive validity of expectancy-value models for predicting adolescent health behaviour. Research and interventions that incorporate elements of dual-processing and elements of expectancy-value are more effective at explaining and changing adolescent health behaviour than are those based on expectancy-value theories alone.

  17. How "Does" the Comforting Process Work? An Empirical Test of an Appraisal-Based Model of Comforting

    ERIC Educational Resources Information Center

    Jones, Susanne M.; Wirtz, John G.

    2006-01-01

    Burleson and Goldsmith's (1998) comforting model suggests an appraisal-based mechanism through which comforting messages can bring about a positive change in emotional states. This study is a first empirical test of three causal linkages implied by the appraisal-based comforting model. Participants (N=258) talked about an upsetting event with a…

  18. Cognitive Control Predicts Use of Model-Based Reinforcement-Learning

    PubMed Central

    Otto, A. Ross; Skatova, Anya; Madlon-Kay, Seth; Daw, Nathaniel D.

    2015-01-01

    Accounts of decision-making and its neural substrates have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental work suggests that this classic distinction between behaviorally and neurally dissociable systems for habitual and goal-directed (or, more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning (RL), called model-free and model-based RL, but the cognitive or computational processes by which one system may dominate the other in the control of behavior are a matter of ongoing investigation. To elucidate this question, we leverage the theoretical framework of cognitive control, demonstrating that individual differences in the utilization of goal-related contextual information (in the service of overcoming habitual, stimulus-driven responses) in established cognitive control paradigms predict model-based behavior in a separate, sequential choice task. The behavioral correspondence between cognitive control and model-based RL compellingly suggests that a common set of processes may underpin the two behaviors. In particular, computational mechanisms originally proposed to underlie controlled behavior may be applicable to understanding the interactions between model-based and model-free choice behavior. PMID:25170791

  19. A Modeling Approach to Fiber Fracture in Melt Impregnation

    NASA Astrophysics Data System (ADS)

    Ren, Feng; Zhang, Cong; Yu, Yang; Xin, Chunling; Tang, Ke; He, Yadong

    2017-02-01

    The effect of process variables such as roving pulling speed, melt temperature and number of pins on fiber fracture during the processing of thermoplastic-based composites was investigated in this study. Melt impregnation was used to produce continuous glass-fiber-reinforced thermoplastic composites. Previous investigators have suggested a variety of models for melt impregnation, while comparatively little effort has been spent on modeling the fiber fracture caused by the viscous resin. Herein, a mathematical model of the impregnation process was developed to predict the fiber fracture rate, describing the experimental results with the Weibull intensity distribution function. The optimal parameters of the process were obtained by an orthogonal experiment. The results suggest that fiber fracture is caused by viscous shear stress on the fiber bundle in the melt impregnation mold when the fiber bundle is pulled.
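
The abstract names the Weibull distribution but gives no parameters; a generic two-parameter Weibull failure probability, with purely illustrative scale and shape values, reproduces the qualitative behavior described (fracture probability rising steeply with viscous shear stress):

```python
import math

def fracture_probability(stress, scale, shape):
    """Two-parameter Weibull failure probability for a fiber under a given
    shear stress. `scale` and `shape` are illustrative, not fitted values."""
    return 1.0 - math.exp(-((stress / scale) ** shape))

# Fracture probability rises steeply with viscous shear stress (e.g. from a
# faster pulling speed or a more viscous, i.e. cooler, melt).
low = fracture_probability(stress=20.0, scale=50.0, shape=2.0)   # ~0.15
high = fracture_probability(stress=60.0, scale=50.0, shape=2.0)  # ~0.76
```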

  20. Decisionmaking in practice: The dynamics of muddling through.

    PubMed

    Flach, John M; Feufel, Markus A; Reynolds, Peter L; Parker, Sarah Henrickson; Kellogg, Kathryn M

    2017-09-01

    An alternative to conventional models that treat decisions as open-loop independent choices is presented. The alternative model is based on observations of work situations, such as healthcare, where decisionmaking is more typically a closed-loop, dynamic, problem-solving process. The article suggests five important distinctions between the processes assumed by conventional models and the reality of decisionmaking in practice. It is suggested that the logic of abduction, in the form of an adaptive, muddling-through process, is more consistent with the realities of practice in domains such as healthcare. The practical implication is that the design goal should not be to improve consistency with normative models of rationality, but to tune the representations guiding the muddling process to increase functional perspicacity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Finding shared decisions in stakeholder networks: An agent-based approach

    NASA Astrophysics Data System (ADS)

    Le Pira, Michela; Inturri, Giuseppe; Ignaccolo, Matteo; Pluchino, Alessandro; Rapisarda, Andrea

    2017-01-01

    We address the problem of a participatory decision-making process in which a shared priority list of alternatives has to be obtained while avoiding inconsistent decisions. An agent-based model (ABM) is proposed to mimic this process in different social networks of stakeholders who interact according to an opinion dynamics model. Simulation results show the efficacy of interaction in finding a transitive and, above all, shared decision. These findings are in agreement with real participation experiences regarding transport planning decisions and can give useful suggestions on how to plan an effective participation process for sustainable policy-making based on opinion consensus.
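
The paper's ABM couples a specific opinion-dynamics rule with several network topologies, none of which are detailed in the abstract. As a minimal stand-in, a DeGroot-style pairwise-averaging sketch on a fully connected network shows how repeated interaction alone can drive stakeholders toward a shared, transitive priority list:

```python
import random

random.seed(2)

N_AGENTS, N_ALTERNATIVES = 20, 4

# Each stakeholder scores each alternative; repeated pairwise averaging
# (a DeGroot-style update, assumed here for illustration) drives consensus.
scores = [[random.random() for _ in range(N_ALTERNATIVES)]
          for _ in range(N_AGENTS)]

for _ in range(2000):
    i, j = random.sample(range(N_AGENTS), 2)
    for k in range(N_ALTERNATIVES):
        avg = (scores[i][k] + scores[j][k]) / 2
        scores[i][k] = scores[j][k] = avg

# Scoring alternatives (rather than swapping ranks) keeps each agent's
# ordering transitive; after interaction, every agent ranks them the same way.
rankings = [tuple(sorted(range(N_ALTERNATIVES), key=lambda k: -s[k]))
            for s in scores]
shared = len(set(rankings)) == 1
```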

  2. A Real Community Bridge: Informing Community-Based Learning through a Model of Participatory Public Art

    ERIC Educational Resources Information Center

    Stephens, Pamela Geiger

    2006-01-01

    Community-based learning has the power to encourage and sustain the intellectual curiosity of learners. By most accounts, community-based learning is a process that creates a collaborative environment of scholarship that holds individual differences, as well as similarities, in high esteem. It is a process, as the phrase suggests, that extends…

  3. REVIEWS OF TOPICAL PROBLEMS: Nonlinear dynamics of the brain: emotion and cognition

    NASA Astrophysics Data System (ADS)

    Rabinovich, Mikhail I.; Muezzinoglu, M. K.

    2010-07-01

    Experimental investigations of neural system functioning and brain activity are standardly based on the assumption that perceptions, emotions, and cognitive functions can be understood by analyzing steady-state neural processes and static tomographic snapshots. The new approaches discussed in this review are based on the analysis of transient processes and metastable states. Transient dynamics is characterized by two basic properties, structural stability and information sensitivity. The ideas and methods that we discuss provide an explanation for the occurrence of and successive transitions between metastable states observed in experiments, and offer new approaches to behavior analysis. Models of the emotional and cognitive functions of the brain are suggested. The mathematical object that represents the observed transient brain processes in the phase space of the model is a structurally stable heteroclinic channel. The possibility of using the suggested models to construct a quantitative theory of some emotional and cognitive functions is illustrated.

  4. Expert models and modeling processes associated with a computer-modeling tool

    NASA Astrophysics Data System (ADS)

    Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.

    2006-07-01

    Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using think aloud technique and video recording, we captured their computer screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale of their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found the experts' modeling processes followed the linear sequence built in the modeling program with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in expert models were clustered, and represented by specialized technical terms. Based on the above findings, we made suggestions for improving model-based science teaching and learning using Model-It.

  5. A KPI framework for process-based benchmarking of hospital information systems.

    PubMed

    Jahn, Franziska; Winter, Alfred

    2011-01-01

    Benchmarking is a major topic for monitoring, directing and elucidating the performance of hospital information systems (HIS). Current approaches neglect the outcome of the processes that are supported by the HIS and their contribution to the hospital's strategic goals. We suggest to benchmark HIS based on clinical documentation processes and their outcome. A framework consisting of a general process model and outcome criteria for clinical documentation processes is introduced.

  6. Assessing the Utility of the Willingness/Prototype Model in Predicting Help-Seeking Decisions

    ERIC Educational Resources Information Center

    Hammer, Joseph H.; Vogel, David L.

    2013-01-01

    Prior research on professional psychological help-seeking behavior has operated on the assumption that the decision to seek help is based on intentional and reasoned processes. However, research on the dual-process prototype/willingness model (PWM; Gerrard, Gibbons, Houlihan, Stock, & Pomery, 2008) suggests health-related decisions may also…

  7. Cognitive components underpinning the development of model-based learning.

    PubMed

    Potter, Tracey C S; Bryce, Nessa V; Hartley, Catherine A

    2017-06-01

    Reinforcement learning theory distinguishes "model-free" learning, which fosters reflexive repetition of previously rewarded actions, from "model-based" learning, which recruits a mental model of the environment to flexibly select goal-directed actions. Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9-25, we examined whether the abilities to infer sequential regularities in the environment ("statistical learning"), maintain information in an active state ("working memory") and integrate distant concepts to solve problems ("fluid reasoning") predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. A Grammar-based Approach for Modeling User Interactions and Generating Suggestions During the Data Exploration Process.

    PubMed

    Dabek, Filip; Caban, Jesus J

    2017-01-01

    Despite the recent popularity of visual analytics focusing on big data, little is known about how to support users who use visualization techniques to explore multi-dimensional datasets and accomplish specific tasks. Our lack of models that can assist end-users during the data exploration process has made it challenging to learn from the user's interactive and analytical process. The ability to model how a user interacts with a specific visualization technique, and what difficulties they face, is paramount in helping individuals discover new patterns within their complex datasets. This paper introduces the notion of visualization systems that understand and model user interactions, with the intent of guiding users through a task and thereby enhancing visual data exploration. The challenges faced and the necessary next steps are discussed, and, as a working example, a grammar-based model is presented that can learn from user interactions, determine the common patterns among a number of subjects using a K-Reversible algorithm, build a set of rules, and apply those rules in the form of suggestions to new users with the goal of guiding them along their visual analytic process. A formal evaluation study with 300 subjects was performed, showing that our grammar-based model is effective at capturing the interactive process followed by users and that further research in this area has the potential to positively impact how users interact with a visualization system.
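    As a rough illustration of the idea (not the paper's K-Reversible grammar induction, which infers a full automaton from interaction sequences), even a simple bigram model over interaction events can turn observed sessions into suggestions for new users. All event names below are invented for the sketch:

```python
from collections import defaultdict

def learn_transitions(sessions):
    """Count action-to-action transitions across recorded user sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for prev, nxt in zip(session, session[1:]):
            counts[prev][nxt] += 1
    return counts

def suggest(counts, current_action):
    """Suggest the most frequently observed follow-up action, if any."""
    followers = counts.get(current_action)
    if not followers:
        return None
    return max(followers, key=followers.get)

# Hypothetical interaction logs; the event names are invented.
sessions = [
    ["load", "filter", "zoom", "select"],
    ["load", "filter", "select"],
    ["load", "zoom", "filter", "select"],
]
model = learn_transitions(sessions)
print(suggest(model, "filter"))  # most common action observed after "filter"
```

    A grammar-based model generalizes this by merging states of an automaton under a k-reversibility constraint, so it can also suggest actions in contexts it has not seen verbatim.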

  9. Influence of branding on preference-based decision making.

    PubMed

    Philiastides, Marios G; Ratcliff, Roger

    2013-07-01

    Branding has become one of the most important determinants of consumer choices. Intriguingly, the psychological mechanisms of how branding influences decision making remain elusive. In the research reported here, we used a preference-based decision-making task and computational modeling to identify which internal components of processing are affected by branding. We found that a process of noisy temporal integration of subjective value information can model preference-based choices reliably and that branding biases are explained by changes in the rate of the integration process itself. This result suggests that branding information and subjective preference are integrated into a single source of evidence in the decision-making process, thereby altering choice behavior.

  10. Cognitive Components Underpinning the Development of Model-Based Learning

    PubMed Central

    Potter, Tracey C.S.; Bryce, Nessa V.; Hartley, Catherine A.

    2016-01-01

    Reinforcement learning theory distinguishes “model-free” learning, which fosters reflexive repetition of previously rewarded actions, from “model-based” learning, which recruits a mental model of the environment to flexibly select goal-directed actions. Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9–25, we examined whether the abilities to infer sequential regularities in the environment (“statistical learning”), maintain information in an active state (“working memory”) and integrate distant concepts to solve problems (“fluid reasoning”) predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning. PMID:27825732

  11. Recollection can be Weak and Familiarity can be Strong

    PubMed Central

    Ingram, Katherine M.; Mickes, Laura; Wixted, John T.

    2012-01-01

    The Remember/Know procedure is widely used to investigate recollection and familiarity in recognition memory, but almost all of the results obtained using that procedure can be readily accommodated by a unidimensional model based on signal-detection theory. The unidimensional model holds that Remember judgments reflect strong memories (associated with high confidence, high accuracy, and fast reaction times), whereas Know judgments reflect weaker memories (associated with lower confidence, lower accuracy, and slower reaction times). Although this is invariably true on average, a new two-dimensional account (the Continuous Dual-Process model) suggests that Remember judgments made with low confidence should be associated with lower old/new accuracy, but higher source accuracy, than Know judgments made with high confidence. We tested this prediction – and found evidence to support it – using a modified Remember/Know procedure in which participants were first asked to indicate a degree of recollection-based or familiarity-based confidence for each word presented on a recognition test and were then asked to recollect the color (red or blue) and screen location (top or bottom) associated with the word at study. For familiarity-based decisions, old/new accuracy increased with old/new confidence, but source accuracy did not (suggesting that stronger old/new memory was supported by higher degrees of familiarity). For recollection-based decisions, both old/new accuracy and source accuracy increased with old/new confidence (suggesting that stronger old/new memory was supported by higher degrees of recollection). These findings suggest that recollection and familiarity are continuous processes and that participants can indicate which process mainly contributed to their recognition decisions. PMID:21967320

  12. An overview of current applications, challenges, and future trends in distributed process-based models in hydrology

    USGS Publications Warehouse

    Fatichi, Simone; Vivoni, Enrique R.; Ogden, Fred L; Ivanov, Valeriy Y; Mirus, Benjamin B.; Gochis, David; Downer, Charles W; Camporese, Matteo; Davison, Jason H; Ebel, Brian A.; Jones, Norm; Kim, Jongho; Mascaro, Giuseppe; Niswonger, Richard G.; Restrepo, Pedro; Rigon, Riccardo; Shen, Chaopeng; Sulis, Mauro; Tarboton, David

    2016-01-01

    Process-based hydrological models have a long history dating back to the 1960s. Criticized by some as over-parameterized, overly complex, and difficult to use, a more nuanced view is that these tools are necessary in many situations and, in a certain class of problems, they are the most appropriate type of hydrological model. This is especially the case in situations where knowledge of flow paths or distributed state variables and/or preservation of physical constraints is important. Examples of this include: spatiotemporal variability of soil moisture, groundwater flow and runoff generation, sediment and contaminant transport, or when feedbacks among various Earth’s system processes or understanding the impacts of climate non-stationarity are of primary concern. These are situations where process-based models excel and other models are unverifiable. This article presents this pragmatic view in the context of existing literature to justify the approach where applicable and necessary. We review how improvements in data availability, computational resources and algorithms have made detailed hydrological simulations a reality. Avenues for the future of process-based hydrological models are presented suggesting their use as virtual laboratories, for design purposes, and with a powerful treatment of uncertainty.

  13. Mechanisms and Model Diversity of Trade-Wind Shallow Cumulus Cloud Feedbacks: A Review.

    PubMed

    Vial, Jessica; Bony, Sandrine; Stevens, Bjorn; Vogel, Raphaela

    2017-01-01

    Shallow cumulus clouds in the trade-wind regions are at the heart of the long standing uncertainty in climate sensitivity estimates. In current climate models, cloud feedbacks are strongly influenced by cloud-base cloud amount in the trades. Therefore, understanding the key factors controlling cloudiness near cloud-base in shallow convective regimes has emerged as an important topic of investigation. We review physical understanding of these key controlling factors and discuss the value of the different approaches that have been developed so far, based on global and high-resolution model experimentations and process-oriented analyses across a range of models and for observations. The trade-wind cloud feedbacks appear to depend on two important aspects: (1) how cloudiness near cloud-base is controlled by the local interplay between turbulent, convective and radiative processes; (2) how these processes interact with their surrounding environment and are influenced by mesoscale organization. Our synthesis of studies that have explored these aspects suggests that the large diversity of model responses is related to fundamental differences in how the processes controlling trade cumulus operate in models, notably, whether they are parameterized or resolved. In models with parameterized convection, cloudiness near cloud-base is very sensitive to the vigor of convective mixing in response to changes in environmental conditions. This is in contrast with results from high-resolution models, which suggest that cloudiness near cloud-base is nearly invariant with warming and independent of large-scale environmental changes. Uncertainties are difficult to narrow using current observations, as the trade cumulus variability and its relation to large-scale environmental factors strongly depend on the time and/or spatial scales at which the mechanisms are evaluated. 
New opportunities for testing physical understanding of the factors controlling shallow cumulus cloud responses using observations and high-resolution modeling on large domains are discussed.

  14. Mechanisms and Model Diversity of Trade-Wind Shallow Cumulus Cloud Feedbacks: A Review

    NASA Astrophysics Data System (ADS)

    Vial, Jessica; Bony, Sandrine; Stevens, Bjorn; Vogel, Raphaela

    2017-11-01

    Shallow cumulus clouds in the trade-wind regions are at the heart of the long standing uncertainty in climate sensitivity estimates. In current climate models, cloud feedbacks are strongly influenced by cloud-base cloud amount in the trades. Therefore, understanding the key factors controlling cloudiness near cloud-base in shallow convective regimes has emerged as an important topic of investigation. We review physical understanding of these key controlling factors and discuss the value of the different approaches that have been developed so far, based on global and high-resolution model experimentations and process-oriented analyses across a range of models and for observations. The trade-wind cloud feedbacks appear to depend on two important aspects: (1) how cloudiness near cloud-base is controlled by the local interplay between turbulent, convective and radiative processes; (2) how these processes interact with their surrounding environment and are influenced by mesoscale organization. Our synthesis of studies that have explored these aspects suggests that the large diversity of model responses is related to fundamental differences in how the processes controlling trade cumulus operate in models, notably, whether they are parameterized or resolved. In models with parameterized convection, cloudiness near cloud-base is very sensitive to the vigor of convective mixing in response to changes in environmental conditions. This is in contrast with results from high-resolution models, which suggest that cloudiness near cloud-base is nearly invariant with warming and independent of large-scale environmental changes. Uncertainties are difficult to narrow using current observations, as the trade cumulus variability and its relation to large-scale environmental factors strongly depend on the time and/or spatial scales at which the mechanisms are evaluated. 
New opportunities for testing physical understanding of the factors controlling shallow cumulus cloud responses using observations and high-resolution modeling on large domains are discussed.

  15. Mechanisms and Model Diversity of Trade-Wind Shallow Cumulus Cloud Feedbacks: A Review

    NASA Astrophysics Data System (ADS)

    Vial, Jessica; Bony, Sandrine; Stevens, Bjorn; Vogel, Raphaela

    Shallow cumulus clouds in the trade-wind regions are at the heart of the long standing uncertainty in climate sensitivity estimates. In current climate models, cloud feedbacks are strongly influenced by cloud-base cloud amount in the trades. Therefore, understanding the key factors controlling cloudiness near cloud-base in shallow convective regimes has emerged as an important topic of investigation. We review physical understanding of these key controlling factors and discuss the value of the different approaches that have been developed so far, based on global and high-resolution model experimentations and process-oriented analyses across a range of models and for observations. The trade-wind cloud feedbacks appear to depend on two important aspects: (1) how cloudiness near cloud-base is controlled by the local interplay between turbulent, convective and radiative processes; (2) how these processes interact with their surrounding environment and are influenced by mesoscale organization. Our synthesis of studies that have explored these aspects suggests that the large diversity of model responses is related to fundamental differences in how the processes controlling trade cumulus operate in models, notably, whether they are parameterized or resolved. In models with parameterized convection, cloudiness near cloud-base is very sensitive to the vigor of convective mixing in response to changes in environmental conditions. This is in contrast with results from high-resolution models, which suggest that cloudiness near cloud-base is nearly invariant with warming and independent of large-scale environmental changes. Uncertainties are difficult to narrow using current observations, as the trade cumulus variability and its relation to large-scale environmental factors strongly depend on the time and/or spatial scales at which the mechanisms are evaluated. 
New opportunities for testing physical understanding of the factors controlling shallow cumulus cloud responses using observations and high-resolution modeling on large domains are discussed.

  16. Contingency Management and Deliberative Decision-Making Processes.

    PubMed

    Regier, Paul S; Redish, A David

    2015-01-01

    Contingency management is an effective treatment for drug addiction. The current explanation for its success is rooted in alternative reinforcement theory. We suggest that alternative reinforcement theory is inadequate to explain the success of contingency management and produce a model based on demand curves that show how little the monetary rewards offered in this treatment would affect drug use. Instead, we offer an explanation of its success based on the concept that it accesses deliberative decision-making processes. We suggest that contingency management is effective because it offers a concrete and immediate alternative to using drugs, which engages deliberative processes, improves the ability of those deliberative processes to attend to non-drug options, and offsets more automatic action-selection systems. This theory makes explicit predictions that can be tested, suggests which users will be most helped by contingency management, and suggests improvements in its implementation.

  17. Contingency Management and Deliberative Decision-Making Processes

    PubMed Central

    Regier, Paul S.; Redish, A. David

    2015-01-01

    Contingency management is an effective treatment for drug addiction. The current explanation for its success is rooted in alternative reinforcement theory. We suggest that alternative reinforcement theory is inadequate to explain the success of contingency management and produce a model based on demand curves that show how little the monetary rewards offered in this treatment would affect drug use. Instead, we offer an explanation of its success based on the concept that it accesses deliberative decision-making processes. We suggest that contingency management is effective because it offers a concrete and immediate alternative to using drugs, which engages deliberative processes, improves the ability of those deliberative processes to attend to non-drug options, and offsets more automatic action-selection systems. This theory makes explicit predictions that can be tested, suggests which users will be most helped by contingency management, and suggests improvements in its implementation. PMID:26082725

  18. Problem-Oriented Corporate Knowledge Base Models on the Case-Based Reasoning Approach Basis

    NASA Astrophysics Data System (ADS)

    Gluhih, I. N.; Akhmadulin, R. K.

    2017-07-01

    One of the urgent directions for enhancing the efficiency of production processes and enterprise activity management is the creation and use of corporate knowledge bases. The article suggests a concept of problem-oriented corporate knowledge bases (PO CKB), in which knowledge is arranged around possible problem situations and serves as a tool for making and implementing decisions in such situations. For knowledge representation in PO CKB, a case-based reasoning approach is proposed. Under this approach, the content of a case as a knowledge base component has been defined; based on the situation tree, a PO CKB knowledge model has been developed in which knowledge about typical situations, as well as specific examples of situations and solutions, is represented. A generalized structural chart of a problem-oriented corporate knowledge base and possible modes of its operation are suggested. The obtained models allow creating and using corporate knowledge bases to support decision making and implementation, training, staff skill upgrading, and analysis of the decisions taken. The universal interpretation of the terms “situation” and “solution” adopted in the work allows the suggested models to be used to develop problem-oriented corporate knowledge bases in different subject domains. It is suggested that the developed models be used to build corporate knowledge bases for enterprises that operate engineering systems and networks at large production facilities.

  19. Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions.

    PubMed

    Krajbich, Ian; Rangel, Antonio

    2011-08-16

    How do we make decisions when confronted with several alternatives (e.g., on a supermarket shelf)? Previous work has shown that accumulator models, such as the drift-diffusion model, can provide accurate descriptions of the psychometric data for binary value-based choices, and that the choice process is guided by visual attention. However, the computational processes used to make choices in more complicated situations involving three or more options are unknown. We propose a model of trinary value-based choice that generalizes what is known about binary choice, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixation data using the same parameters that were estimated in previous work on binary choice. Our findings suggest that the brain uses similar computational processes to make binary and trinary choices.
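    A minimal version of this class of model can be simulated directly. The sketch below is an illustrative attentional drift-diffusion race over three items, not the authors' fitted model; the discounting parameter theta, the drift and noise constants, and the random-fixation scheme are all placeholder assumptions:

```python
import random

def simulate_choice(values, theta=0.3, d=0.002, sigma=0.02,
                    threshold=1.0, max_steps=20000, rng=None):
    """One trial of a toy trinary attentional drift-diffusion race.

    The attended item's value enters at full weight; unattended values
    are discounted by theta. Each item's accumulator integrates its
    value relative to the best other item plus Gaussian noise, and the
    first accumulator to reach the threshold determines the choice.
    """
    rng = rng or random.Random(0)
    E = [0.0, 0.0, 0.0]
    steps = 0
    while steps < max_steps:
        fix = rng.randrange(3)                   # random fixation target
        for _ in range(300):                     # ~300 ms dwell, 1 ms steps
            steps += 1
            v = [values[j] if j == fix else theta * values[j]
                 for j in range(3)]
            for i in range(3):
                drift = d * (v[i] - max(v[j] for j in range(3) if j != i))
                E[i] += drift + rng.gauss(0.0, sigma)
                if E[i] >= threshold:
                    return i
    return max(range(3), key=lambda i: E[i])     # fallback if no bound hit

# The highest-valued item should win the large majority of trials.
rng = random.Random(1)
choices = [simulate_choice([1, 1, 3], rng=rng) for _ in range(50)]
print(choices.count(2) / len(choices))
```

    Because fixated items accumulate evidence faster, the same mechanism also reproduces the qualitative link between gaze time and choice that the eye-tracking data test.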

  20. A neuroanatomical model of space-based and object-centered processing in spatial neglect.

    PubMed

    Pedrazzini, Elena; Schnider, Armin; Ptak, Radek

    2017-11-01

    Visual attention can be deployed in space-based or object-centered reference frames. Right-hemisphere damage may lead to distinct deficits of space- or object-based processing, and such dissociations are thought to underlie the heterogeneous nature of spatial neglect. Previous studies have suggested that object-centered processing deficits (such as in copying, reading or line bisection) result from damage to retro-rolandic regions while impaired spatial exploration reflects damage to more anterior regions. However, this evidence is based on small samples and heterogeneous tasks. Here, we tested a theoretical model of neglect that takes into account space- and object-based processing and relates them to neuroanatomical predictors. One hundred and one right-hemisphere-damaged patients were examined with classic neuropsychological tests and structural brain imaging. Relations between neglect measures and damage to the temporal-parietal junction, intraparietal cortex, insula and middle frontal gyrus were examined with two structural equation models by assuming that object-centered processing (involved in line bisection and single-word reading) and space-based processing (involved in cancelation tasks) either represented a unique latent variable or two distinct variables. Of these two models the latter had better explanatory power. Damage to the intraparietal sulcus was a significant predictor of object-centered, but not space-based processing, while damage to the temporal-parietal junction predicted space-based, but not object-centered processing. Space-based processing and object-centered processing were strongly intercorrelated, indicating that they rely on similar, albeit partly dissociated processes. These findings indicate that object-centered and space-based deficits in neglect are partly independent and result from superior parietal and inferior parietal damage, respectively.

  1. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Godtliebsen, F.; Rue, H.

    2012-01-01

    The paper proposes an approach to assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximation of the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For a particular case of a linear accumulation model and absolutely dated tie points an analytical solution is found suggesting the Beta-distributed probability density on age estimates along the length of a proxy archive. In a general situation of uncertainties in the ages of the tie points the proposed method employs MCMC simulations of age-depth profiles yielding empirical confidence intervals on the constructed piecewise linear best guess timescale. It is suggested that the approach can be further extended to a more general case of a time-varying expected accumulation between the tie points. The approach is illustrated by using two ice and two lake/marine sediment cores representing the typical examples of paleoproxy archives with age models based on tie points of mixed origin.
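    For the case of absolutely dated tie points, the core of the approach can be sketched with plain Monte Carlo (the paper's full method uses MCMC to also handle uncertain tie-point ages); the layer count and the Gamma shape parameter below are arbitrary assumptions:

```python
import random

def simulate_age_profiles(n_layers, t0, t1, shape=1.0, n_sims=2000, seed=0):
    """Monte-Carlo age-depth profiles between two absolutely dated tie
    points t0 and t1: each depth layer accumulates an independent Gamma
    increment, and conditioning on the known tie-point ages amounts to
    rescaling the increments so the core spans exactly t0..t1."""
    rng = random.Random(seed)
    profiles = []
    for _ in range(n_sims):
        inc = [rng.gammavariate(shape, 1.0) for _ in range(n_layers)]
        total = sum(inc)
        ages, t = [], t0
        for x in inc:
            t += (t1 - t0) * x / total
            ages.append(t)
        profiles.append(ages)
    return profiles

def confidence_interval(profiles, layer, lo=0.025, hi=0.975):
    """Empirical interval for the age at a given layer index."""
    ages = sorted(p[layer] for p in profiles)
    return ages[int(lo * len(ages))], ages[int(hi * len(ages))]

# Mid-core ages scatter most; ages at the tie points themselves are pinned.
profiles = simulate_age_profiles(n_layers=10, t0=0.0, t1=1000.0)
print(confidence_interval(profiles, layer=4))
```

    The normalized cumulative increments of a Gamma process are Dirichlet-distributed, which is why the age at any fixed depth between two dated tie points follows the Beta density cited in the abstract.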

  2. Application of agent-based system for bioprocess description and process improvement.

    PubMed

    Gao, Ying; Kipling, Katie; Glassey, Jarka; Willis, Mark; Montague, Gary; Zhou, Yuhong; Titchener-Hooker, Nigel J

    2010-01-01

    Modeling plays an important role in bioprocess development for design and scale-up. Predictive models can also be used in biopharmaceutical manufacturing to assist decision-making either to maintain process consistency or to identify optimal operating conditions. To predict the whole bioprocess performance, the strong interactions present in a processing sequence must be adequately modeled. Traditionally, bioprocess modeling considers process units separately, which makes it difficult to capture the interactions between units. In this work, a systematic framework is developed to analyze the bioprocesses based on a whole process understanding and considering the interactions between process operations. An agent-based approach is adopted to provide a flexible infrastructure for the necessary integration of process models. This enables the prediction of overall process behavior, which can then be applied during process development or once manufacturing has commenced, in both cases leading to the capacity for fast evaluation of process improvement options. The multi-agent system comprises a process knowledge base, process models, and a group of functional agents. In this system, agent components co-operate with each other in performing their tasks. These include the description of the whole process behavior, evaluating process operating conditions, monitoring of the operating processes, predicting critical process performance, and providing guidance to decision-making when coping with process deviations. During process development, the system can be used to evaluate the design space for process operation. During manufacture, the system can be applied to identify abnormal process operation events and then to provide suggestions as to how best to cope with the deviations. In all cases, the function of the system is to ensure an efficient manufacturing process. 
The implementation of the agent-based approach is illustrated via selected application scenarios, which demonstrate how such a framework may enable the better integration of process operations by providing a plant-wide process description to facilitate process improvement. Copyright 2009 American Institute of Chemical Engineers

  3. How Accumulated Real Life Stress Experience and Cognitive Speed Interact on Decision-Making Processes

    PubMed Central

    Friedel, Eva; Sebold, Miriam; Kuitunen-Paul, Sören; Nebe, Stephan; Veer, Ilya M.; Zimmermann, Ulrich S.; Schlagenhauf, Florian; Smolka, Michael N.; Rapp, Michael; Walter, Henrik; Heinz, Andreas

    2017-01-01

    Rationale: Advances in neurocomputational modeling suggest that valuation systems for goal-directed (deliberative) decision-making on the one hand, and habitual (automatic) decision-making on the other, may rely on distinct computational strategies for reinforcement learning, namely model-free vs. model-based learning. As a key theoretical difference, the model-based system strongly demands cognitive functions to plan actions prospectively based on an internal cognitive model of the environment, whereas valuation in the model-free system relies on rather simple learning rules from operant conditioning to retrospectively associate actions with their outcomes and is thus cognitively less demanding. Acute stress reactivity is known to impair model-based but not model-free choice behavior, with higher working memory capacity protecting the model-based system from acute stress. However, it is not clear what impact accumulated real life stress has on model-free and model-based decision systems and how this influence interacts with cognitive abilities. Methods: We used a sequential decision-making task distinguishing relative contributions of both learning strategies to choice behavior, the Social Readjustment Rating Scale questionnaire to assess accumulated real life stress, and the Digit Symbol Substitution Test to test cognitive speed in 95 healthy subjects. Results: Individuals reporting high stress exposure who had low cognitive speed showed reduced model-based but increased model-free behavioral control. In contrast, subjects exposed to accumulated real life stress with high cognitive speed displayed increased model-based performance but reduced model-free control. Conclusion: These findings suggest that accumulated real life stress exposure can enhance reliance on cognitive speed for model-based computations, which may ultimately protect the model-based system from the detrimental influences of accumulated real life stress. 
The combination of accumulated real life stress exposure and slower information processing capacities, however, might favor model-free strategies. Thus, the valence and preference of either system strongly depend on stressful experiences and individual cognitive capacities. PMID:28642696

  4. How Accumulated Real Life Stress Experience and Cognitive Speed Interact on Decision-Making Processes.

    PubMed

    Friedel, Eva; Sebold, Miriam; Kuitunen-Paul, Sören; Nebe, Stephan; Veer, Ilya M; Zimmermann, Ulrich S; Schlagenhauf, Florian; Smolka, Michael N; Rapp, Michael; Walter, Henrik; Heinz, Andreas

    2017-01-01

    Rationale: Advances in neurocomputational modeling suggest that valuation systems for goal-directed (deliberative) decision-making on the one hand, and habitual (automatic) decision-making on the other, may rely on distinct computational strategies for reinforcement learning, namely model-free vs. model-based learning. As a key theoretical difference, the model-based system strongly demands cognitive functions to plan actions prospectively based on an internal cognitive model of the environment, whereas valuation in the model-free system relies on rather simple learning rules from operant conditioning to retrospectively associate actions with their outcomes and is thus cognitively less demanding. Acute stress reactivity is known to impair model-based but not model-free choice behavior, with higher working memory capacity protecting the model-based system from acute stress. However, it is not clear what impact accumulated real life stress has on model-free and model-based decision systems and how this influence interacts with cognitive abilities. Methods: We used a sequential decision-making task distinguishing relative contributions of both learning strategies to choice behavior, the Social Readjustment Rating Scale questionnaire to assess accumulated real life stress, and the Digit Symbol Substitution Test to test cognitive speed in 95 healthy subjects. Results: Individuals reporting high stress exposure who had low cognitive speed showed reduced model-based but increased model-free behavioral control. In contrast, subjects exposed to accumulated real life stress with high cognitive speed displayed increased model-based performance but reduced model-free control. Conclusion: These findings suggest that accumulated real life stress exposure can enhance reliance on cognitive speed for model-based computations, which may ultimately protect the model-based system from the detrimental influences of accumulated real life stress. 
The combination of accumulated real life stress exposure and slower information processing capacities, however, might favor model-free strategies. Thus, the valence and preference of either system strongly depend on stressful experiences and individual cognitive capacities.

  5. Logical Reasoning versus Information Processing in the Dual-Strategy Model of Reasoning

    ERIC Educational Resources Information Center

    Markovits, Henry; Brisson, Janie; de Chantal, Pier-Luc

    2017-01-01

    One of the major debates concerning the nature of inferential reasoning is between counterexample-based strategies such as mental model theory and statistical strategies underlying probabilistic models. The dual-strategy model, proposed by Verschueren, Schaeken, & d'Ydewalle (2005a, 2005b), which suggests that people might have access to both…

  6. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ⁄2, and is ~2⁄μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
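
    The BPT distribution is the inverse Gaussian distribution with mean μ and shape parameter λ = μ/α². A sketch of its density, cumulative distribution, and hazard function using the standard closed forms (illustrative, not the authors' code):

```python
import math

def bpt_pdf(t, mu, alpha):
    # Brownian passage time = inverse Gaussian with mean mu and
    # shape lam = mu / alpha**2 (alpha is the aperiodicity)
    lam = mu / alpha ** 2
    return math.sqrt(lam / (2 * math.pi * t ** 3)) * \
        math.exp(-lam * (t - mu) ** 2 / (2 * mu ** 2 * t))

def bpt_cdf(t, mu, alpha):
    # Standard inverse-Gaussian CDF expressed via the normal CDF
    lam = mu / alpha ** 2
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    a = math.sqrt(lam / t)
    return phi(a * (t / mu - 1)) + math.exp(2 * lam / mu) * phi(-a * (t / mu + 1))

def bpt_hazard(t, mu, alpha):
    # Instantaneous failure rate of survivors
    return bpt_pdf(t, mu, alpha) / (1.0 - bpt_cdf(t, mu, alpha))
```

    For α = 0.5 the asymptotic hazard 1/(2α²μ) equals 2/μ, consistent with the abstract's statement that the hazard is roughly 2⁄μ for times beyond μ.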

  7. A Dynamic Theory of Reading.

    ERIC Educational Resources Information Center

    Wark, David M.

    The initial means for arriving at a dynamic model of reading were suggested in the form of "behaviormetric" research. A review of valid reading models noted those of Smith and Carrigan, Delacato, and Holmes as eminent, and it distinguished between models based on concrete evidence and metaphors of the reading process which are basically…

  8. Quantitative evaluation of specific vulnerability to nitrate for groundwater resource protection based on process-based simulation model.

    PubMed

    Huan, Huan; Wang, Jinsheng; Zhai, Yuanzheng; Xi, Beidou; Li, Juan; Li, Mingxiao

    2016-04-15

    It has been proved that groundwater vulnerability assessment is an effective tool for groundwater protection. Nowadays, quantitative assessment methods for specific vulnerability are scarce due to limited understanding of the complicated contaminant fate and transport processes in the groundwater system. In this paper, a process-based simulation model for specific vulnerability to nitrate, using a 1D flow and solute transport model in the unsaturated vadose zone, is presented for groundwater resource protection. For this case study in Jilin City of northeast China, rate constants of denitrification and nitrification as well as adsorption constants of ammonium and nitrate in the vadose zone were acquired by laboratory experiments. The transfer time to the groundwater table, t50, was taken as the specific vulnerability indicator. Finally, overall vulnerability was assessed by establishing the relationship between groundwater net recharge, layer thickness and t50. The results suggested that the most vulnerable regions of Jilin City were mainly distributed in the floodplain of the Songhua River and Mangniu River. The least vulnerable areas mostly appeared in the second terrace and the back of the first terrace. The overall area of low, relatively low and moderate vulnerability accounted for 76% of the study area, suggesting a relatively low possibility of nitrate contamination. In addition, the sensitivity analysis showed that the most sensitive factors of specific vulnerability in the vadose zone included the groundwater net recharge rate, the physical properties of the soil medium and the rate constants of nitrate denitrification. Validating the suitability of the process-based simulation model and comparing it with an index-based method using a group of integrated indicators showed that the process-based simulation model yields more realistic and accurate specific vulnerability mapping. In addition, the advantages, disadvantages, constraints and application prospects of the quantitative approach for specific vulnerability assessment are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Developing a Data Driven Process-Based Model for Remote Sensing of Ecosystem Production

    NASA Astrophysics Data System (ADS)

    Elmasri, B.; Rahman, A. F.

    2010-12-01

    Estimating ecosystem carbon fluxes at various spatial and temporal scales is essential for quantifying the global carbon cycle. Numerous models have been developed for this purpose using several environmental variables as well as vegetation indices derived from remotely sensed data. Here we present a data-driven modeling approach for gross primary production (GPP) that is based on the process-based model BIOME-BGC. The proposed model runs on available remote sensing data and does not depend on look-up tables. Furthermore, this approach combines the merits of both empirical and process models: empirical models were used to estimate certain input variables such as light use efficiency (LUE). This was achieved by coupling remotely sensed data to the mathematical equations that represent biophysical photosynthesis processes in the BIOME-BGC model. Moreover, a new spectral index for estimating maximum photosynthetic activity, the maximum photosynthetic rate index (MPRI), is also developed and presented here. This new index is based on the ratio between the near-infrared and green bands (ρ858.5/ρ555). The model was tested and validated against the MODIS GPP product and flux measurements from two eddy covariance flux towers located at Morgan Monroe State Forest (MMSF) in Indiana and Harvard Forest in Massachusetts. Satellite data acquired by the Advanced Microwave Scanning Radiometer (AMSR-E) and MODIS were used. The data-driven model showed a strong correlation between predicted and measured GPP at the two eddy covariance flux tower sites, and produced better predictions of GPP than the MODIS GPP product. Moreover, the proportion of error in the predicted GPP for MMSF and Harvard Forest was dominated by unsystematic errors, suggesting that the results are unbiased. The analysis indicated that maintenance respiration is one of the main factors dominating the overall model error, and that improved estimation of maintenance respiration will result in improved GPP predictions. Although there might be room for improvement in our model outcomes through better parameterization, our results suggest that such a methodology for running the BIOME-BGC model entirely on routinely available data can produce good predictions of GPP.
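
    The light-use-efficiency logic described above can be sketched in its generic form, GPP = LUE × fPAR × PAR, together with the new index. This is a generic LUE sketch (the actual coupling to BIOME-BGC in the paper is more involved, and the function and parameter names here are assumptions):

```python
def mpri(nir, green):
    # Maximum photosynthetic rate index from the abstract:
    # ratio of near-infrared (858.5 nm) to green (555 nm) reflectance
    return nir / green

def gpp_lue(par, fpar, lue):
    # Generic light-use-efficiency form: GPP = LUE * APAR,
    # where APAR = fPAR * PAR; a sketch, not the paper's exact equations
    return lue * fpar * par
```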

  10. The Process Model of Group-Based Emotion: Integrating Intergroup Emotion and Emotion Regulation Perspectives.

    PubMed

    Goldenberg, Amit; Halperin, Eran; van Zomeren, Martijn; Gross, James J

    2016-05-01

    Scholars interested in emotion regulation have documented the different goals and strategies individuals have for regulating their emotions. However, little attention has been paid to the regulation of group-based emotions, which are based on individuals' self-categorization as a group member and occur in response to situations perceived as relevant for that group. We propose a model for examining group-based emotion regulation that integrates intergroup emotions theory and the process model of emotion regulation. This synergy expands intergroup emotion theory by facilitating further investigation of different goals (i.e., hedonic or instrumental) and strategies (e.g., situation selection and modification strategies) used to regulate group-based emotions. It also expands emotion regulation research by emphasizing the role of self-categorization (e.g., as an individual or a group member) in the emotional process. Finally, we discuss the promise of this theoretical synergy and suggest several directions for future research on group-based emotion regulation. © 2015 by the Society for Personality and Social Psychology, Inc.

  11. Better and Worse: A Dual-Process Model of the Relationship between Core Self-evaluation and Work-Family Conflict.

    PubMed

    Yu, Kun

    2016-01-01

    Based on both resource allocation theory (Becker, 1965; Bergeron, 2007) and role theory (Katz and Kahn, 1978), the current study aims to uncover the relationship between core self-evaluation (CSE) and three dimensions of work interference with family (WIF). A dual-process model was proposed, in which both work stress and career resilience mediate the CSE-WIF relationship. The mediation model was tested with a sample of employees from various organizations (N = 561). The results first showed that CSE was negatively related to time-based and strain-based WIF and positively related to behavior-based WIF via the mediation of work stress. Moreover, CSE was positively associated with behavior-based and strain-based WIF via the mediation of career resilience, suggesting that CSE may also have its "dark-side."

  12. Better and Worse: A Dual-Process Model of the Relationship between Core Self-evaluation and Work-Family Conflict

    PubMed Central

    Yu, Kun

    2016-01-01

    Based on both resource allocation theory (Becker, 1965; Bergeron, 2007) and role theory (Katz and Kahn, 1978), the current study aims to uncover the relationship between core self-evaluation (CSE) and three dimensions of work interference with family (WIF). A dual-process model was proposed, in which both work stress and career resilience mediate the CSE-WIF relationship. The mediation model was tested with a sample of employees from various organizations (N = 561). The results first showed that CSE was negatively related to time-based and strain-based WIF and positively related to behavior-based WIF via the mediation of work stress. Moreover, CSE was positively associated with behavior-based and strain-based WIF via the mediation of career resilience, suggesting that CSE may also have its “dark-side.” PMID:27790177

  13. Probabilistic modeling of discourse-aware sentence processing.

    PubMed

    Dubey, Amit; Keller, Frank; Sturt, Patrick

    2013-07-01

    Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more closely mimic human behavior than existing models. The first model uses a deep model of linguistics, based in part on probabilistic logic, allowing it to make qualitative predictions on experimental data; the second model uses shallow processing to make quantitative predictions on a broad-coverage reading-time corpus. Copyright © 2013 Cognitive Science Society, Inc.

  14. A dual-process account of auditory change detection.

    PubMed

    McAnally, Ken I; Martin, Russell L; Eramudugolla, Ranmalee; Stuart, Geoffrey W; Irvine, Dexter R F; Mattingley, Jason B

    2010-08-01

    Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed objects with those predicted by change-detection models based on signal detection theory (SDT) and high-threshold theory (HTT). Detected changes were not identified as accurately as predicted by models based on either theory, suggesting that some changes are detected by a process that does not support change identification. Undetected changes were identified as accurately as predicted by the HTT model but much less accurately than predicted by the SDT models. The process underlying change detection was investigated further by determining receiver-operating characteristics (ROCs). ROCs did not conform to those predicted by either an SDT or an HTT model but were well modeled by a dual-process model that incorporated HTT and SDT components. The dual-process model also accurately predicted the rates at which detected and undetected changes were correctly identified.

  15. Dual learning processes underlying human decision-making in reversal learning tasks: functional significance and evidence from the model fit to human behavior

    PubMed Central

    Bai, Yu; Katahira, Kentaro; Ohira, Hideki

    2014-01-01

    Humans are capable of correcting their actions based on actions performed in the past, and this ability enables them to adapt to a changing environment. The computational field of reinforcement learning (RL) has provided a powerful explanation for understanding such processes. Recently, the dual learning system, modeled as a hybrid model that incorporates value update based on reward-prediction error and learning rate modulation based on the surprise signal, has gained attention as a model for explaining various neural signals. However, the functional significance of the hybrid model has not been established. In the present study, we used computer simulations of a probabilistic reversal learning task to address this functional significance. The hybrid model was found to perform better than the standard RL model across a wide range of parameter settings. These results suggest that the hybrid model is more robust against the mistuning of parameters compared with the standard RL model when decision-makers continue to learn stimulus-reward contingencies, which can change abruptly. The parameter fitting results also indicated that the hybrid model fit better than the standard RL model for more than 50% of the participants, which suggests that the hybrid model has more explanatory power for the behavioral data than the standard RL model. PMID:25161635
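
    The hybrid model's core idea, a value update driven by the reward-prediction error with a learning rate modulated by a surprise signal (the absolute prediction error), can be sketched in a Pearce-Hall-style form. This is an illustrative parameterization, not the authors' exact model:

```python
def hybrid_rl(rewards, eta=0.3, alpha0=0.5):
    # Hybrid learner sketch: value q is updated by the reward-prediction
    # error; learning rate alpha drifts toward the surprise signal |delta|.
    q, alpha = 0.5, alpha0
    trace = []
    for r in rewards:
        delta = r - q                                  # reward-prediction error
        q += alpha * delta                             # value update
        alpha = (1 - eta) * alpha + eta * abs(delta)   # surprise-modulated learning rate
        trace.append((q, alpha))
    return trace
```

    After a reversal, the large prediction errors transiently raise the learning rate, which is what makes this class of models robust to abrupt changes in stimulus-reward contingencies.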

  16. Solid methane in neutron radiation: Cryogenic moderators and cometary cryo volcanism

    NASA Astrophysics Data System (ADS)

    Kirichek, O.; Lawson, C. R.; Jenkins, D. M.; Ridley, C. J. T.; Haynes, D. J.

    2017-12-01

    The effect of ionizing radiation on solid methane has previously been an area of interest in the astrophysics community. In the late 1980s this interest was further boosted by the possibility of using solid methane as a moderating medium in spallation neutron sources. Here we present test results of solid methane moderators commissioned at the ISIS neutron source, and compare them with a model based on the theory of thermal explosion. Good agreement between the moderator test data and our model suggests that the process of radiolysis defect recombination happens in two different temperature ranges: the "lower temperature" recombination process occurs at around 20 K, with the "higher temperature" process taking place between 50 and 60 K. We discuss the consequences of this mechanism for the design and operation of solid methane moderators used in advanced neutron sources. We also discuss the possible role of radiolysis defect recombination processes in cryo-volcanism on comets, and suggest an application based on this phenomenon.

  17. Conceptual Model-Based Systems Biology: Mapping Knowledge and Discovering Gaps in the mRNA Transcription Cycle

    PubMed Central

    Somekh, Judith; Choder, Mordechai; Dori, Dov

    2012-01-01

    We propose a Conceptual Model-based Systems Biology framework for qualitative modeling, executing, and eliciting knowledge gaps in molecular biology systems. The framework is an adaptation of Object-Process Methodology (OPM), a graphical and textual executable modeling language. OPM enables concurrent representation of the system's structure—the objects that comprise the system, and behavior—how processes transform objects over time. Applying a top-down approach of recursively zooming into processes, we model a case in point—the mRNA transcription cycle. Starting with this high-level cell function, we model increasingly detailed processes along with the participating objects. Our modeling approach is capable of modeling molecular processes such as complex formation, localization and trafficking, molecular binding, enzymatic stimulation, and environmental intervention. At the lowest level, similar to the Gene Ontology, all biological processes boil down to three basic molecular functions: catalysis, binding/dissociation, and transporting. During modeling and execution of the mRNA transcription model, we discovered knowledge gaps, which we present and classify into various types. We also show how model execution supports coherent model construction. Identifying and pinpointing knowledge gaps is an important feature of the framework, as it suggests where research should focus and whether conjectures about uncertain mechanisms fit into the already verified model. PMID:23308089

  18. An Australasian model license reassessment procedure for identifying potentially unsafe drivers.

    PubMed

    Fildes, Brian N; Charlton, Judith; Pronk, Nicola; Langford, Jim; Oxley, Jennie; Koppel, Sjaanie

    2008-08-01

    Most licensing jurisdictions in Australia currently employ age-based assessment programs as a means to manage older driver safety, yet the available evidence suggests that these programs have no safety benefits. This paper describes a community referral-based model license reassessment procedure for identifying and assessing potentially unsafe drivers. While the model was primarily developed for assessing older drivers' fitness to drive, it could be applicable to other forms of driver impairment associated with increased crash risk. It includes a three-tier process of assessment, involving the use of validated and relevant assessment instruments. A case is argued that this process is a more systematic, transparent and effective way of managing older driver safety, and thus more likely to be widely acceptable to the target community and licensing authorities than age-based practices.

  19. An assessment of the carbon balance of arctic tundra: comparisons among observations, process models, and atmospheric inversions

    USGS Publications Warehouse

    McGuire, A.D.; Christensen, T.R.; Hayes, D.; Heroult, A.; Euskirchen, E.; Yi, Y.; Kimball, J.S.; Koven, C.; Lafleur, P.; Miller, P.A.; Oechel, W.; Peylin, P.; Williams, M.

    2012-01-01

    Although arctic tundra has been estimated to cover only 8% of the global land surface, the large and potentially labile carbon pools currently stored in tundra soils have the potential for large emissions of carbon (C) under a warming climate. These emissions, in the form of the radiatively active greenhouse gases CO2 and CH4, could amplify global warming. Given the potential sensitivity of these ecosystems to climate change and the expectation that the Arctic will experience appreciable warming over the next century, it is important to assess whether responses of C exchange in tundra regions are likely to enhance or mitigate warming. In this study we compared analyses of C exchange of Arctic tundra between 1990–1999 and 2000–2006 among observations, regional and global applications of process-based terrestrial biosphere models, and atmospheric inversion models. Syntheses of the compilation of flux observations and of inversion model results indicate that the annual exchange of CO2 between arctic tundra and the atmosphere has large uncertainties that cannot be distinguished from neutral balance. The mean estimate from an ensemble of process-based model simulations suggests that arctic tundra acted as a sink for atmospheric CO2 in recent decades, but based on the uncertainty estimates it cannot be determined with confidence whether these ecosystems represent a weak or a strong sink. Tundra was 0.6 °C warmer in the 2000s compared to the 1990s. The central estimates of the observations, process-based models, and inversion models each identify stronger sinks in the 2000s compared with the 1990s. Similarly, the observations and the applications of regional process-based models suggest that CH4 emissions from arctic tundra have increased from the 1990s to the 2000s.
Based on our analyses of the estimates from observations, process-based models, and inversion models, we estimate that arctic tundra was a sink for atmospheric CO2 of 110 Tg C yr-1 (uncertainty between a sink of 291 Tg C yr-1 and a source of 80 Tg C yr-1) and a source of CH4 to the atmosphere of 19 Tg C yr-1 (uncertainty between sources of 8 and 29 Tg C yr-1). The suite of analyses conducted in this study indicate that it is clearly important to reduce uncertainties in the observations, process-based models, and inversions in order to better understand the degree to which Arctic tundra is influencing atmospheric CO2 and CH4 concentrations. The reduction of uncertainties can be accomplished through (1) the strategic placement of more CO2 and CH4 monitoring stations to reduce uncertainties in inversions, (2) improved observation networks of ground-based measurements of CO2 and CH4 exchange to understand exchange in response to disturbance and across gradients of hydrological variability, and (3) the effective transfer of information from enhanced observation networks into process-based models to improve the simulation of CO2 and CH4 exchange from arctic tundra to the atmosphere.

  20. Integrating Surface Modeling into the Engineering Design Graphics Curriculum

    ERIC Educational Resources Information Center

    Hartman, Nathan W.

    2006-01-01

    It has been suggested there is a knowledge base that surrounds the use of 3D modeling within the engineering design process and correspondingly within engineering design graphics education. While solid modeling receives a great deal of attention and discussion relative to curriculum efforts, and rightly so, surface modeling is an equally viable 3D…

  1. Models and theories of prescribing decisions: A review and suggested a new model.

    PubMed

    Murshid, Mohsen Ali; Mohaidin, Zurina

    2017-01-01

    To date, research on the prescribing decisions of physicians lacks sound theoretical foundations. In fact, drug prescribing by doctors is a complex phenomenon influenced by various factors. Most of the existing studies in the area of drug prescription explain the process of decision-making by physicians via an exploratory rather than a theoretical approach. Therefore, this review attempts to suggest a conceptual model that explains the theoretical linkages between marketing efforts, the patient, the pharmacist, and the physician's decision to prescribe drugs. The paper follows an inclusive review approach and applies previous theoretical models of prescribing behaviour to identify the relational factors. More specifically, it identifies and uses several valuable perspectives, such as the 'persuasion theory - elaboration likelihood model', the 'stimuli-response marketing model', the 'agency theory', the 'theory of planned behaviour', and 'social power theory', in developing an innovative conceptual paradigm. Based on the combination of existing methods and previous models, this paper suggests a new conceptual model of the physician decision-making process. This unique model has the potential for use in further research.

  2. Negotiation-based Order Lot-Sizing Approach for Two-tier Supply Chain

    NASA Astrophysics Data System (ADS)

    Chao, Yuan; Lin, Hao Wen; Chen, Xili; Murata, Tomohiro

    This paper focuses on a negotiation-based collaborative planning process for the determination of order lot-sizes over a multi-period planning horizon, confined to a two-tier supply chain scenario. The aim is to study how negotiation-based planning processes can be used to refine locally preferred ordering patterns, which consequently affect the overall performance of the supply chain in terms of costs and service level. Minimal information exchanges in the form of mathematical models are suggested to represent the local preferences and to support the negotiation processes.

  3. The Grief Resolution Process in Divorce.

    ERIC Educational Resources Information Center

    Crosby, John F.; And Others

    1983-01-01

    Compares grief in divorce to the Kubler-Ross model of grief resolution in bereavement in 17 persons who wrote essays about their divorce. The results suggested a conceptual model based on three chronological stages with linear progression through the stages, characterized by circularity within each stage. (JAC)

  4. Consultancy on Large-Scale Submerged Aerobic Cultivation Process Design - Final Technical Report: February 1, 2016 -- June 30, 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crater, Jason; Galleher, Connor; Lievense, Jeff

    NREL is developing an advanced aerobic bubble column model using Aspen Custom Modeler (ACM). The objective of this work is to integrate the new fermentor model with existing techno-economic models in Aspen Plus and Excel to establish a new methodology for guiding process design. To assist this effort, NREL has contracted Genomatica to critique and make recommendations for improving NREL's bioreactor model and large-scale aerobic bioreactor design for biologically producing lipids at commercial scale. Genomatica has highlighted a few areas for improving the functionality and effectiveness of the model. Genomatica recommends using a compartment model approach with an integrated black-box kinetic model of the production microbe. We also suggest including calculations for stirred tank reactors to extend the model's functionality and adaptability for future process designs. Genomatica also suggests making several modifications to NREL's large-scale lipid production process design. The recommended process modifications are based on Genomatica's internal techno-economic assessment experience and are focused primarily on minimizing capital and operating costs. These recommendations include selecting/engineering a thermotolerant yeast strain with lipid excretion; using bubble column fermentors; increasing the size of production fermentors; reducing the number of vessels; employing semi-continuous operation; and recycling cell mass.

  5. A Theoretical Analysis of the Perceptual Span based on SWIFT Simulations of the n + 2 Boundary Paradigm

    PubMed Central

    Risse, Sarah; Hohenstein, Sven; Kliegl, Reinhold; Engbert, Ralf

    2014-01-01

    Eye-movement experiments suggest that the perceptual span during reading is larger than the fixated word, asymmetric around the fixation position, and shrinks in size contingent on the foveal processing load. We used the SWIFT model of eye-movement control during reading to test these hypotheses and their implications under the assumption of graded parallel processing of all words inside the perceptual span. Specifically, we simulated reading in the boundary paradigm and analysed the effects of denying the model to have valid preview of a parafoveal word n + 2 two words to the right of fixation. Optimizing the model parameters for the valid preview condition only, we obtained span parameters with remarkably realistic estimates conforming to the empirical findings on the size of the perceptual span. More importantly, the SWIFT model generated parafoveal processing up to word n + 2 without fitting the model to such preview effects. Our results suggest that asymmetry and dynamic modulation are plausible properties of the perceptual span in a parallel word-processing model such as SWIFT. Moreover, they seem to guide the flexible distribution of processing resources during reading between foveal and parafoveal words. PMID:24771996

  6. Predictive representations can link model-based reinforcement learning to model-free mechanisms.

    PubMed

    Russek, Evan M; Momennejad, Ida; Botvinick, Matthew M; Gershman, Samuel J; Daw, Nathaniel D

    2017-09-01

    Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation.
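
    The successor representation (SR) at the core of this framework can itself be learned with ordinary TD(0) updates and then combined with a reward vector to produce values. A minimal sketch on a three-state chain (illustrative, not the paper's simulations):

```python
def learn_sr(transitions, n_states, gamma=0.95, lr=0.1, n_passes=200):
    # TD(0) learning of the successor representation M, where M[s][j]
    # estimates the expected discounted future occupancy of state j from s.
    M = [[1.0 if i == j else 0.0 for j in range(n_states)] for i in range(n_states)]
    for _ in range(n_passes):
        for s, s_next in transitions:
            for j in range(n_states):
                target = (1.0 if j == s else 0.0) + gamma * M[s_next][j]
                M[s][j] += lr * (target - M[s][j])
    return M

# Deterministic 3-state chain 0 -> 1 -> 2, reward only in state 2
M = learn_sr([(0, 1), (1, 2)], n_states=3)
w = [0.0, 0.0, 1.0]  # separately learned reward weights
V = [sum(m * wj for m, wj in zip(row, w)) for row in M]  # V = M w
```

    Because the reward weights w enter only at evaluation time, revaluing states after a change in reward requires no relearning of M, which is the sense in which the SR yields a subset of model-based behavior on model-free (TD) machinery.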

  7. Predictive representations can link model-based reinforcement learning to model-free mechanisms

    PubMed Central

    Botvinick, Matthew M.

    2017-01-01

    Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation. PMID:28945743

  8. Mental health courts and their selection processes: modeling variation for consistency.

    PubMed

    Wolff, Nancy; Fabrikant, Nicole; Belenko, Steven

    2011-10-01

    Admission into mental health courts is based on a complicated and often variable decision-making process that involves multiple parties representing different expertise and interests. To the extent that eligibility criteria of mental health courts are more suggestive than deterministic, selection bias can be expected. Very little research has focused on the selection processes underpinning problem-solving courts even though such processes may dominate the performance of these interventions. This article describes a qualitative study designed to deconstruct the selection and admission processes of mental health courts. In this article, we describe a multi-stage, complex process for screening and admitting clients into mental health courts. The selection filtering model that is described has three eligibility screening stages: initial, assessment, and evaluation. The results of this study suggest that clients selected by mental health courts are shaped by the formal and informal selection criteria, as well as by the local treatment system.

  9. Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.

    PubMed

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

    Enhancing speech recognition is the primary aim of this work. This paper suggests a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO). The suggested methodology contains four stages: (i) denoising, (ii) feature extraction, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). First, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and the minimum and maximum of the signal are extracted from the denoised signal. The extracted characteristics are then passed to genetic-algorithm-based codebook generation for vector quantization during training: the initial populations are created by selecting random code vectors from the training set, variation is introduced through the crossover genetic operation, and IP-HMM performs the recognition. The proposed speech recognition technique achieves 97.14% accuracy.
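
The genetic-algorithm codebook step can be sketched as follows. This is our minimal illustration of the stated idea (random training vectors as initial codebooks, single-point crossover, distortion as fitness), not the authors' implementation, and the data are synthetic stand-ins for MFCC feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(200, 4))      # stand-in feature vectors (e.g. MFCCs)
pop_size, k, n_gen = 20, 8, 40         # population size, codebook size, generations

def distortion(codebook):
    # mean squared distance from each training vector to its nearest code vector
    d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean()

# initial population: codebooks of random code vectors drawn from the training set
pop = [train[rng.choice(len(train), k, replace=False)] for _ in range(pop_size)]
init_best = min(distortion(c) for c in pop)

for _ in range(n_gen):
    pop.sort(key=distortion)           # lower distortion = fitter
    parents = pop[: pop_size // 2]     # elitist selection keeps the best codebooks
    children = []
    while len(parents) + len(children) < pop_size:
        a, b = rng.choice(len(parents), 2, replace=False)
        cut = int(rng.integers(1, k))  # single-point crossover on code vectors
        children.append(np.vstack([parents[a][:cut], parents[b][cut:]]))
    pop = parents + children

best = min(pop, key=distortion)
```

With elitism, the best codebook's distortion can only improve across generations.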

  10. On the upscaling of process-based models in deltaic applications

    NASA Astrophysics Data System (ADS)

    Li, L.; Storms, J. E. A.; Walstra, D. J. R.

    2018-03-01

    Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is restrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing morphological indicators such as distributary channel networking and delta volumes derived from the model predictions at various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations designed to capture autogenic variability, which is quantified by re-running identical models on an initial bathymetry with 1 cm of added noise. The overall results show that the variability of the accelerated models falls within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.
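
The acceleration idea (scale each step's bed change so fewer hydrodynamic steps span the same morphological time) can be illustrated with a toy 1-D bed-diffusion model. The factor-based update below mirrors standard morphological acceleration in process-based morphodynamic models, but the model and all numbers are ours, not the paper's.

```python
import numpy as np

def evolve(bed, n_steps, dt, morfac=1.0, D=0.1):
    # toy bed evolution: linear diffusive sediment flux in flux form,
    # which conserves sediment mass exactly (zero-flux boundaries)
    bed = bed.copy()
    for _ in range(n_steps):
        flux = D * np.diff(bed)
        dbed = np.zeros_like(bed)
        dbed[:-1] += flux
        dbed[1:] -= flux
        bed += morfac * dt * dbed      # morphological acceleration factor
    return bed

bed0 = np.zeros(50)
bed0[25] = 1.0                                          # initial sediment mound
slow = evolve(bed0, n_steps=1000, dt=0.1)               # reference run
fast = evolve(bed0, n_steps=100, dt=0.1, morfac=10.0)   # 10x fewer steps
```

Both runs cover the same morphological time; the accelerated run should stay within a small tolerance of the reference, analogous to remaining inside the autogenic variability band.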

  11. Comprehensive School Reform: Allocating Federal Funds.

    ERIC Educational Resources Information Center

    Education Commission of the States, Denver, CO.

    This booklet is designed to assist state leaders as they develop their process for allocating funds to schools. It suggests components of a state-allocation process that are based on research and field experience with successfully implemented comprehensive school-reform (CSR) models. The document provides guidelines for defining the eligibility of…

  12. Identity-Based Motivation: Constraints and Opportunities in Consumer Research.

    PubMed

    Shavitt, Sharon; Torelli, Carlos J; Wong, Jimmy

    2009-07-01

    This commentary underscores the integrative nature of the identity-based motivation model (Oyserman, 2009). We situate the model within existing literatures in psychology and consumer behavior, and illustrate its novel elements with research examples. Special attention is devoted to (1) how product- and brand-based affordances constrain identity-based motivation processes and (2) the mindsets and action tendencies that can be triggered by specific cultural identities in pursuit of consumer goals. Future opportunities are suggested for researching the antecedents of product meanings and relevant identities.

  13. Inhibitory mechanism of the matching heuristic in syllogistic reasoning.

    PubMed

    Tse, Ping Ping; Moreno Ríos, Sergio; García-Madruga, Juan Antonio; Bajo Molina, María Teresa

    2014-11-01

    A number of heuristic-based hypotheses have been proposed to explain how people solve syllogisms with automatic processes. In particular, the matching heuristic exploits the congruency of the quantifiers in a syllogism, matching the quantifier of the conclusion with those of the two premises. When the heuristic leads to an invalid conclusion, successful solving of these conflict problems requires the inhibition of automatic heuristic processing. Accordingly, if the automatic processing were based on processing the set of quantifiers, no semantic contents would be inhibited. The mental model theory, however, suggests that people reason using mental models, which always involves semantic processing; whatever inhibition occurs in the processing therefore implies the inhibition of semantic contents. We manipulated the validity of the syllogism and the congruency of the quantifier of its conclusion with those of the two premises according to the matching heuristic. A subsequent lexical decision task (LDT) with words related to the conclusion was used to test any inhibition of the semantic contents after each syllogistic evaluation trial. In the LDT, the facilitation effect of semantic priming diminished after correctly solved conflict syllogisms (match-invalid or mismatch-valid), but was intact after no-conflict syllogisms. The results suggest the involvement of an inhibitory mechanism of semantic contents in syllogistic reasoning when there is a conflict between the output of the syntactic heuristic and actual validity. Our results do not support a uniquely syntactic process of syllogistic reasoning but fit with the predictions of mental model theory.

  14. A model for process representation and synthesis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Thomas, R. H.

    1971-01-01

    The problem of representing groups of loosely connected processes is investigated, and a model for process representation useful for synthesizing complex patterns of process behavior is developed. There are three parts. The first isolates the concepts that form the basis for the process representation model by focusing on questions such as: What is a process? What is an event? Should one process be able to restrict the capabilities of another? The second part develops a model for process representation which captures the concepts and intuitions developed in the first part. The model presented is able to describe both the internal structure of individual processes and the interface structure between interacting processes. Much of the model's descriptive power derives from its use of the notion of process state as a vehicle for relating the internal and external aspects of process behavior. The third part demonstrates by example that the model for process representation is a useful one for synthesizing process behavior patterns; in it the model is used to define a variety of interesting process behavior patterns. The dissertation closes by suggesting how the model could be used as a semantic base for a very potent language extension facility.

  15. Modeling dynamic processes at stage of formation of parts previously subjected to high-energy laser effects

    NASA Astrophysics Data System (ADS)

    Efimov, A. E.; Maksarov, V. V.; Timofeev, D. Y.

    2018-03-01

    The present paper examines, via simulation modeling, the impact of the technological system on workpiece roughness and shape accuracy. For this purpose, a theory was formulated and a mathematical model was constructed to account for self-oscillations in the system. In accordance with the adopted theory and model, a method of eliminating oscillations based on high-energy laser irradiation of the workpiece prior to machining is suggested. Modeling the transient behaviour of the system indicated a tendency toward reduced self-oscillations in unstable processing modes, which in practical implementation has a positive effect on workpiece roughness and accuracy.

  16. A work-centered cognitively based architecture for decision support: the work-centered infomediary layer (WIL) model

    NASA Astrophysics Data System (ADS)

    Zachary, Wayne; Eggleston, Robert; Donmoyer, Jason; Schremmer, Serge

    2003-09-01

    Decision-making is strongly shaped and influenced by the work context in which decisions are embedded. This suggests that decision support needs to be anchored by a model (implicit or explicit) of the work process, in contrast to traditional approaches that anchor decision support either to context-free decision models (e.g., utility theory) or to detailed models of the external (e.g., battlespace) environment. An architecture for cognitively based, work-centered decision support called the Work-centered Infomediary Layer (WIL) is presented. WIL separates decision support into three overall processes: building and dynamically maintaining an explicit context model; using the context model to identify opportunities for decision support; and tailoring generic decision-support strategies to the current context and offering them to the system user/decision-maker. The generic decision-support strategies include such things as activity/attention aiding, decision-process structuring, work performance support (selective, contextual automation), explanation/elaboration, infosphere data retrieval, and what-if/action-projection and visualization. A WIL-based application is a work-centered decision support layer that provides active support without intent inferencing, and that is cognitively based without requiring classical cognitive task analyses. Example WIL applications are detailed and discussed.

  17. An improvement in the calculation of the efficiency of oxidative phosphorylation and rate of energy dissipation in mitochondria

    NASA Astrophysics Data System (ADS)

    Ghafuri, Mohazabeh; Golfar, Bahareh; Nosrati, Mohsen; Hoseinkhani, Saman

    2014-12-01

    The process of ATP production is one of the most vital processes in living cells and occurs with high efficiency. Thermodynamic evaluation of this process and of the factors involved in oxidative phosphorylation can provide a valuable guide for increasing energy production efficiency in research and industry. Although energy transduction has been studied qualitatively in several works, there are only a few brief reviews based on mathematical models of this subject. In our previous work, we suggested a mathematical model for ATP production based on non-equilibrium thermodynamic principles. In the present study, based on new discoveries concerning the respiratory chain of animal mitochondria, Golfar's model has been used to generate improved results for the efficiency of oxidative phosphorylation and the rate of energy loss. The results calculated from the modified coefficients for the proton pumps of the respiratory chain enzymes are closer to the experimental results and validate the model.
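
Golfar's model itself is not reproduced here, but the kind of efficiency calculation the abstract refers to can be illustrated with the classic linear non-equilibrium-thermodynamics expressions (Kedem-Caplan) for a two-flow energy converter, where x is the reduced force ratio and q the degree of coupling. Treating oxidative phosphorylation this way is our simplification, not the authors' formulation.

```python
import math

def efficiency(x, q):
    # output/input power ratio of a linear two-flow energy converter
    return -x * (x + q) / (q * x + 1.0)

def eta_max(q):
    # analytic maximum efficiency for degree of coupling q (|q| < 1)
    return q**2 / (1.0 + math.sqrt(1.0 - q**2))**2

# scanning operating points recovers the analytic optimum; the gap
# 1 - efficiency corresponds to the dissipated fraction of input power
q = 0.95
best = max(efficiency(i / 1000.0, q) for i in range(-999, 0))
```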

  18. A CLIPS-based expert system for the evaluation and selection of robots

    NASA Technical Reports Server (NTRS)

    Nour, Mohamed A.; Offodile, Felix O.; Madey, Gregory R.

    1994-01-01

    This paper describes the development of a prototype expert system for intelligent selection of robots for manufacturing operations. The paper first develops a comprehensive, three-stage process to model the robot selection problem. The decisions involved in this model easily lend themselves to an expert system application. A rule-based system, based on the selection model, is developed using the CLIPS expert system shell. Data about actual robots is used to test the performance of the prototype system. Further extensions to the rule-based system for data handling and interfacing capabilities are suggested.
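
The flavor of such a rule-based selection stage can be conveyed without CLIPS. The sketch below encodes hypothetical selection rules as predicates over robot specifications; all names and numbers are invented for illustration and are not from the paper.

```python
# Hypothetical robot facts and task requirements (not the paper's data)
robots = [
    {"name": "R1", "payload_kg": 5, "repeatability_mm": 0.02, "axes": 6},
    {"name": "R2", "payload_kg": 20, "repeatability_mm": 0.10, "axes": 4},
    {"name": "R3", "payload_kg": 12, "repeatability_mm": 0.05, "axes": 6},
]
task = {"payload_kg": 8, "repeatability_mm": 0.06, "axes": 6}

# each rule fires only if the robot satisfies one task requirement
rules = [
    lambda r: r["payload_kg"] >= task["payload_kg"],
    lambda r: r["repeatability_mm"] <= task["repeatability_mm"],
    lambda r: r["axes"] >= task["axes"],
]
candidates = [r["name"] for r in robots if all(rule(r) for rule in rules)]
```

Only robots passing every rule survive the screening stage, mirroring the filtering role of the expert system's rule base.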

  19. Theory and Practice in Participatory Research: Lessons from the Native Elder Care Study

    ERIC Educational Resources Information Center

    Goins, R. Turner; Garroutte, Eva Marie; Fox, Susan Leading; Geiger, Sarah Dee; Manson, Spero M.

    2011-01-01

    Models for community-based participatory research (CBPR) urge academic investigators to collaborate with communities to identify and pursue research questions, processes, and outcomes valuable to both partners. The tribal participatory research (TPR) conceptual model suggests modifications to CBPR to fit the special needs of American Indian…

  20. Form or function: Does focusing on body functionality protect women from body dissatisfaction when viewing media images?

    PubMed

    Mulgrew, Kate E; Tiggemann, Marika

    2018-01-01

    We examined whether shifting young women's (N = 322) attention toward functionality components of media-portrayed idealized images would protect against body dissatisfaction. Image type was manipulated via images of models in either an objectified body-as-object form or an active body-as-process form; viewing focus was manipulated via questions about the appearance or functionality of the models. Social comparison was examined as a moderator. Negative outcomes were most pronounced within the process-related conditions (body-as-process images or functionality viewing focus) and for women who reported greater functionality comparison. Results suggest that functionality-based depictions, reflections, and comparisons may actually produce worse outcomes than those based on appearance.

  1. Simulating carbon and water fluxes at Arctic and boreal ecosystems in Alaska by optimizing the modified BIOME-BGC with eddy covariance data

    NASA Astrophysics Data System (ADS)

    Ueyama, M.; Kondo, M.; Ichii, K.; Iwata, H.; Euskirchen, E. S.; Zona, D.; Rocha, A. V.; Harazono, Y.; Nakai, T.; Oechel, W. C.

    2013-12-01

    To better predict carbon and water cycles in Arctic ecosystems, we modified a process-based ecosystem model, BIOME-BGC, by introducing new processes: change in active layer depth on permafrost and phenology of tundra vegetation. The modified BIOME-BGC was then calibrated by parameter optimization: the model was constrained using gross primary productivity (GPP) and net ecosystem exchange (NEE) at 23 eddy covariance sites in Alaska, and vegetation/soil carbon from a literature survey. The model was used to simulate regional carbon and water fluxes of Alaska from 1900 to 2011. Simulated regional fluxes were validated against upscaled GPP, ecosystem respiration (RE), and NEE based on two methods: (1) a machine learning technique and (2) a top-down model. Our initial simulation suggests that the original BIOME-BGC with default ecophysiological parameters substantially underestimated GPP and RE for tundra and overestimated those fluxes for boreal forests. We discuss how optimization using the eddy covariance data impacts the historical simulation by comparing the new version of the model with results from the original BIOME-BGC with default ecophysiological parameters. This comparison suggests that incorporating the active layer depth and plant phenology processes is important when simulating carbon and water fluxes in Arctic ecosystems.

  2. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    PubMed

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. To overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data, based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  3. Examining Collaborative Knowledge Construction in Microblogging-Based Learning Environments

    ERIC Educational Resources Information Center

    Luo, Tian; Clifton, Lacey

    2017-01-01

    Aim/Purpose: The purpose of the study is to provide foundational research to exemplify how knowledge construction takes place in microblogging-based learning environments, to understand learner interaction representing the knowledge construction process, and to analyze learner perception, thereby suggesting a model of delivery for microblogging.…

  4. Functional Magnetic Resonance Imaging Clinical Trial of a Dual-Processing Treatment Protocol for Substance-Dependent Adults

    ERIC Educational Resources Information Center

    Matto, Holly C.; Hadjiyane, Maria C.; Kost, Michelle; Marshall, Jennifer; Wiley, Joseph; Strolin-Goltzman, Jessica; Khatiwada, Manish; VanMeter, John W.

    2014-01-01

    Objectives: Empirical evidence suggests substance dependence creates stress system dysregulation which, in turn, may limit the efficacy of verbal-based treatment interventions, as the recovering brain may not be functionally capable of executive level processing. Treatment models that target implicit functioning are necessary. Methods: An RCT was…

  5. Conformational analysis of a covalently cross-linked Watson-Crick base pair model.

    PubMed

    Jensen, Erik A; Allen, Benjamin D; Kishi, Yoshito; O'Leary, Daniel J

    2008-11-15

    Low-temperature NMR experiments and molecular modeling have been used to characterize the conformational behavior of a covalently cross-linked DNA base pair model. The data suggest that Watson-Crick and reverse Watson-Crick hydrogen bonding geometries have similar energies and can interconvert at low temperatures. This low-temperature process involves rotation about the crosslink CH2-C5' (psi) carbon-carbon bond, which is energetically preferred over rotation about the alternative CH2-N3 (phi) carbon-nitrogen bond.

  6. Expanding the Circle of Knowledge: Reconceptualizing Successful Aging Among North American Older Indigenous Peoples.

    PubMed

    Pace, Jessica E; Grenier, Amanda

    2017-03-01

    Indigenous older peoples' voices and experiences remain largely absent from the dominant models and critical scholarship on aging and late life. This article examines the relevance of the model of successful aging for Indigenous peoples in North America, presenting the results of a review of the published conceptual literature on successful aging among Indigenous peoples. Our intent was to explore the current state of the field and to suggest dimensions that may be more reflective of Indigenous voices and experiences, leading to a more inclusive model of successful aging. Based on our review, we suggest four dimensions that may broaden understandings of successful aging to be more inclusive of Indigenous older people: health and wellness, empowerment and resilience, engagement and behavior, and connectedness. Our review suggests that Indigenous peoples' voices and experiences are beginning to be included in the academic literature on successful aging. However, we suggest that understandings of successful aging be broadened based on our summative findings and a process of community involvement. Such processes can lead to the development of models that are more inclusive of a wide range of older people, including Indigenous older peoples.

  7. Using texts in science education: cognitive processes and knowledge representation.

    PubMed

    van den Broek, Paul

    2010-04-23

    Texts form a powerful tool in teaching concepts and principles in science. How do readers extract information from a text, and what are the limitations in this process? Central to comprehension of and learning from a text is the construction of a coherent mental representation that integrates the textual information and relevant background knowledge. This representation engenders learning if it expands the reader's existing knowledge base or if it corrects misconceptions in this knowledge base. The Landscape Model captures the reading process and the influences of reader characteristics (such as working-memory capacity, reading goal, prior knowledge, and inferential skills) and text characteristics (such as content/structure of presented information, processing demands, and textual cues). The model suggests factors that can optimize, or jeopardize, learning science from text.

  8. The Processing of Somatosensory Information Shifts from an Early Parallel into a Serial Processing Mode: A Combined fMRI/MEG Study.

    PubMed

    Klingner, Carsten M; Brodoehl, Stefan; Huonker, Ralph; Witte, Otto W

    2016-01-01

    The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI.
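
The logic of the Bayesian model comparison used in such DCM studies, ranking candidate models by approximate log evidence, can be shown with a toy stand-in. Below, BIC (a crude evidence approximation, not DCM's variational free energy) adjudicates between two regression models of synthetic data; everything here is our illustration, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
# synthetic signal: a trend plus an oscillatory component plus noise
y = 2.0 * t + 0.5 * np.sin(8.0 * t) + 0.05 * rng.normal(size=t.size)

def bic(X, y):
    # Bayesian information criterion of a least-squares fit (lower = stronger evidence)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(((y - X @ beta) ** 2).sum())
    n, k = X.shape
    return n * np.log(rss / n) + k * np.log(n)

X_trend = t[:, None]                             # candidate 1: trend only
X_full = np.column_stack([t, np.sin(8.0 * t)])   # candidate 2: trend + oscillation
winner = "full" if bic(X_full, y) < bic(X_trend, y) else "trend"
```

The same comparison over competing connectivity architectures is what lets DCM infer parallel versus serial processing routes from data.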

  10. Purpose, Processes, Partnerships, and Products: 4Ps to advance Participatory Socio-Environmental Modeling

    NASA Astrophysics Data System (ADS)

    Gray, S. G.; Voinov, A. A.; Jordan, R.; Paolisso, M.

    2016-12-01

    Model-based reasoning is a basic part of human understanding, decision-making, and communication. Including stakeholders in environmental model building and analysis is an increasingly popular approach to understanding environmental change since stakeholders often hold valuable knowledge about socio-environmental dynamics and since collaborative forms of modeling produce important boundary objects used to collectively reason about environmental problems. Although the number of participatory modeling (PM) case studies and the number of researchers adopting these approaches has grown in recent years, the lack of standardized reporting and limited reproducibility have prevented PM's establishment and advancement as a cohesive field of study. We suggest a four-dimensional framework that includes reporting on dimensions of: (1) the Purpose for selecting a PM approach (the why); (2) the Process by which the public was involved in model building or evaluation (the how); (3) the Partnerships formed (the who); and (4) the Products that resulted from these efforts (the what). We highlight four case studies that use common PM software-based approaches (fuzzy cognitive mapping, agent-based modeling, system dynamics, and participatory geospatial modeling) to understand human-environment interactions and the consequences of environmental changes, including bushmeat hunting in Tanzania and Cameroon, agricultural production and deforestation in Zambia, and groundwater management in India. We demonstrate how standardizing communication about PM case studies can lead to innovation and new insights about model-based reasoning in support of environmental policy development. We suggest that our 4P framework and reporting approach provides a way for new hypotheses to be identified and tested in the growing field of PM.
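
Of the four PM approaches named, fuzzy cognitive mapping is the easiest to sketch: concepts are nodes, signed weights encode stakeholder beliefs about influence, and the state vector is iterated through a squashing function toward a fixed point. The concepts and weights below are hypothetical illustrations, not elicited values from the case studies.

```python
import math

concepts = ["hunting_pressure", "wildlife_population", "household_income"]
W = {  # W[src][dst]: stakeholder-elicited influence of src on dst (illustrative)
    "hunting_pressure": {"wildlife_population": -0.7, "household_income": 0.4},
    "wildlife_population": {"hunting_pressure": 0.0},
    "household_income": {},
}

def step(state):
    new = {}
    for c in concepts:
        total = state[c] + sum(W[src].get(c, 0.0) * state[src] for src in concepts)
        new[c] = 1.0 / (1.0 + math.exp(-total))   # logistic squashing to (0, 1)
    return new

state = {c: 0.5 for c in concepts}
for _ in range(50):                               # iterate to an approximate fixed point
    state = step(state)
```

The converged activations become a boundary object: stakeholders can debate the weights and immediately see how scenario assumptions propagate.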

  11. Mathematical Simulation of the Process of Aerobic Treatment of Wastewater under Conditions of Diffusion and Mass Transfer Perturbations

    NASA Astrophysics Data System (ADS)

    Bomba, A. Ya.; Safonik, A. P.

    2018-05-01

    A mathematical model of the process of aerobic treatment of wastewater has been refined. It takes into account the interaction of bacteria, as well as of organic and biologically nonoxidizing substances, under conditions of diffusion and mass transfer perturbations. An algorithm for solving the corresponding nonlinear perturbed problem of convection-diffusion-mass transfer type has been constructed, and a computer experiment has been carried out based on it. The influence of the concentrations of oxygen and activated sludge on the quality of treatment is shown. Within the framework of the suggested model, the possibility of automated control of impurity deposition in a biological filter, depending on the initial parameters of the water medium, is demonstrated.
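
The convection-diffusion core of such a model can be illustrated with a minimal explicit scheme. The authors' reaction terms, perturbation treatment, and coefficients are not reproduced; every number below is illustrative.

```python
import numpy as np

nx, dx, dt = 100, 1.0, 0.2
u, D = 1.0, 0.5                    # flow velocity and diffusion coefficient
c = np.zeros(nx)
c[0] = 1.0                         # impurity concentration fixed at the inlet

def step(c):
    cn = c.copy()
    adv = -u * (c[1:-1] - c[:-2]) / dx                  # first-order upwind convection
    dif = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2  # central diffusion
    cn[1:-1] = c[1:-1] + dt * (adv + dif)
    cn[0], cn[-1] = 1.0, cn[-2]    # fixed inlet, zero-gradient outflow
    return cn

for _ in range(200):
    c = step(c)
```

The chosen dt keeps the explicit scheme stable and monotone (u\*dt/dx + 2\*D\*dt/dx\*\*2 = 0.4 <= 1), so concentrations remain within physical bounds as the front advances through the filter.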

  13. Modelling episodic acidification of surface waters: the state of science.

    PubMed

    Eshleman, K N; Wigington, P J; Davies, T D; Tranter, M

    1992-01-01

    Field studies of chemical changes in surface waters associated with rainfall and snowmelt events have provided evidence of episodic acidification of lakes and streams in Europe and North America. Modelling these chemical changes is particularly challenging because of the variability associated with hydrological transport and chemical transformation processes in catchments. This paper provides a review of mathematical models that have been applied to the problem of episodic acidification. Several empirical approaches, including regression models, mixing models and time series models, support a strong hydrological interpretation of episodic acidification. Regional application of several models has suggested that acidic episodes (in which the acid neutralizing capacity becomes negative) are relatively common in surface waters in several regions of the US that receive acid deposition. Results from physically based models have suggested a lack of understanding of hydrological flowpaths, hydraulic residence times and biogeochemical reactions, particularly those involving aluminum. The ability to better predict episodic chemical responses of surface waters is thus dependent upon elucidation of these and other physical and chemical processes.
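
In its simplest two-component form, the mixing-model approach mentioned above reduces to conservative mixing of end-members. The ANC values below are illustrative, not from the review.

```python
# Two-component mixing: streamwater during an event treated as a blend of
# pre-event groundwater and event water, with acid neutralizing capacity (ANC)
# assumed to mix conservatively.
anc_groundwater = 120.0   # ueq/L, well-buffered baseflow (illustrative)
anc_event = -30.0         # ueq/L, dilute acidic event water (illustrative)

def anc_mix(f_event):
    # f_event: fraction of event water in streamflow, between 0 and 1
    return f_event * anc_event + (1.0 - f_event) * anc_groundwater

# the episode turns acidic (ANC < 0) once event water exceeds a critical fraction
f_critical = anc_groundwater / (anc_groundwater - anc_event)
```

This makes concrete why episodic acidification is so hydrologically driven: the chemistry of the episode is controlled by the event-water fraction delivered by catchment flowpaths.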

  14. Evaluation of nursing practice: process and critique.

    PubMed

    Braunstein, M S

    1998-01-01

    This article describes the difficulties in conducting clinical trials to evaluate nursing practice models, and offers suggestions for strengthening the process. A clinical trial of a nursing practice model based on a synthesis of Aristotelian theory with Rogers' science is described, along with the rationale for decisions regarding the research procedures used. Methodological limitations of the study design and the specification of the practice model are examined. It is concluded that clear specification of theoretical relationships within a practice model and clear identification of key intervening variables will enable researchers to better connect the treatment with the outcome.

  15. Identity-Based Motivation: Constraints and Opportunities in Consumer Research

    PubMed Central

    Shavitt, Sharon; Torelli, Carlos J.; Wong, Jimmy

    2009-01-01

    This commentary underscores the integrative nature of the identity-based motivation model (Oyserman, 2009). We situate the model within existing literatures in psychology and consumer behavior, and illustrate its novel elements with research examples. Special attention is devoted to (1) how product- and brand-based affordances constrain identity-based motivation processes and (2) the mindsets and action tendencies that can be triggered by specific cultural identities in pursuit of consumer goals. Future opportunities are suggested for researching the antecedents of product meanings and relevant identities. PMID:20161045

  16. Ensuring congruency in multiscale modeling: towards linking agent based and continuum biomechanical models of arterial adaptation.

    PubMed

    Hayenga, Heather N; Thorne, Bryan C; Peirce, Shayn M; Humphrey, Jay D

    2011-11-01

    There is a need to develop multiscale models of vascular adaptations to understand tissue-level manifestations of cellular level mechanisms. Continuum-based biomechanical models are well suited for relating blood pressures and flows to stress-mediated changes in geometry and properties, but less so for describing underlying mechanobiological processes. Discrete stochastic agent-based models are well suited for representing biological processes at a cellular level, but not for describing tissue-level mechanical changes. We present here a conceptually new approach to facilitate the coupling of continuum and agent-based models. Because of ubiquitous limitations in both the tissue- and cell-level data from which one derives constitutive relations for continuum models and rule-sets for agent-based models, we suggest that model verification should enforce congruency across scales. That is, multiscale model parameters initially determined from data sets representing different scales should be refined, when possible, to ensure that common outputs are consistent. Potential advantages of this approach are illustrated by comparing simulated aortic responses to a sustained increase in blood pressure predicted by continuum and agent-based models both before and after instituting a genetic algorithm to refine 16 objectively bounded model parameters. We show that congruency-based parameter refinement not only yielded increased consistency across scales, it also yielded predictions that are closer to in vivo observations.
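    The congruency idea above, refining bounded parameters with a genetic algorithm until the two models' common outputs agree, can be sketched with toy stand-ins. The two "models" and their bounds below are hypothetical placeholders, not the authors' continuum and agent-based aortic models:

```python
import random

# Toy congruency-based refinement: a simple genetic algorithm searches
# bounded parameters of two models so their shared output agrees.
random.seed(0)

def continuum_model(stiffness):        # tissue-level prediction (toy)
    return 2.0 * stiffness + 1.0

def agent_model(cell_tension):         # cell-level prediction (toy)
    return 3.0 * cell_tension

def mismatch(params):
    stiffness, cell_tension = params
    return abs(continuum_model(stiffness) - agent_model(cell_tension))

def refine(bounds, pop_size=30, generations=60):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mismatch)
        parents = pop[:pop_size // 2]          # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)   # crossover: average
            child = [(x + y) / 2 + random.gauss(0, 0.05)  # mutate
                     for x, y in zip(a, b)]
            child = [min(max(v, lo), hi)       # respect the bounds
                     for v, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = parents + children
    return min(pop, key=mismatch)

best = refine([(0.0, 5.0), (0.0, 5.0)])
print(mismatch(best))  # small mismatch: the two outputs are congruent
```

Elitism (the best parents survive unchanged) guarantees the cross-scale mismatch never increases between generations, mirroring the refinement-toward-consistency described above.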

  17. Simple model of inhibition of chain-branching combustion processes

    NASA Astrophysics Data System (ADS)

    Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.

    2017-11-01

    A simple kinetic model is suggested to describe the inhibition and extinction of flame propagation in reaction systems with the chain-branching reactions typical of hydrocarbon systems. The model is based on a generalised model of the combustion process with a chain-branching reaction, combined with a one-stage reaction describing the thermal mode of flame propagation, with the addition of inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in the flame and changes the reaction mode from chain-branching to a thermal mode of flame propagation. With increasing inhibitor concentration, a transition from the chain-branching mode of reaction to a straight-chain (non-branching) reaction is observed. The inhibition part of the model comprises a block of three reactions describing the influence of the inhibitor. Heat losses are incorporated into the model via Newtonian cooling. Flame extinction results from the decreased heat release of the inhibited reaction processes and the suppression of the radical overshoot, with a further decrease of the reaction rate due to the temperature decrease and mixture dilution. A comparison of results from modelling laminar premixed methane/air flames inhibited by potassium bicarbonate (gas-phase, detailed kinetic model) with results obtained using the suggested simple model is presented. Calculations with the detailed kinetic model demonstrate the following modes of the combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot; inhibitor addition decreases the radical overshoot down to the equilibrium level); (2) saturation of the chemical influence of the inhibitor; and (3) transition to a thermal mode of flame propagation (non-branching chain mode of reaction). The suggested simple kinetic model qualitatively reproduces these modes of flame propagation under inhibitor addition observed using detailed kinetic models.

  18. Evaluating and improving count-based population inference: A case study from 31 years of monitoring Sandhill Cranes

    USGS Publications Warehouse

    Gerber, Brian D.; Kendall, William L.

    2017-01-01

    Monitoring animal populations can be difficult. Limited resources often force monitoring programs to rely on unadjusted or smoothed counts as an index of abundance. Smoothing counts is commonly done using a moving-average estimator to dampen sampling variation. These indices are commonly used to inform management decisions, although their reliability is often unknown. We outline a process to evaluate the biological plausibility of annual changes in population counts and indices from a typical monitoring scenario and compare results with a hierarchical Bayesian time series (HBTS) model. We evaluated spring and fall counts, fall indices, and model-based predictions for the Rocky Mountain population (RMP) of Sandhill Cranes (Antigone canadensis) by integrating juvenile recruitment, harvest, and survival into a stochastic stage-based population model. We used simulation to evaluate population indices from the HBTS model and the commonly used 3-yr moving average estimator. We found counts of the RMP to exhibit biologically unrealistic annual change, while the fall population index was largely biologically realistic. HBTS model predictions suggested that the RMP changed little over 31 yr of monitoring, but the pattern depended on assumptions about the observational process. The HBTS model fall population predictions were biologically plausible if observed crane harvest mortality was compensatory up to natural mortality, as empirical evidence suggests. Simulations indicated that the predicted mean of the HBTS model was generally a more reliable estimate of the true population than population indices derived using a moving 3-yr average estimator. Practitioners could gain considerable advantages from modeling population counts using a hierarchical Bayesian autoregressive approach. 
Advantages would include: (1) obtaining measures of uncertainty; (2) incorporating direct knowledge of the observational and population processes; (3) accommodating missing years of data; and (4) forecasting population size.
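    The 3-yr moving average estimator that the study compares against the HBTS model can be sketched in a few lines; this is a minimal trailing-window version, assuming early years simply use the shorter window available:

```python
# Trailing moving-average index for annual counts, a minimal sketch of
# the smoother commonly used by monitoring programs (window = 3 years).
def moving_average_index(counts, window=3):
    index = []
    for i in range(len(counts)):
        chunk = counts[max(0, i - window + 1):i + 1]
        index.append(sum(chunk) / len(chunk))
    return index

print(moving_average_index([100, 130, 90, 110, 150]))
```

Such an index dampens sampling variation but, unlike the hierarchical Bayesian approach, carries no measure of uncertainty and cannot accommodate missing years.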

  19. Exploring the Effect of Embedded Scaffolding Within Curricular Tasks on Third-Grade Students' Model-Based Explanations about Hydrologic Cycling

    NASA Astrophysics Data System (ADS)

    Zangori, Laura; Forbes, Cory T.; Schwarz, Christina V.

    2015-10-01

    Opportunities to generate model-based explanations are crucial for elementary students, yet are rarely foregrounded in elementary science learning environments despite evidence that early learners can reason from models when provided with scaffolding. We used a quasi-experimental research design to investigate the comparative impact of a scaffold test condition, consisting of embedded physical scaffolds within a curricular modeling task, on third-grade (age 8-9) students' formulation of model-based explanations for the water cycle. This condition was contrasted with a control condition in which third-grade students used a curricular modeling task with no embedded physical scaffolds. Students from each condition (n = 60 scaffolded; n = 56 unscaffolded) generated models of the water cycle before and after completion of a 10-week water unit. Results from quantitative analyses suggest that students in the scaffolded condition represented and linked more subsurface water process sequences with surface water process sequences than did students in the unscaffolded condition. However, results of qualitative analyses indicate that students in the scaffolded condition were less likely to build upon these process sequences to generate model-based explanations and experienced difficulties understanding their models as abstracted representations rather than recreations of real-world phenomena. We conclude that embedded curricular scaffolds may support students to consider non-observable components of the water cycle but, alone, may be insufficient for generation of model-based explanations about subsurface water movement.

  20. The Role of Light in the Emergence of Weeds: Using Camelina microcarpa as an Example.

    PubMed

    Royo-Esnal, Aritz; Gesch, Russell W; Forcella, Frank; Torra, Joel; Recasens, Jordi; Necajeva, Jevgenija

    2015-01-01

    When modelling the emergence of weeds, two main factors are considered to condition this process: temperature and soil moisture. Optimum temperature is necessary for the metabolic processes that generate energy for growth, while turgor pressure is necessary for root and shoot elongation, which eventually leads to seedling emergence from the soil. Most emergence models either omit light or treat it as a residual factor, yet it could play an important role, as it can directly or indirectly alter the dormancy and germination of seeds. In this paper, the inclusion of light, via photoperiod and solar radiation, as an additional factor in emergence models is explored and compared with the classical hydrothermal time (HTT) model, using Camelina microcarpa as an example. HTT based on hourly estimates is also compared with that based on daily estimates. Results suggest that, although HTT-based models are accurate enough for local applications, their precision is improved when HTT is estimated hourly and solar radiation is included as a factor.
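    The hourly hydrothermal time accumulation mentioned above can be sketched as follows. In this simplified version an hour contributes degree-hours only when both temperature and soil water potential exceed base thresholds; the base values used here are illustrative, not the calibrated parameters for Camelina microcarpa:

```python
# Hourly hydrothermal-time (HTT) accumulation, simplified: each hour
# contributes (T - Tb) degree-hours only when temperature exceeds the
# base temperature Tb AND soil water potential exceeds the base
# potential psi_b. Base values below are illustrative placeholders.
def hydrothermal_time(temps_c, water_potentials_mpa,
                      t_base=0.0, psi_base=-1.0):
    htt = 0.0
    for temp, psi in zip(temps_c, water_potentials_mpa):
        if temp > t_base and psi > psi_base:
            htt += temp - t_base
    return htt

# Four example hours: the cold hour and the dry hour contribute nothing
print(hydrothermal_time([5, 10, -2, 8], [-0.5, -0.2, -0.5, -2.0]))  # 15.0
```

Accumulating hourly rather than daily lets brief favourable windows count toward emergence, which is one reason the hourly estimates improve precision.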

  1. Representation of People's Decisions in Health Information Systems. A Complementary Approach for Understanding Health Care Systems and Population Health.

    PubMed

    Gonzalez Bernaldo de Quiros, Fernan; Dawidowski, Adriana R; Figar, Silvana

    2017-02-01

    In this study, we aimed (1) to conceptualize the theoretical challenges facing health information systems (HIS) in representing patients' decisions about health and medical treatments in everyday life, and (2) to suggest approaches for modeling these processes. The conceptualization of the theoretical and methodological challenges was discussed in 2015 during a series of interdisciplinary meetings attended by health informatics staff, epidemiologists, and health professionals working in quality management and primary and secondary prevention of chronic diseases at the Hospital Italiano de Buenos Aires, together with sociologists, anthropologists, and e-health stakeholders. HIS face the need and challenge to represent social human processes based on constructivist and complexity theories, which are the current frameworks of the human sciences for understanding human learning and socio-cultural change. Computer systems based on these theories can model processes of social construction of concrete and subjective entities and the interrelationships between them. These theories could be implemented, among other ways, through the mapping of health assets, analysis of social impact through community trials, and modeling of complexity with system simulation tools. This analysis suggested the need to complement the traditional linear causal explanations of disease onset (and treatment), which are the basis for models of analysis of HIS, with constructivist and complexity frameworks. Both may illuminate the complex interrelationships among patients, health services, and the health system. The aim of this strategy is to clarify people's decision-making processes in order to improve the efficiency, quality, and equity of health services and the health system.

  2. Mechanisms of placebo analgesia: A dual-process model informed by insights from cross-species comparisons.

    PubMed

    Schafer, Scott M; Geuter, Stephan; Wager, Tor D

    2018-01-01

    Placebo treatments are pharmacologically inert, but are known to alleviate symptoms across a variety of clinical conditions. Associative learning and cognitive expectations both play important roles in placebo responses; however, we are just beginning to understand how interactions between these processes lead to powerful effects. Here, we review the psychological principles underlying placebo effects and our current understanding of their brain bases, focusing on studies demonstrating the importance of cognitive expectations as well as those demonstrating expectancy-independent associative learning. To account for both forms of placebo analgesia, we propose a dual-process model in which flexible, contextually driven cognitive schemas and attributions guide associative learning processes that produce stable, long-term placebo effects. According to this model, the placebo-induction paradigms with the most powerful effects are those that combine reinforcement (e.g., the experience of reduced pain after placebo treatment) with suggestions and context cues that disambiguate learning by attributing perceived benefit to the placebo. Using this model as a conceptual scaffold, we review and compare neurobiological systems identified in both human studies of placebo analgesia and behavioral pain modulation in rodents. We identify substantial overlap between the circuits involved in human placebo analgesia and those that mediate multiple forms of context-based modulation of pain behavior in rodents, including forebrain-brainstem pathways and opioid and cannabinoid systems in particular. This overlap suggests that placebo effects are part of a set of adaptive mechanisms for shaping nociceptive signaling based on its information value and anticipated optimal response in a given behavioral context. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. A Collective Case Study of Secondary Students' Model-Based Inquiry on Natural Selection through Programming in an Agent-Based Modeling Environment

    NASA Astrophysics Data System (ADS)

    Xiang, Lin

    This is a collective case study seeking to develop detailed descriptions of how programming an agent-based simulation influences a group of 8th-grade students' model-based inquiry (MBI), by examining the students' agent-based programmable modeling (ABPM) processes and learning outcomes. The context of the present study was a biology unit on natural selection implemented in a charter school of a major California city during the spring semester of 2009. Eight 8th-grade students, two boys and six girls, participated in this study. All of them were of low socioeconomic status (SES). English was a second language for all of them, but they had been identified as fluent English speakers at least a year before the study. None of them had learned either natural selection or programming before the study. The study spanned 7 weeks and comprised two phases. In phase one, the students learned natural selection in the science classroom and learned how to program in NetLogo, an ABPM tool, in a computer lab; in phase two, the students were asked to program a simulation of adaptation based on the natural selection model in NetLogo. Both qualitative and quantitative data were collected in this study. The data sources included (1) pre- and post-test questionnaires, (2) student in-class worksheets, (3) programming planning sheets, (4) code-conception matching sheets, (5) student NetLogo projects, (6) videotaped programming processes, (7) final interviews, and (8) the investigator's field notes. Both qualitative and quantitative approaches were applied to analyze the gathered data. The findings suggested that students made progress in understanding adaptation phenomena and natural selection by the end of the ABPM-supported MBI learning, but the progress was limited. These students still held some misconceptions in their conceptual models, such as the idea that animals need to "learn" to adapt to the environment. 
In addition, their models of natural selection appeared to be incomplete, and many relationships among the model ideas had not been well established by the end of the study. Most of them did not treat the natural selection model as a whole but only focused on some ideas within the model. Very few of them could scientifically apply the natural selection model to interpret other evolutionary phenomena. The findings about participating students' programming processes revealed that these processes were composed of consecutive programming cycles. A cycle typically included posing a task, constructing and running program code, and examining the resulting simulation. Students held multiple ideas and applied various programming strategies in these cycles. Students were involved in MBI at each step of a cycle. Three types of ideas, six programming strategies, and ten MBI actions were identified from these processes. The relationships among these ideas, strategies, and actions were also identified and described. Findings suggested that ABPM activities could support MBI by (1) exposing students' personal models and understandings, (2) provoking and supporting a series of model-based inquiry activities, such as elaborating target phenomena, abstracting patterns, and revising conceptual models, and (3) provoking and supporting tangible and productive conversations among students, as well as between the instructor and students. Findings also revealed three programming behaviors that appeared to impede productive MBI: (1) solely phenomenon-oriented programming, (2) transplanting program code, and (3) blindly running procedures. Based on the findings, I propose a general modeling process in ABPM activities, summarize the ways in which MBI can be supported in ABPM activities and constrained by multiple factors, and suggest implications of this study for future ABPM-assisted science instructional design and research.

  4. Overarching framework for data-based modelling

    NASA Astrophysics Data System (ADS)

    Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco

    2014-02-01

    One of the main modelling paradigms for complex physical systems is the network. When estimating the network structure from measured signals, several assumptions, such as stationarity, are typically made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. We here propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes. To this end, we propose a rigorous mathematical theory that underlies this framework. Based on this theory, we present a highly efficient algorithm and the corresponding statistics that are directly applicable to measured signals. We demonstrate its performance in a simulation study. In experiments on transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, key to understanding and diagnosing numerous diseases such as dementia. We argue that the suggested framework combines features that approaches followed so far have lacked.

  5. The role of emotion and emotion regulation in social anxiety disorder.

    PubMed

    Jazaieri, Hooria; Morrison, Amanda S; Goldin, Philippe R; Gross, James J

    2015-01-01

    Many psychiatric disorders involve problematic patterns of emotional reactivity and regulation. In this review, we consider recent findings regarding emotion and emotion regulation in the context of social anxiety disorder (SAD). We first describe key features of SAD that suggest difficulties in emotional and self-related processing. Next, we lay the conceptual foundation for a discussion of emotion and emotion regulation and present a common framework for understanding emotion regulation, the process model of emotion regulation. Using the process model, we evaluate the recent empirical literature spanning self-report, observational, behavioral, and physiological methods across five specific families of emotion regulation processes: situation selection, situation modification, attentional deployment, cognitive change, and response modulation. Next, we examine the empirical evidence behind two psychosocial interventions for SAD: cognitive behavioral therapy (CBT) and mindfulness-based stress reduction (MBSR). Throughout, we present suggestions for future directions in the continued examination of emotion and emotion regulation in SAD.

  6. Adopting and Teaching Evidence-Based Practice in Master's-Level Social Work Programs

    ERIC Educational Resources Information Center

    Drake, Brett; Hovmand, Peter; Jonson-Reid, Melissa; Zayas, Luis H.

    2007-01-01

    This article makes specific suggestions for teaching evidence-based practice (EBP) in the master's-in-social-work (MSW) curriculum. The authors use the model of EBP as it was originally conceived: a process for posing empirically answerable questions, finding and evaluating the best available evidence, and applying that evidence in conjunction…

  7. Testing process predictions of models of risky choice: a quantitative model comparison approach

    PubMed Central

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472
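    The priority heuristic named above has a simple lexicographic form that can be sketched directly. This is a simplified version for two-outcome gain gambles, ignoring the original paper's rounding of aspiration levels to prominent numbers:

```python
# Sketch of the priority heuristic (Brandstaetter, Gigerenzer & Hertwig,
# 2006) for two-outcome gain gambles: reasons are examined in a fixed
# order and search stops at the first decisive one (no trade-offs).
def priority_heuristic(a, b):
    """a, b: gambles as (min_gain, p_min, max_gain, p_max)."""
    max_gain = max(a[2], b[2])
    # Reason 1: minimum gains (aspiration level: 1/10 of the max gain)
    if abs(a[0] - b[0]) >= 0.1 * max_gain:
        return a if a[0] > b[0] else b
    # Reason 2: probabilities of the minimum gains (aspiration: 0.1)
    if abs(a[1] - b[1]) >= 0.1:
        return a if a[1] < b[1] else b
    # Reason 3: maximum gains
    return a if a[2] > b[2] else b

# Allais-type pair: a sure 3000 vs 4000 with probability 0.8
sure = (3000, 1.0, 3000, 0.0)
risky = (0, 0.2, 4000, 0.8)
print(priority_heuristic(sure, risky) == sure)  # True
```

Because search is reason-wise and stops early, the heuristic predicts the limited, reason-ordered acquisition patterns tested in the experiments above.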

  8. Testing a Dual Process Model of Gender-Based Violence: A Laboratory Examination.

    PubMed

    Berke, Danielle S; Zeichner, Amos

    2016-01-01

    The dire impact of gender-based violence on society compels development of models comprehensive enough to capture the diversity of its forms. Research has established hostile sexism (HS) as a robust predictor of gender-based violence. However, to date, research has yet to link men's benevolent sexism (BS) to physical aggression toward women, despite correlations between BS and HS and between BS and victim blaming. One model, the opposing process model of benevolent sexism (Sibley & Perry, 2010), suggests that, for men, BS acts indirectly through HS to predict acceptance of hierarchy-enhancing social policy as an expression of a preference for in-group dominance (i.e., social dominance orientation [SDO]). The extent to which this model applies to gender-based violence remains untested. Therefore, in this study, 168 undergraduate men at a U.S. university participated in a competitive reaction time task, during which they had the option to shock an ostensible female opponent as a measure of gender-based violence. Results of multiple-mediation path analyses indicated dual pathways potentiating gender-based violence and highlight SDO as a particularly potent mechanism of this violence. Findings are discussed in terms of group dynamics and norm-based violence prevention.

  9. Phylogenetic analyses suggest that diversification and body size evolution are independent in insects.

    PubMed

    Rainford, James L; Hofreiter, Michael; Mayhew, Peter J

    2016-01-08

    Skewed body size distributions and the high relative richness of small-bodied taxa are a fundamental property of a wide range of animal clades. The evolutionary processes responsible for generating these distributions are well described in vertebrate model systems but have yet to be explored in detail for other major terrestrial clades. In this study, we explore the macro-evolutionary patterns of body size variation across families of Hexapoda (insects and their close relatives), using recent advances in phylogenetic understanding, with an aim to investigate the link between size and diversity within this ancient and highly diverse lineage. The maximum, minimum and mean-log body lengths of hexapod families are all approximately log-normally distributed, consistent with previous studies at lower taxonomic levels, and contrasting with skewed distributions typical of vertebrate groups. After taking phylogeny and within-tip variation into account, we find no evidence for a negative relationship between diversification rate and body size, suggesting decoupling of the forces controlling these two traits. Likelihood-based modeling of the log-mean body size identifies distinct processes operating within Holometabola and Diptera compared with other hexapod groups, consistent with accelerating rates of size evolution within these clades, while as a whole, hexapod body size evolution is found to be dominated by neutral processes including significant phylogenetic conservatism. Based on our findings we suggest that the use of models derived from well-studied but atypical clades, such as vertebrates may lead to misleading conclusions when applied to other major terrestrial lineages. 
Our results indicate that within hexapods, and within the limits of current systematic and phylogenetic knowledge, insect diversification is generally unfettered by size-biased macro-evolutionary processes, and that over large timescales these processes tend to converge on apparently neutral evolution. We also identify limitations in the data available within the clade and in the modeling approaches used for trees of higher taxa; addressing these may collectively enhance our understanding of this key component of terrestrial ecosystems.

  10. Models and theories of prescribing decisions: A review and a suggested new model

    PubMed Central

    Mohaidin, Zurina

    2017-01-01

    To date, research on the prescribing decisions of physicians lacks sound theoretical foundations. In fact, drug prescribing by doctors is a complex phenomenon influenced by various factors. Most of the existing studies in the area of drug prescription explain the process of decision-making by physicians via an exploratory rather than a theoretical approach. Therefore, this review attempts to suggest a conceptual model that explains the theoretical linkages between marketing efforts, the patient, the pharmacist, and the physician's decision to prescribe a drug. The paper follows an inclusive review approach and applies previous theoretical models of prescribing behaviour to identify the relational factors. More specifically, the review identifies and uses several valuable perspectives, such as persuasion theory (the elaboration likelihood model), the stimulus-response marketing model, agency theory, the theory of planned behaviour, and social power theory, in developing an innovative conceptual paradigm. Based on the combination of existing methods and previous models, this paper suggests a new conceptual model of the physician decision-making process. This model has the potential for use in further research. PMID:28690701

  11. Hierarchical species distribution models

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2016-01-01

    Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.

  12. Modelling methane emissions from natural wetlands by development and application of the TRIPLEX-GHG model

    USGS Publications Warehouse

    Zhu, Qing; Liu, Jinxun; Peng, C.; Chen, H.; Fang, X.; Jiang, H.; Yang, G.; Zhu, D.; Wang, W.; Zhou, X.

    2014-01-01

    A new process-based model, TRIPLEX-GHG, was developed based on the Integrated Biosphere Simulator (IBIS), coupled with a new methane (CH4) biogeochemistry module (incorporating CH4 production, oxidation, and transport processes) and a water table module, to investigate CH4 emission processes and dynamics in natural wetlands. Sensitivity analysis indicates that the parameters to which simulated wetland CH4 emissions are most sensitive are r (defined as the CH4 to CO2 release ratio) and the Q10 of the CH4 production process. These two parameters were subsequently calibrated against data from 19 sites drawn from approximately 35 studies across different wetlands globally. The calibrated r was heterogeneously distributed in space, ranging from 0.1 to 0.7 with a mean value of 0.23, and the Q10 for CH4 production ranged from 1.6 to 4.5 with a mean value of 2.48. The model performed well in simulating the magnitude and capturing the temporal patterns of CH4 emissions from natural wetlands. Results suggest that the model can be applied to different wetlands under varying conditions and is also applicable to global-scale simulations.
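    The Q10 parameter calibrated above expresses a standard temperature dependence that can be sketched in one line. The reference rate and reference temperature below are illustrative placeholders; only the mean Q10 value of 2.48 comes from the abstract:

```python
# Q10 temperature dependence as commonly used for CH4 production terms:
# rate(T) = rate(T_ref) * Q10 ** ((T - T_ref) / 10). The default
# q10 = 2.48 is the mean calibrated value quoted above; r_ref and
# t_ref are illustrative placeholders.
def ch4_production_rate(r_ref, temp_c, t_ref=10.0, q10=2.48):
    return r_ref * q10 ** ((temp_c - t_ref) / 10.0)

# Every 10 degree C of warming multiplies production by Q10 (2.48 here)
print(ch4_production_rate(1.0, 20.0) / ch4_production_rate(1.0, 10.0))
```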

  13. Process-based, morphodynamic hindcast of decadal deposition patterns in San Pablo Bay, California, 1856-1887

    USGS Publications Warehouse

    van der Wegen, M.; Jaffe, B.E.; Roelvink, J.A.

    2011-01-01

    This study investigates the possibility of hindcasting observed decadal-scale morphologic change in San Pablo Bay, a subembayment of the San Francisco Estuary, California, USA, by means of a 3-D numerical model (Delft3D). The hindcast period, 1856-1887, is characterized by upstream hydraulic mining that resulted in a high sediment input to the estuary. The model includes wind waves, salt water and fresh water interactions, and graded sediment transport, among other processes. Simplified initial conditions and hydrodynamic forcing were necessary because detailed historic descriptions were lacking. Model results show significant skill. The river discharge and sediment concentration have a strong positive influence on deposition volumes. Waves decrease deposition rates and have, together with tidal movement, the greatest effect on sediment distribution within San Pablo Bay. The applied process-based (or reductionist) modeling approach is valuable once reasonable values for model parameters and hydrodynamic forcing are obtained. Sensitivity analysis reveals the dominant forcing of the system and suggests that the model planform plays a dominant role in the morphodynamic development. A detailed physical explanation of the model outcomes is difficult because of the high nonlinearity of the processes. Process formulation refinement, a more detailed description of the forcing, or further model parameter variations may lead to enhanced model performance, albeit to a limited extent. The approach potentially provides a sound basis for prediction of future developments. Parallel use of highly schematized box models and a process-based approach as described in the present work is probably the most valuable method for assessing decadal morphodynamic development. Copyright © 2011 by the American Geophysical Union.

  14. Generalised additive modelling approach to the fermentation process of glutamate.

    PubMed

    Liu, Chun-Bo; Li, Yun; Pan, Feng; Shi, Zhong-Ping

    2011-03-01

In this work, generalised additive models (GAMs) were used for the first time to model the fermentation of glutamate (Glu). It was found that three fermentation parameters, fermentation time (T), dissolved oxygen (DO) and oxygen uptake rate (OUR), could capture 97% of the variance in Glu production during the fermentation process through a GAM model calibrated using online data from 15 fermentation experiments. This model was applied to investigate the individual and combined effects of T, DO and OUR on the production of Glu. Conditions to optimize the fermentation process were proposed based on a simulation study with this model. Results suggested that the production of Glu can reach a high level by controlling DO and OUR at the proposed optimal levels during the fermentation process. The GAM approach therefore provides an alternative way to model and optimize the fermentation process of Glu. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
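The additive structure of a GAM, in which the response is a sum of smooth functions of each input, can be sketched with a simple basis-expansion fit. The data, basis choice, and parameters below are illustrative stand-ins, not the paper's calibration (which fitted a dedicated GAM to online fermentation data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
T, DO, OUR = rng.uniform(0, 1, (3, n))          # scaled fermentation inputs
# Toy truth: an additive signal plus noise.
y = np.sin(2*np.pi*T) + 0.5*DO**2 + OUR + rng.normal(0, 0.05, n)

def basis(x, degree=5):
    """Polynomial basis (no intercept) standing in for a spline smoother."""
    return np.column_stack([x**d for d in range(1, degree + 1)])

# Additive design matrix: intercept + one smooth-term block per input.
X = np.column_stack([np.ones(n), basis(T), basis(DO), basis(OUR)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ coef

# Fraction of variance captured, analogous to the 97% in the abstract.
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
print(round(r2, 3))
```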

  15. A risk-based auditing process for pharmaceutical manufacturers.

    PubMed

    Vargo, Susan; Dana, Bob; Rangavajhula, Vijaya; Rönninger, Stephan

    2014-01-01

    The purpose of this article is to share ideas on developing a risk-based model for the scheduling of audits (both internal and external). Audits are a key element of a manufacturer's quality system and provide an independent means of evaluating the manufacturer's or the supplier/vendor's compliance status. Suggestions for risk-based scheduling approaches are discussed in the article. Pharmaceutical manufacturers are required to establish and implement a quality system. The quality system is an organizational structure defining responsibilities, procedures, processes, and resources that the manufacturer has established to ensure quality throughout the manufacturing process. Audits are a component of the manufacturer's quality system and provide a systematic and an independent means of evaluating the manufacturer's overall quality system and compliance status. Audits are performed at defined intervals for a specified duration. The intention of the audit process is to focus on key areas within the quality system and may not cover all relevant areas during each audit. In this article, the authors provide suggestions for risk-based scheduling approaches to aid pharmaceutical manufacturers in identifying the key focus areas for an audit.

  16. An approach to developing an integrated pyroprocessing simulator

    NASA Astrophysics Data System (ADS)

    Lee, Hyo Jik; Ko, Won Il; Choi, Sung Yeol; Kim, Sung Ki; Kim, In Tae; Lee, Han Soo

    2014-02-01

Pyroprocessing has been studied for a decade as one of the promising fuel recycling options in Korea. We have built a pyroprocessing integrated inactive demonstration facility (PRIDE) to assess the feasibility of integrated pyroprocessing technology and scale-up issues of the processing equipment. Even though such a facility cannot replace a real integrated facility using spent nuclear fuel (SF), many insights can be obtained from operating the world's largest integrated pyroprocessing facility. To complement or overcome such limited test-based research, a pyroprocessing modelling and simulation study began in 2011. The Korea Atomic Energy Research Institute (KAERI) suggested a modelling architecture for the development of a multi-purpose pyroprocessing simulator consisting of three tiers: a unit process model, an operation model, and a plant-level model. The unit process model can be addressed using governing equations or empirical equations as a continuous system (CS). In contrast, the operation model describes operational behaviors as a discrete event system (DES). The plant-level model integrates the unit process and operation models with various analysis modules. An interface with different systems, the incorporation of different codes, a process-centered database design, and a dynamic material flow are discussed as necessary components for building a framework of the plant-level model. A sample model addressing the above engineering issues was thoroughly reviewed, verifying the architecture for building the plant-level model. By analyzing a combined process and operation model, we showed that the suggested approach is effective for comprehensively understanding an integrated dynamic material flow. This paper addresses the current status of the pyroprocessing modelling and simulation activity at KAERI and outlines its path forward.

  17. The adaptive safety analysis and monitoring system

    NASA Astrophysics Data System (ADS)

    Tu, Haiying; Allanach, Jeffrey; Singh, Satnam; Pattipati, Krishna R.; Willett, Peter

    2004-09-01

The Adaptive Safety Analysis and Monitoring (ASAM) system is a hybrid model-based software tool for assisting intelligence analysts in identifying terrorist threats, predicting the possible evolution of terrorist activities, and suggesting strategies for countering terrorism. The ASAM system provides a distributed processing structure for gathering, sharing, understanding, and using information to assess and predict terrorist network states. In combination with counter-terrorist network models, it can also suggest feasible actions to inhibit potential terrorist threats. In this paper, we introduce the architecture of the ASAM system and discuss the hybrid modeling approach embedded in it, viz., Hidden Markov Models (HMMs) to detect and provide soft evidence on the states of terrorist network nodes based on partial and imperfect observations, and Bayesian networks (BNs) to integrate soft evidence from multiple HMMs. The functionality of the ASAM system is illustrated by way of application to the Indian Airlines Hijacking, as modeled from open sources.
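The HMM half of this hybrid can be sketched with the standard forward (filtering) recursion, whose normalized output is exactly the kind of "soft evidence" the abstract describes passing on to a Bayesian network. The states, observations, and probabilities below are invented for illustration, not the ASAM system's models:

```python
import numpy as np

def forward_posterior(pi, A, B, obs):
    """HMM forward filtering: returns P(state_t | obs_1..t) for each t.
    pi: initial state probs, A[i, j]: transition i->j,
    B[i, k]: P(observation k | state i)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    out = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()          # normalize -> filtered posterior
        out.append(alpha)
    return np.array(out)

# Illustrative 2-state activity model (state 0: dormant, state 1: active).
pi = np.array([0.9, 0.1])
A  = np.array([[0.95, 0.05],
               [0.10, 0.90]])
B  = np.array([[0.8, 0.2],            # observation likelihoods per state
               [0.3, 0.7]])
obs = [0, 1, 1, 1]                    # partial, imperfect observations
post = forward_posterior(pi, A, B, obs)
print(post[-1])                       # soft evidence for the Bayesian network
```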

  18. Processes of aggression described by kinetic method

    NASA Astrophysics Data System (ADS)

    Aristov, V. V.; Ilyin, O.

    2014-12-01

In recent decades, many investigations have been devoted to theoretical models in new areas concerning the description of different biological, sociological and historical processes. In the present paper we suggest a model of Nazi Germany's invasion of Poland, France and the USSR based on kinetic theory. We model this process with a Cauchy problem for the two-element kinetic equations with spatially varying initial conditions. The solution of the problem is given in the form of a traveling wave. The propagation velocity of a frontline depends on the ratio of the initial force concentrations. Moreover, the general solution of the model can be expressed in terms of quadratures and elementary functions. Finally, it is shown that the computed frontline velocities are consistent with the historical data.
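A generic two-element kinetic system of the kind described, written in our own notation (the paper's exact equations and closure may differ), pairs two counter-propagating densities with pairwise attrition; a simple flux-balance argument across the front then ties the frontline speed to the ratio of the initial concentrations:

```latex
% u(x,t): attacker density advancing at speed c; v(x,t): defender density
\begin{aligned}
\partial_t u + c\,\partial_x u &= -k\,u\,v,\\
\partial_t v - c\,\partial_x v &= -k\,u\,v.
\end{aligned}
% In the frame of a front moving at speed V, the fluxes of the two
% populations into the front, (c - V)u_0 and (c + V)v_0, must balance,
% since each encounter removes one unit of each population:
(c - V)\,u_0 = (c + V)\,v_0
\quad\Longrightarrow\quad
V = c\,\frac{u_0 - v_0}{u_0 + v_0}.
```

The sign of V flips with the ratio u_0/v_0, matching the abstract's statement that the frontline velocity is set by the quotient of the initial force concentrations.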

  19. Comparison of Modeling Approaches for Carbon Partitioning: Impact on Estimates of Global Net Primary Production and Equilibrium Biomass of Woody Vegetation from MODIS GPP

    NASA Astrophysics Data System (ADS)

    Ise, T.; Litton, C. M.; Giardina, C. P.; Ito, A.

    2009-12-01

Plant partitioning of carbon (C) to above- vs. belowground, to growth vs. respiration, and to short- vs. long-lived tissues exerts a large influence on ecosystem structure and function, with implications for the global C budget. Importantly, outcomes of process-based terrestrial vegetation models are likely to vary substantially with different C partitioning algorithms. However, controls on C partitioning patterns remain poorly quantified, and studies have yielded variable, and at times contradictory, results. A recent meta-analysis of forest studies suggests that the ratio of net primary production (NPP) to gross primary production (GPP) is fairly conservative across large scales. To illustrate the effect of this meta-analysis-based partitioning scheme (MPS), we applied MPS to satellite-based (MODIS) GPP to estimate NPP and compared the results with two global process-based vegetation models (Biome-BGC and VISIT) to examine the influence of C partitioning on C budgets of woody plants. Due to the temperature dependence of maintenance respiration, NPP/GPP predicted by the process-based models increased with latitude, while the ratio remained constant with MPS. Overall, global NPP estimated with MPS was 17 and 27% lower than the process-based models for temperate and boreal biomes, respectively, with smaller differences in the tropics. Global equilibrium biomass of woody plants was then calculated from the NPP estimates and tissue turnover rates from VISIT. Since turnover rates differed greatly across tissue types (i.e., metabolically active vs. structural), global equilibrium biomass estimates were sensitive to the partitioning scheme employed. The MPS estimate of global woody biomass was 7-21% lower than that of the process-based models. In summary, we found that model output for NPP and equilibrium biomass was quite sensitive to the choice of C partitioning scheme. 
[Figure caption: Carbon use efficiency (CUE; NPP/GPP) by forest biome and the globe; values are means for 2001-2006.]
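The contrast between the two partitioning schemes can be made concrete in a few lines: a constant NPP/GPP ratio applied to GPP, versus a scheme in which the respired fraction depends on temperature so that CUE rises toward colder biomes. All GPP values, temperatures, and respiration parameters below are illustrative, not the study's:

```python
import numpy as np

gpp = np.array([3000.0, 1500.0, 800.0])   # g C m-2 yr-1: tropical, temperate, boreal
temp = np.array([26.0, 12.0, 2.0])        # mean annual temperature, deg C

# Scheme 1: constant carbon-use efficiency (the meta-analysis-based MPS).
CUE_MPS = 0.46
npp_mps = CUE_MPS * gpp

# Scheme 2: Q10-style maintenance respiration (hypothetical parameters),
# so the respired fraction shrinks, and CUE grows, as it gets colder.
q10, r_ref = 2.0, 0.5                     # respired fraction is r_ref at 25 C
resp_frac = r_ref * q10 ** ((temp - 25.0) / 10.0)
npp_proc = gpp * (1.0 - resp_frac)

print(np.round(npp_mps))    # flat CUE across biomes
print(np.round(npp_proc))   # NPP/GPP increases toward the boreal zone
```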

  20. Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform

    PubMed Central

    Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong

    2016-01-01

    We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features. PMID:27304979

  1. Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform.

    PubMed

    Wu, Hau-Tieng; Wu, Han-Kuei; Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong

    2016-01-01

    We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features.
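The adaptive non-harmonic model underlying both records represents an oscillatory signal as a slowly varying amplitude times a non-sinusoidal, 1-periodic wave-shape function evaluated at a slowly varying phase. A synthetic sketch of that representation (illustrative amplitude, phase, and harmonic content; the SST analysis itself is not reproduced here):

```python
import numpy as np

# Adaptive non-harmonic model: s(t) = A(t) * w(phi(t)), where w is a
# 1-periodic wave-shape function (a pulse waveform is far from
# sinusoidal) and A and phi' vary slowly.
fs, dur = 200.0, 10.0
t = np.arange(0, dur, 1/fs)

A = 1.0 + 0.1*np.sin(0.5*t)             # slowly varying amplitude
phase = 1.2*t + 0.05*np.sin(0.3*t)      # instantaneous freq ~1.2 Hz (~72 bpm)

def wave_shape(u):
    """1-periodic non-sinusoidal shape: fundamental plus two harmonics."""
    return (np.cos(2*np.pi*u) + 0.5*np.cos(4*np.pi*u + 0.8)
            + 0.25*np.cos(6*np.pi*u + 1.5))

s = A * wave_shape(phase)

# The spectrum concentrates at ~1.2 Hz and its harmonics.
freqs = np.fft.rfftfreq(len(s), 1/fs)
spec = np.abs(np.fft.rfft(s))
f0 = freqs[np.argmax(spec)]
print(f0)
```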

  2. Anti-gravity with present technology - Implementation and theoretical foundation

    NASA Astrophysics Data System (ADS)

    Alzofon, F. E.

    1981-07-01

    This paper proposes a semi-empirical model of the processes leading to the gravitational field based on accepted features of subatomic processes. Through an analogy with methods of cryogenics, a method of decreasing (or increasing) the gravitational force on a vehicle, using presently-known technology, is suggested. Various ways of ultilizing this effect in vehicle propulsion are described. A unified field theory is then detailed which provides a more formal foundation for the gravitational field model first introduced. In distinction to the general theory of relativity, it features physical processes which generate the gravitational field.

  3. Synchrony and motor mimicking in chimpanzee observational learning

    PubMed Central

    Fuhrmann, Delia; Ravignani, Andrea; Marshall-Pescini, Sarah; Whiten, Andrew

    2014-01-01

Cumulative tool-based culture underwrote our species' evolutionary success, and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However, the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function. PMID:24923651

  4. Synchrony and motor mimicking in chimpanzee observational learning.

    PubMed

    Fuhrmann, Delia; Ravignani, Andrea; Marshall-Pescini, Sarah; Whiten, Andrew

    2014-06-13

Cumulative tool-based culture underwrote our species' evolutionary success, and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However, the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function.
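The claim that cross-correlation exceeded chance level suggests a surrogate-based test; one standard version compares the observed peak cross-correlation against a null distribution built from circularly shifted copies of one series. The movement series below are synthetic stand-ins, not the study's frame-by-frame data, and the authors' exact statistic may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
model_mv = rng.normal(size=n)                                    # model's movement
observer_mv = 0.6*np.roll(model_mv, 3) + 0.8*rng.normal(size=n)  # lagged echo

def peak_xcorr(a, b, max_lag=10):
    """Peak absolute normalized cross-correlation over small lags."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return max(abs(np.mean(a * np.roll(b, lag)))
               for lag in range(-max_lag, max_lag + 1))

observed = peak_xcorr(model_mv, observer_mv)
# Null: large circular shifts destroy any genuine model-observer alignment.
null = [peak_xcorr(model_mv, np.roll(observer_mv, rng.integers(20, n - 20)))
        for _ in range(200)]
p_value = np.mean([x >= observed for x in null])
print(observed, p_value)
```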

  5. Activated sludge model (ASM) based modelling of membrane bioreactor (MBR) processes: a critical review with special regard to MBR specificities.

    PubMed

    Fenu, A; Guglielmi, G; Jimenez, J; Spèrandio, M; Saroj, D; Lesjean, B; Brepols, C; Thoeye, C; Nopens, I

    2010-08-01

Membrane bioreactors (MBRs) have been increasingly employed for municipal and industrial wastewater treatment in the last decade. Modelling efforts for such wastewater treatment systems have targeted both the biological processes (treatment quality) and various engineering aspects (cost-effective design and operation). The development of Activated Sludge Models (ASM) was an important evolution in the modelling of Conventional Activated Sludge (CAS) processes and their use is now very well established. However, although they were initially developed to describe CAS processes, they have simply been transferred and applied to MBR processes. Recent studies on MBR biological processes have reported several crucial specificities: medium to very high sludge retention times, high mixed liquor concentration, accumulation of soluble microbial products (SMP) rejected by the membrane filtration step, and high aeration rates for scouring purposes. These aspects raise the question as to what extent the ASM framework is applicable to MBR processes. Several studies highlighting some of the aforementioned issues are scattered through the literature. Hence, through a concise and structured overview of the past developments and current state-of-the-art in biological modelling of MBR, this review explores ASM-based modelling applied to MBR processes. The work aims to synthesize previous studies and differentiates between unmodified and modified applications of ASM to MBR. Particular emphasis is placed on influent fractionation, biokinetics, and soluble microbial products (SMP)/exo-polymeric substances (EPS) modelling, and suggestions are put forward as to good modelling practice with regard to MBR modelling both for end-users and academia. A last section highlights shortcomings and future needs for improved biological modelling of MBR processes. (c) 2010 Elsevier Ltd. All rights reserved.
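The SMP question at the heart of the review can be illustrated with a deliberately tiny model: Monod growth extended with growth-associated SMP production, with the membrane assumed to retain SMP completely. This is a didactic sketch with invented parameters, not ASM1/ASM2d or any published MBR extension:

```python
# Toy Monod growth + growth-associated SMP formation, explicit Euler.
mu_max, Ks, Y = 0.25, 10.0, 0.6      # 1/h, mg/L, g biomass per g substrate
k_smp, b = 0.02, 0.01                # SMP yield on growth, decay rate (1/h)

S, X, SMP = 200.0, 500.0, 0.0        # substrate, biomass, retained SMP (mg/L)
dt, hours = 0.01, 24
for _ in range(int(hours / dt)):
    mu = mu_max * S / (Ks + S)       # Monod specific growth rate
    dS = -(mu / Y) * X               # substrate consumption
    dX = (mu - b) * X                # growth minus decay
    dSMP = k_smp * mu * X            # growth-associated SMP production
    S = max(S + dS * dt, 0.0)
    X += dX * dt
    SMP += dSMP * dt

# Substrate is consumed while SMP accumulates behind the membrane.
print(round(S, 1), round(X), round(SMP, 1))
```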

  6. Facial affect processing and depression susceptibility: cognitive biases and cognitive neuroscience.

    PubMed

    Bistricky, Steven L; Ingram, Rick E; Atchley, Ruth Ann

    2011-11-01

Facial affect processing is essential to social development and functioning and is particularly relevant to models of depression. Although cognitive and interpersonal theories have long described different pathways to depression, cognitive-interpersonal and evolutionary social risk models of depression focus on the interrelation of interpersonal experience, cognition, and social behavior. We therefore review the burgeoning depressive facial affect processing literature and examine its potential for integrating disciplines, theories, and research. In particular, we evaluate studies in which information processing or cognitive neuroscience paradigms were used to assess facial affect processing in depressed and depression-susceptible populations. Most studies have assessed and supported cognitive models. This research suggests that depressed and depression-vulnerable groups show abnormal facial affect interpretation, attention, and memory, although findings vary based on depression severity, comorbid anxiety, or length of time faces are viewed. Facial affect processing biases appear to correspond with distinct neural activity patterns and increased depressive emotion and thought. Biases typically emerge in depressed moods but are occasionally found in the absence of such moods. Indirect evidence suggests that childhood neglect might cultivate abnormal facial affect processing, which can impede social functioning in ways consistent with cognitive-interpersonal and interpersonal models. However, reviewed studies provide mixed support for the social risk model prediction that depressive states prompt cognitive hypervigilance to social threat information. We recommend prospective interdisciplinary research examining whether facial affect processing abnormalities promote, or are promoted by, depressogenic attachment experiences, negative thinking, and social dysfunction.

  7. Intelligent Context-Aware and Adaptive Interface for Mobile LBS

    PubMed Central

    Liu, Yanhong

    2015-01-01

Context-aware user interfaces play an important role in many human-computer interaction tasks of location based services. Although spatial models for context-aware systems have been studied extensively, how to locate specific spatial information for users is still not well resolved, which is important in the mobile environment where location based services users are impeded by device limitations. Better context-aware human-computer interaction models of mobile location based services are needed not just to predict performance outcomes, such as whether people will be able to find the information needed to complete a human-computer interaction task, but also to understand the human processes involved in spatial queries, which will in turn inform the detailed design of better user interfaces in mobile location based services. In this study, a context-aware adaptive model for mobile location based services interfaces is proposed, which contains three major sections: purpose, adjustment, and adaptation. Based on this model, we describe the process of user operation and interface adaptation through the dynamic interaction between users and the interface. We then show how the model accommodates users' demands in a complicated environment and demonstrate its feasibility through experimental results. PMID:26457077

  8. The impact of media type on shared decision processes in third-age populations.

    PubMed

    Reychav, Iris; Najami, Inam; Raban, Daphne Ruth; McHaney, Roger; Azuri, Joseph

    2018-04-01

To examine the relationship between the medium through which medical information was made available (e.g. digital versus printed) and patients' desire to play an active part in a medical decision in a shared decision making (SDM) or informed shared decision making (ISDM) based process. The goal of this research was to expand knowledge concerning social and personal factors that affect and explain patients' willingness to participate in the process. A questionnaire was distributed in this empirical study of 103 third-age participants. A theoretical model formed the basis for the study and utilized a variety of factors from technology acceptance, as well as personal and environmental influences, to investigate the likelihood of subjects preferring a certain decision-making approach. The research population included men and women aged 65 or older who resided in five assisted living facilities in Israel. The sample was split randomly into 2 groups: one group used digital information and the other print. A path analysis was conducted, using Structural Equation Modelling (SEM) in AMOS SPSS, to determine the influence of the information's mode of presentation on the patient's choice of the SDM or ISDM model. When digital media was accessible, the information's perceived usefulness (PU) led participants to choose an ISDM-based process; this was not true with printed information. When information was available online, higher self-efficacy (SE) led participants to prefer an SDM-based process. When the information was available in print, a direct positive influence was found on the participant's choice of SDM, while a direct negative influence was found on their choice of an ISDM-based process. PU was found to be affected by external peer influences, particularly when resources were made available in print. This meant that digital resources tended to be accepted at face value more readily. Cognitive absorption had a positive effect on the research variables only when the information was available digitally. 
The findings suggest the use of digital information may be related to cognitive functions of older adults, since the use of digital technology and information requires more cognitive effort. The study illustrates factors that make patients choose SDM or ISDM-based processes in third-age populations. In general, the results suggest that, even though a physician may attempt to place the patient in the center of the decision process, printed information does not empower the patient in the same way that digital resources do. This may have wider ramifications if the patient does not buy into the treatment plan and becomes less motivated to comply with the treatment. Another key contribution of this research is to identify processes that reflect information assessment and adoption, and the behaviors related to medical decision making, both as a model and as a process. This study suggests what health care professionals should expect to see as the transition to more digital information sources becomes the norm among the elderly population. Future research is needed to examine this model under different conditions, and to check for other variables and mechanisms perceived as mediators in the choice of SDM or ISDM processes. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. From creatures of habit to goal-directed learners: Tracking the developmental emergence of model-based reinforcement learning

    PubMed Central

    Decker, Johannes H.; Otto, A. Ross; Daw, Nathaniel D.; Hartley, Catherine A.

    2016-01-01

Theoretical models distinguish two decision-making strategies that have been formalized in reinforcement-learning theory. A model-based strategy leverages a cognitive model of potential actions and their consequences to make goal-directed choices, whereas a model-free strategy evaluates actions based solely on their reward history. Research in adults has begun to elucidate the psychological mechanisms and neural substrates underlying these learning processes and factors that influence their relative recruitment. However, the developmental trajectory of these evaluative strategies has not been well characterized. In this study, children, adolescents, and adults performed a sequential reinforcement-learning task that enables estimation of model-based and model-free contributions to choice. Whereas a model-free strategy was evident in choice behavior across all age groups, evidence of a model-based strategy only emerged during adolescence and continued to increase into adulthood. These results suggest that recruitment of model-based valuation systems represents a critical cognitive component underlying the gradual maturation of goal-directed behavior. PMID:27084852
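The two strategies the task dissociates can be sketched side by side on a miniature two-stage task: a model-free learner caches stage-1 values from reward alone, while a model-based learner combines a learned transition model with stage-2 values. Task structure and parameters below are illustrative, not the study's exact design:

```python
import numpy as np

rng = np.random.default_rng(0)
p_common = 0.7                           # P(common transition)
reward_p = np.array([0.8, 0.2])          # payout probability of each stage-2 state
alpha = 0.05

q_mf = np.zeros(2)                       # model-free: cached stage-1 action values
q_stage2 = np.zeros(2)                   # learned value of each stage-2 state
counts = np.ones((2, 2))                 # transition counts (Laplace prior)

for _ in range(4000):
    a = rng.integers(2)                  # explore both stage-1 actions
    state = a if rng.random() < p_common else 1 - a
    r = float(rng.random() < reward_p[state])
    q_stage2[state] += alpha * (r - q_stage2[state])
    q_mf[a] += alpha * (r - q_mf[a])     # model-free: reward history only
    counts[a, state] += 1                # model-based: learn the transitions

trans = counts / counts.sum(axis=1, keepdims=True)
q_mb = trans @ q_stage2                  # model-based: plan through the model
print(np.round(q_mb, 2), np.round(q_mf, 2))
```

Both value estimates converge here because the environment is static; the two strategies come apart after rare transitions, which is what the sequential task exploits.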

  10. Integrating psychological and neurobiological considerations regarding the development and maintenance of specific Internet-use disorders: An Interaction of Person-Affect-Cognition-Execution (I-PACE) model.

    PubMed

    Brand, Matthias; Young, Kimberly S; Laier, Christian; Wölfling, Klaus; Potenza, Marc N

    2016-12-01

Within the last two decades, many studies have addressed the clinical phenomenon of Internet-use disorders, with a particular focus on Internet-gaming disorder. Based on previous theoretical considerations and empirical findings, we suggest an Interaction of Person-Affect-Cognition-Execution (I-PACE) model of specific Internet-use disorders. The I-PACE model is a theoretical framework for the processes underlying the development and maintenance of an addictive use of certain Internet applications or sites promoting gaming, gambling, pornography viewing, shopping, or communication. The model is composed as a process model. Specific Internet-use disorders are considered to be the consequence of interactions between predisposing factors, such as neurobiological and psychological constitutions, moderators, such as coping styles and Internet-related cognitive biases, and mediators, such as affective and cognitive responses to situational triggers in combination with reduced executive functioning. Conditioning processes may strengthen these associations within an addiction process. Although the hypotheses regarding the mechanisms underlying the development and maintenance of specific Internet-use disorders, summarized in the I-PACE model, must be further tested empirically, implications for treatment interventions are suggested. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Hydrologic-Process-Based Soil Texture Classifications for Improved Visualization of Landscape Function

    PubMed Central

    Groenendyk, Derek G.; Ferré, Ty P.A.; Thorp, Kelly R.; Rice, Amy K.

    2015-01-01

Soils lie at the interface between the atmosphere and the subsurface and are a key component that controls ecosystem services, food production, and many other processes at the Earth’s surface. There is a long-established convention for identifying and mapping soils by texture. These readily available, georeferenced soil maps and databases are used widely in environmental sciences. Here, we show that these traditional soil classifications can be inappropriate, contributing to bias and uncertainty in applications from slope stability to water resource management. We suggest a new approach to soil classification, with a detailed example from the science of hydrology. Hydrologic simulations based on common meteorological conditions were performed using HYDRUS-1D, spanning textures identified by the United States Department of Agriculture soil texture triangle. We consider these common conditions to be: drainage from saturation, infiltration onto a drained soil, and combined infiltration and drainage events. Using a k-means clustering algorithm, we created soil classifications based on the modeled hydrologic responses of these soils. The hydrologic-process-based classifications were compared to those based on soil texture and a single hydraulic property, Ks. Differences in classifications based on hydrologic response versus soil texture demonstrate that traditional soil texture classification is a poor predictor of hydrologic response. We then developed a QGIS plugin to construct soil maps combining a classification with georeferenced soil data from the Natural Resource Conservation Service. The spatial patterns of hydrologic response were more immediately informative, much simpler, and less ambiguous, for use in applications ranging from trafficability to irrigation management to flood control. 
The ease with which hydrologic-process-based classifications can be made, along with the improved quantitative predictions of soil responses and visualization of landscape function, suggest that hydrologic-process-based classifications should be incorporated into environmental process models and can be used to define application-specific maps of hydrologic function. PMID:26121466

  12. Hydrologic-Process-Based Soil Texture Classifications for Improved Visualization of Landscape Function.

    PubMed

    Groenendyk, Derek G; Ferré, Ty P A; Thorp, Kelly R; Rice, Amy K

    2015-01-01

Soils lie at the interface between the atmosphere and the subsurface and are a key component that controls ecosystem services, food production, and many other processes at the Earth's surface. There is a long-established convention for identifying and mapping soils by texture. These readily available, georeferenced soil maps and databases are used widely in environmental sciences. Here, we show that these traditional soil classifications can be inappropriate, contributing to bias and uncertainty in applications from slope stability to water resource management. We suggest a new approach to soil classification, with a detailed example from the science of hydrology. Hydrologic simulations based on common meteorological conditions were performed using HYDRUS-1D, spanning textures identified by the United States Department of Agriculture soil texture triangle. We consider these common conditions to be: drainage from saturation, infiltration onto a drained soil, and combined infiltration and drainage events. Using a k-means clustering algorithm, we created soil classifications based on the modeled hydrologic responses of these soils. The hydrologic-process-based classifications were compared to those based on soil texture and a single hydraulic property, Ks. Differences in classifications based on hydrologic response versus soil texture demonstrate that traditional soil texture classification is a poor predictor of hydrologic response. We then developed a QGIS plugin to construct soil maps combining a classification with georeferenced soil data from the Natural Resource Conservation Service. The spatial patterns of hydrologic response were more immediately informative, much simpler, and less ambiguous, for use in applications ranging from trafficability to irrigation management to flood control. 
The ease with which hydrologic-process-based classifications can be made, along with the improved quantitative predictions of soil responses and visualization of landscape function, suggest that hydrologic-process-based classifications should be incorporated into environmental process models and can be used to define application-specific maps of hydrologic function.
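The classification step can be sketched with a plain k-means over simulated response curves. The exponential drainage curves below are synthetic stand-ins for HYDRUS-1D output, and the clusterer is seeded with one curve per known response type to keep the sketch deterministic:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 50)

# 30 "soils" drawn from three response types (fast/medium/slow drainage).
rates = np.concatenate([rng.uniform(1.5, 2.0, 10),
                        rng.uniform(0.5, 0.8, 10),
                        rng.uniform(0.05, 0.15, 10)])
curves = np.exp(-np.outer(rates, t))       # normalized water content vs time

def kmeans(X, centers, iters=50):
    """Plain Lloyd's algorithm on the response curves."""
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels

# Seed one center per known response type so the toy example converges cleanly.
labels = kmeans(curves, curves[[0, 10, 20]].copy())
print(labels)
```

Soils cluster by how they drain, not by what they are made of, which is the paper's central point.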

  13. Numerical Modeling of Unsteady Thermofluid Dynamics in Cryogenic Systems

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok

    2003-01-01

    A finite volume based network analysis procedure has been applied to model unsteady flow with and without heat transfer. Liquid has been modeled as a compressible fluid whose compressibility factor is computed from the equation of state for a real fluid. The modeling approach recognizes that pressure oscillation is linked with the variation of the compressibility factor; therefore, the speed of sound does not explicitly appear in the governing equations. The numerical results of the chilldown process also suggest that the flow and heat transfer are strongly coupled, as evidenced by the mass flow rate increasing by a factor of ten during the 90-second chilldown process.
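The coupling of pressure to the compressibility factor can be illustrated with a toy equation of state. The Z(rho) form, gas constant, and temperature below are arbitrary assumptions, not the paper's model; the point is that a finite difference of p(rho) yields an implied sound speed even though it never appears explicitly in the governing equations:

```python
import math

R = 461.5          # J/(kg*K), illustrative specific gas constant
T = 300.0          # K, illustrative temperature

def Z(rho):
    """Hypothetical compressibility factor; a real-fluid equation of state
    would supply this in practice."""
    return 1.0 - 0.0005 * rho

def pressure(rho):
    """Equation of state: p = Z(rho) * rho * R * T."""
    return Z(rho) * rho * R * T

# The speed of sound is implied by dp/drho rather than appearing explicitly.
rho0, drho = 50.0, 1e-4
a = math.sqrt((pressure(rho0 + drho) - pressure(rho0 - drho)) / (2 * drho))
print(f"implied sound speed: {a:.1f} m/s")
```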

  14. Recent topographic evolution and erosion of the deglaciated Washington Cascades inferred from a stochastic landscape evolution model

    NASA Astrophysics Data System (ADS)

    Moon, S.; Shelef, E.; Hilley, G. E.

    2013-12-01

    The Washington Cascades is currently in topographic and erosional disequilibrium following deglaciation around 11-17 ka. The topography still shows the features inherited from prior alpine glacial processes (e.g., cirques, steep side-valleys, and flat valley bottoms), though postglacial processes are currently denuding this landscape. Our previous study in this area calculated the thousand-year-timescale denudation rates using cosmogenic 10Be concentration (CRN-denudation rates), and showed that they were ~four times higher than million-year-timescale uplift rates. In addition, the spatial distribution of denudation rates showed a good correlation with a factor-of-ten variation in precipitation. We interpreted this correlation as reflecting the sensitivity of landslide triggering in over-steepened deglaciated topography to precipitation, which produced high denudation rates in wet areas that experienced frequent landsliding. We explored this interpretation using a model of postglacial surface processes that predicts the evolution of the topography and denudation rates within the deglaciated Washington Cascades. Specifically, we used the model to understand the controls on and timescales of landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes processes of river incision, hillslope diffusion, and stochastic landslides. The surface lowering due to landslides is parameterized using a physically-based slope stability model coupled to a stochastic model of the generation of landslides. The model parameters of river incision and stochastic landslides are calibrated based on the rates and distribution of thousand-year-timescale denudation rates measured from cosmogenic 10Be isotopes. 
The probability distribution of model parameters required to fit the observed denudation rates shows ranges comparable to those from previous studies in similar rock types and climatic conditions. The calibrated parameters suggest that river sediments originate predominantly from stochastic landslides. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), while their spatial distribution is largely controlled by precipitation and slope angles. Simulation results show that denudation rates decay over time and take approximately 130-180 ka to reach steady-state rates. This response timescale is longer than glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may prevent these types of landscapes from reaching a dynamic equilibrium with postglacial processes.

  15. Observational clues to the energy release process in impulsive solar bursts

    NASA Technical Reports Server (NTRS)

    Batchelor, David

    1990-01-01

    The nature of the energy release process that produces impulsive bursts of hard X-rays and microwaves during solar flares is discussed, based on new evidence obtained using the method of Crannell et al. (1978). It is shown that the hard X-ray spectral index gamma is negatively correlated with the microwave peak frequency, suggesting a common source for the microwaves and X-rays. The thermal and nonthermal models are compared. It is found that the most straightforward explanations for burst time behavior are shock-wave particle acceleration in the nonthermal model and thermal conduction fronts in the thermal model.

  16. Fish tracking by combining motion based segmentation and particle filtering

    NASA Astrophysics Data System (ADS)

    Bichot, E.; Mascarilla, L.; Courtellemont, P.

    2006-01-01

    In this paper, we suggest a new importance sampling scheme to improve a particle-filtering-based tracking process. This scheme relies on the exploitation of motion segmentation. More precisely, we propagate hypotheses from particle filtering to blobs whose motion is similar to that of the target. Hence, search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the strategy for updating the target model improve the performance of particle filtering in complex occlusion situations compared to a simple Bootstrap approach, as shown by our experiments on real fish tank sequences.
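A minimal sketch of the idea, assuming a 1-D state and treating the motion-segmentation blobs as hypothetical detection centers that steer the importance-sampling proposal (none of this reproduces the paper's actual tracker):

```python
import math, random
random.seed(1)

def track(observations, blob_hints, n=500):
    """Minimal 1-D particle filter; blob_hints stand in for motion-segmentation
    blob centers and steer the importance-sampling proposal."""
    particles = [random.uniform(0, 100) for _ in range(n)]
    estimates = []
    for z, blob in zip(observations, blob_hints):
        # Proposal: half the particles are drawn near the blob (regions of
        # interest), the rest diffuse as in a plain Bootstrap filter.
        particles = [random.gauss(blob, 2.0) if i % 2 == 0 else random.gauss(p, 5.0)
                     for i, p in enumerate(particles)]
        weights = [math.exp(-0.5 * ((z - p) / 3.0) ** 2) for p in particles]
        total = sum(weights)
        particles = random.choices(particles, weights=[w / total for w in weights], k=n)
        estimates.append(sum(particles) / n)
    return estimates

true_path = [10, 14, 19, 25, 30]                  # simulated fish positions
obs = [x + random.gauss(0, 1.0) for x in true_path]
est = track(obs, blob_hints=obs)                  # blobs ~ noisy detections
print([round(e, 1) for e in est])
```

Steering the proposal toward blobs concentrates particles where the target plausibly is, which is what makes the scheme more robust than blind diffusion when occlusions thin out the useful particle set.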

  17. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing.

    PubMed

    Leong, Siow Hoo; Ong, Seng Huat

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, similarity measures, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features by the MBF suggests domain adaptation, i.e., changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.
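The clustering stage can be approximated by a tiny 1-D Gaussian mixture EM; the data, initialization, and equal-weight simplification below are all assumptions for illustration, not the authors' algorithm. A modified-Bayes-factor-style novelty test on a new feature's likelihood could then trigger adding a component (domain adaptation):

```python
import math, random
random.seed(0)

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_1d(data, iters=30):
    """Tiny two-component 1-D Gaussian mixture EM (equal weights, illustrative)."""
    mus = [min(data), max(data)]             # deterministic initialization
    vars_ = [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            ps = [gauss_pdf(x, m, v) for m, v in zip(mus, vars_)]
            s = sum(ps) + 1e-300             # guard against underflow
            resp.append([p / s for p in ps])
        # M-step: re-estimate means and variances
        for j in range(2):
            w = [r[j] for r in resp]
            sw = sum(w)
            mus[j] = sum(wi * x for wi, x in zip(w, data)) / sw
            vars_[j] = max(sum(wi * (x - mus[j]) ** 2
                               for wi, x in zip(w, data)) / sw, 1e-3)
    return mus, vars_

# Synthetic stand-in for pixel intensities of two image regions
data = ([random.gauss(50, 3) for _ in range(100)]
        + [random.gauss(120, 5) for _ in range(100)])
mus, vars_ = em_1d(data)
print(sorted(round(m) for m in mus))
```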

  18. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing

    PubMed Central

    Leong, Siow Hoo

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, similarity measures, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features by the MBF suggests domain adaptation, i.e., changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index. PMID:28686634

  19. Simulating Single Word Processing in the Classic Aphasia Syndromes Based on the Wernicke-Lichtheim-Geschwind Theory

    ERIC Educational Resources Information Center

    Weems, Scott A.; Reggia, James A.

    2006-01-01

    The Wernicke-Lichtheim-Geschwind (WLG) theory of the neurobiological basis of language is of great historical importance, and it continues to exert a substantial influence on most contemporary theories of language in spite of its widely recognized limitations. Here, we suggest that neurobiologically grounded computational models based on the WLG…

  20. A UML approach to process modelling of clinical practice guidelines for enactment.

    PubMed

    Knape, T; Hederman, L; Wade, V P; Gargan, M; Harris, C; Rahman, Y

    2003-01-01

    Although clinical practice guidelines (CPGs) have been suggested as a means of encapsulating best practice in evidence-based medical treatment, their usage in clinical environments has been disappointing. Criticisms of guideline representations have been that they are predominantly narrative and are difficult to incorporate into clinical information systems. This paper analyses the use of UML process modelling techniques for guideline representation and proposes the automated generation of executable guidelines using XMI. This hybrid UML-XMI approach provides flexible authoring of guideline decision and control structures whilst integrating appropriate data flow. It also uses an open XMI standard interface to allow the use of authoring tools and process control systems from multiple vendors. The paper first surveys CPG modelling formalisms followed by a brief introduction to process modelling in UML. Furthermore, the modelling of CPGs in UML is presented leading to a case study of encoding a diabetes mellitus CPG using UML.

  1. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of ? and has the features of quadratic models, which need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. The minimax solution provides a suitable initial point from which to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.
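To see why a rational (Padé) model can beat a linear one at the same data cost, compare a univariate [1/1] Padé approximant of exp(x) with its first-order Taylor model (this univariate toy is an assumption for illustration; the paper's model is multivariate):

```python
import math

def pade_11_exp(x):
    """[1/1] Pade approximant of exp(x) about 0: (1 + x/2) / (1 - x/2)."""
    return (1 + x / 2) / (1 - x / 2)

def linear_exp(x):
    """First-order Taylor model of exp(x)."""
    return 1 + x

x = 0.5
exact = math.exp(x)
print(f"Pade error:   {abs(pade_11_exp(x) - exact):.4f}")
print(f"linear error: {abs(linear_exp(x) - exact):.4f}")
```

Both models use the same local information (value and slope at 0), yet the rational form tracks the curvature of exp(x) far better away from the expansion point, which is the motivation for Padé surrogates in trust-region optimization.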

  2. The Partisan Brain: An Identity-Based Model of Political Belief.

    PubMed

    Van Bavel, Jay J; Pereira, Andrea

    2018-03-01

    Democracies assume accurate knowledge by the populace, but the human attraction to fake and untrustworthy news poses a serious problem for healthy democratic functioning. We articulate why and how identification with political parties - known as partisanship - can bias information processing in the human brain. There is extensive evidence that people engage in motivated political reasoning, but recent research suggests that partisanship can alter memory, implicit evaluation, and even perceptual judgments. We propose an identity-based model of belief for understanding the influence of partisanship on these cognitive processes. This framework helps to explain why people place party loyalty over policy, and even over truth. Finally, we discuss strategies for de-biasing information processing to help to create a shared reality across partisan divides. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. A new computational growth model for sea urchin skeletons.

    PubMed

    Zachos, Louis G

    2009-08-07

    A new computational model has been developed to simulate growth of regular sea urchin skeletons. The model incorporates the processes of plate addition and individual plate growth into a composite model of whole-body (somatic) growth. A simple developmental model based on hypothetical morphogens underlies the assumptions used to define the simulated growth processes. The data model is based on a Delaunay triangulation of plate growth center points, using the dual Voronoi polygons to define plate topologies. A spherical frame of reference is used for growth calculations, with affine deformation of the sphere (based on a Young-Laplace membrane model) to result in an urchin-like three-dimensional form. The model verifies that the patterns of coronal plates in general meet the criteria of Voronoi polygonalization, that a morphogen/threshold inhibition model for plate addition results in the alternating plate addition pattern characteristic of sea urchins, and that application of the Bertalanffy growth model to individual plates results in simulated somatic growth that approximates that seen in living urchins. The model suggests avenues of research that could explain some of the distinctions between modern sea urchins and the much more disparate groups of forms that characterized the Paleozoic Era.
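The individual-plate growth component can be sketched with the Bertalanffy model mentioned above (parameter values are invented for illustration):

```python
import math

def bertalanffy(t, L_inf=10.0, k=0.5, t0=0.0):
    """von Bertalanffy growth: size approaches the asymptote L_inf with rate k."""
    return L_inf * (1 - math.exp(-k * (t - t0)))

# Simulated plate sizes at successive ages (arbitrary units)
sizes = [round(bertalanffy(t), 2) for t in range(0, 11, 2)]
print(sizes)
```

The decelerating, asymptote-bound curve is what lets simulated somatic growth level off the way living urchins' growth does.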

  4. The Pioneering Work of Enrico Morselli (1852-1929) in Light of Modern Scientific Research on Hypnosis and Suggestion.

    PubMed

    Bartolucci, Chiara; Lombardo, Giovanni Pietro

    2017-01-01

    This article examines research on hypnosis and suggestion, starting with the nineteenth-century model proposed by Enrico Morselli (1852-1929), an illustrious Italian psychiatrist and psychologist. Morselli conducted an original psychophysiological analysis of hypnosis, distancing his work from the neuropathological concept of the time and proposing a model based on a naturalistic approach to investigating mental processes. The issues investigated by Morselli, including the definition of hypnosis and the analysis of specific mental processes such as attention and memory, are reviewed in light of modern research. From the viewpoint of modern neuroscientific concepts, some problems that originated in the nineteenth century still appear to be present and pose still-open questions.

  5. Development Of Educational Programs In Renewable And Alternative Energy Processing: The Case Of Russia

    NASA Astrophysics Data System (ADS)

    Svirina, Anna; Shindor, Olga; Tatmyshevsky, Konstantin

    2014-12-01

    The paper deals with the main problems of Russian energy system development, which make it necessary to provide educational programs in the field of renewable and alternative energy. The paper describes the process of curriculum development and the definition of teaching techniques on the basis of expert opinion evaluation, and suggests a competence model for master's students in renewable and alternative energy processing. Data for statistical analysis were obtained from a distributed questionnaire and in-depth interviews. On the basis of these data, the curriculum structure was optimized and three models for structuring teaching techniques were developed. The resulting educational program structure, which was adopted by employers, is presented in the paper. The findings include a quantitative estimate of the importance of systemic thinking and professional skills and knowledge as basic competences of a master's program graduate; a statistical demonstration of the necessity of a practice-based learning approach; and optimization models for structuring curricula in renewable and alternative energy processing. These findings establish a platform for the development of educational programs.

  6. VPPA weld model evaluation

    NASA Technical Reports Server (NTRS)

    Mccutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.

    1992-01-01

    NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970s, but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments were used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.

  7. Agent-based models in translational systems biology

    PubMed Central

    An, Gary; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram

    2013-01-01

    Effective translational methodologies for knowledge representation are needed in order to make strides against the constellation of diseases that affect the world today. These diseases are defined by their mechanistic complexity, redundancy, and nonlinearity. Translational systems biology aims to harness the power of computational simulation to streamline drug/device design, simulate clinical trials, and eventually to predict the effects of drugs on individuals. The ability of agent-based modeling to encompass multiple scales of biological process as well as spatial considerations, coupled with an intuitive modeling paradigm, suggests that this modeling framework is well suited for translational systems biology. This review describes agent-based modeling and gives examples of its translational applications in the context of acute inflammation and wound healing. PMID:20835989

  8. A virtual maintenance-based approach for satellite assembling and troubleshooting assessment

    NASA Astrophysics Data System (ADS)

    Geng, Jie; Li, Ying; Wang, Ranran; Wang, Zili; Lv, Chuan; Zhou, Dong

    2017-09-01

    In this study, a Virtual Maintenance (VM)-based approach for satellite troubleshooting assessment is proposed. By focusing on various elements in satellite assembly troubleshooting, such as accessibility, ergonomics, wiring, and extent of damage, a systematic, quantitative, and objective assessment model is established to decrease subjectivity in satellite assembling and troubleshooting assessment. Afterwards, based on the established assessment model and a satellite virtual prototype, an application process of this model suitable for a virtual environment is presented. Finally, according to the application process, all the elements in satellite troubleshooting are analyzed and assessed. The corresponding improvements, which realize the transformation from a conventional approach to virtual simulation and assessment, are suggested, and flaws in assembling and troubleshooting are revealed. Assembling and troubleshooting schemes can be improved in the early stage of satellite design with the help of a virtual prototype, and repeated rehearsal in the virtual environment benefits companies by effectively reducing risk and cost.

  9. Drawing-to-Learn: A Framework for Using Drawings to Promote Model-Based Reasoning in Biology

    PubMed Central

    Quillin, Kim; Thomas, Stephen

    2015-01-01

    The drawing of visual representations is important for learners and scientists alike, such as the drawing of models to enable visual model-based reasoning. Yet few biology instructors recognize drawing as a teachable science process skill, as reflected by its absence in the Vision and Change report’s Modeling and Simulation core competency. Further, the diffuse research on drawing can be difficult to access, synthesize, and apply to classroom practice. We have created a framework of drawing-to-learn that defines drawing, categorizes the reasons for using drawing in the biology classroom, and outlines a number of interventions that can help instructors create an environment conducive to student drawing in general and visual model-based reasoning in particular. The suggested interventions are organized to address elements of affect, visual literacy, and visual model-based reasoning, with specific examples cited for each. Further, a Blooming tool for drawing exercises is provided, as are suggestions to help instructors address possible barriers to implementing and assessing drawing-to-learn in the classroom. Overall, the goal of the framework is to increase the visibility of drawing as a skill in biology and to promote the research and implementation of best practices. PMID:25713094

  10. Two processes support visual recognition memory in rhesus monkeys.

    PubMed

    Guderian, Sebastian; Brigham, Danielle; Mishkin, Mortimer

    2011-11-29

    A large body of evidence in humans suggests that recognition memory can be supported by both recollection and familiarity. Recollection-based recognition is characterized by the retrieval of contextual information about the episode in which an item was previously encountered, whereas familiarity-based recognition is characterized instead by knowledge only that the item had been encountered previously in the absence of any context. To date, it is unknown whether monkeys rely on similar mnemonic processes to perform recognition memory tasks. Here, we present evidence from the analysis of receiver operating characteristics, suggesting that visual recognition memory in rhesus monkeys also can be supported by two separate processes and that these processes have features considered to be characteristic of recollection and familiarity. Thus, the present study provides converging evidence across species for a dual process model of recognition memory and opens up the possibility of studying the neural mechanisms of recognition memory in nonhuman primates on tasks that are highly similar to the ones used in humans.
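The dual-process signature in an ROC can be sketched directly: with recollection probability R and familiarity strength d, the hit rate is R + (1-R)*Phi(d-c), so the curve's left end stays above R instead of falling to the chance diagonal. The parameter values below are illustrative assumptions, not fits to the monkey data:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def dpsd_roc(R=0.3, d=1.0, criteria=(-1.5, -0.75, 0.0, 0.75, 1.5)):
    """Dual-process ROC points: recollection succeeds with probability R;
    otherwise a familiarity signal of strength d drives an old/new decision
    at criterion c. False alarms come from familiarity alone."""
    return [(phi(-c), R + (1 - R) * phi(d - c)) for c in criteria]

for fa, hit in dpsd_roc():
    print(round(fa, 3), round(hit, 3))
```

The non-zero intercept (hits bounded below by R as false alarms shrink) is the kind of ROC asymmetry that analyses like the one above use to separate recollection from familiarity.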

  11. Two processes support visual recognition memory in rhesus monkeys

    PubMed Central

    Guderian, Sebastian; Brigham, Danielle; Mishkin, Mortimer

    2011-01-01

    A large body of evidence in humans suggests that recognition memory can be supported by both recollection and familiarity. Recollection-based recognition is characterized by the retrieval of contextual information about the episode in which an item was previously encountered, whereas familiarity-based recognition is characterized instead by knowledge only that the item had been encountered previously in the absence of any context. To date, it is unknown whether monkeys rely on similar mnemonic processes to perform recognition memory tasks. Here, we present evidence from the analysis of receiver operating characteristics, suggesting that visual recognition memory in rhesus monkeys also can be supported by two separate processes and that these processes have features considered to be characteristic of recollection and familiarity. Thus, the present study provides converging evidence across species for a dual process model of recognition memory and opens up the possibility of studying the neural mechanisms of recognition memory in nonhuman primates on tasks that are highly similar to the ones used in humans. PMID:22084079

  12. Comparing single- and dual-process models of memory development.

    PubMed

    Hayes, Brett K; Dunn, John C; Joubert, Amy; Taylor, Robert

    2017-11-01

    This experiment examined single-process and dual-process accounts of the development of visual recognition memory. The participants, 6-7-year-olds, 9-10-year-olds and adults, were presented with a list of pictures which they encoded under shallow or deep conditions. They then made recognition and confidence judgments about a list containing old and new items. We replicated the main trends reported by Ghetti and Angelini in that recognition hit rates increased from 6 to 9 years of age, with larger age changes following deep than shallow encoding. Formal versions of the dual-process high-threshold signal detection model and several single-process models (equal-variance signal detection, unequal-variance signal detection, mixture signal detection) were fit to the developmental data. The unequal-variance and mixture signal detection models gave a better account of the data than either of the other models. A state-trace analysis found evidence for only one underlying memory process across the age range tested. These results suggest that single-process memory models based on memory strength are a viable alternative to dual-process models for explaining memory development. © 2016 John Wiley & Sons Ltd.
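The unequal-variance signal detection model mentioned above predicts a linear zROC with slope 1/sigma_old, giving the sub-unit slopes typical of recognition data. A short sketch (the parameter values are illustrative, and the probit here is a crude bisection rather than a library routine):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def probit(p):
    """Inverse normal CDF by bisection (crude but adequate for a sketch)."""
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def zroc_slope(mu_old=1.2, sigma_old=1.4, criteria=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Unequal-variance signal detection: old-item strength ~ N(mu_old,
    sigma_old^2), new items ~ N(0, 1). The zROC is then linear with
    slope 1/sigma_old."""
    pts = [(probit(phi(-c)), probit(phi((mu_old - c) / sigma_old)))
           for c in criteria]
    (x0, y0), (x1, y1) = pts[0], pts[-1]
    return (y1 - y0) / (x1 - x0)

print(round(zroc_slope(), 3))
```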

  13. Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution.

    PubMed

    Djordjevic, Ivan B

    2015-08-24

    Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron transfer in proteins, and evolution, to mention a few. In our recent paper published in Life, we derived the operator-sum representation of a biological channel based on codon basekets and determined the quantum channel model suitable for the study of quantum biological channel capacity. However, this model is essentially memoryless and cannot properly model the propagation of mutation errors in time, the process of aging, or the evolution of genetic information through generations. To solve these problems, we propose novel quantum mechanical models that accurately describe the creation of spontaneous, induced, and adaptive mutations and their propagation in time. The different biological channel models with memory proposed in this paper include: (i) a Markovian classical model, (ii) a Markovian-like quantum model, and (iii) a hybrid quantum-classical model. We then apply these models in a study of the aging and evolution of quantum biological channel capacity through generations. We also discuss key differences between these models and a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We show that the famous quantum master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model when the observation interval tends to zero. 
One of the important implications of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which are mutually coupled.
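The classical Markovian building block of these models can be sketched as a two-state chain for a codon site; the transition probabilities below are invented for illustration, and this omits the quantum and hybrid variants entirely:

```python
# Two-state chain per codon site: 0 = wild-type, 1 = mutated
# (transition probabilities are invented for illustration).
P = [[0.99, 0.01],   # wild-type: stays, or mutates
     [0.05, 0.95]]   # mutated: repaired, or persists

def step(dist, P):
    """One generation: propagate the state distribution through the chain."""
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

dist = [1.0, 0.0]                  # start fully wild-type
history = []
for generation in range(200):
    dist = step(dist, P)
    history.append(dist[1])

# The mutated fraction relaxes toward the stationary value 0.01/(0.01+0.05) = 1/6
print(round(history[-1], 3))
```

Unlike a memoryless channel, the state carried between steps lets mutation errors accumulate and propagate across generations, which is the behavior the models with memory are built to capture.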

  14. Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution

    PubMed Central

    Djordjevic, Ivan B.

    2015-01-01

    Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron transfer in proteins, and evolution, to mention a few. In our recent paper published in Life, we derived the operator-sum representation of a biological channel based on codon basekets and determined the quantum channel model suitable for the study of quantum biological channel capacity. However, this model is essentially memoryless and cannot properly model the propagation of mutation errors in time, the process of aging, or the evolution of genetic information through generations. To solve these problems, we propose novel quantum mechanical models that accurately describe the creation of spontaneous, induced, and adaptive mutations and their propagation in time. The different biological channel models with memory proposed in this paper include: (i) a Markovian classical model, (ii) a Markovian-like quantum model, and (iii) a hybrid quantum-classical model. We then apply these models in a study of the aging and evolution of quantum biological channel capacity through generations. We also discuss key differences between these models and a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We show that the famous quantum master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model when the observation interval tends to zero. 
One of the important implications of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which are mutually coupled. PMID:26305258

  15. Distillation Designs for the Lunar Surface

    NASA Technical Reports Server (NTRS)

    Boul, Peter J.; Lange, Kevin E.; Conger, Bruce; Anderson, Molly

    2010-01-01

    Gravity-based distillation methods may be applied to the purification of wastewater at a lunar base. These solutions to water processing are robust physical separation techniques, which may be more advantageous than many other techniques for their simplicity in design and operation. The two techniques can be used in conjunction with each other to obtain high-purity water. The components and feed compositions for modeling wastewater streams are presented in conjunction with the Aspen property system for traditional stage distillation. While the individual components of each waste stream will vary naturally within certain bounds, an analog model for wastewater processing is suggested based on typical concentration ranges for these components. Target purity levels for recycled water are determined for each individual component based on NASA's required maximum contaminant levels for potable water. Optimum parameters such as reflux ratio, feed stage location, and processing rates are determined with respect to the power consumption of the process. Multistage distillation is evaluated for components in wastewater to determine the minimum number of stages necessary for each of 65 components in mixed humidity condensate and urine wastewater streams.

  16. Parallel-hierarchical processing and classification of laser beam profile images based on the GPU-oriented architecture

    NASA Astrophysics Data System (ADS)

    Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan

    2017-08-01

    The paper deals with the insufficient productivity of existing computational means for large-image processing, which do not meet the modern requirements posed by resource-intensive computing tasks in laser beam profiling. The research concentrated on one of the profiling problems, namely real-time processing of spot images of the laser beam profile. The development of a theory of parallel-hierarchical transformation allowed us to produce models of high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation based on a GPU-oriented architecture using GPGPU technologies. The analyzed performance of the suggested computerized tools for processing and classification of laser beam profile images shows that they can process dynamic images of various sizes in real time.

  17. Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans

    PubMed Central

    Schreiber, Darren; Fonzo, Greg; Simmons, Alan N.; Dawes, Christopher T.; Flagan, Taru; Fowler, James H.; Paulus, Martin P.

    2013-01-01

    Liberals and conservatives exhibit different cognitive styles, and converging lines of evidence suggest that biology influences differences in their political attitudes and beliefs. In particular, a recent study of young adults suggests that liberals and conservatives have significantly different brain structure, with liberals showing increased gray matter volume in the anterior cingulate cortex and conservatives showing increased gray matter volume in the amygdala. Here, we explore differences in brain function in liberals and conservatives by matching publicly available voter records to 82 subjects who performed a risk-taking task during functional imaging. Although the risk-taking behavior of Democrats (liberals) and Republicans (conservatives) did not differ, their brain activity did. Democrats showed significantly greater activity in the left insula, while Republicans showed significantly greater activity in the right amygdala. In fact, a two-parameter model of partisanship based on amygdala and insula activations fits partisanship better than a well-established model based on parental socialization of party identification, long thought to be one of the core findings of political science. These results suggest that liberals and conservatives engage different cognitive processes when they think about risk, and they support recent evidence that conservatives show greater sensitivity to threatening stimuli. PMID:23418419

  18. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    PubMed

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that the process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experience; goal-directed behaviors, by contrast, are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects several kinds of conflict, including a hypothetical cost conflict. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
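    The model-free/model-based distinction the abstract builds on can be illustrated on a deliberately trivial one-step task. This is a sketch of the two controller types only, not the author's conflict-monitoring model; the task, learning rate, and trial count are invented:

```python
import random
random.seed(0)

# A tiny deterministic task: action 0 yields reward 0.0, action 1 yields 1.0.
REWARD = {0: 0.0, 1: 1.0}

# Model-free (habitual): cache action values incrementally from trial and error.
q = {0: 0.0, 1: 0.0}
alpha = 0.1
for _ in range(200):
    a = random.choice([0, 1])
    q[a] += alpha * (REWARD[a] - q[a])   # TD-style delta-rule update

# Model-based (goal-directed): learn the outcome model, then evaluate by lookahead.
model = dict(REWARD)                     # here the learnt model is simply perfect
v = {a: model[a] for a in model}         # values are computed on demand, not cached

best_free = max(q, key=q.get)
best_based = max(v, key=v.get)
print(best_free, best_based)             # both controllers come to prefer action 1
```

    The habitual controller needs many experiences to converge, while the model-based one can re-evaluate instantly if the model changes; that asymmetry is what makes conflict between the two systems worth monitoring.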

  19. Bridging analytical approaches for low-carbon transitions

    NASA Astrophysics Data System (ADS)

    Geels, Frank W.; Berkhout, Frans; van Vuuren, Detlef P.

    2016-06-01

    Low-carbon transitions are long-term multi-faceted processes. Although integrated assessment models have many strengths for analysing such transitions, their mathematical representation requires a simplification of the causes, dynamics and scope of such societal transformations. We suggest that integrated assessment model-based analysis should be complemented with insights from socio-technical transition analysis and practice-based action research. We discuss the underlying assumptions, strengths and weaknesses of these three analytical approaches. We argue that full integration of these approaches is not feasible, because of foundational differences in philosophies of science and ontological assumptions. Instead, we suggest that bridging, based on sequential and interactive articulation of different approaches, may generate a more comprehensive and useful chain of assessments to support policy formation and action. We also show how these approaches address knowledge needs of different policymakers (international, national and local), relate to different dimensions of policy processes and speak to different policy-relevant criteria such as cost-effectiveness, socio-political feasibility, social acceptance and legitimacy, and flexibility. A more differentiated set of analytical approaches thus enables a more differentiated approach to climate policy making.

  20. Marine mammals' influence on ecosystem processes affecting fisheries in the Barents Sea is trivial.

    PubMed

    Corkeron, Peter J

    2009-04-23

    Some interpretations of ecosystem-based fishery management include culling marine mammals as an integral component. The current Norwegian policy on marine mammal management is one example. Scientific support for this policy includes the Scenario Barents Sea (SBS) models, which modelled interactions between cod, Gadus morhua, herring, Clupea harengus, capelin, Mallotus villosus, and northern minke whales, Balaenoptera acutorostrata. Adding harp seals, Phoca groenlandica, to this top-down modelling approach resulted in unrealistic model outputs. Another set of models of the Barents Sea fish-fisheries system focused on interactions within and between the three fish populations, fisheries and climate, and these successfully model the key processes of the system. Continuing calls to support the SBS models despite their failure suggest a belief that marine mammal predation must be a problem for fisheries. The best available scientific evidence provides no justification for marine mammal culls as a primary component of an ecosystem-based approach to managing the fisheries of the Barents Sea.

  1. Evolving MCDM Applications Using Hybrid Expert-Based ISM and DEMATEL Models: An Example of Sustainable Ecotourism

    PubMed Central

    Chuang, Huan-Ming

    2013-01-01

    Ecological degradation is an escalating global threat. People are increasingly aware of, and give priority to, the environmental problems surrounding them, so environmental protection issues are highlighted. An appropriate information technology tool, the increasingly popular social network system (virtual community, VC), facilitates public education and engagement by applying itself effectively to existing problems. In particular, the involvement behavior underlying VC member engagement is an interesting topic. Member engagement processes comprise interrelated sub-processes that reflect an interactive experience within VCs as well as the value co-creation model. Addressing top-focused ecotourism VCs, this study presents an application of hybrid expert-based ISM and DEMATEL models, built on multi-criteria decision-making tools, to investigate the complex, multidimensional, and dynamic nature of member engagement. Our research findings provide insightful managerial implications and suggest that the viral marketing of ecotourism protection concerns practitioners and academicians alike. PMID:24453902

  2. Evolving MCDM applications using hybrid expert-based ISM and DEMATEL models: an example of sustainable ecotourism.

    PubMed

    Chuang, Huan-Ming; Lin, Chien-Ku; Chen, Da-Ren; Chen, You-Shyang

    2013-01-01

    Ecological degradation is an escalating global threat. People are increasingly aware of, and give priority to, the environmental problems surrounding them, so environmental protection issues are highlighted. An appropriate information technology tool, the increasingly popular social network system (virtual community, VC), facilitates public education and engagement by applying itself effectively to existing problems. In particular, the involvement behavior underlying VC member engagement is an interesting topic. Member engagement processes comprise interrelated sub-processes that reflect an interactive experience within VCs as well as the value co-creation model. Addressing top-focused ecotourism VCs, this study presents an application of hybrid expert-based ISM and DEMATEL models, built on multi-criteria decision-making tools, to investigate the complex, multidimensional, and dynamic nature of member engagement. Our research findings provide insightful managerial implications and suggest that the viral marketing of ecotourism protection concerns practitioners and academicians alike.
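    Of the two MCDM tools named here, DEMATEL reduces to a small matrix computation: normalize the expert direct-influence matrix, sum its powers to obtain the total-relation matrix, then read off each factor's prominence and net causal role. A sketch on made-up influence scores (three hypothetical engagement factors, not the study's):

```python
# Illustrative expert direct-influence scores among three factors (0-4 scale)
A = [[0, 3, 2],
     [1, 0, 3],
     [2, 1, 0]]
n = len(A)

s = max(sum(row) for row in A)                    # DEMATEL normalization constant
D = [[a / s for a in row] for row in A]           # normalized direct-relation matrix

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Total-relation matrix T = D + D^2 + D^3 + ...  (the series form of D(I-D)^-1)
T = [row[:] for row in D]
P = [row[:] for row in D]
for _ in range(200):
    P = matmul(P, D)
    T = [[T[i][j] + P[i][j] for j in range(n)] for i in range(n)]

r = [sum(T[i][j] for j in range(n)) for i in range(n)]   # total influence given
c = [sum(T[i][j] for i in range(n)) for j in range(n)]   # total influence received
prominence = [r[i] + c[i] for i in range(n)]             # how central each factor is
relation = [r[i] - c[i] for i in range(n)]               # net cause (+) or effect (-)
print([round(p, 2) for p in prominence])
```

    Factors with high prominence and positive relation are the "drivers" a manager would target first; ISM then layers such factors into a hierarchy.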

  3. Modelling the Cast Component Weight in Hot Chamber Die Casting using Combined Taguchi and Buckingham's π Approach

    NASA Astrophysics Data System (ADS)

    Singh, Rupinder

    2018-02-01

    Hot chamber (HC) die casting is one of the most widely used commercial processes for casting low-temperature metals and alloys. The process gives near-net-shape products with high dimensional accuracy. In the actual field environment, however, the best settings of the input parameters are often conflicting, as the shape and size of the casting change and one has to trade off among output parameters such as hardness, dimensional accuracy, casting defects, and microstructure. For online inspection of cast component properties (without affecting the production line), weight measurement has been established as a cost-effective method in the field environment, since the difference in weight between sound and unsound castings reflects possible casting defects. In the present work, the effect of three input process parameters (pressure at the 2nd phase of HC die casting, metal pouring temperature, and die opening time) was first studied to optimize the cast component weight `W' as the output parameter, in the form of a macro model based on a Taguchi L9 orthogonal array. Buckingham's π approach was then applied to the Taguchi-based macro model to develop a micro model. This study presents the combined Taguchi-Buckingham approach as a case study in converting a macro model into a micro model, by identifying optimum levels of the input parameters (Taguchi approach) and developing a mathematical model (Buckingham's π approach). The resulting mathematical model can be used to predict W in the HC die casting process with more flexibility. The results yield a second-degree polynomial equation for predicting cast component weight and suggest that pressure at the 2nd stage is one of the most significant factors controlling casting defects and casting weight.
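    The Taguchi step above amounts to running the nine L9 trials and picking, for each factor, the level with the best mean response (main effect). A sketch with the standard L9 array; the response values are invented cast weights, not the paper's measurements:

```python
# Standard Taguchi L9 orthogonal array: 3 factors at 3 levels in 9 runs.
# Factor columns stand in for 2nd-phase pressure, pouring temperature, and
# die opening time; y holds a hypothetical response (cast weight) per run.
L9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
      (2, 1, 2), (2, 2, 3), (2, 3, 1),
      (3, 1, 3), (3, 2, 1), (3, 3, 2)]
y = [98.1, 98.9, 99.4, 98.6, 99.2, 98.0, 99.0, 98.3, 98.7]

def mean_response(factor):
    """Main effect: average response at each level of one factor; orthogonality
    of the array guarantees each level average covers 3 balanced runs."""
    means = {}
    for level in (1, 2, 3):
        vals = [y[i] for i, run in enumerate(L9) if run[factor] == level]
        means[level] = sum(vals) / len(vals)
    return means

best_levels = []
for f in range(3):
    effects = mean_response(f)
    best_levels.append(max(effects, key=effects.get))
print(best_levels)   # level of each factor that maximizes the mean response
```

    Buckingham's π step then fits a dimensionless power-law/polynomial model through these optimum-level results, which is what gives the micro model its predictive flexibility.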

  4. Factor Structure, Reliability and Measurement Invariance of the Alberta Context Tool and the Conceptual Research Utilization Scale, for German Residential Long Term Care

    PubMed Central

    Hoben, Matthias; Estabrooks, Carole A.; Squires, Janet E.; Behrens, Johann

    2016-01-01

    We translated the Canadian residential long term care versions of the Alberta Context Tool (ACT) and the Conceptual Research Utilization (CRU) Scale into German, to study the association between organizational context factors and research utilization in German nursing homes. The rigorous translation process was based on best practice guidelines for tool translation, and we previously published methods and results of this process in two papers. Both instruments are self-report questionnaires used with care providers working in nursing homes. The aim of this study was to assess the factor structure, reliability, and measurement invariance (MI) between care provider groups responding to these instruments. In a stratified random sample of 38 nursing homes in one German region (Metropolregion Rhein-Neckar), we collected questionnaires from 273 care aides, 196 regulated nurses, 152 allied health providers, 6 quality improvement specialists, 129 clinical leaders, and 65 nursing students. The factor structure was assessed using confirmatory factor models. The first model included all 10 ACT concepts. We also decided a priori to run two separate models for the scale-based and the count-based ACT concepts as suggested by the instrument developers. The fourth model included the five CRU Scale items. Reliability scores were calculated based on the parameters of the best-fitting factor models. Multiple-group confirmatory factor models were used to assess MI between provider groups. Rather than the hypothesized ten-factor structure of the ACT, confirmatory factor models suggested 13 factors. The one-factor solution of the CRU Scale was confirmed. The reliability was acceptable (>0.7 in the entire sample and in all provider groups) for 10 of 13 ACT concepts, and high (0.90–0.96) for the CRU Scale. We could demonstrate partial strong MI for both ACT models and partial strict MI for the CRU Scale. 
Our results suggest that the scores of the German ACT and the CRU Scale for nursing homes are acceptably reliable and valid. However, as the ACT lacked strict MI, observed variables (or scale scores based on them) cannot be compared between provider groups. Rather, group comparisons should be based on latent variable models, which consider the different residual variances of each group. PMID:27656156
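    Reliability figures like the >0.7 threshold reported above are internal-consistency coefficients. A minimal Cronbach's-alpha sketch on invented Likert-type responses (5 respondents x 5 items, not the study's data):

```python
def variance(xs):
    """Sample variance (n-1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(scores):
    """scores[respondent][item]: alpha = k/(k-1) * (1 - sum(item vars)/var(totals))."""
    k = len(scores[0])
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Invented 1-5 Likert responses: rows are respondents, columns are scale items
data = [[4, 4, 5, 4, 4],
        [2, 3, 2, 2, 3],
        [5, 5, 4, 5, 5],
        [3, 3, 3, 4, 3],
        [1, 2, 1, 1, 2]]
alpha = cronbach_alpha(data)
print(round(alpha, 2))   # highly consistent items give alpha near 1
```

    The measurement-invariance question in the abstract goes one step further: even an acceptably reliable scale may weight its items differently across provider groups, which is why the authors recommend latent-variable comparisons.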

  5. Terrestrial gross carbon dioxide uptake: global distribution and covariation with climate.

    PubMed

    Beer, Christian; Reichstein, Markus; Tomelleri, Enrico; Ciais, Philippe; Jung, Martin; Carvalhais, Nuno; Rödenbeck, Christian; Arain, M Altaf; Baldocchi, Dennis; Bonan, Gordon B; Bondeau, Alberte; Cescatti, Alessandro; Lasslop, Gitta; Lindroth, Anders; Lomas, Mark; Luyssaert, Sebastiaan; Margolis, Hank; Oleson, Keith W; Roupsard, Olivier; Veenendaal, Elmar; Viovy, Nicolas; Williams, Christopher; Woodward, F Ian; Papale, Dario

    2010-08-13

    Terrestrial gross primary production (GPP) is the largest global CO2 flux, driving several ecosystem functions. We provide an observation-based estimate of this flux at 123 +/- 8 petagrams of carbon per year (Pg C year(-1)) using eddy covariance flux data and various diagnostic models. Tropical forests and savannahs account for 60%. Over 40% of the vegetated land, GPP is associated with precipitation. State-of-the-art process-oriented biosphere models used for climate predictions exhibit a large between-model variation in GPP's latitudinal patterns and show higher spatial correlations between GPP and precipitation than our observation-based estimates indicate, suggesting the existence of missing processes or feedback mechanisms which attenuate the vegetation response to climate. Our estimates of spatially distributed GPP and its covariation with climate can help improve coupled climate-carbon cycle process models.

  6. Processes of aggression described by kinetic method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aristov, V. V.; Ilyin, O.

    In the last decades many investigations have been devoted to theoretical models in new areas concerning the description of different biological, sociological and historical processes. In the present paper we suggest a model of the Nazi Germany invasion of Poland, France and the USSR based on kinetic theory. We model this process with the Cauchy boundary problem for the two-element kinetic equations with spatial initial conditions. The solution of the problem is given in the form of a traveling wave. The propagation velocity of a frontline depends on the quotient between the initial force concentrations. Moreover it is obtained that the general solution of the model can be expressed in terms of quadratures and elementary functions. Finally it is shown that the frontline velocities comply with the historical data.

  7. Kinetic models for historical processes of fast invasion and aggression

    NASA Astrophysics Data System (ADS)

    Aristov, Vladimir V.; Ilyin, Oleg V.

    2015-04-01

    In the last few decades many investigations have been devoted to theoretical models in new areas concerning description of different biological, sociological, and historical processes. In the present paper we suggest a model of the Nazi Germany invasion of Poland, France, and the USSR based on kinetic theory. We simulate this process with the Cauchy boundary problem for two-element kinetic equations. The solution of the problem is given in the form of a traveling wave. The propagation velocity of a front line depends on the quotient between initial forces concentrations. Moreover it is obtained that the general solution of the model can be expressed in terms of quadratures and elementary functions. Finally it is shown that the front-line velocities agree with the historical data.
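    The qualitative claim that front velocity is set by the quotient of initial concentrations can be reproduced with a toy caricature: an attacking supply annihilates a defending concentration cell by cell, and the front advances once a cell is exhausted. This is not the authors' two-element kinetic equations, just the simplest discrete analogue of their front-propagation result:

```python
def front_speed(u0, v0, steps=1000):
    """Toy annihilation front: each time step delivers u0 units of attacker to
    the front cell; a cell falls (front advances one cell) once its defender
    content v0 has been consumed one-for-one. Returns cells per step."""
    pos, remaining = 0, v0
    for _ in range(steps):
        supply = u0
        while supply > 1e-12:
            used = min(supply, remaining)
            supply -= used
            remaining -= used
            if remaining <= 1e-12:   # cell exhausted: front moves on
                pos += 1
                remaining = v0
    return pos / steps

# Front speed scales as the quotient u0/v0 of initial concentrations
print(front_speed(1.0, 2.0), front_speed(2.0, 1.0))
```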

  8. The nearly neutral and selection theories of molecular evolution under the fisher geometrical framework: substitution rate, population size, and complexity.

    PubMed

    Razeto-Barry, Pablo; Díaz, Javier; Vásquez, Rodrigo A

    2012-06-01

    The general theories of molecular evolution depend on relatively arbitrary assumptions about the relative distribution and rate of advantageous, deleterious, neutral, and nearly neutral mutations. The Fisher geometrical model (FGM) has been used to make distributions of mutations biologically interpretable. We explored an FGM-based molecular model to represent molecular evolutionary processes typically studied by nearly neutral and selection models, but in which distributions and relative rates of mutations with different selection coefficients are a consequence of biologically interpretable parameters, such as the average size of the phenotypic effect of mutations and the number of traits (complexity) of organisms. A variant of the FGM-based model that we called the static regime (SR) represents evolution as a nearly neutral process in which substitution rates are determined by a dynamic substitution process in which the population's phenotype remains around a suboptimum equilibrium fitness produced by a balance between slightly deleterious and slightly advantageous compensatory substitutions. As in previous nearly neutral models, the SR predicts a negative relationship between molecular evolutionary rate and population size; however, SR does not have the unrealistic properties of previous nearly neutral models such as the narrow window of selection strengths in which they work. In addition, the SR suggests that compensatory mutations cannot explain the high rate of fixations driven by positive selection currently found in DNA sequences, contrary to what has been previously suggested. We also developed a generalization of SR in which the optimum phenotype can change stochastically due to environmental or physiological shifts, which we called the variable regime (VR). VR models evolution as an interplay between adaptive processes and nearly neutral steady-state processes. 
When strong environmental fluctuations are incorporated, the process becomes a selection model in which evolutionary rate does not depend on population size, but is critically dependent on the complexity of organisms and mutation size. For SR as well as VR we found that key parameters of molecular evolution are linked by biological factors, and we showed that they cannot be fixed independently by arbitrary criteria, as has usually been assumed in previous molecular evolutionary models.

  9. The Nearly Neutral and Selection Theories of Molecular Evolution Under the Fisher Geometrical Framework: Substitution Rate, Population Size, and Complexity

    PubMed Central

    Razeto-Barry, Pablo; Díaz, Javier; Vásquez, Rodrigo A.

    2012-01-01

    The general theories of molecular evolution depend on relatively arbitrary assumptions about the relative distribution and rate of advantageous, deleterious, neutral, and nearly neutral mutations. The Fisher geometrical model (FGM) has been used to make distributions of mutations biologically interpretable. We explored an FGM-based molecular model to represent molecular evolutionary processes typically studied by nearly neutral and selection models, but in which distributions and relative rates of mutations with different selection coefficients are a consequence of biologically interpretable parameters, such as the average size of the phenotypic effect of mutations and the number of traits (complexity) of organisms. A variant of the FGM-based model that we called the static regime (SR) represents evolution as a nearly neutral process in which substitution rates are determined by a dynamic substitution process in which the population’s phenotype remains around a suboptimum equilibrium fitness produced by a balance between slightly deleterious and slightly advantageous compensatory substitutions. As in previous nearly neutral models, the SR predicts a negative relationship between molecular evolutionary rate and population size; however, SR does not have the unrealistic properties of previous nearly neutral models such as the narrow window of selection strengths in which they work. In addition, the SR suggests that compensatory mutations cannot explain the high rate of fixations driven by positive selection currently found in DNA sequences, contrary to what has been previously suggested. We also developed a generalization of SR in which the optimum phenotype can change stochastically due to environmental or physiological shifts, which we called the variable regime (VR). VR models evolution as an interplay between adaptive processes and nearly neutral steady-state processes. 
When strong environmental fluctuations are incorporated, the process becomes a selection model in which evolutionary rate does not depend on population size, but is critically dependent on the complexity of organisms and mutation size. For SR as well as VR we found that key parameters of molecular evolution are linked by biological factors, and we showed that they cannot be fixed independently by arbitrary criteria, as has usually been assumed in previous molecular evolutionary models. PMID:22426879
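    Fisher's geometrical intuition at the heart of both papers above is easy to simulate: place a phenotype at a fixed distance from the optimum in n-dimensional trait space, draw mutations of fixed size in uniformly random directions, and count how many land closer to the optimum. This sketch illustrates only the classic complexity effect (the beneficial fraction shrinks as n grows); all parameter values are illustrative:

```python
import math, random
random.seed(1)

def frac_beneficial(n, r, d=1.0, trials=10000):
    """Fisher geometrical model: fraction of random mutations of phenotypic
    size r that bring a phenotype at distance d closer to the optimum,
    as a function of organismal complexity n (number of traits)."""
    wins = 0
    for _ in range(trials):
        g = [random.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(x * x for x in g))
        step = [r * x / norm for x in g]          # uniform random direction, length r
        # phenotype sits at (d, 0, ..., 0); optimum at the origin
        new_d2 = (d + step[0]) ** 2 + sum(x * x for x in step[1:])
        wins += new_d2 < d * d
    return wins / trials

low_n, high_n = frac_beneficial(n=2, r=0.2), frac_beneficial(n=50, r=0.2)
print(round(low_n, 2), round(high_n, 2))  # beneficial fraction falls with complexity
```

    In the SR/VR framework this dependence on complexity and mutation size is exactly why the key rates of molecular evolution cannot be fixed independently by arbitrary criteria.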

  10. Modelling intelligent behavior

    NASA Technical Reports Server (NTRS)

    Green, H. S.; Triffet, T.

    1993-01-01

    An introductory discussion of the related concepts of intelligence and consciousness suggests criteria to be met in the modeling of intelligence and the development of intelligent materials. Methods for the modeling of actual structure and activity of the animal cortex have been found, based on present knowledge of the ionic and cellular constitution of the nervous system. These have led to the development of a realistic neural network model, which has been used to study the formation of memory and the process of learning. An account is given of experiments with simple materials which exhibit almost all properties of biological synapses and suggest the possibility of a new type of computer architecture to implement an advanced type of artificial intelligence.
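    The memory-formation idea mentioned above is often introduced with a Hopfield-style Hebbian network, far simpler than the ionic and cellular cortical model described here; this toy (one stored pattern, synchronous updates) is a sketch of the principle only:

```python
def hebbian_weights(patterns):
    """Hebb rule: strengthen the connection between co-active (+1/-1) units."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, steps=5):
    """Synchronous threshold updates pull a corrupted cue back to a stored memory."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

memory = [1, -1, 1, -1, 1, -1, 1, -1]   # one stored activity pattern
W = hebbian_weights([memory])
cue = memory[:]
cue[0] = -cue[0]                        # corrupt one "neuron"
print(recall(W, cue) == memory)         # the network restores the stored memory
```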

  11. Purpose, processes, partnerships, and products: four Ps to advance participatory socio-environmental modeling

    USGS Publications Warehouse

    Gray, Steven; Voinov, Alexey; Paolisso, Michael; Jordan, Rebecca; BenDor, Todd; Bommel, Pierre; Glynn, Pierre D.; Hedelin, Beatrice; Hubacek, Klaus; Introne, Josh; Kolagani, Nagesh; Laursen, Bethany; Prell, Christina; Schmitt-Olabisi, Laura; Singer, Alison; Sterling, Eleanor J.; Zellner, Moira

    2018-01-01

    Including stakeholders in environmental model building and analysis is an increasingly popular approach to understanding ecological change. This is because stakeholders often hold valuable knowledge about socio-environmental dynamics and collaborative forms of modeling produce important boundary objects used to collectively reason about environmental problems. Although the number of participatory modeling (PM) case studies and the number of researchers adopting these approaches has grown in recent years, the lack of standardized reporting and limited reproducibility have prevented PM's establishment and advancement as a cohesive field of study. We suggest a four-dimensional framework (4P) that includes reporting on dimensions of (1) the Purpose for selecting a PM approach (the why); (2) the Process by which the public was involved in model building or evaluation (the how); (3) the Partnerships formed (the who); and (4) the Products that resulted from these efforts (the what). We highlight four case studies that use common PM software-based approaches (fuzzy cognitive mapping, agent-based modeling, system dynamics, and participatory geospatial modeling) to understand human–environment interactions and the consequences of ecological changes, including bushmeat hunting in Tanzania and Cameroon, agricultural production and deforestation in Zambia, and groundwater management in India. We demonstrate how standardizing communication about PM case studies can lead to innovation and new insights about model-based reasoning in support of ecological policy development. We suggest that our 4P framework and reporting approach provides a way for new hypotheses to be identified and tested in the growing field of PM.

  12. Purpose, processes, partnerships, and products: four Ps to advance participatory socio-environmental modeling.

    PubMed

    Gray, Steven; Voinov, Alexey; Paolisso, Michael; Jordan, Rebecca; BenDor, Todd; Bommel, Pierre; Glynn, Pierre; Hedelin, Beatrice; Hubacek, Klaus; Introne, Josh; Kolagani, Nagesh; Laursen, Bethany; Prell, Christina; Schmitt Olabisi, Laura; Singer, Alison; Sterling, Eleanor; Zellner, Moira

    2018-01-01

    Including stakeholders in environmental model building and analysis is an increasingly popular approach to understanding ecological change. This is because stakeholders often hold valuable knowledge about socio-environmental dynamics and collaborative forms of modeling produce important boundary objects used to collectively reason about environmental problems. Although the number of participatory modeling (PM) case studies and the number of researchers adopting these approaches has grown in recent years, the lack of standardized reporting and limited reproducibility have prevented PM's establishment and advancement as a cohesive field of study. We suggest a four-dimensional framework (4P) that includes reporting on dimensions of (1) the Purpose for selecting a PM approach (the why); (2) the Process by which the public was involved in model building or evaluation (the how); (3) the Partnerships formed (the who); and (4) the Products that resulted from these efforts (the what). We highlight four case studies that use common PM software-based approaches (fuzzy cognitive mapping, agent-based modeling, system dynamics, and participatory geospatial modeling) to understand human-environment interactions and the consequences of ecological changes, including bushmeat hunting in Tanzania and Cameroon, agricultural production and deforestation in Zambia, and groundwater management in India. We demonstrate how standardizing communication about PM case studies can lead to innovation and new insights about model-based reasoning in support of ecological policy development. We suggest that our 4P framework and reporting approach provides a way for new hypotheses to be identified and tested in the growing field of PM. © 2017 by the Ecological Society of America.
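    Of the four PM approaches named, fuzzy cognitive mapping is the most compact to sketch: stakeholders supply signed influence weights between concepts, and scenarios are explored by iterating the map to a fixed point. The concepts and weights below are invented for illustration, not drawn from the bushmeat, deforestation, or groundwater case studies:

```python
import math

# Toy stakeholder-elicited fuzzy cognitive map: signed edge weights W[i][j]
# give the influence of concept i on concept j (illustrative values only).
concepts = ["hunting", "wildlife", "income"]
W = [[0.0, -0.8, 0.6],    # hunting suppresses wildlife, raises income
     [0.0,  0.0, 0.4],    # wildlife also supports (eco-)income
     [0.3,  0.0, 0.0]]    # income feeds back into hunting effort

def step(state):
    """One synchronous FCM update with self-memory:
    x_j <- sigmoid(x_j + sum_i x_i * W[i][j])."""
    n = len(state)
    return [1.0 / (1.0 + math.exp(-(state[j] +
                                    sum(state[i] * W[i][j] for i in range(n)))))
            for j in range(n)]

x = [0.9, 0.5, 0.2]       # scenario: high initial hunting pressure
for _ in range(50):
    x = step(x)
print([round(v, 2) for v in x])   # fixed point the scenario settles into
```

    Stakeholders then compare the fixed points of alternative scenarios (e.g. adding a policy concept), which is exactly the collective model-based reasoning the 4P framework asks authors to report.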

  13. Multilevel modeling of damage accumulation processes in metals

    NASA Astrophysics Data System (ADS)

    Kurmoiartseva, K. A.; Trusov, P. V.; Kotelnikova, N. V.

    2017-12-01

    To predict the behavior of components and structures, it is necessary to develop methods and mathematical models that take into account the self-organization of microstructural processes and strain localization, including damage accumulation and the evolution of material properties during deformation. The heterogeneity of the damage accumulation process is due to physical mechanisms operating at scale levels below the macro level. The purpose of this work is to develop a mathematical model for analyzing the behavior of polycrystalline materials that describes the damage accumulation processes. Fracture is a multistage and multiscale process of the build-up of micro- and mesodefects over a wide range of loading rates. Microcracks form through mechanisms driven by the interactions of dislocations of different slip systems with barriers, grain boundaries, and second-phase inclusions. This paper reviews some of the best-known models of crack nucleation and suggests the structure of a mathematical model based on crystal plasticity and dislocation models of crack nucleation.

  14. Computational Models of Laryngeal Aerodynamics: Potentials and Numerical Costs.

    PubMed

    Sadeghi, Hossein; Kniesburges, Stefan; Kaltenbacher, Manfred; Schützenberger, Anne; Döllinger, Michael

    2018-02-07

    Human phonation is based on the interaction between tracheal airflow and laryngeal dynamics. This fluid-structure interaction rests on the energy exchange between airflow and vocal folds. Major challenges in analyzing the phonatory process in vivo are the small dimensions and the poor accessibility of the region of interest. For improved analysis of the phonatory process, numerical simulations of the airflow and the vocal fold dynamics have been suggested. Even though most of the models reproduce the phonatory process fairly well, the development of comprehensive larynx models is still a subject of research. In the context of clinical application, physiological accuracy and computational efficiency are of great interest. In this study, a simple numerical larynx model is introduced that incorporates the laryngeal fluid flow. It is based on a synthetic experimental model with silicone vocal folds. The degree of realism was successively increased in separate computational models, and each model was simulated for 10 oscillation cycles. Results show that relevant features of the laryngeal flow field, such as glottal jet deflection, develop even with rather simple static models with oscillating flow rates. Including further phonatory components such as vocal fold motion, mucosal wave propagation, and the ventricular folds, the simulations show phonatory key features like intraglottal flow separation and an increased flow rate in the presence of ventricular folds. The simulation time on 100 CPU cores ranged between 25 and 290 hours, currently restricting clinical application of these models. Nevertheless, the results show the high potential of numerical simulations for a better understanding of the phonatory process. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  15. Alexithymia predicts arousal-based processing deficits and discordance between emotion response systems during emotional imagery.

    PubMed

    Peasley-Miklus, Catherine E; Panayiotou, Georgia; Vrana, Scott R

    2016-03-01

    Alexithymia is believed to involve deficits in emotion processing and imagery ability. Previous findings suggest that it is especially related to deficits in processing the arousal dimension of emotion, and that discordance may exist between self-report and physiological responses to emotional stimuli in alexithymia. The current study used a well-established emotional imagery paradigm to examine emotion processing deficits and discordance in participants (N = 86) selected based on their extreme scores on the Toronto Alexithymia Scale-20. Physiological (skin conductance, heart rate, and corrugator and zygomaticus electromyographic responses) and self-report (valence, arousal ratings) responses were monitored during imagery of anger, fear, joy, and neutral scenes and emotionally neutral high arousal (action) scenes. Results from regression analyses indicated that alexithymia was largely unrelated to responses on valence-based measures (facial electromyography, valence ratings), but that it was related to arousal-based measures. Specifically, alexithymia was related to higher heart rate during neutral and lower heart rate during fear imagery. Alexithymia did not predict differential responses to action versus neutral imagery, suggesting specificity of deficits to emotional contexts. Evidence for discordance between physiological responses and self-report in alexithymia was obtained from within-person analyses using multilevel modeling. Results are consistent with the idea that alexithymic deficits are specific to processing emotional arousal, and suggest difficulties with parasympathetic control and emotion regulation. Alexithymia is also associated with discordance between self-reported emotional experience and physiological response to emotion, consistent with prior evidence. (c) 2016 APA, all rights reserved.

  16. Unravelling the Gordian knot! Key processes impacting overwintering larval survival and growth: A North Sea herring case study

    NASA Astrophysics Data System (ADS)

    Hufnagl, Marc; Peck, Myron A.; Nash, Richard D. M.; Dickey-Collas, Mark

    2015-11-01

    Unraveling the key processes affecting marine fish recruitment will ultimately require a combination of field, laboratory and modelling studies. We combined analyses of long-term (30-year) field data on larval fish abundance, distribution and length, and biophysical model simulations of different levels of complexity, to identify processes impacting the survival and growth of autumn- and winter-spawned Atlantic herring (Clupea harengus) larvae. Field survey data revealed interannual changes in the intensity of utilization of the five major spawning grounds (Orkney/Shetland, Buchan, Banks north, Banks south, and Downs) as well as spatio-temporal variability in the length and abundance of overwintered larvae. The mean length of larvae captured in post-winter surveys was negatively correlated with the proportion of larvae from the southern-most (Downs) winter-spawning component. Furthermore, the mean length of larvae originating from all spawning components has decreased since 1990, suggesting ecosystem-wide changes impacting larval growth potential, most likely due to changes in prey fields. A simple biophysical model assuming temperature-dependent growth and constant mortality underestimated larval growth rates, as no match with field data could be obtained, suggesting that larval mortality rates declined steeply with increasing size and/or age during winter. In contrast, better agreement was found between observed and modelled post-winter abundance for larvae originating from four spawning components when a more complex, physiology-based foraging and growth model was employed using a suite of potential prey field and size-based mortality scenarios. Nonetheless, agreement between field and model-derived estimates was poor for larvae originating from the winter-spawned Downs component.
In North Sea herring, the dominant processes impacting larval growth and survival appear to have shifted in time and space, highlighting how environmental forcing, ecosystem state and other factors can form a Gordian knot of marine fish recruitment processes. We highlight gaps in process knowledge and recommend specific field, laboratory and modelling studies which, in our opinion, are most likely to unravel the dominant processes and advance predictive capacity of the environmental regulation of recruitment in autumn- and winter-spawned fishes in temperate areas, such as herring in the North Sea.

  17. Large-scale scour of the sea floor and the effect of natural armouring processes, land reclamation Maasvlakte 2, port of Rotterdam

    USGS Publications Warehouse

    Boer, S.; Elias, E.; Aarninkhof, S.; Roelvink, D.; Vellinga, T.

    2007-01-01

    Morphological model computations based on uniform (non-graded) sediment revealed an unrealistically strong scour of the sea floor in the immediate vicinity to the west of Maasvlakte 2. By means of a state-of-the-art graded sediment transport model, the effect of natural armouring and sorting of bed material on the scour process has been examined. Sensitivity computations confirm that the development of the scour hole is strongly reduced when armouring processes are incorporated, suggesting an approximately 30% decrease in erosion area below the -20 m depth contour. © 2007 ASCE.

  18. Preoperative teaching in the preadmission clinic.

    PubMed

    Posel, N

    1998-01-01

    In this article, the author proposes that instructional design be used as a foundation for a teaching model in the preadmission clinic and that the educational process be based on theories developed within the fields of health care and adult education. Furthermore, the author suggests that patient education, as conducted within the preadmission setting, necessitates an assessment of the general characteristics of the adult as a learner, of the specific characteristics of the adult as a presurgical patient, and of the unique individual cognitive processes distinctive to each patient. This information should be integrated in a new framework to create a comprehensive and personalized patient teaching model.

  19. Effects of soil freezing and thawing on vegetation carbon density in Siberia: A modeling analysis with the Lund-Potsdam-Jena Dynamic Global Vegetation Model (LPJ-DGVM)

    NASA Astrophysics Data System (ADS)

    Beer, C.; Lucht, W.; Gerten, D.; Thonicke, K.; Schmullius, C.

    2007-03-01

    The current latitudinal gradient in biomass suggests a climate-driven limitation of biomass in high latitudes. Understanding of the underlying processes, and quantification of their relative importance, is required to assess the potential carbon uptake of the biosphere in response to anticipated warming and related changes in tree growth and forest extent in these regions. We analyze the hydrological effects of thawing and freezing of soil on vegetation carbon density (VCD) in permafrost-dominated regions of Siberia using a process-based biogeochemistry-biogeography model, the Lund-Potsdam-Jena Dynamic Global Vegetation Model (LPJ-DGVM). The analysis is based on spatially explicit simulations of coupled daily thaw depth, site hydrology, vegetation distribution, and carbon fluxes influencing VCD subject to climate, soil texture, and atmospheric CO2 concentration. LPJ represents the observed high spring peak of runoff of large Arctic rivers, and simulates a realistic fire return interval of 100 to 200 years in Siberia. The simulated VCD changeover from taiga to tundra is comparable to inventory-based information. Without the consideration of freeze-thaw processes VCD would be overestimated by a factor of 2 in southern taiga to a factor of 5 in northern forest tundra, mainly because available soil water would be overestimated with major effects on fire occurrence and net primary productivity. This suggests that forest growth in high latitudes is not only limited by temperature, radiation, and nutrient availability but also by the availability of liquid soil water.

  20. Biogeographical region and host trophic level determine carnivore endoparasite richness in the Iberian Peninsula.

    PubMed

    Rosalino, L M; Santos, M J; Fernandes, C; Santos-Reis, M

    2011-05-01

    We address the question of whether host and/or environmental factors might affect endoparasite richness and distribution, using carnivores as a model. We reviewed studies published in international peer-reviewed journals (34 areas in the Iberian Peninsula) describing parasite prevalence and richness in carnivores, and collected information on site location, host bio-ecology, climate and detected taxa (Helminths, Protozoa and Mycobacterium spp.). Three hypotheses were tested: (i) host-based, (ii) environmentally based, and (iii) hybrid (a combination of environmental and host factors). Multicollinearity reduced the number of candidate variables for modelling to five: host weight, phylogenetically independent contrasts (host weight), mean annual temperature, host trophic level and biogeographical region. General Linear Mixed Modelling was used, and the best model was a hybrid model that included biogeographical region and host trophic level. Results revealed that endoparasite richness is higher in Mediterranean areas, especially for top predators. We suggest that the detected parasites may benefit from the mild environmental conditions that occur in southern regions. Top predators have larger home ranges and are likely to be subject to cascading effects throughout the food web, resulting in more infestation opportunities and potentially higher endoparasite richness. This study suggests that richness may be more affected by historical and regional processes (including climate) than by host ecological processes.

  1. Working-memory capacity protects model-based learning from stress.

    PubMed

    Otto, A Ross; Raio, Candace M; Chiang, Alice; Phelps, Elizabeth A; Daw, Nathaniel D

    2013-12-24

    Accounts of decision-making have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental advances suggest that this classic distinction between habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning, called model-free and model-based learning. Popular neurocomputational accounts of reward processing emphasize the involvement of the dopaminergic system in model-free learning and prefrontal, central executive-dependent control systems in model-based choice. Here we hypothesized that the hypothalamic-pituitary-adrenal (HPA) axis stress response--believed to have detrimental effects on prefrontal cortex function--should selectively attenuate model-based contributions to behavior. To test this, we paired an acute stressor with a sequential decision-making task that affords distinguishing the relative contributions of the two learning strategies. We assessed baseline working-memory (WM) capacity and used salivary cortisol levels to measure HPA axis stress response. We found that the stress response attenuates model-based, but not model-free, contributions to behavior. Moreover, stress-induced behavioral changes were modulated by individual WM capacity, such that low-WM-capacity individuals were more susceptible to detrimental stress effects than high-WM-capacity individuals. These results enrich existing accounts of the interplay between acute stress, working memory, and prefrontal function and suggest that executive function may be protective against the deleterious effects of acute stress.

  2. Working-memory capacity protects model-based learning from stress

    PubMed Central

    Otto, A. Ross; Raio, Candace M.; Chiang, Alice; Phelps, Elizabeth A.; Daw, Nathaniel D.

    2013-01-01

    Accounts of decision-making have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental advances suggest that this classic distinction between habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning, called model-free and model-based learning. Popular neurocomputational accounts of reward processing emphasize the involvement of the dopaminergic system in model-free learning and prefrontal, central executive–dependent control systems in model-based choice. Here we hypothesized that the hypothalamic-pituitary-adrenal (HPA) axis stress response—believed to have detrimental effects on prefrontal cortex function—should selectively attenuate model-based contributions to behavior. To test this, we paired an acute stressor with a sequential decision-making task that affords distinguishing the relative contributions of the two learning strategies. We assessed baseline working-memory (WM) capacity and used salivary cortisol levels to measure HPA axis stress response. We found that the stress response attenuates model-based, but not model-free, contributions to behavior. Moreover, stress-induced behavioral changes were modulated by individual WM capacity, such that low-WM-capacity individuals were more susceptible to detrimental stress effects than high-WM-capacity individuals. These results enrich existing accounts of the interplay between acute stress, working memory, and prefrontal function and suggest that executive function may be protective against the deleterious effects of acute stress. PMID:24324166

  3. Cross-disciplinary links in environmental systems science: Current state and claimed needs identified in a meta-review of process models.

    PubMed

    Ayllón, Daniel; Grimm, Volker; Attinger, Sabine; Hauhs, Michael; Simmer, Clemens; Vereecken, Harry; Lischeid, Gunnar

    2018-05-01

    Terrestrial environmental systems are characterised by numerous feedback links between their different compartments. However, scientific research is organized into disciplines that focus on processes within the respective compartments rather than on interdisciplinary links. Major feedback mechanisms between compartments might therefore have been systematically overlooked so far. Without identifying these gaps, initiatives on future comprehensive environmental monitoring schemes and experimental platforms might fail. We performed a comprehensive overview of the feedbacks between compartments currently represented in environmental sciences and explored to what degree missing links have already been acknowledged in the literature. We focused on process models as they can be regarded as repositories of scientific knowledge that compile findings of numerous single studies. In total, 118 simulation models from 23 model types were analysed. Missing processes linking different environmental compartments were identified based on a meta-review of 346 published reviews, model intercomparison studies, and model descriptions. Eight disciplines of environmental sciences were considered, and 396 linking processes were identified and ascribed to the physical, chemical or biological domain. There were significant differences between model types and scientific disciplines regarding implemented interdisciplinary links. The most widespread interdisciplinary links were between physical processes in meteorology, hydrology and soil science that drive or set the boundary conditions for other processes (e.g., ecological processes). In contrast, most chemical and biological processes were restricted to links within the same compartment. Integration of multiple environmental compartments and interdisciplinary knowledge was scarce in most model types.
There was a strong bias of suggested future research foci and model extensions towards reinforcing existing interdisciplinary knowledge rather than towards opening up new interdisciplinary pathways. No clear pattern across disciplines exists with respect to suggested future research efforts. There is no evidence that environmental research is clearly converging towards more integrated approaches or towards an overarching environmental systems theory. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Energy and Transmissibility in Nonlinear Viscous Base Isolators

    NASA Astrophysics Data System (ADS)

    Markou, Athanasios A.; Manolis, George D.

    2016-09-01

    High damping rubber bearings (HDRB) are the most commonly used base isolators in buildings and are often combined with other systems, such as sliding bearings. Their mechanical behaviour is highly nonlinear and depends on a number of factors. At first, a physical process is suggested here to explain the empirical formula introduced by J.M. Kelly in 1991, whereby the dissipated energy of a HDRB under cyclic testing, at constant frequency, is proportional to the amplitude of the shear strain raised to a power of approximately 1.50. This physical process is best described by non-Newtonian fluid behaviour, originally developed by F.H. Norton in 1929 to describe creep in steel at high temperatures. The constitutive model used includes a viscous term that depends on the absolute value of the velocity, raised to a non-integer power. The identification of a three-parameter Kelvin model, the simplest possible system with nonlinear viscosity, is also suggested here. Furthermore, a more advanced model with a variable damping coefficient is implemented to better model this complex mechanical process. Next, the assumption of strain-rate dependence in the rubber layers under cyclic loading is examined in order to best interpret experimental results on the transmission of motion between the upper and lower surfaces of HDRB. More specifically, the stress-relaxation phenomenon observed with time in HDRB can be reproduced numerically only if the constitutive model includes a viscous term that depends on the absolute value of the velocity raised to a non-integer power, i.e., the Norton fluid previously mentioned. Thus, it becomes possible to compute the displacement transmissibility function between the top and bottom surfaces of HDRB base isolator systems and to draw engineering-type conclusions relevant to their design under time-harmonic loads.
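
    As a worked illustration of the scaling described in this abstract: for a damper with force F = c·|v|^α·sign(v) driven sinusoidally, the energy dissipated per cycle grows with the motion amplitude as A^(α+1), so α ≈ 0.5 reproduces Kelly's empirical exponent of about 1.50. A minimal numerical sketch, with illustrative parameter values not taken from the paper:

```python
import math

def dissipated_energy(A, c=1.0, alpha=0.5, omega=1.0, steps=100_000):
    """Energy dissipated per cycle by the nonlinear viscous force
    F = c*|v|**alpha*sign(v) under x(t) = A*sin(omega*t),
    computed by midpoint integration of F*v over one period."""
    T = 2 * math.pi / omega
    dt = T / steps
    E = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        v = A * omega * math.cos(omega * t)          # velocity
        F = c * abs(v) ** alpha * math.copysign(1.0, v)
        E += F * v * dt                              # F*dx = F*v*dt
    return E

# Doubling the amplitude multiplies the dissipated energy by
# 2**(alpha + 1) = 2**1.5, i.e., energy ∝ amplitude**1.5 for alpha = 0.5.
ratio = dissipated_energy(2.0) / dissipated_energy(1.0)
```

    The exponent α + 1 on amplitude follows because F scales as (Aω)^α and dx as Aω dt, so any α between 0 (dry friction) and 1 (Newtonian viscosity) yields a fractional power law of the kind Kelly observed.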

  5. On storm movement and its applications

    NASA Astrophysics Data System (ADS)

    Niemczynowicz, Janusz

    Rainfall-runoff models applicable to the design and analysis of sewage systems in urban areas are being further developed to better represent the different physical processes occurring on an urban catchment. However, one important part of the modelling procedure, the generation of the rainfall input, is still a weak point. The main problem is a lack of adequate rainfall data that represent the temporal and spatial variations of the natural rainfall process. Storm movement is a natural phenomenon that influences urban runoff. However, rainfall movement and its influence on the runoff generation process are not represented in presently available urban runoff simulation models. A physical description of rainfall movement and its parameters is given, based on detailed measurements performed with twelve gauges in Lund, Sweden. The paper discusses the significance of rainfall movement for the runoff generation process and suggests how the rainfall movement parameters may be used in runoff modelling.

  6. Towards a consensus-based biokinetic model for green microalgae - The ASM-A.

    PubMed

    Wágner, Dorottya S; Valverde-Pérez, Borja; Sæbø, Mariann; Bregua de la Sotilla, Marta; Van Wagenen, Jonathan; Smets, Barth F; Plósz, Benedek Gy

    2016-10-15

    Cultivation of microalgae in open ponds and closed photobioreactors (PBRs) using wastewater resources offers an opportunity for biochemical nutrient recovery. Effective reactor system design and process control of PBRs requires process models. Several models with different complexities have been developed to predict microalgal growth. However, none of these models can effectively describe all the relevant processes when microalgal growth is coupled with nutrient removal and recovery from wastewaters. Here, we present a mathematical model developed to simulate green microalgal growth (ASM-A) using the systematic approach of the activated sludge modelling (ASM) framework. The process model - identified based on a literature review and using new experimental data - accounts for factors influencing photoautotrophic and heterotrophic microalgal growth, nutrient uptake and storage (i.e. Droop model) and decay of microalgae. Model parameters were estimated using laboratory-scale batch and sequenced batch experiments using the novel Latin Hypercube Sampling based Simplex (LHSS) method. The model was evaluated using independent data obtained in a 24-L PBR operated in sequenced batch mode. Identifiability of the model was assessed. The model can effectively describe microalgal biomass growth, ammonia and phosphate concentrations as well as the phosphorus storage using a set of average parameter values estimated with the experimental data. A statistical analysis of simulation and measured data suggests that culture history and substrate availability can introduce significant variability on parameter values for predicting the reaction rates for bulk nitrate and the intracellularly stored nitrogen state-variables, thereby requiring scenario specific model calibration. ASM-A was identified using standard cultivation medium and it can provide a platform for extensions accounting for factors influencing algal growth and nutrient storage using wastewater resources. 
Copyright © 2016 Elsevier Ltd. All rights reserved.
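
    The Droop model named in this abstract makes growth depend on an internal nutrient quota rather than directly on the external concentration, which is what lets a model of this kind separate uptake (storage) from growth. A minimal sketch of Droop kinetics coupled to Michaelis-Menten uptake, with illustrative parameter values that are not the calibrated ASM-A parameters:

```python
def droop_growth_rate(Q, mu_max=1.6, Q_min=0.02):
    """Droop kinetics: specific growth rate depends on the internal
    nutrient quota Q (stored nutrient per unit biomass); growth
    ceases when Q falls to the subsistence quota Q_min."""
    return mu_max * max(0.0, 1.0 - Q_min / Q)

def simulate(X=0.05, Q=0.04, S=5.0, rho_max=0.5, K=0.1, dt=0.01, steps=2000):
    """Forward-Euler integration of biomass X, quota Q and dissolved
    nutrient S: uptake fills the internal store, growth dilutes it."""
    for _ in range(steps):
        mu = droop_growth_rate(Q)
        rho = rho_max * S / (K + S)      # Michaelis-Menten uptake rate
        X += mu * X * dt                 # biomass growth
        Q += (rho - mu * Q) * dt         # storage minus dilution by growth
        S = max(0.0, S - rho * X * dt)   # nutrient drawn from the bulk
    return X, Q, S
```

    Because uptake and growth are decoupled, the simulated culture keeps growing after the bulk nutrient is exhausted, drawing down its internal store until Q approaches Q_min, which is the luxury-uptake behaviour such quota models are used to capture.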

  7. Granularity as a Cognitive Factor in the Effectiveness of Business Process Model Reuse

    NASA Astrophysics Data System (ADS)

    Holschke, Oliver; Rake, Jannis; Levina, Olga

    Reusing design models is an attractive approach in business process modeling, as modeling efficiency and the quality of design outcomes may be significantly improved. However, reusing conceptual models is not a cost-free effort and has to be carefully designed. While factors such as psychological anchoring and task-adequacy in reuse-based modeling tasks have been investigated, information granularity as a cognitive concept has not yet been at the center of empirical research. We hypothesize that business process granularity, as a factor in design tasks under reuse, has a significant impact on the effectiveness of the resulting business process models. We test our hypothesis in a comparative study employing high and low granularities. The reusable processes provided were taken from widely accessible reference models for the telecommunication industry (enhanced Telecom Operations Map). First experimental results show that Recall in tasks involving coarser granularity is lower than in cases of finer granularity. These findings suggest that decision makers in business process management should be considerate with regard to the implementation of reuse mechanisms of different granularities. We realize that, due to our small sample size, the results are not statistically significant, but this preliminary run shows that the study design is ready to be run at a larger scale.

  8. A Decision Processing Algorithm for CDC Location Under Minimum Cost SCM Network

    NASA Astrophysics Data System (ADS)

    Park, N. K.; Kim, J. Y.; Choi, W. Y.; Tian, Z. M.; Kim, D. J.

    The location of a CDC within a supply chain management (SCM) network has become a matter of high concern. Existing methods for siting a CDC have relied mainly on manual spreadsheet calculations aimed at minimizing total logistics cost. This study focuses on the development of a new processing algorithm to overcome the limits of present methods, and examines the validity of this algorithm through a case study. The suggested algorithm is based on optimization over a directed graph of the SCM model, utilizing well-established techniques such as minimum spanning trees (MST) and shortest-path methods. The results of this study help in assessing the suitability of an existing SCM network, and can serve as a criterion in the decision-making process for building an optimal SCM network for projected future demand.
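
    The graph-based evaluation an algorithm of this kind relies on can be sketched as follows: score each candidate CDC node by its demand-weighted shortest-path cost over the directed SCM graph and keep the cheapest. This is an illustrative reconstruction using Dijkstra's algorithm, not the authors' exact procedure; the graph, candidate set, and demand figures are hypothetical:

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src over a directed graph given as
    {node: [(neighbor, edge_cost), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def best_cdc(graph, candidates, demand):
    """Pick the candidate CDC whose demand-weighted shortest-path cost
    to all demand nodes is minimal; demand = {node: quantity}."""
    best, best_cost = None, float('inf')
    for c in candidates:
        dist = dijkstra(graph, c)
        cost = sum(q * dist.get(n, float('inf')) for n, q in demand.items())
        if cost < best_cost:
            best, best_cost = c, cost
    return best, best_cost
```

    On the hypothetical network `{'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)], 'C': [('D', 1)]}` with demand `{'C': 2, 'D': 1}`, candidate 'B' is selected with a weighted cost of 4.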

  9. Case based reasoning in criminal intelligence using forensic case data.

    PubMed

    Ribaux, O; Margot, P

    2003-01-01

    A model that is based on the knowledge of experienced investigators in the analysis of serial crime is suggested to bridge a gap between technology and methodology. Its purpose is to provide a solid methodology for the analysis of serial crimes that supports decision making in the deployment of resources, either by guiding proactive policing operations or helping the investigative process. Formalisation has helped to derive a computerised system that efficiently supports the reasoning processes in the analysis of serial crime. This novel approach fully integrates forensic science data.

  10. Dopamine selectively remediates 'model-based' reward learning: a computational approach.

    PubMed

    Sharp, Madeleine E; Foerde, Karin; Daw, Nathaniel D; Shohamy, Daphna

    2016-02-01

    Patients with loss of dopamine due to Parkinson's disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from 'model-free' learning. The other, 'model-based' learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson's disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson's disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson's disease may be related to an inability to pursue reward based on complete representations of the environment. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Human grief: a model for prediction and intervention.

    PubMed

    Bugen, L A

    1977-04-01

    The prevalent approach to understanding of and clinical intervention in the process of mourning employs a model based on stages of bereavement. This paper suggests a theoretical conception that is not tied to a fixed order of emotional states. Two dimensions--closeness of relationship and mourner's perception of preventability of the death--are identified as prime predictors of the intensity and duration of bereavement.

  12. When is the right hemisphere holistic and when is it not? The case of Chinese character recognition.

    PubMed

    Chung, Harry K S; Leung, Jacklyn C Y; Wong, Vienne M Y; Hsiao, Janet H

    2018-05-15

    Holistic processing (HP) has long been considered a characteristic of right hemisphere (RH) processing. Indeed, holistic face processing is typically associated with left visual field (LVF)/RH processing advantages. Nevertheless, expert Chinese character recognition involves reduced HP and increased RH lateralization, presenting a counterexample. Recent modeling research suggests that RH processing may be associated with an increase or decrease in HP, depending on whether spacing or component information was used respectively. Since expert Chinese character recognition involves increasing sensitivity to components while deemphasizing spacing information, RH processing in experts may be associated with weaker HP than novices. Consistent with this hypothesis, in a divided visual field paradigm, novices exhibited HP only in the LVF/RH, whereas experts showed no HP in either visual field. This result suggests that the RH may flexibly switch between part-based and holistic representations, consistent with recent fMRI findings. The RH's advantage in global/low spatial frequency processing is suggested to be relative to the task relevant frequency range. Thus, its use of holistic and part-based representations may depend on how attention is allocated for task relevant information. This study provides the first behavioral evidence showing how type of information used for processing modulates perceptual representations in the RH. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. [Investigation of team processes that enhance team performance in business organization].

    PubMed

    Nawata, Kengo; Yamaguchi, Hiroyuki; Hatano, Toru; Aoshima, Mika

    2015-02-01

    Many researchers have suggested team processes that enhance team performance. However, past team process models were based on crew teams, in which all members jointly perform a single, indivisible, temporary task. These models may be inapplicable to business teams, in which members perform middle- and long-term tasks assigned to them individually. This study modified the teamwork model of Dickinson and McIntyre (1997) and aimed to demonstrate a whole-team process that enhances the performance of business teams. We surveyed five companies (member N = 1,400, team N = 161) and investigated team-level processes. Results showed that there were two sides to team processes: "communication" and "collaboration to achieve a goal." Team processes in which communication enhanced collaboration improved team performance with regard to all aspects of the quantitative objective index (e.g., current income and number of sales), supervisor ratings, and self-rating measurements. On the basis of these results, we discuss the entire process by which teamwork enhances team performance in business organizations.

  14. Dopamine selectively remediates ‘model-based’ reward learning: a computational approach

    PubMed Central

    Sharp, Madeleine E.; Foerde, Karin; Daw, Nathaniel D.

    2016-01-01

    Patients with loss of dopamine due to Parkinson’s disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from ‘model-free’ learning. The other, ‘model-based’ learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson’s disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson’s disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson’s disease may be related to an inability to pursue reward based on complete representations of the environment. PMID:26685155

  15. From Creatures of Habit to Goal-Directed Learners: Tracking the Developmental Emergence of Model-Based Reinforcement Learning.

    PubMed

    Decker, Johannes H; Otto, A Ross; Daw, Nathaniel D; Hartley, Catherine A

    2016-06-01

    Theoretical models distinguish two decision-making strategies that have been formalized in reinforcement-learning theory. A model-based strategy leverages a cognitive model of potential actions and their consequences to make goal-directed choices, whereas a model-free strategy evaluates actions based solely on their reward history. Research in adults has begun to elucidate the psychological mechanisms and neural substrates underlying these learning processes and factors that influence their relative recruitment. However, the developmental trajectory of these evaluative strategies has not been well characterized. In this study, children, adolescents, and adults performed a sequential reinforcement-learning task that enabled estimation of model-based and model-free contributions to choice. Whereas a model-free strategy was apparent in choice behavior across all age groups, a model-based strategy was absent in children, became evident in adolescents, and strengthened in adults. These results suggest that recruitment of model-based valuation systems represents a critical cognitive component underlying the gradual maturation of goal-directed behavior. © The Author(s) 2016.
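
    In the two-step task used across several of the studies above, the contribution of each system is typically estimated by fitting a weighted hybrid of model-free and model-based action values. A minimal sketch of the model-based computation and the mixture, with hypothetical transition probabilities and values rather than fitted data:

```python
def model_based_values(trans, q_stage2):
    """Model-based value of each first-stage action: the expectation,
    under the learned transition model, of the best second-stage value.
    trans[a][s2] = P(second-stage state s2 | first-stage action a)."""
    return {a: sum(p * max(q_stage2[s2].values()) for s2, p in probs.items())
            for a, probs in trans.items()}

def hybrid_choice_values(q_mf, q_mb, w):
    """Weighted mixture used to quantify each system's contribution:
    w = 1 -> purely model-based choice, w = 0 -> purely model-free."""
    return {a: w * q_mb[a] + (1 - w) * q_mf[a] for a in q_mf}
```

    A low fitted w (as reported for children, or under acute stress in the studies above) means choices track the model-free values even when the learned transition model favours a different first-stage action.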

  16. A process-model based approach to prospective memory impairment in Parkinson's disease.

    PubMed

    Kliegel, Matthias; Altgassen, Mareike; Hering, Alexandra; Rose, Nathan S

    2011-07-01

    The present review discusses the current state of research on the clinical neuropsychology of prospective memory in Parkinson's disease. To do so the paper is divided into two sections. In the first section, we briefly outline key features of the (partly implicit) rationale underlying the available literature on the clinical neuropsychology of prospective memory. Here, we present a conceptual model that guides our approach to the clinical neuropsychology of prospective memory in general and to the effects of Parkinson's disease on prospective memory in particular. In the second section, we use this model to guide our review of the available literature and suggest some open issues and future directions motivated by previous findings and the proposed conceptual model. The review suggests that certain phases of the prospective memory process (intention formation and initiation) are particularly impaired by Parkinson's disease. In addition, it is argued that prospective memory may be preserved when tasks involve specific features (e.g., focal cues) that reduce the need for strategic monitoring processes. In terms of suggestions for future directions, it is noted that intervention studies are needed which target the specific phases of the prospective memory process that are impaired in Parkinson's disease, such as planning interventions. Moreover, it is proposed that prospective memory deficits in Parkinson's disease should be explored in the context of a general impairment in the ability to form an intention and plan or coordinate an appropriate series of actions. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Computer-aided classification of breast microcalcification clusters: merging of features from image processing and radiologists

    NASA Astrophysics Data System (ADS)

    Lo, Joseph Y.; Gavrielides, Marios A.; Markey, Mia K.; Jesneck, Jonathan L.

    2003-05-01

    We developed an ensemble classifier for the task of computer-aided diagnosis of breast microcalcification clusters, which are very challenging to characterize for radiologists and computer models alike. The purpose of this study is to help radiologists identify whether suspicious calcification clusters are benign vs. malignant, such that they may potentially recommend fewer unnecessary biopsies for lesions that are actually benign. The data consists of mammographic features extracted by automated image processing algorithms as well as manually interpreted by radiologists according to a standardized lexicon. We used 292 cases from a publicly available mammography database. From each case, we extracted 22 image processing features pertaining to lesion morphology, 5 radiologist features also pertaining to morphology, and the patient age. Linear discriminant analysis (LDA) models were designed using each of the three data types. Each local model performed poorly; the best was one based upon image processing features which yielded ROC area index AZ of 0.59 +/- 0.03 and partial AZ above 90% sensitivity of 0.08 +/- 0.03. We then developed ensemble models using different combinations of those data types, and these models all improved performance compared to the local models. The final ensemble model was based upon 5 features selected by stepwise LDA from all 28 available features. This ensemble performed with AZ of 0.69 +/- 0.03 and partial AZ of 0.21 +/- 0.04, which was statistically significantly better than the model based on the image processing features alone (p<0.001 and p=0.01 for full and partial AZ respectively). This demonstrated the value of the radiologist-extracted features as a source of information for this task. It also suggested there is potential for improved performance using this ensemble classifier approach to combine different sources of currently available data.
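    The local LDA models described above can be illustrated with a minimal two-class Fisher discriminant. The two-feature "benign"/"malignant" data below are synthetic stand-ins, not the mammography features from the study:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher discriminant direction w proportional to
    Sw^{-1} (mu1 - mu0), where Sw is the pooled within-class scatter."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    Sw += 1e-6 * np.eye(Sw.shape[0])   # small ridge for numerical stability
    return np.linalg.solve(Sw, mu1 - mu0)

rng = np.random.default_rng(0)
benign    = rng.normal([0.0, 0.0], 1.0, size=(100, 2))  # synthetic features
malignant = rng.normal([2.0, 1.0], 1.0, size=(100, 2))

w = fisher_lda_direction(benign, malignant)
score_b = benign @ w      # projecting onto w gives a 1-D decision score;
score_m = malignant @ w   # malignant cases should score higher on average
```

    An ROC curve for the classifier is obtained by sweeping a threshold over these projected scores, which is how indices such as the reported AZ are computed.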

  18. Assembly processes of gastropod community change with horizontal and vertical zonation in ancient Lake Ohrid: a metacommunity speciation perspective

    NASA Astrophysics Data System (ADS)

    Hauffe, Torsten; Albrecht, Christian; Wilke, Thomas

    2016-05-01

    The Balkan Lake Ohrid is the oldest and most diverse freshwater lacustrine system in Europe. However, it remains unclear whether species community composition, as well as the diversification of its endemic taxa, is mainly driven by dispersal limitation, environmental filtering, or species interaction. This calls for a holistic perspective involving both evolutionary processes and ecological dynamics, as provided by the unifying framework of the "metacommunity speciation model". The current study used the species-rich model taxon Gastropoda to assess how extant communities in Lake Ohrid are structured by performing process-based metacommunity analyses. Specifically, the study aimed (1) to identify the relative importance of the three community assembly processes and (2) to test whether the importance of these individual processes changes gradually with lake depth or discontinuously with eco-zone shifts. Based on automated eco-zone detection and process-specific simulation steps, we demonstrated that dispersal limitation had the strongest influence on gastropod community composition. However, it was not the exclusive assembly process, but acted together with the other two processes - environmental filtering and species interaction. The relative importance of the community assembly processes varied both with lake depth and eco-zones, though the processes were better predicted by the latter. This suggests that environmental characteristics have a pronounced effect on shaping gastropod communities via assembly processes. Moreover, the study corroborated the high importance of dispersal limitation for both maintaining species richness in Lake Ohrid (through its impact on community composition) and generating endemic biodiversity (via its influence on diversification processes).
However, according to the metacommunity speciation model, the inferred importance of environmental filtering and biotic interaction also suggests a small but significant influence of ecological speciation. These findings contribute to the main goal of the Scientific Collaboration on Past Speciation Conditions in Lake Ohrid (SCOPSCO) deep drilling initiative - inferring the drivers of biotic evolution - and might provide an integrative perspective on biological and limnological dynamics in ancient Lake Ohrid.

  19. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

    Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach such as Web Ontology Language (OWL), used for formulating the knowledge base and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, and specialists' knowledge of the scene and algorithmic processing.

  20. Neural Substrates of Processing Anger in Language: Contributions of Prosody and Semantics.

    PubMed

    Castelluccio, Brian C; Myers, Emily B; Schuh, Jillian M; Eigsti, Inge-Marie

    2016-12-01

    Emotions are conveyed primarily through two channels in language: semantics and prosody. While many studies confirm the role of a left hemisphere network in processing semantic emotion, there has been debate over the role of the right hemisphere in processing prosodic emotion. Some evidence suggests a preferential role for the right hemisphere, and other evidence supports a bilateral model. The relative contributions of semantics and prosody to the overall processing of affect in language are largely unexplored. The present work used functional magnetic resonance imaging to elucidate the neural bases of processing anger conveyed by prosody or semantic content. Results showed a robust, distributed, bilateral network for processing angry prosody and a more modest left hemisphere network for processing angry semantics when compared to emotionally neutral stimuli. Findings suggest the nervous system may be more responsive to prosodic cues in speech than to the semantic content of speech.

  1. Personality Assessment in the Schools: Issues and Procedures for School Psychologists.

    ERIC Educational Resources Information Center

    Knoff, Howard M.

    1983-01-01

    A conceptual model for school-based personality assessment, methods to integrate behavioral and projective assessment procedures, and issues surrounding the use of projective tests are presented. Ways to maximize the personality assessment process for use in placement and programing decisions are suggested. (Author/DWH)

  2. Perspective: Sloppiness and emergent theories in physics, biology, and beyond.

    PubMed

    Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P

    2015-07-07

    Large scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to the same phenomenon: they emerge from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
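    The Fisher-information view of sloppiness can be made concrete with a toy model. The sketch below uses a sum of two decaying exponentials, a standard example of a sloppy model; the parameter values and sampling grid are illustrative:

```python
import numpy as np

def model(theta, t):
    # Classic sloppy example: a sum of two decaying exponentials.
    return np.exp(-theta[0] * t) + np.exp(-theta[1] * t)

def fisher_information(theta, t, eps=1e-6):
    """Gauss-Newton approximation J^T J to the Fisher information matrix
    for a least-squares model with unit observation noise."""
    theta = np.asarray(theta, dtype=float)
    J = np.empty((len(t), len(theta)))
    for i in range(len(theta)):
        dp = np.zeros_like(theta)
        dp[i] = eps
        J[:, i] = (model(theta + dp, t) - model(theta - dp, t)) / (2 * eps)
    return J.T @ J

t = np.linspace(0.0, 5.0, 50)
F = fisher_information([1.0, 1.2], t)
eigs = np.sort(np.linalg.eigvalsh(F))[::-1]
ratio = eigs[0] / eigs[1]   # stiff-to-sloppy eigenvalue ratio (many decades apart)
```

    The large spread between the "stiff" and "sloppy" eigenvalues is the signature the abstract describes: predictions constrain only a few parameter combinations tightly.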

  3. Recent topographic evolution and erosion of the deglaciated Washington Cascades inferred from a stochastic landscape evolution model

    NASA Astrophysics Data System (ADS)

    Moon, Seulgi; Shelef, Eitan; Hilley, George E.

    2015-05-01

    In this study, we model postglacial surface processes and examine the evolution of the topography and denudation rates within the deglaciated Washington Cascades to understand the controls on and time scales of landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes processes of river incision, hillslope diffusion, and stochastic landslides. The surface lowering due to landslides is parameterized using a physically based slope stability model coupled to a stochastic model of the generation of landslides. The model parameters of river incision and stochastic landslides are calibrated against the distribution of thousand-year-timescale denudation rates measured from cosmogenic 10Be isotopes. The probability distributions of those model parameters calculated based on a Bayesian inversion scheme show ranges comparable to those from previous studies in similar rock types and climatic conditions. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), whereas precipitation and slopes affect the spatial variation in landslide denudation rates. Simulation results show that postglacial denudation rates decay over time and take longer than 100 kyr to reach time-invariant rates. Over time, the landslides in the model consume the steep slopes characteristic of deglaciated landscapes. This response time scale is on the order of or longer than glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may produce a significant and prolonged impact on denudation and topography.

  4. Magma transport and metasomatism in the mantle: a critical review of current geochemical models

    USGS Publications Warehouse

    Nielson, J.E.; Wilshire, H.G.

    1993-01-01

    Conflicting geochemical models of metasomatic interactions between mantle peridotite and melt all assume that mantle reactions reflect chromatographic processes. Examination of field, petrological, and compositional data suggests that the hypothesis of chromatographic fractionation based on the supposition of large-scale percolative processes needs review and revision. Well-constrained rock and mineral data from xenoliths indicate that many elements that behave incompatibly in equilibrium crystallization processes are absorbed immediately when melts emerge from conduits into depleted peridotite. After reacting to equilibrium with the peridotite, melt that percolates away from the conduit is largely depleted of incompatible elements. Continued addition of melts extends the zone of equilibrium farther from the conduit. Such a process resembles ion-exchange chromatography for H2O purification, rather than the model of chromatographic species separation. -from Authors

  5. Learning and evolution in bacterial taxis: an operational amplifier circuit modeling the computational dynamics of the prokaryotic 'two component system' protein network.

    PubMed

    Di Paola, Vieri; Marijuán, Pedro C; Lahoz-Beltra, Rafael

    2004-01-01

    Adaptive behavior in unicellular organisms (i.e., bacteria) depends on highly organized networks of proteins purposefully governing the myriad molecular processes occurring within the cellular system. For instance, bacteria are able to explore the environment within which they develop by utilizing the motility of their flagellar system as well as a sophisticated biochemical navigation system that samples the environmental conditions surrounding the cell, searching for nutrients or moving away from toxic substances or dangerous physical conditions. In this paper we discuss how proteins of the intervening signal transduction network could be modeled as artificial neurons, simulating the dynamical aspects of bacterial taxis. The model is based on the assumption that, in some important aspects, proteins can be considered as processing elements or McCulloch-Pitts artificial neurons that transfer and process information from the bacterium's membrane surface to the flagellar motor. This simulation of bacterial taxis has been carried out on a hardware realization of a McCulloch-Pitts artificial neuron using an operational amplifier. Based on the behavior of the operational amplifier we produce a model of the interaction between CheY and FliM, elements of the prokaryotic two component system controlling chemotaxis, as well as a simulation of learning and evolution processes in bacterial taxis. On the one hand, our simulation results indicate that, computationally, these protein 'switches' are similar to McCulloch-Pitts artificial neurons, suggesting a bridge between evolution and learning in dynamical systems at cellular and molecular levels and the evolvable hardware approach. On the other hand, important protein 'tactilizing' properties are not tapped by the model, and this suggests further complexity steps to explore in the approach to biological molecular computing.
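    A McCulloch-Pitts unit of the kind the authors realize with an operational amplifier is simple to state in software. The weights and threshold below are hypothetical, chosen only to illustrate the thresholding behavior, not taken from the paper:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fire (1) iff the weighted input sum reaches
    the threshold, analogous to an op-amp comparator crossing its set point."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# Hypothetical reading of the CheY-P / FliM switch: the flagellar motor
# reverses (output 1) only when phosphorylated-CheY occupancy is high enough.
print(mcculloch_pitts([1, 1], [0.6, 0.6], threshold=1.0))  # → 1
print(mcculloch_pitts([1, 0], [0.6, 0.6], threshold=1.0))  # → 0
```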

  6. Bayesian techniques for analyzing group differences in the Iowa Gambling Task: A case study of intuitive and deliberate decision-makers.

    PubMed

    Steingroever, Helen; Pachur, Thorsten; Šmíra, Martin; Lee, Michael D

    2018-06-01

    The Iowa Gambling Task (IGT) is one of the most popular experimental paradigms for comparing complex decision-making across groups. Most commonly, IGT behavior is analyzed using frequentist tests to compare performance across groups, and to compare inferred parameters of cognitive models developed for the IGT. Here, we present a Bayesian alternative based on Bayesian repeated-measures ANOVA for comparing performance, and a suite of three complementary model-based methods for assessing the cognitive processes underlying IGT performance. The three model-based methods involve Bayesian hierarchical parameter estimation, Bayes factor model comparison, and Bayesian latent-mixture modeling. We illustrate these Bayesian methods by applying them to test the extent to which differences in intuitive versus deliberate decision style are associated with differences in IGT performance. The results show that intuitive and deliberate decision-makers behave similarly on the IGT, and the modeling analyses consistently suggest that both groups of decision-makers rely on similar cognitive processes. Our results challenge the notion that individual differences in intuitive and deliberate decision styles have a broad impact on decision-making. They also highlight the advantages of Bayesian methods, especially their ability to quantify evidence in favor of the null hypothesis, and that they allow model-based analyses to incorporate hierarchical and latent-mixture structures.
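    One advantage the authors highlight, the ability to quantify evidence in favor of the null hypothesis, can be illustrated outside the IGT with a conjugate Bayes factor comparing two groups' choice rates. This is a deliberately simple stand-in, not the authors' hierarchical or latent-mixture models, and the counts are made up:

```python
from math import comb, lgamma, exp, log

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal(k, n):
    # Binomial data (k successes in n) integrated over a uniform Beta(1,1) prior.
    return log(comb(n, k)) + log_beta(k + 1, n - k + 1)

def bayes_factor_01(k1, n1, k2, n2):
    """Bayes factor for H0 (both groups share one choice rate) over
    H1 (each group has its own independent rate), with uniform priors."""
    log_m0 = (log(comb(n1, k1)) + log(comb(n2, k2))
              + log_beta(k1 + k2 + 1, n1 + n2 - k1 - k2 + 1))
    log_m1 = log_marginal(k1, n1) + log_marginal(k2, n2)
    return exp(log_m0 - log_m1)

# Similar rates in two groups yield positive evidence for the null,
# something a frequentist non-significant result cannot assert.
bf_similar   = bayes_factor_01(60, 100, 58, 100)   # > 1: favors H0
bf_different = bayes_factor_01(80, 100, 30, 100)   # << 1: favors H1
```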

  7. MAVEN-SA: Model-Based Automated Visualization for Enhanced Situation Awareness

    DTIC Science & Technology

    2005-11-01

    methods. But historically, as arts evolve, these how-to methods become systematized and codified (e.g. the development and refinement of color theory)...schema (as necessary) 3. Draw inferences from new knowledge to support decision making process...Visual language theory suggests that humans process...informed by theories of learning. Over the years, many types of software have been developed to support student learning. The various types of

  8. Diagnostic reasoning: where we've been, where we're going.

    PubMed

    Monteiro, Sandra M; Norman, Geoffrey

    2013-01-01

    Recently, clinical diagnostic reasoning has been characterized by "dual processing" models, which postulate a fast, unconscious (System 1) component and a slow, logical, analytical (System 2) component. However, there are a number of variants of this basic model, which may lead to conflicting claims. This paper critically reviews current theories and evidence about the nature of clinical diagnostic reasoning. We begin by briefly discussing the history of research in clinical reasoning. We then focus more specifically on the evidence to support dual-processing models. We conclude by identifying knowledge gaps about clinical reasoning and provide suggestions for future research. In contrast to work on analytical and nonanalytical knowledge as a basis for reasoning, these theories focus on the thinking process, not the nature of the knowledge retrieved. Ironically, this appears to be a revival of an outdated concept. Rather than defining diagnostic performance by problem-solving skills, it is now being defined by processing strategy. The version of dual processing that has received most attention in the literature in medical diagnosis might be labeled a "default/interventionist" model (17), which suggests that a default system of cognitive processes (System 1) is responsible for cognitive biases that lead to diagnostic errors and that System 2 intervenes to correct these errors. Consequently, from this model, the best strategy for reducing errors is to make students aware of the biases and to encourage them to rely more on System 2. However, an accumulation of evidence suggests that (a) strategies directed at increasing analytical (System 2) processing, by slowing down, reducing distractions, paying conscious attention, and (b) strategies directed at making students aware of the effect of cognitive biases, have no impact on error rates.
Conversely, strategies based on increasing application of relevant knowledge appear to have some success and are consistent with basic research on concept formation.

  9. The DAB model of drawing processes

    NASA Technical Reports Server (NTRS)

    Hochhaus, Larry W.

    1989-01-01

    The problem of automatic drawing was investigated in two ways. First, a DAB model of drawing processes was introduced. DAB stands for three types of knowledge hypothesized to support drawing abilities, namely, Drawing Knowledge, Assimilated Knowledge, and Base Knowledge. Speculation concerning the content and character of each of these subsystems of the drawing process is introduced and the overall adequacy of the model is evaluated. Second, eight experts were each asked to understand six engineering drawings and to think aloud while doing so. It is anticipated that a concurrent protocol analysis of these interviews can be carried out in the future. Meanwhile, a general description of the videotape database is provided. In conclusion, the DAB model was praised as a worthwhile first step toward solution of a difficult problem, but was considered by and large inadequate to the challenge of automatic drawing. Suggestions for improvements on the model were made.

  10. Nitrogen gas emissions and nitrate leaching dynamics under different tillage practices based on data synthesis and process-based modeling

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Ren, W.; Tao, B.; Zhu, X.

    2017-12-01

    Nitrogen losses from agroecosystems are of great concern in the context of global change because of their effects on global warming, through nitrogen gas emissions (e.g., N2O), and on water pollution, through mineral nitrogen leaching (e.g., NO3-). Conservation tillage, particularly no-tillage (NT), may enhance soil carbon sequestration, soil aggregation and moisture; it therefore has the potential to promote N2O emissions and reduce NO3- leaching compared with conventional tillage (CT). However, the associated processes are significantly affected by various factors, such as soil properties, climate, and crop types. How tillage management practices affect nitrogen transformations and fluxes is still far from clear, with inconsistent or even opposite results from previous studies. To fill this knowledge gap, we quantitatively investigated gaseous and leaching nitrogen losses from NT and CT agroecosystems based on data synthesis and an improved process-based agroecosystem model. Our preliminary results suggest that NT management is more efficient in reducing NO3- leaching, while it simultaneously increases N2O emissions by approximately 10% compared with CT. The effects of NT on N2O emissions and NO3- leaching are highly influenced by the placement of nitrogen fertilizer and are more pronounced in humid climate conditions. Crop type is a less dominant factor in determining N2O and NO3- losses. Both our data synthesis and process-based modeling suggest that the enhanced carbon sequestration capacity from NT could be largely compromised by the associated NT-induced increases in N2O emissions. This study provides a comprehensive quantitative assessment of NT effects on nitrogen emissions and leaching in agroecosystems, and offers scientific information for identifying management practices that ensure food security while minimizing adverse environmental impacts.
The results also underscore the importance of suitable nitrogen management in the NT agroecosystems for climate adaptation and mitigation.

  11. Cognitive changes in conjunctive rule-based category learning: An ERP approach.

    PubMed

    Rabi, Rahel; Joanisse, Marc F; Zhu, Tianshu; Minda, John Paul

    2018-06-25

    When learning rule-based categories, sufficient cognitive resources are needed to test hypotheses, maintain the currently active rule in working memory, update rules after feedback, and to select a new rule if necessary. Prior research has demonstrated that conjunctive rules are more complex than unidimensional rules and place greater demands on executive functions like working memory. In our study, event-related potentials (ERPs) were recorded while participants performed a conjunctive rule-based category learning task with trial-by-trial feedback. In line with prior research, correct categorization responses resulted in a larger stimulus-locked late positive complex compared to incorrect responses, possibly indexing the updating of rule information in memory. Incorrect trials elicited a pronounced feedback-locked P300, which suggested a disconnect between perception and the rule-based strategy. We also examined the differential processing of stimuli that were able to be correctly classified by the suboptimal single-dimensional rule ("easy" stimuli) versus those that could only be correctly classified by the optimal, conjunctive rule ("difficult" stimuli). Among strong learners, a larger, late positive slow wave emerged for difficult compared with easy stimuli, suggesting differential processing of category items even though strong learners performed well on the conjunctive category set. Overall, the findings suggest that ERP combined with computational modelling can be used to better understand the cognitive processes involved in rule-based category learning.

  12. A spatially explicit hydro-ecological modeling framework (BEPS-TerrainLab V2.0): Model description and test in a boreal ecosystem in Eastern North America

    NASA Astrophysics Data System (ADS)

    Govind, Ajit; Chen, Jing Ming; Margolis, Hank; Ju, Weimin; Sonnentag, Oliver; Giasson, Marc-André

    2009-04-01

    A spatially explicit, process-based hydro-ecological model, BEPS-TerrainLab V2.0, was developed to improve the representation of ecophysiological, hydro-ecological and biogeochemical processes of boreal ecosystems in a tightly coupled manner. Several processes unique to boreal ecosystems were implemented including the sub-surface lateral water fluxes, stratification of vegetation into distinct layers for explicit ecophysiological representation, inclusion of novel spatial upscaling strategies and biogeochemical processes. To account for preferential water fluxes common in humid boreal ecosystems, a novel scheme was introduced based on laboratory analyses. Leaf-scale ecophysiological processes were upscaled to canopy-scale by explicitly considering leaf physiological conditions as affected by light and water stress. The modified model was tested with 2 years of continuous measurements taken at the Eastern Old Black Spruce Site of the Fluxnet-Canada Research Network located in a humid boreal watershed in eastern Canada. Comparison of the simulated and measured ET, water-table depth (WTD), volumetric soil water content (VSWC) and gross primary productivity (GPP) revealed that BEPS-TerrainLab V2.0 simulates hydro-ecological processes with reasonable accuracy. The model was able to explain 83% of the ET, 92% of the GPP variability and 72% of the WTD dynamics. The model suggests that in humid ecosystems such as eastern North American boreal watersheds, topographically driven sub-surface baseflow is the main mechanism of soil water partitioning which significantly affects the local-scale hydrological conditions.

  13. How Does Higher Frequency Monitoring Data Affect the Calibration of a Process-Based Water Quality Model?

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, L.

    2014-12-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. 
Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements that could make models such as INCA-P more suited to auto-calibration and uncertainty analyses. Two key improvements include model simplification, so that all model parameters can be included in an analysis of this kind, and better documenting of recommended ranges for each parameter, to help in choosing sensible priors.
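    The core effect reported above, tighter posteriors from higher-frequency calibration data, can be sketched with a one-parameter toy calibration. A synthetic exponential decay stands in for INCA-P, and a flat-prior grid posterior stands in for MCMC-DREAM; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
k_true, sigma = 0.05, 0.1
t_daily = np.arange(0, 540)                      # ~18 months of daily samples
y_daily = np.exp(-k_true * t_daily / 30) + rng.normal(0, sigma, t_daily.size)
t_fort, y_fort = t_daily[::14], y_daily[::14]    # fortnightly subsample

def ci_width(t, y, grid=np.linspace(0.01, 0.1, 400)):
    """Width of the 95% credible interval for the decay-rate parameter k,
    from a flat-prior grid posterior under Gaussian observation noise."""
    loglik = np.array([-0.5 * np.sum((y - np.exp(-k * t / 30)) ** 2) / sigma**2
                       for k in grid])
    post = np.exp(loglik - loglik.max())
    post /= post.sum()
    cdf = np.cumsum(post)
    return grid[np.searchsorted(cdf, 0.975)] - grid[np.searchsorted(cdf, 0.025)]

# More frequent observations of the same process give a tighter posterior,
# mirroring the reported 26 -> 6 ug/l reduction in TDP uncertainty.
w_daily, w_fort = ci_width(t_daily, y_daily), ci_width(t_fort, y_fort)
```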

  14. A model for prediction of STOVL ejector dynamics

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1989-01-01

    A semi-empirical control-volume approach to ejector modeling for transient performance prediction is presented. This new approach is motivated by the need for a predictive real-time ejector sub-system simulation for Short Take-Off and Vertical Landing (STOVL) integrated flight and propulsion controls design applications. Emphasis is placed on discussion of the approximate characterization of the mixing process central to thrust augmenting ejector operation. The proposed ejector model suggests transient flow predictions are possible with a model based on steady-flow data. A practical test case is presented to illustrate model calibration.

  15. Sensitivity to the Sampling Process Emerges From the Principle of Efficiency.

    PubMed

    Jara-Ettinger, Julian; Sun, Felix; Schulz, Laura; Tenenbaum, Joshua B

    2018-05-01

Humans can seamlessly infer other people's preferences based on what they do. Broadly, two types of accounts have been proposed to explain different aspects of this ability. The first focuses on spatial information: agents' efficient navigation in space reveals what they like. The second focuses on statistical information: uncommon choices reveal stronger preferences. Together, these two lines of research suggest that we have two distinct capacities for inferring preferences. Here we propose that this is not the case, and that both spatial and statistical preference inferences can be explained by the single assumption that agents act efficiently. We show that people's sensitivity to spatial and statistical information when inferring preferences is best predicted by a computational model of the principle of efficiency, and that this model outperforms dual-system models even when the latter are fit to participant judgments. Our results suggest that, in adults, a unified understanding of agency under the principle of efficiency underlies our ability to infer preferences. Copyright © 2018 Cognitive Science Society, Inc.

  16. From spatially variable streamflow to distributed hydrological models: Analysis of key modeling decisions

    NASA Astrophysics Data System (ADS)

    Fenicia, Fabrizio; Kavetski, Dmitri; Savenije, Hubert H. G.; Pfister, Laurent

    2016-02-01

    This paper explores the development and application of distributed hydrological models, focusing on the key decisions of how to discretize the landscape, which model structures to use in each landscape element, and how to link model parameters across multiple landscape elements. The case study considers the Attert catchment in Luxembourg—a 300 km2 mesoscale catchment with 10 nested subcatchments that exhibit clearly different streamflow dynamics. The research questions are investigated using conceptual models applied at hydrologic response unit (HRU) scales (1-4 HRUs) on 6 hourly time steps. Multiple model structures are hypothesized and implemented using the SUPERFLEX framework. Following calibration, space/time model transferability is tested using a split-sample approach, with evaluation criteria including streamflow prediction error metrics and hydrological signatures. Our results suggest that: (1) models using geology-based HRUs are more robust and capture the spatial variability of streamflow time series and signatures better than models using topography-based HRUs; this finding supports the hypothesis that, in the Attert, geology exerts a stronger control than topography on streamflow generation, (2) streamflow dynamics of different HRUs can be represented using distinct and remarkably simple model structures, which can be interpreted in terms of the perceived dominant hydrologic processes in each geology type, and (3) the same maximum root zone storage can be used across the three dominant geological units with no loss in model transferability; this finding suggests that the partitioning of water between streamflow and evaporation in the study area is largely independent of geology and can be used to improve model parsimony. 
The modeling methodology introduced in this study is general and can be used to advance our broader understanding and prediction of hydrological behavior, including the landscape characteristics that control hydrologic response, the dominant processes associated with different landscape types, and the spatial relations of catchment processes. This article was corrected on 14 MAR 2016. See the end of the full text for details.

  17. Stimulating Scientific Reasoning with Drawing-Based Modeling

    NASA Astrophysics Data System (ADS)

    Heijnes, Dewi; van Joolingen, Wouter; Leenaars, Frank

    2018-02-01

We investigate the way students' reasoning about evolution can be supported by drawing-based modeling. We modified the drawing-based modeling tool SimSketch to allow for modeling evolutionary processes. In three iterations of development and testing, students in lower secondary education worked on creating an evolutionary model. After each iteration, the user interface and instructions were adjusted based on students' remarks and the teacher's observations. Students' conversations were analyzed for reasoning complexity as a measure of the efficacy of the modeling tool and the instructions. These findings were also used to compose a set of recommendations for teachers and curriculum designers for using and constructing models in the classroom. Our findings suggest that to stimulate scientific reasoning in students working with a drawing-based modeling tool, instruction about the tool and the domain should be integrated. In creating models, a sufficient level of scaffolding is necessary. Without appropriate scaffolds, students are not able to create the model. With too much scaffolding, students may show reasoning that incorrectly assigns external causes to behavior in the model.

  18. Analysis of the Temperature and Strain-Rate Dependences of Strain Hardening

    NASA Astrophysics Data System (ADS)

    Kreyca, Johannes; Kozeschnik, Ernst

    2018-01-01

    A classical constitutive modeling-based Ansatz for the impact of thermal activation on the stress-strain response of metallic materials is compared with the state parameter-based Kocks-Mecking model. The predicted functional dependencies suggest that, in the first approach, only the dislocation storage mechanism is a thermally activated process, whereas, in the second approach, only the mechanism of dynamic recovery is. In contradiction to each of these individual approaches, our analysis and comparison with experimental evidence shows that thermal activation contributes both to dislocation generation and annihilation.

  19. Attractor Dynamics and Semantic Neighborhood Density: Processing Is Slowed by Near Neighbors and Speeded by Distant Neighbors

    PubMed Central

    Mirman, Daniel; Magnuson, James S.

    2008-01-01

    The authors investigated semantic neighborhood density effects on visual word processing to examine the dynamics of activation and competition among semantic representations. Experiment 1 validated feature-based semantic representations as a basis for computing semantic neighborhood density and suggested that near and distant neighbors have opposite effects on word processing. Experiment 2 confirmed these results: Word processing was slower for dense near neighborhoods and faster for dense distant neighborhoods. Analysis of a computational model showed that attractor dynamics can produce this pattern of neighborhood effects. The authors argue for reconsideration of traditional models of neighborhood effects in terms of attractor dynamics, which allow both inhibitory and facilitative effects to emerge. PMID:18194055

  20. Comparing cropland net primary production estimates from inventory, a satellite-based model, and a process-based model in the Midwest of the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhengpeng; Liu, Shuguang; Tan, Zhengxi

    2014-04-01

Accurately quantifying the spatial and temporal variability of net primary production (NPP) for croplands is essential to understand regional cropland carbon dynamics. We compared three NPP estimates for croplands in the Midwestern United States: inventory-based estimates using crop yield data from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS); estimates from the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) NPP product; and estimates from the General Ensemble biogeochemical Modeling System (GEMS) process-based model. The three methods estimated mean NPP in the range of 469–687 g C m−2 yr−1 and total NPP in the range of 318–490 Tg C yr−1 for croplands in the Midwest in 2007 and 2008. The NPP estimates from crop yield data and the GEMS model showed the mean NPP for croplands was over 650 g C m−2 yr−1 while the MODIS NPP product estimated the mean NPP was less than 500 g C m−2 yr−1. MODIS NPP also showed very different spatial variability of the cropland NPP from the other two methods. We found these differences were mainly caused by the difference in the land cover data and the crop-specific information used in the methods. Our study demonstrated that the detailed mapping of the temporal and spatial change of crop species is critical for estimating the spatial and temporal variability of cropland NPP. Finally, we suggest that high-resolution land cover data with species-specific crop information should be used in satellite-based and process-based models to improve carbon estimates for croplands.

  1. Comparing cropland net primary production estimates from inventory, a satellite-based model, and a process-based model in the Midwest of the United States

    USGS Publications Warehouse

    Li, Zhengpeng; Liu, Shuguang; Tan, Zhengxi; Bliss, Norman B.; Young, Claudia J.; West, Tristram O.; Ogle, Stephen M.

    2014-01-01

Accurately quantifying the spatial and temporal variability of net primary production (NPP) for croplands is essential to understand regional cropland carbon dynamics. We compared three NPP estimates for croplands in the Midwestern United States: inventory-based estimates using crop yield data from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS); estimates from the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) NPP product; and estimates from the General Ensemble biogeochemical Modeling System (GEMS) process-based model. The three methods estimated mean NPP in the range of 469–687 g C m−2 yr−1 and total NPP in the range of 318–490 Tg C yr−1 for croplands in the Midwest in 2007 and 2008. The NPP estimates from crop yield data and the GEMS model showed the mean NPP for croplands was over 650 g C m−2 yr−1 while the MODIS NPP product estimated the mean NPP was less than 500 g C m−2 yr−1. MODIS NPP also showed very different spatial variability of the cropland NPP from the other two methods. We found these differences were mainly caused by the difference in the land cover data and the crop-specific information used in the methods. Our study demonstrated that the detailed mapping of the temporal and spatial change of crop species is critical for estimating the spatial and temporal variability of cropland NPP. We suggest that high-resolution land cover data with species-specific crop information should be used in satellite-based and process-based models to improve carbon estimates for croplands.

  2. Smart City Mobility Application--Gradient Boosting Trees for Mobility Prediction and Analysis Based on Crowdsourced Data.

    PubMed

    Semanjski, Ivana; Gautama, Sidharta

    2015-07-03

    Mobility management represents one of the most important parts of the smart city concept. The way we travel, at what time of the day, for what purposes and with what transportation modes, have a pertinent impact on the overall quality of life in cities. To manage this process, detailed and comprehensive information on individuals' behaviour is needed as well as effective feedback/communication channels. In this article, we explore the applicability of crowdsourced data for this purpose. We apply a gradient boosting trees algorithm to model individuals' mobility decision making processes (particularly concerning what transportation mode they are likely to use). To accomplish this we rely on data collected from three sources: a dedicated smartphone application, a geographic information systems-based web interface and weather forecast data collected over a period of six months. The applicability of the developed model is seen as a potential platform for personalized mobility management in smart cities and a communication tool between the city (to steer the users towards more sustainable behaviour by additionally weighting preferred suggestions) and users (who can give feedback on the acceptability of the provided suggestions, by accepting or rejecting them, providing an additional input to the learning process).
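As a rough illustration of the modelling approach, the sketch below implements least-squares gradient boosting with decision stumps on a made-up mode-choice dataset (trip distance and rain predicting car vs. walk). The features, decision rule, and hyperparameters are illustrative stand-ins, not those of the study's crowdsourced data.

```python
import random

random.seed(0)

# Hypothetical crowdsourced trips: (distance_km, raining 0/1) -> mode
# (1 = car, 0 = walk). The labelling rule below is purely illustrative.
def true_mode(dist, rain):
    return 1 if dist > 2.0 or rain else 0

X = [[random.uniform(0.1, 4.0), random.randint(0, 1)] for _ in range(200)]
y = [true_mode(d, r) for d, r in X]

def fit_stump(X, residuals):
    """Best single-split regression stump by least squares."""
    best = None
    for j in range(len(X[0])):
        for thr in sorted({row[j] for row in X}):
            left = [r for row, r in zip(X, residuals) if row[j] <= thr]
            right = [r for row, r in zip(X, residuals) if row[j] > thr]
            if not left or not right:
                continue
            lv, rv = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((r - lv) ** 2 for r in left)
                   + sum((r - rv) ** 2 for r in right))
            if best is None or sse < best[0]:
                best = (sse, j, thr, lv, rv)
    return best[1:]

def boost(X, y, n_trees=20, lr=0.5):
    # Each stump is fitted to the current residuals (LS-Boost).
    pred = [0.5] * len(y)
    stumps = []
    for _ in range(n_trees):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        j, thr, lv, rv = fit_stump(X, resid)
        stumps.append((j, thr, lv, rv))
        pred = [p + lr * (lv if row[j] <= thr else rv)
                for row, p in zip(X, pred)]
    return stumps

def predict(stumps, row, lr=0.5):
    s = 0.5 + sum(lr * (lv if row[j] <= thr else rv)
                  for j, thr, lv, rv in stumps)
    return 1 if s >= 0.5 else 0

stumps = boost(X, y)
acc = sum(predict(stumps, row) == yi for row, yi in zip(X, y)) / len(y)
print(f"training accuracy: {acc:.2f}")
```

Production systems would use a tuned library implementation (e.g. scikit-learn's gradient boosting) rather than this didactic stump ensemble, but the residual-fitting loop is the same idea.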

  3. The impact of primary care reform on health system performance in Canada: a systematic review.

    PubMed

    Carter, Renee; Riverin, Bruno; Levesque, Jean-Frédéric; Gariepy, Geneviève; Quesnel-Vallée, Amélie

    2016-07-30

    We aimed to synthesize the evidence of a causal effect and draw inferences about whether Canadian primary care reforms improved health system performance based on measures of health service utilization, processes of care, and physician productivity. We searched the Embase, PubMed and Web of Science databases for records from 2000 to September 2015. We based our risk of bias assessment on the Grading of Recommendations Assessment, Development and Evaluation guidelines. Full-text studies were synthesized and organized according to the three outcome categories: health service utilization, processes of care, and physician costs and productivity. We found moderate quality evidence that team-based models of care led to reductions in emergency department use, but the evidence was mixed for hospital admissions. We also found low quality evidence that team-based models, blended capitation models and pay-for-performance incentives led to small and sometimes non-significant improvements in processes of care. Studies examining new payment models on physician costs and productivity were of high methodological quality and provided a coherent body of evidence assessing enhanced fee-for-service and blended capitation payment models. A small number of studies suggested that team-based models contributed to reductions in emergency department use in Quebec and Alberta. Regarding processes of diabetes care, studies found higher rates of testing for blood glucose levels, retinopathy and cholesterol in Alberta's team-based primary care model and in practices eligible for pay-for-performance incentives in Ontario. However pay-for-performance in Ontario was found to have null to moderate effects on other prevention and screening activities. 
Although blended capitation payment in Ontario contributed to decreases in the number of services delivered and patients seen per day, the number of enrolled patients and number of days worked in a year was similar to that of enhanced fee-for-service practices.

  4. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D.; Godtliebsen, F.; Rue, H.

    2012-04-01

Detailed knowledge of past climate variations is important for gaining insight into possible future climate scenarios. The relative shortness of the available high-quality instrumental climate record necessitates the use of various climate proxy archives for making inferences about past climate evolution. This, however, requires an accurate assessment of timescale errors in proxy-based paleoclimatic reconstructions. We here propose an approach to the assessment of timescale errors in proxy series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density on age estimates along the length of a proxy archive. In the general situation of uncertainty in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise-linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice cores and two lake/marine sediment cores, representing typical examples of paleoproxy archives with age models constructed using tie points of mixed origin.
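A minimal Monte Carlo version of the tie-point construction: assuming accumulation between two absolutely dated tie points is a stationary Gamma process, the normalised age at an intermediate depth follows a Beta distribution (a Dirichlet bridge), which the simulation below reproduces empirically. All numerical values are illustrative, not taken from the paper.

```python
import random

random.seed(1)

# Accumulation over n depth increments, each taking a Gamma(alpha, 1) time.
# Conditioning on the known total span between the tie points, the
# normalised age at increment k is Beta(k*alpha, (n-k)*alpha) distributed.
n, alpha = 50, 2.0          # illustrative discretisation and shape
t0, t1 = 1000.0, 2000.0     # tie-point ages (years), absolutely dated
k = 20                      # index of the intermediate depth of interest

ages = []
for _ in range(10000):
    incs = [random.gammavariate(alpha, 1.0) for _ in range(n)]
    frac = sum(incs[:k]) / sum(incs)       # Beta-distributed age fraction
    ages.append(t0 + frac * (t1 - t0))

ages.sort()
mean_age = sum(ages) / len(ages)
lo, hi = ages[int(0.025 * len(ages))], ages[int(0.975 * len(ages))]
print(f"mean age at depth {k}/{n}: {mean_age:.0f} yr")
print(f"empirical 95% interval: [{lo:.0f}, {hi:.0f}] yr")
```

The empirical mean sits at the linear interpolation t0 + (k/n)(t1 - t0), while the interval quantifies the timescale uncertainty between the dated horizons.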

  5. Distillation and Air Stripping Designs for the Lunar Surface

    NASA Technical Reports Server (NTRS)

    Boul, Peter J.; Lange, Kevin E.; Conger, Bruce; Anderson, Molly

    2009-01-01

Air stripping and distillation are two gravity-based methods that may be applied to the purification of wastewater at a lunar base. These gravity-based water processing techniques are robust physical separation methods whose simplicity of design and operation can make them advantageous relative to many alternatives. The two techniques can be used in conjunction with each other to obtain high-purity water. The components and feed compositions for modeling wastewater streams are presented in conjunction with the Aspen property system for traditional stage distillation models and air stripping models. While the individual components of each waste stream will vary naturally within certain bounds, an analog model for wastewater processing is suggested based on typical concentration ranges for these components. Target purity levels for recycled water are determined for each individual component based on NASA's required maximum contaminant levels for potable water. Distillation processes are modeled separately and in tandem with air stripping to demonstrate the potential effectiveness and utility of these methods in recycling wastewater on the Moon. Optimum parameters such as reflux ratio, feed stage location, and processing rates are determined with respect to the power consumption of the process. Multistage distillation is evaluated to determine the minimum number of stages necessary for each of 65 components in mixed humidity condensate and urine wastewater streams. Components of the wastewater streams are ranked by Henry's law constant, and the suitability of air stripping for the purification of wastewater, in terms of component removal, is evaluated. Scaling factors for distillation and air stripping columns are presented to account for the difference in the lunar gravitational environment. 
Commercially available distillation and air stripping units which are considered suitable for Exploration Life Support are presented. The advantages to the various designs are summarized with respect to water purity levels, power consumption, and processing rates.
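The ranking step can be sketched in a few lines. The component list and Henry's-law values below are rough, order-of-magnitude placeholders (dimensionless gas/aqueous ratios near 25 °C), not the study's 65-component inventory, and the strippability cut-off is purely illustrative.

```python
# Illustrative dimensionless Henry's law constants (gas/aqueous ratio).
# Values are approximate placeholders, not the study's data.
henry = {
    "chloroform": 1.5e-1,
    "acetone":    1.6e-3,
    "ammonia":    6.1e-4,
    "ethanol":    2.1e-4,
    "urea":       1.0e-9,   # effectively non-volatile
}

# Higher Henry's constant -> more readily removed by air stripping.
ranked = sorted(henry, key=henry.get, reverse=True)
for name in ranked:
    suitable = henry[name] > 1e-3      # illustrative cut-off
    print(f"{name:10s} H = {henry[name]:.1e}  strippable: {suitable}")
```

Components falling below the cut-off would be left to distillation (or other unit operations) rather than air stripping.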

  6. Day-Ahead Crude Oil Price Forecasting Using a Novel Morphological Component Analysis Based Model

    PubMed Central

    Zhu, Qing; Zou, Yingchao; Lai, Kin Keung

    2014-01-01

As a typical nonlinear and dynamic system, the crude oil price movement is difficult to predict and its accurate forecasting remains the subject of intense research activity. Recent empirical evidence suggests that the multiscale data characteristics in the price movement are another important stylized fact. The incorporation of a mixture of data characteristics in the time-scale domain during the modelling process can lead to significant performance improvement. This paper proposes a novel morphological component analysis based hybrid methodology for modeling the multiscale heterogeneous characteristics of the price movement in the crude oil markets. Empirical studies in two representative benchmark crude oil markets reveal the existence of a multiscale heterogeneous microdata structure. The significant performance improvement of the proposed algorithm incorporating the heterogeneous data characteristics, against benchmark random walk, ARMA, and SVR models, is also attributed to the innovative methodology proposed to incorporate this important stylized fact during the modelling process. Meanwhile, work in this paper offers additional insights into the heterogeneous market microstructure with economically viable interpretations. PMID:25061614

  7. Day-ahead crude oil price forecasting using a novel morphological component analysis based model.

    PubMed

    Zhu, Qing; He, Kaijian; Zou, Yingchao; Lai, Kin Keung

    2014-01-01

As a typical nonlinear and dynamic system, the crude oil price movement is difficult to predict and its accurate forecasting remains the subject of intense research activity. Recent empirical evidence suggests that the multiscale data characteristics in the price movement are another important stylized fact. The incorporation of a mixture of data characteristics in the time-scale domain during the modelling process can lead to significant performance improvement. This paper proposes a novel morphological component analysis based hybrid methodology for modeling the multiscale heterogeneous characteristics of the price movement in the crude oil markets. Empirical studies in two representative benchmark crude oil markets reveal the existence of a multiscale heterogeneous microdata structure. The significant performance improvement of the proposed algorithm incorporating the heterogeneous data characteristics, against benchmark random walk, ARMA, and SVR models, is also attributed to the innovative methodology proposed to incorporate this important stylized fact during the modelling process. Meanwhile, work in this paper offers additional insights into the heterogeneous market microstructure with economically viable interpretations.

  8. Computational cognitive modeling of the temporal dynamics of fatigue from sleep loss.

    PubMed

    Walsh, Matthew M; Gunzelmann, Glenn; Van Dongen, Hans P A

    2017-12-01

    Computational models have become common tools in psychology. They provide quantitative instantiations of theories that seek to explain the functioning of the human mind. In this paper, we focus on identifying deep theoretical similarities between two very different models. Both models are concerned with how fatigue from sleep loss impacts cognitive processing. The first is based on the diffusion model and posits that fatigue decreases the drift rate of the diffusion process. The second is based on the Adaptive Control of Thought - Rational (ACT-R) cognitive architecture and posits that fatigue decreases the utility of candidate actions leading to microlapses in cognitive processing. A biomathematical model of fatigue is used to control drift rate in the first account and utility in the second. We investigated the predicted response time distributions of these two integrated computational cognitive models for performance on a psychomotor vigilance test under conditions of total sleep deprivation, simulated shift work, and sustained sleep restriction. The models generated equivalent predictions of response time distributions with excellent goodness-of-fit to the human data. More importantly, although the accounts involve different modeling approaches and levels of abstraction, they represent the effects of fatigue in a functionally equivalent way: in both, fatigue decreases the signal-to-noise ratio in decision processes and decreases response inhibition. This convergence suggests that sleep loss impairs psychomotor vigilance performance through degradation of the quality of cognitive processing, which provides a foundation for systematic investigation of the effects of sleep loss on other aspects of cognition. Our findings illustrate the value of treating different modeling formalisms as vehicles for discovery.
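The first account can be illustrated with a toy simulation: a single-bound drift-diffusion walk in which fatigue is represented only as a lowered drift rate. Parameter values are illustrative, not the published model fits.

```python
import random
from statistics import mean

random.seed(7)

def first_passage_times(drift, n_trials=500, threshold=1.0,
                        dt=0.002, noise=1.0, t_max=10.0):
    """Simulate single-bound drift-diffusion first-passage (response) times."""
    rts = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while x < threshold and t < t_max:
            x += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
            x = max(x, 0.0)  # reflecting floor at the starting point
            t += dt
        rts.append(t)
    return rts

# Fatigue modelled simply as a reduced drift rate (toy values).
rested = first_passage_times(drift=3.0)
fatigued = first_passage_times(drift=1.5)
print(f"mean RT rested:   {mean(rested):.3f} s")
print(f"mean RT fatigued: {mean(fatigued):.3f} s")
```

Lowering the drift rate slows and widens the response-time distribution, the qualitative signature of sleep loss on psychomotor vigilance performance that both accounts reproduce.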

  9. TESTING AN INTEGRATED MODEL OF PROGRAM IMPLEMENTATION: THE FOOD, HEALTH & CHOICES SCHOOL-BASED CHILDHOOD OBESITY PREVENTION INTERVENTION PROCESS EVALUATION

    PubMed Central

    Gray, Heewon Lee; Tipton, Elizabeth; Contento, Isobel; Koch, Pamela

    2016-01-01

    Childhood obesity is a complex, worldwide problem. Significant resources are invested in its prevention, and high-quality evaluations of these efforts are important. Conducting trials in school settings is complicated, making process evaluations useful for explaining results. Intervention fidelity has been demonstrated to influence outcomes, but others have suggested that other aspects of implementation, including participant responsiveness, should be examined more systematically. During Food, Health & Choices (FHC), a school-based childhood obesity prevention trial designed to test a curriculum and wellness policy taught by trained FHC instructors to fifth grade students in 20 schools during 2012–2013, we assessed relationships among facilitator behaviors (i.e., fidelity and teacher interest), participant behaviors (i.e., student satisfaction and recall), and program outcomes (i.e., energy balance-related behaviors) using hierarchical linear models, controlling for student, class, and school characteristics. We found positive relationships between student satisfaction and recall and program outcomes, but not fidelity and program outcomes. We also found relationships between teacher interest and fidelity when teachers participated in implementation. Finally, we found a significant interaction between fidelity and satisfaction on behavioral outcomes. These findings suggest that individual students in the same class responded differently to the same intervention. They also suggest the importance of teacher buy-in for successful intervention implementation. Future studies should examine how facilitator and participant behaviors together are related to both outcomes and implementation. Assessing multiple aspects of implementation using models that account for contextual influences on behavioral outcomes is an important step forward for prevention intervention process evaluations. PMID:27921200

  10. Testing an Integrated Model of Program Implementation: the Food, Health & Choices School-Based Childhood Obesity Prevention Intervention Process Evaluation.

    PubMed

    Burgermaster, Marissa; Gray, Heewon Lee; Tipton, Elizabeth; Contento, Isobel; Koch, Pamela

    2017-01-01

    Childhood obesity is a complex, worldwide problem. Significant resources are invested in its prevention, and high-quality evaluations of these efforts are important. Conducting trials in school settings is complicated, making process evaluations useful for explaining results. Intervention fidelity has been demonstrated to influence outcomes, but others have suggested that other aspects of implementation, including participant responsiveness, should be examined more systematically. During Food, Health & Choices (FHC), a school-based childhood obesity prevention trial designed to test a curriculum and wellness policy taught by trained FHC instructors to fifth grade students in 20 schools during 2012-2013, we assessed relationships among facilitator behaviors (i.e., fidelity and teacher interest); participant behaviors (i.e., student satisfaction and recall); and program outcomes (i.e., energy balance-related behaviors) using hierarchical linear models, controlling for student, class, and school characteristics. We found positive relationships between student satisfaction and recall and program outcomes, but not fidelity and program outcomes. We also found relationships between teacher interest and fidelity when teachers participated in implementation. Finally, we found a significant interaction between fidelity and satisfaction on behavioral outcomes. These findings suggest that individual students in the same class responded differently to the same intervention. They also suggest the importance of teacher buy-in for successful intervention implementation. Future studies should examine how facilitator and participant behaviors together are related to both outcomes and implementation. Assessing multiple aspects of implementation using models that account for contextual influences on behavioral outcomes is an important step forward for prevention intervention process evaluations.

  11. Modeling of high‐frequency seismic‐wave scattering and propagation using radiative transfer theory

    USGS Publications Warehouse

    Zeng, Yuehua

    2017-01-01

    This is a study of the nonisotropic scattering process based on radiative transfer theory and its application to the observation of the M 4.3 aftershock recording of the 2008 Wells earthquake sequence in Nevada. Given a wide range of recording distances from 29 to 320 km, the data provide a unique opportunity to discriminate scattering models based on their distance‐dependent behaviors. First, we develop a stable numerical procedure to simulate nonisotropic scattering waves based on the 3D nonisotropic scattering theory proposed by Sato (1995). By applying the simulation method to the inversion of M 4.3 Wells aftershock recordings, we find that a nonisotropic scattering model, dominated by forward scattering, provides the best fit to the observed high‐frequency direct S waves and S‐wave coda velocity envelopes. The scattering process is governed by a Gaussian autocorrelation function, suggesting a Gaussian random heterogeneous structure for the Nevada crust. The model successfully explains the common decay of seismic coda independent of source–station locations as a result of energy leaking from multiple strong forward scattering, instead of backscattering governed by the diffusion solution at large lapse times. The model also explains the pulse‐broadening effect in the high‐frequency direct and early arriving S waves, as other studies have found, and could be very important to applications of high‐frequency wave simulation in which scattering has a strong effect. We also find that regardless of its physical implications, the isotropic scattering model provides the same effective scattering coefficient and intrinsic attenuation estimates as the forward scattering model, suggesting that the isotropic scattering model is still a viable tool for the study of seismic scattering and intrinsic attenuation coefficients in the Earth.

  12. Can cognitive psychological research on reasoning enhance the discussion around moral judgments?

    PubMed

    Bialek, Michal; Terbeck, Sylvia

    2016-08-01

In this article we will demonstrate how cognitive psychological research on reasoning and decision making could enhance discussions and theories of moral judgments. In the first part, we will present recent dual-process models of moral judgments and describe selected studies which support these approaches. However, we will also present data that contradict the model predictions, suggesting that approaches to moral judgment might be more complex. In the second part, we will show how cognitive psychological research on reasoning might be helpful in understanding moral judgments. Specifically, we will highlight approaches addressing the interaction between intuition and reflection. Our data suggest that a sequential model of engaging in deliberation might have to be revised. Therefore, we will present an approach based on Signal Detection Theory and on intuitive conflict detection. We predict that individuals arrive at moral decisions by comparing potential action outcomes (e.g., harm caused and utilitarian gain) simultaneously. The response criterion can be influenced by intuitive processes, such as heuristic moral value processing or considerations of harm caused.
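The Signal Detection Theory framing can be made concrete with the standard d′/criterion computation. The hit and false-alarm rates below are hypothetical, chosen so that sensitivity is matched across conditions while the response criterion shifts (e.g., a stronger intuitive aversion to causing harm biasing responses toward "impermissible").

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit transform

def sdt(hit_rate, false_alarm_rate):
    """Sensitivity d' and criterion c from hit / false-alarm rates."""
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Hypothetical conditions: equal sensitivity, shifted criterion.
d1, c1 = sdt(0.800, 0.200)
d2, c2 = sdt(0.634, 0.090)
print(f"condition 1: d' = {d1:.2f}, c = {c1:.2f}")
print(f"condition 2: d' = {d2:.2f}, c = {c2:.2f}")
```

On this account, intuitive processes move the criterion c while leaving the underlying discrimination of outcomes (d′) intact, which is a testable dissociation.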

  13. The Function of Educational Administration in the Processes of Cultural Transmission.

    ERIC Educational Resources Information Center

    Bates, Richard J.

    A study of the implementation of a rational/bureaucratic model of knowledge in classrooms suggests that current modes of educational administration are based on control, via rational planning, of social relations, individual consciousness, and epistemology. Bureaucratic organization and professionalism enjoy a symbiotic relationship, combined with…

  14. An examination of fuel particle heating during fire spread

    Treesearch

    Jack D. Cohen; Mark A. Finney

    2010-01-01

    Recent high intensity wildfires and our demonstrated inability to control extreme fire behavior suggest a need for alternative approaches for preventing wildfire disasters. Current fire spread models are not sufficiently based on a basic understanding of fire spread processes to provide more effective management alternatives. An experimental and theoretical approach...

  15. Perception Accuracy of Affiliative Relationships in Elementary School Children and Young Adolescents

    PubMed Central

    Daniel, João R.; Silva, Rita R.; Santos, António J.; Cardoso, Jordana; Coelho, Leandra; Freitas, Miguel; Ribeiro, Olívia

    2017-01-01

    There has been a rapid growth of studies focused on selection and socialization processes of peer groups, mostly due to the development of stochastic actor-based models to analyze longitudinal social network data. One of the core assumptions of these models is that individuals have an accurate knowledge of the dyadic relationships within their network (i.e., who is and is not connected to whom). Recent cross-sectional findings suggest that elementary school children are very inaccurate in perceiving their classmates’ dyadic relationships. These findings question the validity of stochastic actor-based models to study the developmental dynamics of children and carry implications for future research as well as for the interpretation of past findings. The goal of the present study was thus to further explore the adequacy of the accuracy assumption, analysing data from three longitudinal samples of different age groups (elementary school children and adolescents). Our results support the validity of stochastic actor-based models to study the network of adolescents and suggest that the violation of the accuracy assumption for elementary school children is not as severe as previously thought. PMID:29163310
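    The accuracy assumption discussed above can be scored directly: compare each reporter's perceived adjacency matrix with the actual network, dyad by dyad. A minimal sketch with invented data:

```python
def dyad_accuracy(actual, perceived):
    """Proportion of unordered dyads (excluding self-pairs) for which a
    reporter's perceived tie matches the actual tie. Both arguments are
    symmetric 0/1 adjacency matrices; the data here are hypothetical."""
    n = len(actual)
    correct = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            correct += actual[i][j] == perceived[i][j]
    return correct / total

# Toy 4-child classroom: the reporter misses one tie and invents another.
actual = [[0, 1, 0, 0],
          [1, 0, 1, 0],
          [0, 1, 0, 1],
          [0, 0, 1, 0]]
perceived = [[0, 1, 0, 1],
             [1, 0, 0, 0],
             [0, 0, 0, 1],
             [1, 0, 1, 0]]
acc = dyad_accuracy(actual, perceived)   # 4 of 6 dyads judged correctly
```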

  16. Automatic computation for optimum height planning of apartment buildings to improve solar access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seong, Yoon-Bok; Kim, Yong-Yee; Seok, Ho-Tae

    2011-01-15

    The objective of this study is to suggest a mathematical model and an optimal algorithm for determining the height of apartment buildings to satisfy the solar rights of survey buildings or survey housing units. The objective is also to develop an automatic computation model for the optimum height of apartment buildings and then to clarify the performance and expected effects. To accomplish the objective of this study, the following procedures were followed: (1) The necessity of the height planning of obstruction buildings to satisfy the solar rights of survey buildings or survey housing units is demonstrated by analyzing, through a literature review, the recent trend of disputes related to solar rights and examining the social requirements in terms of solar rights. In addition, the necessity of the automatic computation system for height planning of apartment buildings is demonstrated and a suitable analysis method for this system is chosen by investigating the characteristics of analysis methods for solar rights assessment. (2) A case study on the process of height planning of apartment buildings is briefly described and the problems occurring in this process are then examined carefully. (3) To develop an automatic computation model for height planning of apartment buildings, geometrical elements forming apartment buildings are defined by analyzing the geometrical characteristics of apartment buildings. In addition, design factors and regulations required in height planning of apartment buildings are investigated. Based on this knowledge, the methodology and mathematical algorithm to adjust the height of apartment buildings by automatic computation are suggested and probable problems and the ways to resolve these problems are discussed. Finally, the methodology and algorithm for the optimization are suggested. (4) Based on the suggested methodology and mathematical algorithm, the automatic computation model for optimum height of apartment buildings is developed and the developed system is verified through the application of some cases. The effects of the suggested model are then demonstrated quantitatively and qualitatively. (author)
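    The height-adjustment step can be caricatured as a top-down search: lower the obstruction height until the survey window keeps its required sunlit hours. The flat-terrain geometry, altitude profile, and all parameter values below are simplifying assumptions for illustration, not the paper's algorithm:

```python
import math

def sunlit_hours(h_obstruction, h_window, distance, altitudes_deg):
    """Hours at which the sun clears the obstruction, in a flat-terrain toy
    geometry: the window is lit iff the obstruction top stays below the
    line of sight at the sun's altitude angle."""
    return sum(1 for a in altitudes_deg
               if h_obstruction - h_window <= distance * math.tan(math.radians(a)))

def max_height(h_window, distance, altitudes_deg, required_hours,
               h_start=60.0, step=0.5):
    """Largest obstruction height (searched top-down in `step` increments)
    that still leaves the survey window its required sunlit hours."""
    h = h_start
    while h > 0 and sunlit_hours(h, h_window, distance, altitudes_deg) < required_hours:
        h -= step
    return h

# Illustrative winter sun-altitude profile, 09:00-15:00, in degrees.
alts = [15, 22, 27, 29, 27, 22, 15]
h = max_height(h_window=3.0, distance=30.0, altitudes_deg=alts, required_hours=4)
```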

  17. Learning to read aloud: A neural network approach using sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Joglekar, Umesh Dwarkanath

    1989-01-01

    An attempt to solve a problem of text-to-phoneme mapping is described which does not appear amenable to solution by use of standard algorithmic procedures. Experiments based on a model of distributed processing are also described. This model (sparse distributed memory (SDM)) can be used in an iterative supervised learning mode to solve the problem. Additional improvements aimed at obtaining better performance are suggested.

  18. Leveraging the Affordances of YouTube: The Role of Pedagogical Knowledge and Mental Models of Technology Functions for Lesson Planning with Technology

    ERIC Educational Resources Information Center

    Krauskopf, Karsten; Zahn, Carmen; Hesse, Friedrich W.

    2012-01-01

    Web-based digital video tools enable learners to access video sources in constructive ways. To leverage these affordances teachers need to integrate their knowledge of a technology with their professional knowledge about teaching. We suggest that this is a cognitive process, which is strongly connected to a teacher's mental model of the tool's…

  19. Using the Maximum Entropy Principle as a Unifying Theory Characterization and Sampling of Multi-Scaling Processes in Hydrometeorology

    DTIC Science & Technology

    2015-08-20

    evapotranspiration (ET) over oceans may be significantly lower than previously thought. The MEP model parameterized turbulent transfer coefficients… fluxes, ocean freshwater fluxes, regional crop yield among others. An on-going study suggests that the global annual evapotranspiration (ET) over… Bras and Jingfeng Wang, "A model of evapotranspiration based on the theory of maximum entropy production," Water Resources Research (2011).

  20. Observations and modeling of methane flux in northern wetlands

    NASA Astrophysics Data System (ADS)

    Futakuchi, Y.; Ueyama, M.; Matsumoto, Y.; Yazaki, T.; Hirano, T.; Kominami, Y.; Harazono, Y.; Igarashi, Y.

    2016-12-01

    Methane (CH4) budgets in northern wetlands vary greatly, with high spatio-temporal heterogeneity. Owing to limited available data, however, it is difficult to constrain the CH4 emission from northern wetlands. In this context, we continuously measured CH4 fluxes at two northern wetlands. Measured fluxes were used to constrain a new model that empirically partitions net CH4 fluxes into the processes of production, oxidation, and transport (ebullition, diffusion, and plant-mediated transport) using an optimization technique. This study reveals the processes important to the seasonal variations in CH4 emission through continuous observations and inverse model analysis. The measurements have been conducted at a Sphagnum-dominated cool temperate bog (BBY) since April 2015 using the open-path eddy covariance method, and at a sub-arctic forested bog on permafrost at the University of Alaska Fairbanks (UAF) since May 2016 using three automated chambers with a laser-based gas analyzer (FGGA-24r-EP, Los Gatos Research Inc., USA). At BBY, daily CH4 fluxes ranged from 1.9 nmol m-2 s-1 in early spring to 97.9 nmol m-2 s-1 in mid-summer. The growing-season total CH4 flux was 13 g m-2 yr-1 in 2015. In contrast, CH4 flux at the UAF site was small (0.2 to 1.0 nmol m-2 s-1) and hardly increased after the start of the observations. This difference could be caused by differences in climate and soil conditions: mean air and soil temperature, and the presence of permafrost. At BBY, the seasonal variation of CH4 emission was mostly explained by soil temperature, suggesting that production was the dominant controlling process. In mid-summer, when soil temperature was high, decreases in atmospheric pressure and increases in vegetation greenness stimulated CH4 emission, probably through plant-mediated transport and bubble formation, suggesting that transport processes were also important. Based on preliminary results from the model optimization at the BBY site, CH4 fluxes were strongly influenced by production, ebullition, and plant-mediated transport rather than by oxidation and diffusion. In this presentation, we will show that the data-model fusion we developed is an effective tool for evaluating CH4 fluxes and their controlling processes in northern wetlands.

  1. Multi-fidelity Gaussian process regression for prediction of random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parussini, L.; Venturi, D., E-mail: venturi@ucsc.edu; Perdikaris, P.

    We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
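    A minimal sketch of the two-fidelity idea (not the authors' recursive co-kriging implementation; the kernel, length scale, and known inter-fidelity scaling `rho` are assumptions): fit a Gaussian process to many cheap low-fidelity samples, then fit a second GP to the few high-fidelity residuals.

```python
import math

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel (length scale `ell` is an assumption)."""
    return math.exp(-(a - b) ** 2 / (2 * ell ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_mean(xs, ys, x_new, jitter=1e-6):
    """Zero-mean GP posterior mean at x_new: k_*^T (K + jitter*I)^{-1} y."""
    K = [[rbf(xi, xj) + (jitter if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(a * rbf(xi, x_new) for a, xi in zip(alpha, xs))

def f_hi(x):          # expensive "truth" (toy)
    return math.sin(2 * math.pi * x)

def f_lo(x):          # cheap, biased surrogate (toy)
    return 0.8 * math.sin(2 * math.pi * x) + 0.1

x_lo = [i / 10 for i in range(11)]     # many cheap samples
x_hi = [0.0, 0.3, 0.55, 0.8, 1.0]      # few expensive samples

def gp_lo(x):
    return gp_mean(x_lo, [f_lo(v) for v in x_lo], x)

rho = 1.0 / 0.8   # inter-fidelity scaling, assumed known in this sketch
resid = [f_hi(v) - rho * gp_lo(v) for v in x_hi]

def predict(x):
    """Multi-fidelity prediction: scaled low-fidelity GP plus residual GP."""
    return rho * gp_lo(x) + gp_mean(x_hi, resid, x)

err = abs(predict(0.42) - f_hi(0.42))
```

    The residual GP corrects the surrogate's bias, so the combined predictor tracks the expensive function far better than the scaled cheap model alone.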

  2. Advancing coastal ocean modelling, analysis, and prediction for the US Integrated Ocean Observing System

    USGS Publications Warehouse

    Wilkin, John L.; Rosenfeld, Leslie; Allen, Arthur; Baltes, Rebecca; Baptista, Antonio; He, Ruoying; Hogan, Patrick; Kurapov, Alexander; Mehra, Avichal; Quintrell, Josie; Schwab, David; Signell, Richard; Smith, Jane

    2017-01-01

    This paper outlines strategies that would advance coastal ocean modelling, analysis and prediction as a complement to the observing and data management activities of the coastal components of the US Integrated Ocean Observing System (IOOS®) and the Global Ocean Observing System (GOOS). The views presented are the consensus of a group of US-based researchers with a cross-section of coastal oceanography and ocean modelling expertise and community representation drawn from Regional and US Federal partners in IOOS. Priorities for research and development are suggested that would enhance the value of IOOS observations through model-based synthesis, deliver better model-based information products, and assist the design, evaluation, and operation of the observing system itself. The proposed priorities are: model coupling, data assimilation, nearshore processes, cyberinfrastructure and model skill assessment, modelling for observing system design, evaluation and operation, ensemble prediction, and fast predictors. Approaches are suggested to accomplish substantial progress in a 3–8-year timeframe. In addition, the group proposes steps to promote collaboration between research and operations groups in Regional Associations, US Federal Agencies, and the international ocean research community in general that would foster coordination on scientific and technical issues, and strengthen federal–academic partnerships benefiting IOOS stakeholders and end users.

  3. Cross-Sectional Analysis of Longitudinal Mediation Processes.

    PubMed

    O'Laughlin, Kristine D; Martin, Monica J; Ferrer, Emilio

    2018-01-01

    Statistical mediation analysis can help to identify and explain the mechanisms behind psychological processes. Examining a set of variables for mediation effects is a ubiquitous process in the social sciences literature; however, despite evidence suggesting that cross-sectional data can misrepresent the mediation of longitudinal processes, cross-sectional analyses continue to be used in this manner. Alternative longitudinal mediation models, including those rooted in a structural equation modeling framework (cross-lagged panel, latent growth curve, and latent difference score models) are currently available and may provide a better representation of mediation processes for longitudinal data. The purpose of this paper is twofold: first, we provide a comparison of cross-sectional and longitudinal mediation models; second, we advocate using models to evaluate mediation effects that capture the temporal sequence of the process under study. Two separate empirical examples are presented to illustrate differences in the conclusions drawn from cross-sectional and longitudinal mediation analyses. Findings from these examples yielded substantial differences in interpretations between the cross-sectional and longitudinal mediation models considered here. Based on these observations, researchers should use caution when attempting to use cross-sectional data in place of longitudinal data for mediation analyses.
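    The product-of-coefficients logic behind such mediation analyses can be sketched on simulated data (hypothetical effect sizes; the b path is estimated by residualizing both M and Y on X). This shows only the mechanics of a single-occasion analysis, the very setting the abstract cautions against for longitudinal processes:

```python
import random

def slope(x, y):
    """OLS slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return num / sum((a - mx) ** 2 for a in x)

def residuals(x, y):
    """Residuals of y after regressing out x (with intercept)."""
    b = slope(x, y)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

# Simulated data with known paths (invented effect sizes):
# X -> M (a = 0.5), M -> Y (b = 0.7), plus a direct X -> Y path (0.2).
rng = random.Random(42)
X = [rng.gauss(0, 1) for _ in range(500)]
M = [0.5 * x + rng.gauss(0, 0.5) for x in X]
Y = [0.7 * m + 0.2 * x + rng.gauss(0, 0.5) for x, m in zip(X, M)]

a = slope(X, M)                                # path X -> M
b = slope(residuals(X, M), residuals(X, Y))    # path M -> Y, controlling X
indirect = a * b                               # product-of-coefficients estimate
```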

  4. Model-based analysis of the effect of different operating conditions on fouling mechanisms in a membrane bioreactor.

    PubMed

    Sabia, Gianpaolo; Ferraris, Marco; Spagni, Alessandro

    2016-01-01

    This study proposes a model-based evaluation of the effect of different operating conditions with and without pre-denitrification treatment and applying three different solids retention times on the fouling mechanisms involved in membrane bioreactors (MBRs). A total of 11 fouling models obtained from literature were used to fit the transmembrane pressure variations measured in a pilot-scale MBR treating real wastewater for more than 1 year. The results showed that all the models represent reasonable descriptions of the fouling processes in the MBR tested. The model-based analysis confirmed that membrane fouling started by pore blocking (complete blocking model) and by a reduction of the pore diameter (standard blocking) while cake filtration became the dominant fouling mechanism over long-term operation. However, the different fouling mechanisms occurred almost simultaneously making it rather difficult to identify each one. The membrane "history" (i.e. age, lifespan, etc.) seems the most important factor affecting the fouling mechanism more than the applied operating conditions. Nonlinear regression of the most complex models (combined models) evaluated in this study sometimes demonstrated unreliable parameter estimates suggesting that the four basic fouling models (complete, standard, intermediate blocking and cake filtration) contain enough details to represent a reasonable description of the main fouling processes occurring in MBRs.
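    The model-comparison step can be illustrated by fitting competing transmembrane-pressure curves with brute-force least squares. The two candidate forms below are illustrative stand-ins, not the published blocking laws:

```python
import math
import random

def sse(model, params, data):
    """Sum of squared errors of a model over (time, TMP) observations."""
    return sum((model(t, *params) - y) ** 2 for t, y in data)

def grid_fit(model, grid_a, grid_b, data):
    """Brute-force least squares over a small 2-parameter grid."""
    best = None
    for a in grid_a:
        for b in grid_b:
            err = sse(model, (a, b), data)
            if best is None or err < best[0]:
                best = (err, (a, b))
    return best

# Illustrative stand-in TMP(t) forms (hypothetical parameterizations):
models = {
    "cake-like (linear)": lambda t, a, b: a + b * t,
    "blocking-like (exponential)": lambda t, a, b: a * math.exp(b * t),
}

# Synthetic transmembrane-pressure record dominated by cake-like growth.
rng = random.Random(0)
data = [(t, 5.0 + 0.3 * t + rng.gauss(0, 0.05)) for t in range(30)]

grid_a = [x / 10 for x in range(100)]     # 0.0 .. 9.9
grid_b = [x / 100 for x in range(100)]    # 0.00 .. 0.99
fits = {name: grid_fit(m, grid_a, grid_b, data) for name, m in models.items()}
best_model = min(fits, key=lambda name: fits[name][0])
```

    Selecting the candidate with the lowest residual error recovers the mechanism that generated the data; with real records, as the abstract notes, several mechanisms overlap and the choice is far less clean.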

  5. Hybrid modeling and empirical analysis of automobile supply chain network

    NASA Astrophysics Data System (ADS)

    Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying

    2017-05-01

    Based on a connection mechanism in which nodes automatically select upstream and downstream agents, a simulation model of the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating ABM and discrete modeling on a GIS-based map. First, the model's validity is demonstrated by analyzing the consistency of sales, and of changes in various agent parameters, between the simulation model and a real automobile supply chain. Second, using complex network theory, hierarchical structures of the model and relationships among networks at different levels are analyzed to calculate characteristic parameters such as mean distance, mean clustering coefficient, and degree distributions, verifying that the model is a typical scale-free and small-world network. Finally, the motion law of the model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, suggesting that the system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of complex automobile supply chain networks but also microscopically reflects the business process of each agent. Moreover, constructing and simulating the system by combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as a theoretical basis and experience for the supply chain analysis of auto companies.
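    The two small-world diagnostics named above, mean distance and mean clustering coefficient, can be computed directly with breadth-first search on a toy graph (the network below is invented for illustration):

```python
from collections import deque

def mean_distance(adj):
    """Average shortest-path length over all reachable ordered pairs (BFS)."""
    total = pairs = 0
    for s in range(len(adj)):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for node, d in dist.items() if node != s)
        pairs += len(dist) - 1
    return total / pairs

def mean_clustering(adj):
    """Average local clustering coefficient (share of closed triangles
    among each node's neighbors)."""
    coeffs = []
    for nbrs in adj:
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for i in nbrs for j in nbrs if i < j and j in adj[i])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

# Invented toy graph: node 0 is an assembler hub with interlinked suppliers.
adj = [{1, 2, 3, 4}, {0, 2}, {0, 1, 3}, {0, 2}, {0, 5}, {4}]
L = mean_distance(adj)
C = mean_clustering(adj)
```

    A high clustering coefficient combined with a short mean distance, relative to a random graph of the same size, is the standard small-world signature.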

  6. Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach.

    PubMed

    Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik

    2015-01-01

    Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10-150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digits (distractors). Graphemes were colored either congruently or incongruently with respect to each synesthete's reported grapheme-color associations. A mathematical model, based on Bundesen's (1990) Theory of Visual Attention (TVA), was fitted to each observer's data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group's model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes' expertise regarding their specific grapheme-color associations.
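    The rate parameter at the heart of TVA-style models can be sketched with the standard exponential race: the probability that a letter is encoded into visual short-term memory by exposure duration t grows as 1 − exp(−v(t − t0)). The rates below are assumptions chosen only to display a congruence advantage:

```python
import math

def p_encoded(t_ms, v, t0_ms=20.0):
    """Probability that one item finishes encoding into visual short-term
    memory by exposure t_ms, given processing rate v (per ms) and a
    perceptual threshold t0_ms below which nothing is encoded."""
    return 1.0 - math.exp(-v * max(0.0, t_ms - t0_ms))

# Hypothetical rates: congruently colored letters are processed faster.
v_congruent, v_incongruent = 0.05, 0.03
p_c = p_encoded(80, v_congruent)     # 80 ms exposure
p_i = p_encoded(80, v_incongruent)
```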

  7. Recognition errors suggest fast familiarity and slow recollection in rhesus monkeys

    PubMed Central

    Basile, Benjamin M.; Hampton, Robert R.

    2013-01-01

    One influential model of recognition posits two underlying memory processes: recollection, which is detailed but relatively slow, and familiarity, which is quick but lacks detail. Most of the evidence for this dual-process model in nonhumans has come from analyses of receiver operating characteristic (ROC) curves in rats, but whether ROC analyses can demonstrate dual processes has been repeatedly challenged. Here, we present independent converging evidence for the dual-process model from analyses of recognition errors made by rhesus monkeys. Recognition choices were made in three different ways depending on processing duration. Short-latency errors were disproportionately false alarms to familiar lures, suggesting control by familiarity. Medium-latency responses were less likely to be false alarms and were more accurate, suggesting onset of a recollective process that could correctly reject familiar lures. Long-latency responses were guesses. A response deadline increased false alarms, suggesting that limiting processing time weakened the contribution of recollection and strengthened the contribution of familiarity. Together, these findings suggest fast familiarity and slow recollection in monkeys, that monkeys use a “recollect to reject” strategy to countermand false familiarity, and that primate recognition performance is well-characterized by a dual-process model consisting of recollection and familiarity. PMID:23864646

  8. Interagency Collaborative Team Model for Capacity Building to Scale-Up Evidence-Based Practice

    PubMed Central

    Hurlburt, Michael; Aarons, Gregory A; Fettes, Danielle; Willging, Cathleen; Gunderson, Lara; Chaffin, Mark J

    2015-01-01

    Background System-wide scale up of evidence-based practice (EBP) is a complex process. Yet, few strategic approaches exist to support EBP implementation and sustainment across a service system. Building on the Exploration, Preparation, Implementation, and Sustainment (EPIS) implementation framework, we developed and are testing the Interagency Collaborative Team (ICT) process model to implement an evidence-based child neglect intervention (i.e., SafeCare®) within a large children’s service system. The ICT model emphasizes the role of local agency collaborations in creating structural supports for successful implementation. Methods We describe the ICT model and present preliminary qualitative results from use of the implementation model in one large scale EBP implementation. Qualitative interviews were conducted to assess challenges in building system, organization, and home visitor collaboration and capacity to implement the EBP. Data collection and analysis centered on EBP implementation issues, as well as the experiences of home visitors under the ICT model. Results Six notable issues relating to implementation process emerged from participant interviews, including: (a) initial commitment and collaboration among stakeholders, (b) leadership, (c) communication, (d) practice fit with local context, (e) ongoing negotiation and problem solving, and (f) early successes. These issues highlight strengths and areas for development in the ICT model. Conclusions Use of the ICT model led to sustained and widespread use of SafeCare in one large county. Although some aspects of the implementation model may benefit from enhancement, qualitative findings suggest that the ICT process generates strong structural supports for implementation and creates conditions in which tensions between EBP structure and local contextual variations can be resolved in ways that support the expansion and maintenance of an EBP while preserving potential for public health benefit. PMID:27512239

  9. Interagency Collaborative Team Model for Capacity Building to Scale-Up Evidence-Based Practice.

    PubMed

    Hurlburt, Michael; Aarons, Gregory A; Fettes, Danielle; Willging, Cathleen; Gunderson, Lara; Chaffin, Mark J

    2014-04-01

    System-wide scale up of evidence-based practice (EBP) is a complex process. Yet, few strategic approaches exist to support EBP implementation and sustainment across a service system. Building on the Exploration, Preparation, Implementation, and Sustainment (EPIS) implementation framework, we developed and are testing the Interagency Collaborative Team (ICT) process model to implement an evidence-based child neglect intervention (i.e., SafeCare®) within a large children's service system. The ICT model emphasizes the role of local agency collaborations in creating structural supports for successful implementation. We describe the ICT model and present preliminary qualitative results from use of the implementation model in one large scale EBP implementation. Qualitative interviews were conducted to assess challenges in building system, organization, and home visitor collaboration and capacity to implement the EBP. Data collection and analysis centered on EBP implementation issues, as well as the experiences of home visitors under the ICT model. Six notable issues relating to implementation process emerged from participant interviews, including: (a) initial commitment and collaboration among stakeholders, (b) leadership, (c) communication, (d) practice fit with local context, (e) ongoing negotiation and problem solving, and (f) early successes. These issues highlight strengths and areas for development in the ICT model. Use of the ICT model led to sustained and widespread use of SafeCare in one large county. Although some aspects of the implementation model may benefit from enhancement, qualitative findings suggest that the ICT process generates strong structural supports for implementation and creates conditions in which tensions between EBP structure and local contextual variations can be resolved in ways that support the expansion and maintenance of an EBP while preserving potential for public health benefit.

  10. Signal Processing in Periodically Forced Gradient Frequency Neural Networks

    PubMed Central

    Kim, Ji Chul; Large, Edward W.

    2015-01-01

    Oscillatory instability at the Hopf bifurcation is a dynamical phenomenon that has been suggested to characterize active non-linear processes observed in the auditory system. Networks of oscillators poised near Hopf bifurcation points and tuned to tonotopically distributed frequencies have been used as models of auditory processing at various levels, but systematic investigation of the dynamical properties of such oscillatory networks is still lacking. Here we provide a dynamical systems analysis of a canonical model for gradient frequency neural networks driven by a periodic signal. We use linear stability analysis to identify various driven behaviors of canonical oscillators for all possible ranges of model and forcing parameters. The analysis shows that canonical oscillators exhibit qualitatively different sets of driven states and transitions for different regimes of model parameters. We classify the parameter regimes into four main categories based on their distinct signal processing capabilities. This analysis will lead to deeper understanding of the diverse behaviors of neural systems under periodic forcing and can inform the design of oscillatory network models of auditory signal processing. PMID:26733858
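    The driven canonical oscillator can be sketched by Euler-integrating the Hopf normal form with periodic forcing; the parameter values are toy choices, not those analyzed in the paper:

```python
import cmath
import math

def simulate(alpha, beta, omega, F, omega0, dt=0.001, steps=60000):
    """Forward-Euler integration of a periodically forced canonical
    (Hopf normal form) oscillator:
        dz/dt = z * (alpha + i*omega + beta*|z|^2) + F * exp(i*omega0*t)
    """
    z = 0.1 + 0.0j
    traj = []
    for n in range(steps):
        t = n * dt
        dz = z * (alpha + 1j * omega + beta * abs(z) ** 2) \
             + F * cmath.exp(1j * omega0 * t)
        z += dt * dz
        traj.append(z)
    return traj

# Toy parameters: oscillator tuned to 1 Hz, driven slightly off-tuning.
traj = simulate(alpha=-1.0, beta=-1.0, omega=2 * math.pi,
                F=0.5, omega0=2.2 * math.pi)
tail = traj[-5000:]
amp = sum(abs(z) for z in tail) / len(tail)
amp_spread = max(abs(z) for z in tail) - min(abs(z) for z in tail)
```

    In this regime (alpha < 0, moderate forcing) the oscillator settles into a phase-locked driven response of roughly constant amplitude; other parameter regimes yield qualitatively different driven states, which is the subject of the paper's analysis.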

  11. [Rational bases for cooperation between epidemiologists and mathematicians].

    PubMed

    Favorova, L A; Shatrov, I I

    1977-10-01

    The authors consider the rational foundations underlying the creation of realistic models. The principal condition for successful mathematical modelling is obtaining the most complete primary data on the course of the epidemic process. For this purpose, the authors suggest definite principles for a methodical approach to mathematical modelling. The possibilities of using mathematical methods for various groups of infections are considered. Particular attention is paid to work on the study of infection risk in "small" collectives.

  12. General predictive model of friction behavior regimes for metal contacts based on the formation stability and evolution of nanocrystalline surface films.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Argibay, Nicolas; Cheng, Shengfeng; Sawyer, W. G.

    2015-09-01

    The prediction of macro-scale friction and wear behavior based on first principles and material properties has remained an elusive but highly desirable target for tribologists and material scientists alike. Stochastic processes (e.g. wear), statistically described parameters (e.g. surface topography) and their evolution tend to defeat attempts to establish practical general correlations between fundamental nanoscale processes and macro-scale behaviors. We present a model based on microstructural stability and evolution for the prediction of metal friction regimes, founded on recently established microstructural deformation mechanisms of nanocrystalline metals, that relies exclusively on material properties and contact stress models. We show through complementary experimental and simulation results that this model overcomes longstanding practical challenges and successfully makes accurate and consistent predictions of friction transitions for a wide range of contact conditions. This framework not only challenges the assumptions of conventional causal relationships between hardness and friction, and between friction and wear, but also suggests a pathway for the design of higher performance metal alloys.

  13. Extending SME to Handle Large-Scale Cognitive Modeling.

    PubMed

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. Capturing this centrality requires that a model of comparison be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.
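    Technique (a), greedy merging, can be caricatured in a few lines: sort match hypotheses by score and keep each one consistent with what has already been accepted, here reduced to a bare one-to-one mapping constraint. A toy sketch, not the published SME algorithm:

```python
def greedy_merge(match_hypotheses):
    """Greedily build one interpretation from scored correspondence
    hypotheses, keeping only those consistent with the one-to-one
    constraint imposed by what has already been accepted."""
    mapped_base, mapped_target = set(), set()
    kept, score = [], 0.0
    for s, b, t in sorted(match_hypotheses, reverse=True):
        if b not in mapped_base and t not in mapped_target:
            mapped_base.add(b)
            mapped_target.add(t)
            kept.append((b, t))
            score += s
    return kept, score

# Hypothetical scored pairings between base and target predicates:
hyps = [(0.9, "flow(water)", "flow(heat)"),
        (0.7, "pressure", "temperature"),
        (0.6, "flow(water)", "move(coffee)"),   # conflicts with the first
        (0.4, "beaker", "coffee")]
mapping, total = greedy_merge(hyps)
```

    The sort dominates the cost, giving near-linearithmic behavior in the number of hypotheses per merge pass; the published bound also accounts for consistency bookkeeping over the structures.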

  14. Statistical prediction with Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1989-01-01

    A new viewpoint of the processing performed by Kanerva's sparse distributed memory (SDM) is presented. In conditions of near- or over-capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and for which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for improving the predictiveness of the system based on Holland's work with genetic algorithms, and a method for improving the capacity of SDM even when used as an associative memory.
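    The associative-memory mechanics underlying this statistical view can be shown with a minimal autoassociative SDM (toy dimensions and radius, not Kanerva's published parameters): writes add the data into counters at every hard location within a Hamming radius of the address, and reads sum the counters and threshold:

```python
import random

class SDM:
    """Minimal autoassociative sparse distributed memory (toy sizes)."""

    def __init__(self, n_locations=500, dim=64, radius=28, seed=7):
        rng = random.Random(seed)
        self.dim = dim
        self.radius = radius
        self.addresses = [[rng.randint(0, 1) for _ in range(dim)]
                          for _ in range(n_locations)]
        self.counters = [[0] * dim for _ in range(n_locations)]

    def _activated(self, addr):
        """Hard locations within `radius` Hamming distance of `addr`."""
        for loc, a in enumerate(self.addresses):
            if sum(x != y for x, y in zip(a, addr)) <= self.radius:
                yield loc

    def write(self, addr, data):
        for loc in self._activated(addr):
            for i, bit in enumerate(data):
                self.counters[loc][i] += 1 if bit else -1

    def read(self, addr):
        sums = [0] * self.dim
        for loc in self._activated(addr):
            for i, c in enumerate(self.counters[loc]):
                sums[i] += c
        return [1 if s > 0 else 0 for s in sums]

rng = random.Random(0)
mem = SDM()
pattern = [rng.randint(0, 1) for _ in range(64)]
mem.write(pattern, pattern)          # autoassociative store
noisy = pattern[:]
for i in (0, 1, 2):                  # corrupt three address bits
    noisy[i] ^= 1
recalled = mem.read(noisy)           # clean pattern recovered from noisy cue
```

    Near or over capacity, many stored patterns overlap in the counters and the thresholded read becomes a vote over correlated traces, which is where the statistical-predictor interpretation takes over from the associative one.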

  15. Dietary change and stable isotopes: a model of growth and dormancy in cave bears.

    PubMed Central

    Lidén, K; Angerbjörn, A

    1999-01-01

    In order to discuss dietary change over time using stable isotopes, it is necessary to sort out the underlying processes in isotopic variation. Alongside the dietary signal, other processes have been investigated, namely metabolic processes, collagen turnover, and physical growth. However, growth and collagen turnover time have so far been neglected in dietary reconstruction based on stable isotopes. An earlier study suggested that cave bears (Ursus spelaeus) probably gave birth to cubs during dormancy. We provide an estimate of the effects of growth and metabolism on stable isotopes and discuss collagen turnover in a population of cave bears. Based on a quantitative model, we hypothesized that cubs suckled from their mothers during their first and second winters and were fed solid food, in addition to milk, during their first summer. This demonstrates the need to include physical growth, metabolism, and collagen turnover in dietary reconstruction. Whereas the effects of diet and metabolism are due to fractionation, growth and collagen turnover are dilution processes. PMID:10518325

  16. Investigation of signal models and methods for evaluating structures of processing telecommunication information exchange systems under acoustic noise conditions

    NASA Astrophysics Data System (ADS)

    Kropotov, Y. A.; Belov, A. A.; Proskuryakov, A. Y.; Kolpakov, A. A.

    2018-05-01

    The paper considers models and methods for estimating signals during the transmission of information messages in telecommunication systems of audio exchange. One-dimensional probability distribution functions that can be used to separate useful signals from acoustic noise interference are presented. An approach to estimating the correlation and spectral functions of the parameters of acoustic signals is proposed, based on a parametric representation of the acoustic signals and the noise components. The paper suggests an approach to improving the efficiency of interference cancellation and extracting the necessary information when processing signals from telecommunications systems. In this case, the suppression of acoustic noise is based on the methods of adaptive filtering and adaptive compensation. The work also describes models of echo signals and the structure of subscriber devices in operational command telecommunications systems.
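
    The adaptive filtering and compensation mentioned in the abstract is commonly realized with an LMS noise canceller; a minimal illustrative sketch (the signal, the acoustic channel taps, and the step size are invented for the example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
signal = np.sin(2 * np.pi * 0.01 * np.arange(n))    # stand-in useful signal
noise = rng.normal(size=n)                          # reference noise microphone
channel = np.array([0.6, -0.3, 0.1])                # unknown acoustic path
primary = signal + np.convolve(noise, channel)[:n]  # signal + filtered noise

L, mu = 8, 0.01          # filter length and LMS step size
w = np.zeros(L)
out = np.zeros(n)
for i in range(L - 1, n):
    x = noise[i - L + 1:i + 1][::-1]   # most recent L reference samples
    e = primary[i] - w @ x             # error = cleaned output sample
    w += 2 * mu * e * x                # LMS weight update
    out[i] = e

before = np.mean((primary[-2000:] - signal[-2000:]) ** 2)
after = np.mean((out[-2000:] - signal[-2000:]) ** 2)
print(before, after)   # residual noise power drops once the weights converge
```

    The filter learns the unknown noise path from the reference channel, so subtracting its output cancels the interference while leaving the useful signal in the error term.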

  17. Kinetics of a gas adsorption compressor

    NASA Technical Reports Server (NTRS)

    Chan, C. K.; Tward, E.; Elleman, D. D.

    1984-01-01

    Chan (1981) has suggested that a process based on gas adsorption could be used as a means to drive a Joule-Thomson (J-T) device. The resulting system has several advantages. It is heat powered, it has no sealing, there are no mechanical moving parts, and no active control is required. In the present investigation, a two-phase model is used to analyze the transients of a gas adsorption compressor. The modeling of the adsorption process is based on a consideration of complete thermal and mechanical equilibrium between the gaseous phase and the adsorbed gas phase. The experimental arrangement for two sets of kinetic tests is discussed, and data regarding the experimental results are presented in graphs. For a theoretical study, a two-phase model was developed to predict the transient behavior of the compressor. A computer code was written to solve the governing equations with the aid of a standard forward marching predictor-corrector method.
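
    The "standard forward marching predictor-corrector method" cited at the end is, in its simplest form, an explicit Euler predictor followed by a trapezoidal corrector. A generic sketch on a test problem with a known solution (not the compressor's governing equations):

```python
import math

def solve(f, y0, t0, t1, steps):
    """Forward-marching predictor-corrector: Euler predictor, trapezoidal corrector."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y_pred = y + h * f(t, y)                        # predictor: explicit Euler
        y = y + 0.5 * h * (f(t, y) + f(t + h, y_pred))  # corrector: trapezoid rule
        t += h
    return y

# test problem: dy/dt = -y, y(0) = 1, exact solution y(t) = exp(-t)
approx = solve(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
print(abs(approx - math.exp(-1.0)))   # small global truncation error
```

    The same marching structure applies to the coupled heat- and mass-balance equations of the compressor model, with `f` returning the full state derivative.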

  18. Polar Processes in a 50-year Simulation of Stratospheric Chemistry and Transport

    NASA Technical Reports Server (NTRS)

    Kawa, S.R.; Douglass, A. R.; Patrick, L. C.; Allen, D. R.; Randall, C. E.

    2004-01-01

    The unique chemical, dynamical, and microphysical processes that occur in the winter polar lower stratosphere are expected to interact strongly with changing climate and trace gas abundances. Significant changes in ozone have been observed and prediction of future ozone and climate interactions depends on modeling these processes successfully. We have conducted an off-line model simulation of the stratosphere for trace gas conditions representative of 1975-2025 using meteorology from the NASA finite-volume general circulation model. The objective of this simulation is to examine the sensitivity of stratospheric ozone and chemical change to varying meteorology and trace gas inputs. This presentation will examine the dependence of ozone and related processes in polar regions on the climatological and trace gas changes in the model. The model's past performance is baselined against available observations, and a future ozone recovery scenario is forecast. Overall the model ozone simulation is quite realistic, but initial analysis of the detailed evolution of some observable processes suggests systematic shortcomings in our description of the polar chemical rates and/or mechanisms. Model sensitivities, strengths, and weaknesses will be discussed with implications for uncertainty and confidence in coupled climate chemistry predictions.

  19. The resilience and functional role of moss in boreal and arctic ecosystems.

    PubMed

    Turetsky, M R; Bond-Lamberty, B; Euskirchen, E; Talbot, J; Frolking, S; McGuire, A D; Tuittila, E-S

    2012-10-01

    Mosses in northern ecosystems are ubiquitous components of plant communities, and strongly influence nutrient, carbon and water cycling. We use literature review, synthesis and model simulations to explore the role of mosses in ecological stability and resilience. Moss community responses to disturbance showed all possible responses (increases, decreases, no change) within most disturbance categories. Simulations from two process-based models suggest that northern ecosystems would need to experience extreme perturbation before mosses were eliminated. But simulations with two other models suggest that loss of moss will reduce soil carbon accumulation primarily by influencing decomposition rates and soil nitrogen availability. It seems clear that mosses need to be incorporated into models as one or more plant functional types, but more empirical work is needed to determine how to best aggregate species. We highlight several issues that have not been adequately explored in moss communities, such as functional redundancy and singularity, relationships between response and effect traits, and parameter vs conceptual uncertainty in models. Mosses play an important role in several ecosystem processes that play out over centuries - permafrost formation and thaw, peat accumulation, development of microtopography - and there is a need for studies that increase our understanding of slow, long-term dynamical processes. © 2012 The Authors. New Phytologist © 2012 New Phytologist Trust.

  20. The resilience and functional role of moss in boreal and arctic ecosystems

    USGS Publications Warehouse

    Turetsky, M.; Bond-Lamberty, B.; Euskirchen, E.S.; Talbot, J. J.; Frolking, S.; McGuire, A.D.; Tuittila, E.S.

    2012-01-01

    Mosses in northern ecosystems are ubiquitous components of plant communities, and strongly influence nutrient, carbon and water cycling. We use literature review, synthesis and model simulations to explore the role of mosses in ecological stability and resilience. Moss community responses to disturbance showed all possible responses (increases, decreases, no change) within most disturbance categories. Simulations from two process-based models suggest that northern ecosystems would need to experience extreme perturbation before mosses were eliminated. But simulations with two other models suggest that loss of moss will reduce soil carbon accumulation primarily by influencing decomposition rates and soil nitrogen availability. It seems clear that mosses need to be incorporated into models as one or more plant functional types, but more empirical work is needed to determine how to best aggregate species. We highlight several issues that have not been adequately explored in moss communities, such as functional redundancy and singularity, relationships between response and effect traits, and parameter vs conceptual uncertainty in models. Mosses play an important role in several ecosystem processes that play out over centuries – permafrost formation and thaw, peat accumulation, development of microtopography – and there is a need for studies that increase our understanding of slow, long-term dynamical processes.

  1. Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading

    PubMed Central

    O’Sullivan, Aisling E.; Crosse, Michael J.; Di Liberto, Giovanni M.; Lalor, Edmund C.

    2017-01-01

    Speech is a multisensory percept, comprising an auditory and visual component. While the content and processing pathways of audio speech have been well characterized, the visual component is less well understood. In this work, we expand current methodologies using system identification to introduce a framework that facilitates the study of visual speech in its natural, continuous form. Specifically, we use models based on the unheard acoustic envelope (E), the motion signal (M) and categorical visual speech features (V) to predict EEG activity during silent lipreading. Our results show that each of these models performs similarly at predicting EEG in visual regions and that respective combinations of the individual models (EV, MV, EM and EMV) provide an improved prediction of the neural activity over their constituent models. In comparing these different combinations, we find that the model incorporating all three types of features (EMV) outperforms the individual models, as well as both the EV and MV models, while it performs similarly to the EM model. Importantly, EM does not outperform EV and MV, which, considering the higher dimensionality of the V model, suggests that more data is needed to clarify this finding. Nevertheless, the performance of EMV, and comparisons of the subject performances for the three individual models, provides further evidence to suggest that visual regions are involved in both low-level processing of stimulus dynamics and categorical speech perception. This framework may prove useful for investigating modality-specific processing of visual speech under naturalistic conditions. PMID:28123363

  2. Future benefits and applications of intelligent on-board processing to VSAT services

    NASA Technical Reports Server (NTRS)

    Price, Kent M.; Kwan, Robert K.; Edward, Ron; Faris, F.; Inukai, Tom

    1992-01-01

    The trends and roles of VSAT services in the year 2010 time frame are examined based on an overall network and service model for that period. An estimate of the VSAT traffic is then made and the service and general network requirements are identified. In order to accommodate these traffic needs, four satellite VSAT architectures based on the use of fixed or scanning multibeam antennas in conjunction with IF switching or onboard regeneration and baseband processing are suggested. The performance of each of these architectures is assessed and the key enabling technologies are identified.

  3. Health behavior change in advance care planning: an agent-based model.

    PubMed

    Ernecoff, Natalie C; Keane, Christopher R; Albert, Steven M

    2016-02-29

    A practical and ethical challenge in advance care planning research is controlling and intervening on human behavior. Additionally, observing dynamic changes in advance care planning (ACP) behavior proves difficult, though tracking changes over time is important for intervention development. Agent-based modeling (ABM) allows researchers to integrate complex behavioral data about advance care planning behaviors and thought processes into a controlled environment that is more easily alterable and observable. Literature to date has not addressed how best to motivate individuals, increase facilitators and reduce barriers associated with ACP. We aimed to build an ABM that applies the Transtheoretical Model of behavior change to ACP as a health behavior and accurately reflects: 1) the rates at which individuals complete the process, 2) how individuals respond to barriers, facilitators, and behavioral variables, and 3) the interactions between these variables. We developed a dynamic ABM of the ACP decision making process based on the stages of change posited by the Transtheoretical Model. We integrated barriers, facilitators, and other behavioral variables that agents encounter as they move through the process. We successfully incorporated ACP barriers, facilitators, and other behavioral variables into our ABM, forming a plausible representation of ACP behavior and decision-making. The resulting distributions across the stages of change replicated those found in the literature, with approximately half of participants in the action-maintenance stage in both the model and the literature. Our ABM is a useful method for representing dynamic social and experiential influences on the ACP decision making process. This model suggests structural interventions, e.g. increasing access to ACP materials in primary care clinics, in addition to improved methods of data collection for behavioral studies, e.g. incorporating longitudinal data to capture behavioral dynamics.
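
    The paper's ABM itself is not reproduced here, but its core mechanic — agents stepping through the Transtheoretical Model's stages of change under facilitators and barriers — can be sketched as follows (the transition rule and all parameter values are hypothetical, for illustration only):

```python
import random

random.seed(0)
STAGES = ["precontemplation", "contemplation", "preparation", "action", "maintenance"]

def step(stage, facilitators, barriers):
    # hypothetical rule: facilitators raise the odds of advancing a stage,
    # barriers raise the odds of regressing; otherwise the agent stays put
    p_forward = min(0.9, 0.2 + 0.1 * facilitators)
    p_back = min(0.5, 0.05 + 0.1 * barriers)
    r = random.random()
    if stage < len(STAGES) - 1 and r < p_forward:
        return stage + 1
    if stage > 0 and p_forward <= r < p_forward + p_back:
        return stage - 1
    return stage

agents = [0] * 1000          # everyone starts in precontemplation
for _ in range(50):
    agents = [step(s, facilitators=2, barriers=1) for s in agents]

frac_acting = sum(s >= STAGES.index("action") for s in agents) / len(agents)
print(frac_acting)           # cohort fraction in action/maintenance after 50 steps
```

    Running such a model forward yields a stage distribution that can be compared against distributions reported in the ACP literature, which is the calibration step the abstract describes.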

  4. Scaling Dissolved Nutrient Removal in River Networks: A Comparative Modeling Investigation

    NASA Astrophysics Data System (ADS)

    Ye, Sheng; Reisinger, Alexander J.; Tank, Jennifer L.; Baker, Michelle A.; Hall, Robert O.; Rosi, Emma J.; Sivapalan, Murugesu

    2017-11-01

    Along the river network, water, sediment, and nutrients are transported, cycled, and altered by coupled hydrological and biogeochemical processes. Our current understanding of the rates and processes controlling the cycling and removal of dissolved inorganic nutrients in river networks is limited due to a lack of empirical measurements in large (nonwadeable) rivers. The goal of this paper was to develop a coupled hydrological and biogeochemical process model to simulate nutrient uptake at the network scale during summer base flow conditions. The model was parameterized with literature values from headwater streams, and empirical measurements made in 15 rivers with varying hydrological, biological, and topographic characteristics, to simulate nutrient uptake at the network scale. We applied the coupled model to 15 catchments describing patterns in uptake for three different solutes to determine the role of rivers in network-scale nutrient cycling. Model simulation results, constrained by empirical data, suggested that rivers contributed proportionally more to nutrient removal than headwater streams given the fraction of their length represented in a network. In addition, the variability of nutrient removal patterns among catchments differed among solutes and, as expected, was influenced by nutrient concentration and discharge. Net ammonium uptake was not significantly correlated with any environmental descriptor. In contrast, net daily nitrate removal was linked to suspended chlorophyll a (an indicator of primary producers) and land use characteristics. Finally, suspended sediment characteristics and agricultural land use were correlated with net daily removal of soluble reactive phosphorus, likely reflecting abiotic sorption dynamics. Rivers are understudied relative to streams, and our model suggests that rivers can contribute more to network-scale nutrient removal than would be expected based upon their representative fraction of network channel length.
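
    A common building block for reach-scale uptake in network models of this kind is first-order removal, C_out = C_in * exp(-vf * L * w / Q), where vf is the uptake velocity, L and w the reach length and width, and Q the discharge. A sketch with illustrative numbers (this formula is the standard nutrient-spiraling form, not necessarily the exact parameterization of this study, and the values are invented):

```python
import math

def downstream_conc(c_in, vf, length, width, discharge):
    """First-order nutrient removal along a reach:
    C_out = C_in * exp(-vf * L * w / Q), with uptake velocity vf [m/s],
    reach length L [m], width w [m], and discharge Q [m^3/s]."""
    return c_in * math.exp(-vf * length * width / discharge)

# illustrative comparison: a narrow headwater stream vs a wide river reach
stream = downstream_conc(c_in=1.0, vf=5e-6, length=1000, width=2, discharge=0.1)
river = downstream_conc(c_in=1.0, vf=5e-6, length=1000, width=50, discharge=20)
print(1 - stream, 1 - river)   # fraction removed over each 1-km reach
```

    Per unit length a small stream removes a larger fraction, but a river reach processes a far larger flux, which is why summing such terms over a whole network can shift the removal budget toward rivers.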

  5. Examining the effect of down regulation under high [CO2] on the growth of soybean assimilating a semi process-based model and FACE data

    NASA Astrophysics Data System (ADS)

    Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2011-12-01

    The actual impact of elevated [CO2], and its interaction with other climatic factors, on crop growth is still debated. In many process-based crop models, the response of photosynthesis per single leaf to environmental factors is described using the biochemical model of Farquhar et al. (1980). However, the decline in photosynthetic enhancement known as down regulation has not been taken into account. Moreover, the mechanisms causing photosynthetic down regulation are still unknown, which makes it difficult to include the effect in process-based crop models. Recent results of Free-air CO2 enrichment (FACE) experiments have documented the effect of down regulation under actual environments. One effective approach to incorporating these results into future crop yield prediction is to develop a semi process-based crop growth model that includes the effect of photosynthetic down regulation as a statistical submodel, and to assimilate the data obtained in FACE experiments. In this study, we statistically estimated the parameters of a semi process-based model for soybean growth ('SPM-soybean') using a hierarchical Bayesian method with the FACE data on soybeans (Morgan et al. 2005). We also evaluated the effect of down regulation on soybean yield under future climatic conditions. The model selection analysis showed that an effective correction to the overestimation of the Farquhar biochemical C3 model was to reduce the maximum rate of carboxylation (Vcmax) under elevated [CO2]. Interestingly, however, the difference in the estimated final crop yields between the corrected model and the non-corrected model was very slight (Fig.1a) for future projections under a climate change scenario (Miroc-ESM). This was because the reduction in Vcmax also reduced the base dark respiration rate of leaves. Because the dark respiration rate increases exponentially with temperature, a slight difference in the base respiration rate becomes a large difference at the higher temperatures of the future climate scenarios. In other words, if the temperature rise under elevated [CO2] is very small or zero, the effect of down regulation appears clearly (Fig.1b). These results suggest that further experimental data capturing the combined effects of high CO2 and high temperature under field conditions will be important for refining model projections of future crop yield through data assimilation.

  6. A simple model for remineralization of subsurface lesions in tooth enamel

    NASA Astrophysics Data System (ADS)

    Christoffersen, J.; Christoffersen, M. R.; Arends, J.

    1982-12-01

    A model for remineralization of subsurface lesions in tooth enamel is presented. The important assumption on which the model is based is that the rate-controlling process is the crystal surface process by which ions are incorporated in the crystallites; that is, the transport of ions through small holes in the so-called intact surface layer does not influence the rate of mineral uptake at the crystal surface. Further, the density of mineral in the lesion is assumed to increase down the lesion, when the remineralization process is started. It is shown that the dimension of the initial holes in the enamel surface layer must be larger than the dimension of the individual crystallites in order to prevent the formation of arrested lesions. Theoretical expressions for the progress of remineralization are given. The suggested model emphasizes the need for measurements of mineral densities in the lesion, prior to, and during the lesion repair.

  7. Adsorption of diclofenac and nimesulide on activated carbon: Statistical physics modeling and effect of adsorbate size

    NASA Astrophysics Data System (ADS)

    Sellaoui, Lotfi; Mechi, Nesrine; Lima, Éder Cláudio; Dotto, Guilherme Luiz; Ben Lamine, Abdelmottaleb

    2017-10-01

    Based on statistical physics elements, the equilibrium adsorption of diclofenac (DFC) and nimesulide (NM) on activated carbon was analyzed by a multilayer model with saturation. The paper aimed to describe the adsorption process experimentally and theoretically and to study the effect of adsorbate size using the model parameters. From numerical simulation, the number of molecules per site showed that the adsorbate molecules (DFC and NM) were mostly anchored on both sides of the pore walls. The increase in receptor site density suggested that additional sites appeared during the process to participate in DFC and NM adsorption. The description of the adsorption energy behavior indicated that the process was physisorption. Finally, by correlating the model parameters, the size effect of the adsorbate was deduced, indicating that molecule dimension has a negligible effect on DFC and NM adsorption.

  8. Physiologically Based Absorption Modeling to Design Extended-Release Clinical Products for an Ester Prodrug.

    PubMed

    Ding, Xuan; Day, Jeffrey S; Sperry, David C

    2016-11-01

    Absorption modeling has demonstrated its great value in modern drug product development due to its utility in understanding and predicting in vivo performance. In this case, we integrated physiologically based modeling in the development processes to effectively design extended-release (ER) clinical products for an ester prodrug LY545694. By simulating the trial results of immediate-release products, we delineated complex pharmacokinetics due to prodrug conversion and established an absorption model to describe the clinical observations. This model suggested the prodrug has optimal biopharmaceutical properties to warrant developing an ER product. Subsequently, we incorporated release profiles of prototype ER tablets into the absorption model to simulate the in vivo performance of these products observed in an exploratory trial. The models suggested that the absorption of these ER tablets was lower than the IR products because the extended release from the formulations prevented the drug from taking advantage of the optimal absorption window. Using these models, we formed a strategy to optimize the ER product to minimize the impact of the absorption window limitation. Accurate prediction of the performance of these optimized products by modeling was confirmed in a third clinical trial.

  9. Dynamic molecular confinement in the plasma membrane by microdomains and the cytoskeleton meshwork.

    PubMed

    Lenne, Pierre-François; Wawrezinieck, Laure; Conchonaud, Fabien; Wurtz, Olivier; Boned, Annie; Guo, Xiao-Jun; Rigneault, Hervé; He, Hai-Tao; Marguet, Didier

    2006-07-26

    It is by now widely recognized that cell membranes show complex patterns of lateral organization. Two mechanisms involving either a lipid-dependent (microdomain model) or cytoskeleton-based (meshwork model) process are thought to be responsible for these plasma membrane organizations. In the present study, fluorescence correlation spectroscopy measurements on various spatial scales were performed in order to directly identify and characterize these two processes in live cells with a high temporal resolution, without any loss of spatial information. Putative raft markers were found to be dynamically compartmented within tens of milliseconds into small microdomains (Ø <120 nm) that are sensitive to the cholesterol and sphingomyelin levels, whereas actin-based cytoskeleton barriers are responsible for the confinement of the transferrin receptor protein. A free-like diffusion was observed when both the lipid-dependent and cytoskeleton-based organizations were disrupted, which suggests that these are two main compartmentalizing forces at work in the plasma membrane.

  10. Redefining delusion based on studies of subjective paranormal ideation.

    PubMed

    Houran, James; Lange, Rense

    2004-04-01

    The DSM-IV definition of delusion is argued to be unsatisfactory because it does not explain the mechanism for delusion formation and maintenance, it implies that such beliefs are necessarily dysfunctional (pathological), it underestimates the social component to some delusions, and it is inconsistent with research indicating that delusions can be modified through techniques such as contradiction, confrontation, and cognitive-behavioral therapy. However, a well-replicated mathematical model of magical/delusional thinking based on a study of paranormal beliefs and experiences is consistent with the hypothesis that attributional processes play a central role in delusion formation and maintenance. The model suggests attributional processes serve the adaptive function of reducing fear associated with ambiguous stimuli and delusional thinking is on a continuum with nonpathological forms. Based on this collective research an amendment to the definition of delusion is proposed and its clinical implications are addressed.

  11. Rationality versus reality: the challenges of evidence-based decision making for health policy makers

    PubMed Central

    2010-01-01

    Background Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access-to-care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision making process. These policy decisions have major impacts on society and have high personal and financial costs associated with those decisions. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. Discussion We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with their naturally limited, faulty, and biased decision-making processes. A cognitive information processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than on the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society based on valid and reliable research evidence. Summary In this era of increasing adoption of evidence-based healthcare models, the rational choice, utility maximizing assumptions in EBDM and EBPM, must be critically evaluated to ensure effective and high-quality health policy decisions. The cognitive information processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors. In this paper, we identify some of the biases and potential intervention points and provide some initial suggestions about how the EBDM/EBPM process can be improved. PMID:20504357

  12. Rationality versus reality: the challenges of evidence-based decision making for health policy makers.

    PubMed

    McCaughey, Deirdre; Bruning, Nealia S

    2010-05-26

    Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access-to-care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision making process. These policy decisions have major impacts on society and have high personal and financial costs associated with those decisions. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with their naturally limited, faulty, and biased decision-making processes. A cognitive information processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than on the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society based on valid and reliable research evidence. In this era of increasing adoption of evidence-based healthcare models, the rational choice, utility maximizing assumptions in EBDM and EBPM, must be critically evaluated to ensure effective and high-quality health policy decisions. The cognitive information processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors. In this paper, we identify some of the biases and potential intervention points and provide some initial suggestions about how the EBDM/EBPM process can be improved.

  13. Modeling, Monitoring and Fault Diagnosis of Spacecraft Air Contaminants

    NASA Technical Reports Server (NTRS)

    Ramirez, W. Fred; Skliar, Mikhail; Narayan, Anand; Morgenthaler, George W.; Smith, Gerald J.

    1996-01-01

    Progress and results in the development of an integrated air quality modeling, monitoring, fault detection, and isolation system are presented. The focus was on the development of distributed models of air contaminant transport, the study of air quality monitoring techniques based on the model of the transport process and on-line contaminant concentration measurements, and sensor placement. Different approaches to the modeling of spacecraft air contamination are discussed, and a three-dimensional distributed-parameter air contaminant dispersion model applicable to both laminar and turbulent transport is proposed. A two-dimensional approximation of the full-scale transport model is also proposed, based on spatial averaging of the three-dimensional model over the least important space coordinate. A computer implementation of the transport model is considered, and a detailed development of the two- and three-dimensional models, illustrated by contaminant transport simulation results, is presented. The well-established Kalman filtering approach is suggested as a method for generating on-line contaminant concentration estimates based on both real-time measurements and the model of the contaminant transport process. It is shown that the high computational requirements of the traditional Kalman filter can make its real-time implementation difficult for a high-dimensional transport model, and a novel implicit Kalman filtering algorithm is proposed that leads to an order-of-magnitude faster computer implementation in the case of air quality monitoring.
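
    The suggested Kalman-filter monitoring idea can be illustrated on a toy one-compartment surrogate of the transport model (the dynamics, source term, and noise levels below are invented for illustration; the paper's actual model is a distributed two- or three-dimensional transport model):

```python
import numpy as np

# toy surrogate: concentration decays and receives a source term,
# x[k+1] = a*x[k] + b*u + process noise; a sensor reads x with noise
a, b, u = 0.95, 0.05, 1.0
q, r = 0.01, 0.25          # process / measurement noise variances

rng = np.random.default_rng(2)
n = 200
x = np.zeros(n); y = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k-1] + b * u + rng.normal(0, q ** 0.5)   # true concentration
    y[k] = x[k] + rng.normal(0, r ** 0.5)                 # noisy sensor reading

# standard scalar Kalman filter fusing the model with the measurements
xhat, P = 0.0, 1.0
est = np.zeros(n)
for k in range(1, n):
    xhat = a * xhat + b * u            # predict from the transport model
    P = a * P * a + q
    K = P / (P + r)                    # Kalman gain
    xhat = xhat + K * (y[k] - xhat)    # correct with the measurement
    P = (1 - K) * P
    est[k] = xhat

filt_mse = np.mean((est[50:] - x[50:]) ** 2)
meas_mse = np.mean((y[50:] - x[50:]) ** 2)
print(filt_mse, meas_mse)   # filtered estimate beats the raw sensor
```

    In the paper's setting the scalar state becomes the full discretized concentration field, which is exactly where the dimensionality problem motivating the implicit formulation arises.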

  14. Moving forward socio-economically focused models of deforestation.

    PubMed

    Dezécache, Camille; Salles, Jean-Michel; Vieilledent, Ghislain; Hérault, Bruno

    2017-09-01

    Whilst high-resolution spatial variables contribute to a good fit of spatially explicit deforestation models, socio-economic processes are often beyond the scope of these models. Such a low level of interest in the socio-economic dimension of deforestation limits the relevance of these models for decision-making and may be the cause of their failure to accurately predict observed deforestation trends in the medium term. This study proposes a flexible methodology for taking into account multiple drivers of deforestation in tropical forested areas, where the intensity of deforestation is explicitly predicted from socio-economic variables. By coupling a model of deforestation location based on spatial environmental variables with several sub-models of deforestation intensity based on socio-economic variables, we were able to create a map of predicted deforestation over the period 2001-2014 in French Guiana. This map was compared to a reference map for accuracy assessment, not only at the pixel scale but also over cells ranging from 1 to approximately 600 sq. km. Highly significant relationships were explicitly established between deforestation intensity and several socio-economic variables: population growth, the amount of agricultural subsidies, and gold and wood production. Such a precise characterization of socio-economic processes makes it possible to avoid overestimation biases in high-deforestation areas, suggesting a better integration of socio-economic processes into the models. Whilst considering deforestation as a purely geographical process yields conservative models unable to effectively assess changes in the socio-economic and political contexts that influence deforestation trends, this explicit characterization of the socio-economic dimension of deforestation is critical for the creation of deforestation scenarios in REDD+ projects. © 2017 John Wiley & Sons Ltd.

  15. Drift diffusion model of reward and punishment learning in schizophrenia: Modeling and experimental data.

    PubMed

    Moustafa, Ahmed A; Kéri, Szabolcs; Somlai, Zsuzsanna; Balsdon, Tarryn; Frydecka, Dorota; Misiak, Blazej; White, Corey

    2015-09-15

    In this study, we tested reward- and punishment-learning performance using a probabilistic classification learning task in patients with schizophrenia (n=37) and healthy controls (n=48). We also fit subjects' data using a Drift Diffusion Model (DDM) of simple decisions to investigate which components of the decision process differ between patients and controls. Modeling results show between-group differences in multiple components of the decision process. Specifically, patients had slower motor/encoding time, higher response caution (favoring accuracy over speed), and a deficit in classification learning for punishment, but not reward, trials. The results suggest that patients with schizophrenia adopt a compensatory strategy of favoring accuracy over speed to improve performance, yet still show signs of a deficit in learning based on negative feedback. Our data highlight the importance of fitting formal models (particularly drift diffusion models) to behavioral data. The implications of these findings are discussed relative to theories of schizophrenia and cognitive processing. Copyright © 2015 Elsevier B.V. All rights reserved.
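
    The decision-process components the authors estimate (drift rate, boundary separation, non-decision time) can be illustrated with a minimal DDM simulation; the parameter values below are arbitrary illustration values, not the fitted patient estimates:

```python
import numpy as np

def simulate_ddm(drift, boundary, ndt, n_trials=300, dt=0.001, seed=1):
    """Simulate a drift diffusion model: noisy evidence accumulates from 0
    toward +boundary (correct) or -boundary (error); ndt is the
    non-decision (motor/encoding) time added to every response."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ndt)
        correct.append(x > 0)
    return np.array(rts), np.array(correct)

# Raising the boundary (response caution) trades speed for accuracy
rt_lo, acc_lo = simulate_ddm(drift=1.0, boundary=0.8, ndt=0.3)
rt_hi, acc_hi = simulate_ddm(drift=1.0, boundary=1.6, ndt=0.3)
print(rt_lo.mean(), acc_lo.mean(), rt_hi.mean(), acc_hi.mean())
```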

  16. A dynamic model of reasoning and memory.

    PubMed

    Hawkins, Guy E; Hayes, Brett K; Heit, Evan

    2016-02-01

    Previous models of category-based induction have neglected how the process of induction unfolds over time. We conceive of induction as a dynamic process and provide the first fine-grained examination of the distribution of response times observed in inductive reasoning. We used these data to develop and empirically test the first major quantitative modeling scheme that simultaneously accounts for inductive decisions and their time course. The model assumes that knowledge of similarity relations among novel test probes and items stored in memory drive an accumulation-to-bound sequential sampling process: Test probes with high similarity to studied exemplars are more likely to trigger a generalization response, and more rapidly, than items with low exemplar similarity. We contrast data and model predictions for inductive decisions with a recognition memory task using a common stimulus set. Hierarchical Bayesian analyses across 2 experiments demonstrated that inductive reasoning and recognition memory primarily differ in the threshold to trigger a decision: Observers required less evidence to make a property generalization judgment (induction) than an identity statement about a previously studied item (recognition). Experiment 1 and a condition emphasizing decision speed in Experiment 2 also found evidence that inductive decisions use lower quality similarity-based information than recognition. The findings suggest that induction might represent a less cautious form of recognition. We conclude that sequential sampling models grounded in exemplar-based similarity, combined with hierarchical Bayesian analysis, provide a more fine-grained and informative analysis of the processes involved in inductive reasoning than is possible solely through examination of choice data. PsycINFO Database Record (c) 2016 APA, all rights reserved.

  17. Microbes as Engines of Ecosystem Function: When Does Community Structure Enhance Predictions of Ecosystem Processes?

    PubMed Central

    Graham, Emily B.; Knelman, Joseph E.; Schindlbacher, Andreas; Siciliano, Steven; Breulmann, Marc; Yannarell, Anthony; Beman, J. M.; Abell, Guy; Philippot, Laurent; Prosser, James; Foulquier, Arnaud; Yuste, Jorge C.; Glanville, Helen C.; Jones, Davey L.; Angel, Roey; Salminen, Janne; Newton, Ryan J.; Bürgmann, Helmut; Ingram, Lachlan J.; Hamer, Ute; Siljanen, Henri M. P.; Peltoniemi, Krista; Potthast, Karin; Bañeras, Lluís; Hartmann, Martin; Banerjee, Samiran; Yu, Ri-Qing; Nogaro, Geraldine; Richter, Andreas; Koranda, Marianne; Castle, Sarah C.; Goberna, Marta; Song, Bongkeun; Chatterjee, Amitava; Nunes, Olga C.; Lopes, Ana R.; Cao, Yiping; Kaisermann, Aurore; Hallin, Sara; Strickland, Michael S.; Garcia-Pausas, Jordi; Barba, Josep; Kang, Hojeong; Isobe, Kazuo; Papaspyrou, Sokratis; Pastorelli, Roberta; Lagomarsino, Alessandra; Lindström, Eva S.; Basiliko, Nathan; Nemergut, Diana R.

    2016-01-01

    Microorganisms are vital in mediating the earth’s biogeochemical cycles; yet, despite our rapidly increasing ability to explore complex environmental microbial communities, the relationship between microbial community structure and ecosystem processes remains poorly understood. Here, we address a fundamental and unanswered question in microbial ecology: ‘When do we need to understand microbial community structure to accurately predict function?’ We present a statistical analysis investigating the value of environmental data and microbial community structure independently and in combination for explaining rates of carbon and nitrogen cycling processes within 82 global datasets. Environmental variables were the strongest predictors of process rates but left 44% of variation unexplained on average, suggesting the potential for microbial data to increase model accuracy. Although only 29% of our datasets were significantly improved by adding information on microbial community structure, we observed improvement in models of processes mediated by narrow phylogenetic guilds via functional gene data, and conversely, improvement in models of facultative microbial processes via community diversity metrics. Our results also suggest that microbial diversity can strengthen predictions of respiration rates beyond microbial biomass parameters, as 53% of models were improved by incorporating both sets of predictors compared to 35% by microbial biomass alone. Our analysis represents the first comprehensive analysis of research examining links between microbial community structure and ecosystem function. Taken together, our results indicate that a greater understanding of microbial communities informed by ecological principles may enhance our ability to predict ecosystem process rates relative to assessments based on environmental variables and microbial physiology. PMID:26941732
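
    The kind of model comparison the study performs, asking whether adding community-structure predictors to an environmental regression improves explained variance, can be sketched on synthetic data (all variables below are simulated for illustration, not the study's datasets):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 120
env = rng.normal(size=(n, 2))        # environmental predictors
microbial = rng.normal(size=(n, 2))  # community-structure predictors
# Toy process rate driven by both environment and community structure
rate = env @ [1.0, 0.5] + microbial @ [0.8, 0.0] + rng.normal(size=n)

def r_squared(X, y):
    """Ordinary-least-squares R^2 with an intercept term."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_env = r_squared(env, rate)
r2_full = r_squared(np.column_stack([env, microbial]), rate)
print(round(r2_env, 2), round(r2_full, 2))
```

Because the true generating process here includes a community-structure effect, the full model explains more variance than the environment-only model.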

  19. In silico model-based inference: a contemporary approach for hypothesis testing in network biology

    PubMed Central

    Klinke, David J.

    2014-01-01

    Inductive inference plays a central role in the study of biological systems, where one aims to increase one's understanding of the system by reasoning backwards from uncertain observations to identify causal relationships among its components. These causal relationships are postulated from prior knowledge as a hypothesis, or simply a model. Experiments are designed to test the model. Inferential statistics are used to establish a level of confidence in how well our postulated model explains the acquired data. This iterative process, commonly referred to as the scientific method, either improves our confidence in a model or suggests that we revisit our prior knowledge to develop a new one. Advances in technology affect how we use prior knowledge and data to formulate models of biological networks and how we observe cellular behavior. However, the approach for model-based inference has remained largely unchanged since Fisher, Neyman and Pearson developed, in the early 1900s, the ideas that gave rise to what is now known as classical statistical hypothesis (model) testing. Here, I summarize conventional methods for model-based inference and suggest a contemporary approach, integrating ideas from high-performance computing, Bayesian statistics, and chemical kinetics, to aid in our quest to discover how cells dynamically interpret and transmit information for therapeutic aims. PMID:25139179

  1. Explanation of power law behavior of autoregressive conditional duration processes based on the random multiplicative process

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2004-04-01

    Autoregressive conditional duration (ACD) processes, which have the potential to be applied to power-law distributions of complex systems found in natural science, life science, and social science, are analyzed both numerically and theoretically. An ACD(1) process exhibits a singular second-order moment, which suggests that its probability density function (PDF) has a power-law tail. It is verified that the PDF of the ACD(1) process has a power-law tail with an arbitrary exponent depending on a model parameter. On the basis of the theory of random multiplicative processes, a relation between the model parameter and the power-law exponent is derived theoretically, and numerical simulations confirm that this relation holds. An application of the ACD(1) process to intervals between successive transactions in a foreign currency market is shown.
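
    A hedged numerical sketch of this analysis: simulate an ACD(1) process with exponential innovations and estimate the tail exponent of the resulting durations with a Hill estimator (parameter values and tail cutoff are illustrative, not the paper's):

```python
import numpy as np

def simulate_acd1(omega, alpha, beta, n=200_000, seed=7):
    """ACD(1): duration x_t = psi_t * eps_t, with conditional mean
    psi_t = omega + alpha * x_{t-1} + beta * psi_{t-1}, eps_t ~ Exp(1)."""
    rng = np.random.default_rng(seed)
    eps = rng.exponential(1.0, n)
    x = np.empty(n)
    psi = omega / (1 - alpha - beta)   # start at the unconditional mean
    for t in range(n):
        x[t] = psi * eps[t]
        psi = omega + alpha * x[t] + beta * psi
    return x

def hill_exponent(x, k=2000):
    """Hill estimate of the power-law tail index from the k largest values."""
    tail = np.sort(x)[-k:]
    return k / np.sum(np.log(tail / tail[0]))

x = simulate_acd1(omega=0.1, alpha=0.3, beta=0.65)
tail_idx = hill_exponent(x)
print(round(x.mean(), 2), round(tail_idx, 2))
```

With these parameters the theory of random multiplicative processes predicts a finite but near-singular second moment, i.e. a tail exponent just above 2, which the Hill estimate should roughly recover.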

  3. A literature review on business process modelling: new frontiers of reusability

    NASA Astrophysics Data System (ADS)

    Aldin, Laden; de Cesare, Sergio

    2011-08-01

    Business process modelling (BPM) has become fundamental for modern enterprises due to the increasing rate of organisational change. As a consequence, business processes need to be continuously (re-)designed as well as subsequently aligned with the corresponding enterprise information systems. One major problem associated with the design of business processes is reusability. Reuse of business process models has the potential of increasing the efficiency and effectiveness of BPM. This article critically surveys the existing literature on the problem of BPM reusability, and more specifically the state-of-the-art research that can provide or suggest the 'elements' required for the development of a methodology aimed at discovering reusable conceptual artefacts in the form of patterns. The article initially clarifies the definitions of business process and business process model; it then sets out to explore previous research conducted in areas that have an impact on reusability in BPM. The article concludes by distilling directions for future research towards the development of a patterns-based approach to BPM; an approach that brings together the contributions made by the research community in the areas of process mining and discovery, declarative approaches and ontologies.

  4. Peer Review for EPA's Biologically Based Dose-Response ...

    EPA Pesticide Factsheets

    EPA is developing a regulation for perchlorate in drinking water. As part of the regulatory process, EPA must develop a Maximum Contaminant Level Goal (MCLG). FDA and EPA scientists developed a biologically based dose-response (BBDR) model to assist in deriving the MCLG. This model is designed to determine under what conditions of iodine nutrition and exposure to perchlorate, across sensitive lifestages, low serum free and total thyroxine (hypothyroxinemia) would result. EPA is undertaking a peer review to provide a focused, objective, independent evaluation of the draft model and its model results report. Peer review is an important component of the scientific process. The criticism, suggestions, and new ideas provided by the peer reviewers stimulate creative thought, strengthen the interpretation of the reviewed material, and confer credibility on the product. The objective of the peer review is to provide advice to EPA on steps that will yield a highly credible scientific product, supported by the scientific community, and a defensible perchlorate MCLG.

  5. Digital forensics: an analytical crime scene procedure model (ACSPM).

    PubMed

    Bulbul, Halil Ibrahim; Yavuzcan, H Guclu; Ozel, Mesut

    2013-12-10

    In order to ensure that digital evidence is collected, preserved, examined, or transferred in a manner safeguarding its accuracy and reliability, law enforcement and digital forensic units must establish and maintain an effective quality assurance system. The very first part of this system is standard operating procedures (SOPs) and/or models conforming to chain-of-custody requirements, which rely on the digital forensics "process-phase-procedure-task-subtask" sequence. An acceptable and thorough digital forensics (DF) process depends on sequential DF phases, each phase depends on sequential DF procedures, and each procedure in turn depends on tasks and subtasks. Numerous DF process models in the literature define DF phases, but no DF model defining phase-based sequential procedures for the crime scene has been identified. The analytical crime scene procedure model (ACSPM) that we suggest in this paper is intended to fill this gap. The proposed analytical procedure model for digital investigations at a crime scene is developed and defined for crime scene practitioners, with its main focus on crime scene digital forensic procedures rather than the whole digital investigation process and phases that end up in court. When reviewing the relevant literature and consulting with law enforcement agencies, we found only device-based charts specific to a particular device and/or more general approaches to digital evidence management models from crime scene to court. After analyzing the needs of law enforcement organizations and recognizing the absence of a crime scene digital investigation procedure model, we inspected the relevant literature in an analytical way. The outcome of this inspection is the model suggested here, which is intended to provide guidance for the thorough and secure implementation of digital forensic procedures at a crime scene. In digital forensic investigations each case is unique and needs special examination; since it is not possible to cover every aspect of crime scene digital forensics, the proposed procedure model is intended as a general guideline for practitioners. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. When University Faculty Retire: A Study of the Transition Process.

    ERIC Educational Resources Information Center

    Pappas, John G.; Goodman, Jane

    This study examined the retirement transitions of college faculty based on the Schlossberg (1984) model, which suggests that successful coping depends on an evaluation of the retiree's unique situation, the qualities of the individual, the support available, and the strategies employed. A total of 55 emeritus faculty from the College of Education…

  7. Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.

    ERIC Educational Resources Information Center

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…

  8. Mindfulness and Behavioral Parent Training: Commentary

    ERIC Educational Resources Information Center

    Eyberg, Sheila M.; Graham-Pole, John R.

    2005-01-01

    We review the description of mindfulness-based parent training (MBPT) and the argument that mindfulness practice offers a way to bring behavioral parent training (BPT) in line with current empirical knowledge. The strength of the proposed MBPT model is the attention it draws to process issues in BPT. We suggest, however, that it may not be…

  9. Reading in a Root-Based-Morphology Language: The Case of Arabic.

    ERIC Educational Resources Information Center

    Abu-Rabia, S.

    2002-01-01

    Reviews the reading process in Arabic as a function of vowels and sentence context. Reviews reading accuracy and reading comprehension results in light of cross-cultural reading to develop a more comprehensive reading theory. Presents the phonology, morphology and sentence context of Arabic in two suggested reading models for poor/beginner Arabic…

  10. Validating and Extending the Three Process Model of Alertness in Airline Operations

    PubMed Central

    Ingre, Michael; Van Leeuwen, Wessel; Klemets, Tomas; Ullvetter, Christer; Hough, Stephen; Kecklund, Göran; Karlsson, David; Åkerstedt, Torbjörn

    2014-01-01

    Sleepiness and fatigue are important risk factors in the transport sector, and bio-mathematical modeling of sleepiness, sleep and fatigue is increasingly becoming a valuable tool for assessing the safety of work schedules and rosters in Fatigue Risk Management Systems (FRMS). The present study sought to validate the inner workings of one such model, the Three Process Model (TPM), on aircrews, and to extend the model with functions to model jetlag and to directly assess the risk of any sleepiness level in any shift schedule or roster with and without knowledge of sleep timings. We collected sleep and sleepiness data from 136 aircrews in a real-life situation by means of an application running on a handheld touch-screen device (iPhone, iPod or iPad) and used the TPM to predict sleepiness with varying levels of complexity of model equations and data. The results, based on multilevel linear and non-linear mixed-effects models, showed that the TPM predictions correlated with observed ratings of sleepiness, but explorative analyses suggest that the default model can be improved and reduced to include only two processes (S+C), with adjusted phases of the circadian process based on a single question about circadian type. We also extended the model with a function to model jetlag acclimatization and with estimates of individual differences, including reference limits accounting for 50%, 75% and 90% of the population, as well as functions for predicting the probability of any level of sleepiness, for ecological assessment of absolute and relative risk of sleepiness in shift systems for safety applications. PMID:25329575
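
    A toy version of the reduced two-process (S+C) prediction, a homeostatic process declining with time awake plus a sinusoidal circadian process, using illustrative parameter values rather than the study's fitted estimates:

```python
import numpy as np

def alertness(hours_awake, wake_time=7.0):
    """Toy two-process (S + C) alertness sketch in the spirit of the TPM.
    S: homeostatic alertness declining exponentially with time awake;
    C: circadian component peaking in the late afternoon.
    All parameter values are illustrative, not published TPM estimates."""
    t = np.asarray(hours_awake, dtype=float)
    S = 2.4 + (14.0 - 2.4) * np.exp(-t / 18.0)             # homeostatic process
    clock = (wake_time + t) % 24.0                          # time of day
    C = 2.5 * np.cos(2.0 * np.pi * (clock - 16.8) / 24.0)   # circadian process
    return S + C

early = float(alertness(2.0))    # 09:00, two hours after a 07:00 wake-up
late = float(alertness(20.0))    # 03:00, after about 20 h awake
print(round(early, 1), round(late, 1))
```

Predicted alertness is high shortly after waking and low in the early morning hours after extended wakefulness, the pattern such models use to flag risky duty periods.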

  11. The Rational Patient and Beyond: Implications for Treatment Adherence in People with Psychiatric Disabilities

    PubMed Central

    Corrigan, Patrick W.; Rüsch, Nicolas; Ben-Zeev, Dror; Sher, Tamara

    2014-01-01

    Purpose/Objective Many people with psychiatric disabilities do not benefit from evidence-based practices because they often do not seek out or fully adhere to them. One way psychologists have made sense of this rehabilitation and health decision process and subsequent behaviors (of which adherence might be viewed as one) is by proposing a “rational patient;” namely, that decisions are made deliberatively by weighing perceived costs and benefits of intervention options. Social psychological research, however, suggests limitations to a rational patient theory that impact models of health decision making. Design The research literature was reviewed for studies of rational patient models and alternative theories with empirical support. Special focus was on models specifically related to decisions about rehabilitation strategies for psychiatric disability. Results Notions of the rational patient evolved out of several psychological models including the health belief model, protection motivation theory, and theory of planned behavior. A variety of practice strategies evolved to promote rational decision making. However, research also suggests limitations to rational deliberations of health. (1) Rather than carefully and consciously considered, many health decisions are implicit, potentially occurring outside awareness. (2) Decisions are not always planful; often it is the immediate exigencies of a context rather than an earlier balance of costs and benefits that has the greatest effects. (3) Cool cognitions often do not dictate the process; emotional factors have an important role in health decisions. Each of these limitations suggests additional practice strategies that facilitate a person’s health decisions. Conclusions/Implications Old models of rational decision making need to be supplanted by multi-process models that explain supra-deliberative factors in health decisions and behaviors. PMID:24446671

  12. Non-linear processing of a linear speech stream: The influence of morphological structure on the recognition of spoken Arabic words.

    PubMed

    Gwilliams, L; Marantz, A

    2015-08-01

    Although the significance of morphological structure is established in visual word processing, its role in auditory processing remains unclear. Using magnetoencephalography we probe the significance of the root morpheme for spoken Arabic words with two experimental manipulations. First we compare a model of auditory processing that calculates probable lexical outcomes based on whole-word competitors, versus a model that only considers the root as relevant to lexical identification. Second, we assess violations to the root-specific Obligatory Contour Principle (OCP), which disallows root-initial consonant gemination. Our results show root prediction to significantly correlate with neural activity in superior temporal regions, independent of predictions based on whole-word competitors. Furthermore, words that violated the OCP constraint were significantly easier to dismiss as valid words than probability-matched counterparts. The findings suggest that lexical auditory processing is dependent upon morphological structure, and that the root forms a principal unit through which spoken words are recognised. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
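
    The contrast between whole-word and root-based prediction can be illustrated by comparing cohort uncertainty on a toy lexicon (the forms below are schematic stand-ins, not the experimental items):

```python
import math
from collections import Counter

# Toy lexicon of (surface form, root) pairs; illustrative only
lexicon = [("katab", "ktb"), ("kitaab", "ktb"), ("kaatib", "ktb"),
           ("kataba", "ktb"), ("daras", "drs"), ("diraasa", "drs"),
           ("madrasa", "drs")]

def entropy_bits(counts):
    """Shannon entropy (bits) of a candidate set; lower = more constrained."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def cohort_entropies(prefix):
    """Uncertainty about the outcome after hearing `prefix`, under a
    whole-word competitor model vs. a root-based model."""
    matches = [(w, r) for w, r in lexicon if w.startswith(prefix)]
    word_H = entropy_bits(Counter(w for w, _ in matches))
    root_H = entropy_bits(Counter(r for _, r in matches))
    return word_H, root_H

word_H, root_H = cohort_entropies("ka")
print(round(word_H, 2), round(root_H, 2))
```

In this toy case, three whole-word competitors remain after "ka" but only a single root, so the root-based prediction is already fully resolved.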

  13. Unbiased Quantitative Models of Protein Translation Derived from Ribosome Profiling Data

    PubMed Central

    Gritsenko, Alexey A.; Hulsman, Marc; Reinders, Marcel J. T.; de Ridder, Dick

    2015-01-01

    Translation of RNA to protein is a core process for any living organism. While for some steps of this process the effect on protein production is understood, a holistic understanding of translation still remains elusive. In silico modelling is a promising approach for elucidating the process of protein synthesis. Although a number of computational models of the process have been proposed, their application is limited by the assumptions they make. Ribosome profiling (RP), a relatively new sequencing-based technique capable of recording snapshots of the locations of actively translating ribosomes, is a promising source of information for deriving unbiased data-driven translation models. However, quantitative analysis of RP data is challenging due to high measurement variance and the inability to discriminate between the number of ribosomes measured on a gene and their speed of translation. We propose a solution in the form of a novel multi-scale interpretation of RP data that allows for deriving models with translation dynamics extracted from the snapshots. We demonstrate the usefulness of this approach by simultaneously determining for the first time per-codon translation elongation and per-gene translation initiation rates of Saccharomyces cerevisiae from RP data for two versions of the Totally Asymmetric Exclusion Process (TASEP) model of translation. We do this in an unbiased fashion, by fitting the models using only RP data with a novel optimization scheme based on Monte Carlo simulation to keep the problem tractable. The fitted models match the data significantly better than existing models and their predictions show better agreement with several independent protein abundance datasets than existing models. Results additionally indicate that the tRNA pool adaptation hypothesis is incomplete, with evidence suggesting that tRNA post-transcriptional modifications and codon context may play a role in determining codon elongation rates. PMID:26275099
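
    A minimal discrete-time TASEP sketch (initiation, site-wise elongation with exclusion, termination) shows how a single slow codon throttles protein output; the lattice size and all rates below are arbitrary illustration values, not the fitted per-codon rates:

```python
import numpy as np

def simulate_tasep(length, init_rate, hop_rates, steps, seed=3):
    """Minimal TASEP: ribosomes (particles) enter the first codon site
    with probability init_rate and hop to the next site at site-specific
    elongation rates, with exclusion (no overtaking)."""
    rng = np.random.default_rng(seed)
    lattice = np.zeros(length, dtype=bool)
    completed = 0
    for _ in range(steps):
        # termination from the last site
        if lattice[-1] and rng.random() < hop_rates[-1]:
            lattice[-1] = False
            completed += 1
        # internal hops, swept right to left so no particle moves twice
        for i in range(length - 2, -1, -1):
            if lattice[i] and not lattice[i + 1] and rng.random() < hop_rates[i]:
                lattice[i], lattice[i + 1] = False, True
        # initiation at the first site
        if not lattice[0] and rng.random() < init_rate:
            lattice[0] = True
    return completed

# A single slow codon acts as a bottleneck on protein output
fast = simulate_tasep(50, 0.3, np.full(50, 0.9), steps=5000)
slow_rates = np.full(50, 0.9)
slow_rates[25] = 0.05
slow = simulate_tasep(50, 0.3, slow_rates, steps=5000)
print(fast, slow)
```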

  15. Novel approach for solid state cryocoolers.

    PubMed

    Volpi, Azzurra; Di Lieto, Alberto; Tonelli, Mauro

    2015-04-06

    Laser cooling in solids is based on anti-Stokes luminescence, in which lattice phonons are annihilated to supply the energy difference between the emitted photons and the lower-energy absorbed ones. Usually the anti-Stokes process is obtained using a rare-earth active ion such as Yb. In this work we demonstrate a novel approach to optical cooling based not only on the Yb anti-Stokes cycle but also on beneficial energy-transfer processes from the active ion, obtaining an increase in the cooling efficiency of a single LiYF(4) (YLF) crystal doped with 5 at.% Yb through a controlled co-doping with 0.0016% thulium ions. A model for the efficiency enhancement based on Yb-Tm energy transfer is also suggested.
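
    For orientation, the ideal anti-Stokes cooling efficiency follows from photon energies alone; the wavelengths below are rough, illustrative Yb:YLF figures, not the paper's measured values:

```python
# Each absorbed pump photon (wavelength lambda_p) is re-emitted at the
# shorter mean fluorescence wavelength (lambda_f), carrying away phonon
# energy h*c/lambda_f - h*c/lambda_p.
def ideal_cooling_efficiency(lambda_pump_nm, lambda_fluor_nm):
    """Fraction of absorbed pump power removed as heat, assuming unit
    external quantum efficiency (an idealization)."""
    return lambda_pump_nm / lambda_fluor_nm - 1.0

# Illustrative numbers for a Yb-doped fluoride crystal
eta = ideal_cooling_efficiency(lambda_pump_nm=1020.0, lambda_fluor_nm=995.0)
print(round(eta * 100, 2))  # percent of pump power extracted per cycle
```

The few-percent ideal figure explains why parasitic absorption and non-radiative losses are so damaging, and why energy-transfer co-doping schemes that raise the effective efficiency are attractive.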

  16. On the use of Empirical Data to Downscale Non-scientific Scepticism About Results From Complex Physically Based Models

    NASA Astrophysics Data System (ADS)

    Germer, S.; Bens, O.; Hüttl, R. F.

    2008-12-01

    The scepticism of non-scientific local stakeholders about results from complex physically based models is a major problem for the development and implementation of local climate change adaptation measures. This scepticism originates from the high complexity of such models: local stakeholders perceive complex models as black boxes, as it is impossible to grasp all underlying assumptions and mathematically formulated processes at a glance. The use of physically based models is, however, indispensable for studying complex underlying processes and predicting future environmental changes. The increase in climate change adaptation efforts following the release of the latest IPCC report indicates that communicating facts about what has already changed is an appropriate tool to trigger climate change adaptation. We therefore suggest increasing the practice of empirical data analysis in addition to modelling efforts. The analysis of time series can generate results that are easier for non-scientific stakeholders to comprehend. Temporal trends and seasonal patterns of selected hydrological parameters (precipitation, evapotranspiration, groundwater levels and river discharge) can be identified, and the dependence of trends and seasonal patterns on land use, topography and soil type can be highlighted. A discussion of lag times between the hydrological parameters can increase the awareness of local stakeholders of delayed environmental responses.

  17. Simulation of Healing Threshold in Strain-Induced Inflammation Through a Discrete Informatics Model.

    PubMed

    Ibrahim, Israr Bin M; Sarma O V, Sanjay; Pidaparti, Ramana M

    2018-05-01

    Respiratory diseases such as asthma and acute respiratory distress syndrome, as well as acute lung injury, involve inflammation at the cellular level. The inflammation process is very complex and is characterized by the emergence of cytokines along with other changes in cellular processes. Due to the complexity of the various constituents that make up the inflammation dynamics, it is necessary to develop models that can complement experiments to fully understand inflammatory diseases. In this study, we developed a discrete informatics model based on a cellular automata (CA) approach to investigate the influence of the elastic field (stretch/strain) on the dynamics of inflammation, accounting for probabilistic adaptation based on statistical interpretation of existing experimental data. Our simulation model investigated the effects of low, medium, and high strain conditions on inflammation dynamics. Results suggest that the model is able to indicate the threshold of innate healing of tissue as a response to the strain experienced by the tissue. When strain is under the threshold, the tissue is still capable of adapting its structure to heal the damaged part. However, there exists a strain threshold beyond which healing capability breaks down. The results obtained demonstrate that the developed discrete-informatics-based CA model is capable of modeling and giving insights into inflammation dynamics under various mechanical strain/stretch environments.

  18. Disagreement between Hydrological and Land Surface models on the water budgets in the Arctic: why is this and which of them is right?

    NASA Astrophysics Data System (ADS)

    Blyth, E.; Martinez-de la Torre, A.; Ellis, R.; Robinson, E.

    2017-12-01

    The fresh-water budget of the Arctic region has a diverse range of impacts: on the ecosystems of the region, the ocean circulation response to Arctic freshwater, methane emissions through changing wetland extent, as well as the available fresh water for human consumption. But many processes control the budget, including seasonal snow packs building and thawing, freezing soils and permafrost, extensive organic soils and large wetland systems. All these processes interact to create a complex hydrological system. In this study we examine a suite of 10 models that bring all those processes together in a 25-year reanalysis of the global water budget, and assess their performance in the Arctic region. There are two approaches to modelling fresh-water flows at large scales, referred to here as `Hydrological' and `Land Surface' models. While both approaches include a physically based model of the water stores and fluxes, the Land Surface models link the water flows to an energy-based model for processes such as snow melt and soil freezing. This study analyses the impact of that basic difference on the regional patterns of evapotranspiration, runoff generation and terrestrial water storage. For evapotranspiration, the Hydrological models tend to have a bigger spatial range in the model bias (difference from observations), implying greater errors than in the Land Surface models. For instance, some regions such as Eastern Siberia have consistently lower evaporation in the Hydrological models than in the Land Surface models. For runoff, however, the results are the other way round, with a slightly higher spatial range in bias for the Land Surface models, implying greater errors than in the Hydrological models. A simple analysis would suggest that Hydrological models are designed to get the runoff right, while Land Surface models are designed to get the evapotranspiration right. 
Tracing the source of the difference suggests that it comes from the treatment of snow and evapotranspiration. The study reveals that expertise in the role of snow in runoff generation and evapotranspiration in Hydrological and Land Surface models could be combined to improve the representation of fresh-water flows in the Arctic in both approaches. Improved observations are essential to make these modelling advances possible.

  19. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  20. Enhanced cardiac perception is associated with increased susceptibility to framing effects.

    PubMed

    Sütterlin, Stefan; Schulz, Stefan M; Stumpf, Theresa; Pauli, Paul; Vögele, Claus

    2013-07-01

    Previous studies suggest, in line with dual-process models, that interoceptive skills affect controlled decisions via automatic or implicit processing. The "framing effect" is considered to capture implicit effects of task-irrelevant emotional stimuli on decision-making. We hypothesized that cardiac awareness, as a measure of interoceptive skills, is positively associated with susceptibility to the framing effect. Forty volunteers performed a risky-choice framing task in which the effect of loss versus gain frames on decisions based on identical information was assessed. The results show a positive association between cardiac awareness and the framing effect, with cardiac awareness accounting for 24% of the variance in the framing effect. These findings demonstrate that good interoceptive skills are linked to poorer performance in risky choices based on ambivalent information when implicit bias is induced by task-irrelevant emotional information. They support a dual-process perspective on decision-making and suggest that interoceptive skills mediate effects of implicit bias on decisions. Copyright © 2013 Cognitive Science Society, Inc.

  1. Predicting speech intelligibility based on the signal-to-noise envelope power ratio after modulation-frequency selective processing.

    PubMed

    Jørgensen, Søren; Dau, Torsten

    2011-09-01

    A model for predicting the intelligibility of processed noisy speech is proposed. The speech-based envelope power spectrum model has a structure similar to that of the model of Ewert and Dau [(2000). J. Acoust. Soc. Am. 108, 1181-1196], developed to account for modulation detection and masking data. The model estimates the speech-to-noise envelope power ratio, SNR(env), at the output of a modulation filterbank and relates this metric to speech intelligibility using the concept of an ideal observer. Predictions were compared to data on the intelligibility of speech presented in stationary speech-shaped noise. The model was further tested in conditions with noisy speech subjected to reverberation and spectral subtraction. Good agreement between predictions and data was found in all cases. For spectral subtraction, an analysis of the model's internal representation of the stimuli revealed that the predicted decrease of intelligibility was caused by the estimated noise envelope power exceeding that of the speech. The classical concept of the speech transmission index fails in this condition. The results strongly suggest that the signal-to-noise ratio at the output of a modulation frequency selective process provides a key measure of speech intelligibility. © 2011 Acoustical Society of America
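
    The SNR(env) metric can be sketched crudely as follows. This is an illustrative simplification, not the published model: rectification stands in for a proper Hilbert envelope, a single broad modulation band replaces the filterbank, and the noise-power correction is an assumed form.

```python
import numpy as np

def envelope_power(x, fs, lo, hi):
    """Normalised power of the amplitude envelope of x in the
    modulation band [lo, hi) Hz. Rectification approximates the
    envelope; power is normalised by the squared mean envelope."""
    env = np.abs(x)
    dc = env.mean()
    spec = np.abs(np.fft.rfft(env - dc)) ** 2 / len(env) ** 2
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    band = (freqs >= lo) & (freqs < hi)
    return 2.0 * spec[band].sum() / max(dc ** 2, 1e-12)

def snr_env(mixture, noise, fs, lo=1.0, hi=16.0, floor=1e-3):
    """Envelope power of the mixture in excess of the noise envelope
    power, relative to the noise envelope power (SNRenv idea)."""
    p_mix = envelope_power(mixture, fs, lo, hi)
    p_noise = envelope_power(noise, fs, lo, hi)
    return max(p_mix - p_noise, floor * p_noise) / max(p_noise, 1e-12)
```

    Even this crude version captures the key property exploited above: when noise processing (or simply more noise) raises the noise envelope power relative to the speech modulations, the metric drops.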

  2. A modified Galam’s model for word-of-mouth information exchange

    NASA Astrophysics Data System (ADS)

    Ellero, Andrea; Fasano, Giovanni; Sorato, Annamaria

    2009-09-01

    In this paper we analyze the stochastic model proposed by Galam in [S. Galam, Modelling rumors: The no plane Pentagon French hoax case, Physica A 320 (2003), 571-580] for information spreading in a ‘word-of-mouth’ process among agents, based on a majority rule. Using the communication rules among agents defined in the above reference, we first perform simulations of the ‘word-of-mouth’ process and compare the results with the theoretical values predicted by Galam’s model. Some dissimilarities arise, in particular when a small number of agents is considered. We find motivations for these dissimilarities and suggest some enhancements by introducing a new parameter-dependent model. We propose a modified Galam scheme which is asymptotically coincident with the original model in the above reference. Furthermore, for relatively small values of the parameter, we provide numerical evidence showing that the modified model often outperforms the original one.
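
    A minimal sketch of one majority-rule cycle of this kind is shown below. It assumes groups that exactly partition the population and a fixed tie-breaking bias; the grouping and parameters are illustrative, not Galam's exact scheme or the modified one proposed here.

```python
import random

def galam_step(opinions, group_sizes, tie_value=1, rng=random):
    """One 'word-of-mouth' cycle of a Galam-style majority-rule model.

    opinions    : list of +1/-1 beliefs (rumor accepted / rejected)
    group_sizes : sizes of the discussion groups; must sum to the
                  number of agents (they partition the population)
    tie_value   : opinion adopted on a tie in an even-sized group
                  (Galam's bias term)
    """
    agents = opinions[:]
    rng.shuffle(agents)                 # random regrouping each cycle
    result = []
    pos = 0
    for size in group_sizes:
        group = agents[pos:pos + size]
        pos += size
        s = sum(group)
        if s == 0:
            result.extend([tie_value] * size)   # tie: apply the bias
        else:
            result.extend([1 if s > 0 else -1] * size)
    return result
```

    Iterating `galam_step` drives the population toward consensus; the tie-breaking bias is what allows an initially minority opinion to win in Galam's analysis, and it is with small populations that simulated trajectories diverge most from the theoretical predictions.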

  3. An empirical and model study on automobile market in Taiwan

    NASA Astrophysics Data System (ADS)

    Tang, Ji-Ying; Qiu, Rong; Zhou, Yueping; He, Da-Ren

    2006-03-01

    We have carried out an empirical investigation of the automobile market in Taiwan, including the development of the possession rates of the companies in the market from 1979 to 2003, the development of the largest possession rate, and so on. A dynamic model for describing the competition between the companies is suggested based on the empirical study. In the model each company is given a long-term competition factor (such as technology, capital and scale) and a short-term competition factor (such as management, service and advertisement). The companies then play games in order to obtain a larger possession rate in the market under certain rules. Numerical simulations based on the model display a developing competition process that agrees qualitatively and quantitatively with our empirical investigation results.

  4. Evaluation of the ORCHIDEE ecosystem model over Africa against 25 years of satellite-based water and carbon measurements

    NASA Astrophysics Data System (ADS)

    Traore, Abdoul Khadre; Ciais, Philippe; Vuichard, Nicolas; Poulter, Benjamin; Viovy, Nicolas; Guimberteau, Matthieu; Jung, Martin; Myneni, Ranga; Fisher, Joshua B.

    2014-08-01

    Few studies have evaluated land surface models for African ecosystems. Here we evaluate the Organizing Carbon and Hydrology in Dynamic Ecosystems (ORCHIDEE) process-based model for the interannual variability (IAV) of the fraction of absorbed photosynthetically active radiation, the gross primary productivity (GPP), soil moisture, and evapotranspiration (ET). Two ORCHIDEE versions are tested, which differ by their soil hydrology parameterization: one with a two-layer simple bucket and the other with a more complex 11-layer soil-water diffusion scheme. In addition, we evaluate the sensitivity to climate forcing data, atmospheric CO2, and soil depth. Despite a very generic vegetation parameterization, ORCHIDEE simulates the IAV of GPP and ET rather well (0.5 < r < 0.9 interannual correlation) over Africa, except in forestlands. The ORCHIDEE 11-layer version outperforms the two-layer version in simulating the IAV of soil moisture, whereas both versions have similar performance for GPP and ET. Effects of CO2 trends and of variable soil depth on the IAV of GPP, ET, and soil moisture are small, although these drivers influence the trends of these variables. The meteorological forcing data appear to be quite important for faithfully reproducing the IAV of simulated variables, suggesting that in regions with sparse weather station data, the model uncertainty is strongly related to uncertain meteorological forcing. Simulated variables are positively and strongly correlated with precipitation but negatively and weakly correlated with temperature and solar radiation. Model-derived and observation-based sensitivities agree on the driving role of precipitation. However, the modeled GPP is too sensitive to precipitation, suggesting that processes such as increased water use efficiency during drought need to be incorporated in ORCHIDEE.

  5. Efficacy of bedrock erosion by subglacial water flow

    NASA Astrophysics Data System (ADS)

    Beaud, F.; Flowers, G. E.; Venditti, J. G.

    2015-09-01

    Bedrock erosion by sediment-bearing subglacial water remains little studied; however, the process is thought to contribute to bedrock erosion rates in glaciated landscapes and is implicated in the excavation of tunnel valleys and the incision of inner gorges. We adapt physics-based models of fluvial abrasion to the subglacial environment, assembling the first model designed to quantify bedrock erosion caused by transient subglacial water flow. The subglacial drainage model consists of a one-dimensional network of cavities dynamically coupled to one or several Röthlisberger channels (R-channels). The bedrock erosion model is based on the tools-and-cover effect, whereby particles entrained by the flow impact exposed bedrock. We explore the dependency of glacial meltwater erosion on the structure and magnitude of water input to the system, the ice geometry and the sediment supply. We find that erosion is not a function of water discharge alone, but also depends on channel size, water pressure and sediment supply, as in fluvial systems. Modelled glacial meltwater erosion rates are one to two orders of magnitude lower than the expected rates of total glacial erosion required to produce the sediment supply rates we impose, suggesting that glacial meltwater erosion is negligible at the basin scale. Nevertheless, due to the extreme localization of glacial meltwater erosion (at the base of R-channels), this process can carve bedrock (Nye) channels. In fact, our simulations suggest that the incision of bedrock channels several centimetres deep and a few metres wide can occur in a single year. Modelled incision rates indicate that subglacial water flow can gradually carve a tunnel valley and enhance the relief or even initiate the carving of an inner gorge.
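
    The tools-and-cover effect mentioned above can be sketched as a simple rate law of the generic fluvial-abrasion form; the constant `k` and the linear cover term are illustrative assumptions, not the coupled drainage-erosion model of the study.

```python
def meltwater_erosion_rate(sediment_flux, transport_capacity, k=1.0e-4):
    """Tools-and-cover rate law: erosion grows with sediment flux
    (more tools striking the bed) but is suppressed as the flux
    approaches transport capacity (the bed becomes covered)."""
    if transport_capacity <= 0.0:
        return 0.0
    cover = max(0.0, 1.0 - sediment_flux / transport_capacity)
    return k * sediment_flux * cover
```

    The law is non-monotonic: erosion vanishes both with no sediment (no tools) and at capacity (full cover), peaking in between. This is why, in the study, erosion depends jointly on water discharge, channel size, pressure and sediment supply rather than on discharge alone.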

  6. The development and implementation of the Chronic Care Management Programme in Counties Manukau.

    PubMed

    Wellingham, John; Tracey, Jocelyn; Rea, Harold; Gribben, Barry

    2003-02-21

    To develop an effective and efficient process for the seamless delivery of care for targeted patients with specific chronic diseases. To reduce inexplicable variation and maximise use of available resources by implementing evidence-based care processes. To develop a programme that is acceptable and applicable to the Counties Manukau region. A model for the management of people with chronic diseases was developed. Model components and potential interventions were piloted. For each disease project, a return on investment was calculated and external evaluation was undertaken. The initial model was subsequently modified and individual disease projects aligned to it. The final Chronic Care Management model, agreed in September 2001, described a single common process. Key components were the targeting of high risk patients, organisation of cost effective interventions into a system of care, and an integrated care server acting as a data warehouse with a rules engine, providing flags and reminders. Return on investment analysis suggested potential savings for each disease component from $277 to $980 per person per annum. For selected chronic diseases, introduction of an integrated chronic care management programme, based on internationally accepted best practice processes and interventions can make significant savings, reducing morbidity and improving the efficiency of health delivery in the Counties Manukau region.

  7. The ends of uncertainty: Air quality science and planning in Central California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fine, James

    Air quality planning in Central California is complicated and controversial despite millions of dollars invested to improve scientific understanding. This research describes and critiques the use of photochemical air quality simulation modeling studies in planning to attain standards for ground-level ozone in the San Francisco Bay Area and the San Joaquin Valley during the 1990s. Data are gathered through documents and interviews with planners, modelers, and policy-makers at public agencies and with representatives from the regulated and environmental communities. Interactions amongst organizations are diagrammed to identify significant nodes of interaction. Dominant policy coalitions are described through narratives distinguished by their uses of and responses to uncertainty, their exposures to risks, and their responses to the principles of conservatism, civil duty, and caution. Policy narratives are delineated using aggregated respondent statements to describe and understand advocacy coalitions. I found that models impacted the planning process significantly, but were not used purely for their scientific capabilities. Modeling results provided justification for decisions based on other constraints and political considerations. Uncertainties were utilized opportunistically by stakeholders instead of being managed explicitly. Ultimately, the process supported the partisan views of those in control of the modeling. Based on these findings, as well as a review of model uncertainty analysis capabilities, I recommend modifying the planning process to allow for the development and incorporation of uncertainty information, while addressing the need for inclusive and meaningful public participation. By documenting an actual air quality planning process, these findings provide insights about the potential for using new scientific information and understanding to achieve environmental goals, most notably the analysis of uncertainties in modeling applications. 
Concurrently, needed uncertainty information is identified and capabilities to produce it are assessed. Practices to facilitate incorporation of uncertainty information are suggested based on research findings, as well as theory from the literatures of the policy sciences, decision sciences, science and technology studies, consensus-based and communicative planning, and modeling.

  8. The UO2 ex-ADU powder preparation and pellet sintering for optimum efficiency: experimental and modeling studies

    NASA Astrophysics Data System (ADS)

    Hung, Nguyen Trong; Thuan, Le Ba; Van Tung, Nguyen; Thuy, Nguyen Thanh; Lee, Jin-Young; Jyothi, Rajesh Kumar

    2017-12-01

    The UO2 nuclear fuel pellet process for light water reactors (LWR) includes the conversion of uranium hexafluoride (UF6) into UO2 powder and the fabrication of UO2 pellets from that powder. In this paper, studies on the UO2 pellet process starting from ammonium diuranate-derived uranium dioxide powder (UO2 ex-ADU powder) are reported. The UO2 ex-ADU powders were converted from ADU at temperatures of 973 K, 1023 K and 1073 K, and UO2 pellets prepared from the powders were then sintered at temperatures of 1923 K, 1973 K and 2023 K for times of 4 h, 6 h and 8 h. Response surface methodology (RSM) based on a face-centred (CCF) quadratic central composite design (CCD), as improved by Box and Hunter, was used to model the UO2 pellet process, with MODDE 5.0 software as the assessment tool. On the basis of the proposed model, a relationship between the technological parameters and the density of the UO2 pellet product was suggested for controlling the UO2 ex-ADU pellet process at desired levels.
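
    The quadratic response-surface fitting underlying such a CCD analysis can be sketched with ordinary least squares. This is a generic second-order model; the factor settings and data below are illustrative, and MODDE's actual procedure is not reproduced.

```python
import numpy as np

def quadratic_design(X):
    """Expand factor settings X (n x k) into a full second-order model
    matrix: intercept, linear, pure quadratic and interaction terms."""
    X = np.asarray(X, float)
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

def fit_response_surface(X, y):
    """Least-squares fit of the quadratic response surface; returns a
    predictor function for new factor settings."""
    coef, *_ = np.linalg.lstsq(quadratic_design(X), np.asarray(y, float),
                               rcond=None)
    return lambda Xnew: quadratic_design(Xnew) @ coef
```

    With the fitted surface in hand, one can predict the response (here, pellet density) at untried combinations of the process parameters and pick settings that hold it at a desired level, which is exactly the use the abstract describes.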

  9. A dual-process model of reactions to perceived stigma.

    PubMed

    Pryor, John B; Reeder, Glenn D; Yeadon, Christopher; Hesson-McLnnis, Matthew

    2004-10-01

    The authors propose a theoretical model of individual psychological reactions to perceived stigma. This model suggests that 2 psychological systems may be involved in reactions to stigma across a variety of social contexts. One system is primarily reflexive, or associative, whereas the other is rule based, or reflective. This model assumes a temporal pattern of reactions to the stigmatized, such that initial reactions are governed by the reflexive system, whereas subsequent reactions or "adjustments" are governed by the rule-based system. Support for this model was found in 2 studies. Both studies examined participants' moment-by-moment approach-avoidance reactions to the stigmatized. The 1st involved participants' reactions to persons with HIV/AIDS, and the 2nd, participants' reactions to 15 different stigmatizing conditions. (c) 2004 APA, all rights reserved

  10. Computer Aided Diagnostic Support System for Skin Cancer: A Review of Techniques and Algorithms

    PubMed Central

    Masood, Ammara; Al-Jumaily, Adel Ali

    2013-01-01

    Image-based computer aided diagnosis systems have significant potential for screening and early detection of malignant melanoma. We review the state of the art in these systems and examine current practices, problems, and prospects of image acquisition, pre-processing, segmentation, feature extraction and selection, and classification of dermoscopic images. This paper reports statistics and results from the most important implementations reported to date. We compared the performance of several classifiers specifically developed for skin lesion diagnosis and discussed the corresponding findings. Whenever available, indication of various conditions that affect the technique's performance is reported. We suggest a framework for comparative assessment of skin cancer diagnostic models and review the results based on these models. The deficiencies in some of the existing studies are highlighted and suggestions for future research are provided. PMID:24575126

  11. Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach

    PubMed Central

    Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik

    2015-01-01

    Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10–150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digits (distractors). Graphemes were either congruently or incongruently colored with respect to the synesthetes’ reported grapheme-color associations. A mathematical model, based on Bundesen’s (1990) Theory of Visual Attention (TVA), was fitted to each observer’s data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group’s model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes’ expertise regarding their specific grapheme-color associations. PMID:26252019

  12. Strengthening organizations to implement evidence-based clinical practices.

    PubMed

    VanDeusen Lukas, Carol; Engle, Ryann L; Holmes, Sally K; Parker, Victoria A; Petzel, Robert A; Nealon Seibert, Marjorie; Shwartz, Michael; Sullivan, Jennifer L

    2010-01-01

    Despite recognition that implementation of evidence-based clinical practices (EBPs) usually depends on the structure and processes of the larger health care organizational context, the dynamics of implementation are not well understood. This project's aim was to deepen that understanding by implementing and evaluating an organizational model hypothesized to strengthen the ability of health care organizations to facilitate EBPs. CONCEPTUAL MODEL: The model posits that implementation of EBPs will be enhanced through the presence of three interacting components: active leadership commitment to quality, robust clinical process redesign incorporating EBPs into routine operations, and use of management structures and processes to support and align redesign. In a mixed-methods longitudinal comparative case study design, seven medical centers in one network in the Department of Veterans Affairs participated in an intervention to implement the organizational model over 3 years. The network was selected randomly from three interested in using the model. The target EBP was hand-hygiene compliance. Measures included ratings of implementation fidelity, observed hand-hygiene compliance, and factors affecting model implementation drawn from interviews. Analyses support the hypothesis that greater fidelity to the organizational model was associated with higher compliance with hand-hygiene guidelines. High-fidelity sites showed larger effect sizes for improvement in hand-hygiene compliance than lower-fidelity sites. Adherence to the organizational model was in turn affected by factors in three categories: urgency to improve, organizational environment, and improvement climate. Implementation of EBPs, particularly those that cut across multiple processes of care, is a complex process with many possibilities for failure. 
The results provide the basis for a refined understanding of relationships among components of the organizational model and factors in the organizational context affecting them. This understanding suggests practical lessons for future implementation efforts and contributes to theoretical understanding of the dynamics of the implementation of EBPs.

  13. Influence of prior information on pain involves biased perceptual decision-making.

    PubMed

    Wiech, Katja; Vandekerckhove, Joachim; Zaman, Jonas; Tuerlinckx, Francis; Vlaeyen, Johan W S; Tracey, Irene

    2014-08-04

    Prior information about features of a stimulus is a strong modulator of perception. For instance, the prospect of more intense pain leads to an increased perception of pain, whereas the expectation of analgesia reduces pain, as shown in placebo analgesia and expectancy modulations during drug administration. This influence is commonly assumed to be rooted in altered sensory processing, and expectancy-related modulations in the spinal cord are often taken as evidence for this notion. Contemporary models of perception, however, suggest that prior information can also modulate perception by biasing perceptual decision-making - the inferential process underlying perception in which prior information is used to interpret sensory information. In this type of bias, the information is already present in the system before the stimulus is observed. Computational models can distinguish between changes in sensory processing and altered decision-making because the two result in different response times for incorrect choices in a perceptual decision-making task (Figure S1A,B). Using a drift-diffusion model, we investigated the influence of both processes in two independent experiments. The results of both experiments strongly suggest that these changes in pain perception are predominantly based on altered perceptual decision-making. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
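
    The distinction the authors exploit can be illustrated with a toy drift-diffusion simulation: a prior can enter either as a shifted starting point (decision bias) or as an altered drift rate (changed sensory evidence), and the two produce different error response times. The parameters below are illustrative, not those fitted in the study.

```python
import random

def ddm_trial(drift, start=0.0, bound=1.0, dt=0.001, noise=1.0,
              rng=random):
    """One drift-diffusion trial: evidence x starts at `start` (a prior
    bias), accumulates with rate `drift` plus Gaussian noise, and a
    response is made when x crosses +bound (correct) or -bound (error).
    Returns (correct, reaction_time_in_seconds)."""
    x, t = start, 0.0
    sd = noise * dt ** 0.5          # diffusion scaling per time step
    while abs(x) < bound:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return (x >= bound, t)

def mean_error_rt(trials=400, seed=0, **kw):
    """Mean reaction time of the incorrect responses over many trials."""
    rng = random.Random(seed)
    rts = [t for ok, t in (ddm_trial(rng=rng, **kw)
                           for _ in range(trials)) if not ok]
    return sum(rts) / len(rts) if rts else float("nan")
```

    In this framework a starting point shifted toward the favoured bound typically produces fast errors, whereas a lowered drift rate produces slow errors; comparing the fitted parameters across expectation conditions is what lets the model attribute the expectancy effect to decision bias rather than altered sensory processing.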

  14. Addressing the translational dilemma: dynamic knowledge representation of inflammation using agent-based modeling.

    PubMed

    An, Gary; Christley, Scott

    2012-01-01

    Given the panoply of system-level diseases that result from disordered inflammation, such as sepsis, atherosclerosis, cancer, and autoimmune disorders, understanding and characterizing the inflammatory response is a key target of biomedical research. Untangling the complex behavioral configurations associated with a process as ubiquitous as inflammation represents a prototype of the translational dilemma: the ability to translate mechanistic knowledge into effective therapeutics. A critical failure point in the current research environment is a throughput bottleneck at the level of evaluating hypotheses of mechanistic causality; these hypotheses represent the key step toward the application of knowledge for therapy development and design. Addressing the translational dilemma will require utilizing the ever-increasing power of computers and computational modeling to increase the efficiency of the scientific method in the identification and evaluation of hypotheses of mechanistic causality. More specifically, development needs to focus on facilitating the ability of non-computer trained biomedical researchers to utilize and instantiate their knowledge in dynamic computational models. This is termed "dynamic knowledge representation." Agent-based modeling is an object-oriented, discrete-event, rule-based simulation method that is well suited for biomedical dynamic knowledge representation. Agent-based modeling has been used in the study of inflammation at multiple scales. The ability of agent-based modeling to encompass multiple scales of biological process as well as spatial considerations, coupled with an intuitive modeling paradigm, suggests that this modeling framework is well suited for addressing the translational dilemma. 
This review describes agent-based modeling, gives examples of its applications in the study of inflammation, and introduces a proposed general expansion of the use of modeling and simulation to augment the generation and evaluation of knowledge by the biomedical research community at large.
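
    To make the "object-oriented, rule-based" character of agent-based modeling concrete, here is a minimal sketch with one hypothetical cell type acting on a one-dimensional tissue strip. The agent rules and all parameters are invented for illustration and are not drawn from the inflammation models the review discusses.

```python
import random

class Macrophage:
    """Minimal agent: a rule-based immune cell on a 1-D tissue strip."""
    def __init__(self, pos):
        self.pos = pos

    def step(self, damage, cytokine, rng):
        # Rule 1: move up the local cytokine gradient, else randomly.
        left = cytokine[self.pos - 1] if self.pos > 0 else -1
        right = cytokine[self.pos + 1] if self.pos < len(cytokine) - 1 else -1
        if left > right and left > cytokine[self.pos]:
            self.pos -= 1
        elif right > left and right > cytokine[self.pos]:
            self.pos += 1
        else:
            self.pos = min(max(self.pos + rng.choice((-1, 1)), 0),
                           len(cytokine) - 1)
        # Rule 2: clear any damage at the current site.
        damage[self.pos] = 0

def run_abm(n_sites=40, n_agents=5, steps=300, injury=0.02, seed=0):
    """Run the toy model; returns the final damaged fraction of tissue."""
    rng = random.Random(seed)
    damage = [0] * n_sites
    agents = [Macrophage(rng.randrange(n_sites)) for _ in range(n_agents)]
    for _ in range(steps):
        for i in range(n_sites):          # stochastic background injury
            if rng.random() < injury:
                damage[i] = 1
        cytokine = damage[:]              # damaged sites emit a signal
        for a in agents:
            a.step(damage, cytokine, rng)
    return sum(damage) / n_sites
```

    The point of the paradigm, as argued above, is that each rule here reads almost like the biological hypothesis it encodes ("macrophages chemotax toward cytokine and clear damage"), which is what makes agent-based models attractive for dynamic knowledge representation by non-computational researchers.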

  15. HiTEC: a connectionist model of the interaction between perception and action planning.

    PubMed

    Haazebroek, Pascal; Raffone, Antonino; Hommel, Bernhard

    2017-11-01

    Increasing evidence suggests that perception and action planning do not represent separable stages of a unidirectional processing sequence, but rather emerging properties of highly interactive processes. To capture these characteristics of the human cognitive system, we have developed a connectionist model of the interaction between perception and action planning: HiTEC, based on the Theory of Event Coding (Hommel et al. in Behav Brain Sci 24:849-937, 2001). The model is characterized by representations at multiple levels and by shared representations and processes. It complements available models of stimulus-response translation by providing a rationale for (1) how situation-specific meanings of motor actions emerge, (2) how and why some aspects of stimulus-response translation occur automatically and (3) how task demands modulate sensorimotor processing. The model is demonstrated to provide a unitary account and simulation of a number of key findings with multiple experimental paradigms on the interaction between perception and action such as the Simon effect, its inversion (Hommel in Psychol Res 55:270-279, 1993), and action-effect learning.
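The Simon-effect ingredient that HiTEC inherits from the Theory of Event Coding, a spatial feature code shared between stimulus and response that automatically primes the spatially matching response, can be sketched as a deterministic threshold race. The gains and threshold below are made-up illustrative numbers, not HiTEC's actual parameters or architecture:

```python
def simon_trial(relevant_drives_left, stimulus_side, threshold=1.0,
                task_gain=0.10, spatial_gain=0.03):
    """Race between two response codes: the task-defined stimulus-response
    translation drives the correct response, while the stimulus's irrelevant
    location adds activation to the spatially matching response code."""
    act = {"left": 0.0, "right": 0.0}
    target = "left" if relevant_drives_left else "right"
    steps = 0
    while max(act.values()) < threshold:
        act[target] += task_gain            # task-defined S-R translation
        act[stimulus_side] += spatial_gain  # automatic feature-based priming
        steps += 1
    winner = max(act, key=act.get)
    return winner, steps

congruent = simon_trial(True, "left")    # stimulus on the responding side
incongruent = simon_trial(True, "right")
```

With these settings the correct response wins in both cases, but the congruent trial reaches threshold in fewer steps, reproducing the faster congruent responses of the Simon effect.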

  16. Neural pathways in processing of sexual arousal: a dynamic causal modeling study.

    PubMed

    Seok, J-W; Park, M-S; Sohn, J-H

    2016-09-01

    Three decades of research have investigated brain processing of visual sexual stimuli with neuroimaging methods. These studies have found that sexually arousing stimuli elicit activity in a broad neural network of cortical and subcortical brain areas known to be associated with cognitive, emotional, motivational and physiological components. However, it is not completely understood how these neural systems integrate and modulate incoming information. Therefore, we identified cerebral areas whose activations were correlated with sexual arousal using event-related functional magnetic resonance imaging and applied dynamic causal modeling to search for effective connectivity within the sexual arousal processing network. Thirteen heterosexual males were scanned while they passively viewed alternating short trials of erotic and neutral pictures on a monitor. We created a subset of seven models based on our results and previous studies and selected a dominant connectivity model. Consequently, we suggest a dynamic causal model of the brain processes mediating the cognitive, emotional, motivational and physiological factors of human male sexual arousal. These findings have significant implications for the neuropsychology of male sexuality.

  17. Estimation of biogas and methane yields in an UASB treating potato starch processing wastewater with backpropagation artificial neural network.

    PubMed

    Antwi, Philip; Li, Jianzheng; Boadi, Portia Opoku; Meng, Jia; Shi, En; Deng, Kaiwen; Bondinuba, Francis Kwesi

    2017-03-01

    Three-layered feedforward backpropagation (BP) artificial neural network (ANN) and multiple nonlinear regression (MnLR) models were developed to estimate biogas and methane yield in an upflow anaerobic sludge blanket (UASB) reactor treating potato starch processing wastewater (PSPW). Anaerobic process parameters were optimized to identify their importance for methanation. pH, total chemical oxygen demand, ammonium, alkalinity, total Kjeldahl nitrogen, total phosphorus, volatile fatty acids and hydraulic retention time, selected based on principal component analysis, were used as input variables, while biogas and methane yield were employed as target variables. The quasi-Newton and conjugate gradient backpropagation algorithms were the best among eleven training algorithms. The coefficient of determination (R2) of the BP-ANN reached 98.72% and 97.93%, while the MnLR model attained 93.9% and 91.08% for biogas and methane yield, respectively. Compared with the MnLR model, the BP-ANN model demonstrated superior performance, suggesting possible control of the anaerobic digestion process with the BP-ANN model. Copyright © 2016 Elsevier Ltd. All rights reserved.
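The three-layer BP-ANN family referred to above can be sketched in a few dozen lines: a sigmoid hidden layer, a linear output unit, and per-sample gradient-descent backpropagation. The toy data stand in for the normalized process variables (pH, COD, etc.) and yields; they are not the study's data, and the architecture here is a generic sketch, not the fitted model:

```python
import math
import random

def train_bp_ann(data, n_hidden=4, lr=0.2, epochs=3000, seed=0):
    """Train a three-layer feedforward network (inputs -> sigmoid hidden
    layer -> linear output) by stochastic backpropagation on (x, y) pairs."""
    rng = random.Random(seed)
    n_in = len(data[0][0])
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(n_hidden)]
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, y in data:
            xb = x + [1.0]                                # append bias input
            h = [sig(sum(w * v for w, v in zip(row, xb))) for row in w1]
            hb = h + [1.0]
            err = sum(w * v for w, v in zip(w2, hb)) - y  # output error
            # hidden deltas use the pre-update output weights (chain rule)
            deltas = [err * w2[j] * h[j] * (1.0 - h[j]) for j in range(n_hidden)]
            for j in range(n_hidden + 1):
                w2[j] -= lr * err * hb[j]
            for j in range(n_hidden):
                for i in range(n_in + 1):
                    w1[j][i] -= lr * deltas[j] * xb[i]
    def predict(x):
        xb = x + [1.0]
        hb = [sig(sum(w * v for w, v in zip(row, xb))) for row in w1] + [1.0]
        return sum(w * v for w, v in zip(w2, hb))
    return predict

# two normalized "process variables" -> synthetic "methane yield"
data = [([a / 4.0, b / 4.0], 0.3 * a / 4.0 + 0.5 * b / 4.0)
        for a in range(5) for b in range(5)]
predict = train_bp_ann(data)
mse = sum((predict(x) - y) ** 2 for x, y in data) / len(data)
```

After training, the mean squared error on the toy data drops well below the variance of the targets, the same behavior (fit quality measured by R2) the abstract reports at scale.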

  18. A Model of Compound Heterozygous, Loss-of-Function Alleles Is Broadly Consistent with Observations from Complex-Disease GWAS Datasets

    PubMed Central

    Sanjak, Jaleal S.; Long, Anthony D.; Thornton, Kevin R.

    2017-01-01

    The genetic component of complex disease risk in humans remains largely unexplained. A corollary is that the allelic spectrum of genetic variants contributing to complex disease risk is unknown. Theoretical models that relate population genetic processes to the maintenance of genetic variation for quantitative traits may suggest profitable avenues for future experimental design. Here we use forward simulation to model a genomic region evolving under a balance between recurrent deleterious mutation and Gaussian stabilizing selection. We consider multiple genetic and demographic models, and several different methods for identifying genomic regions harboring variants associated with complex disease risk. We demonstrate that the model of gene action, relating genotype to phenotype, has a qualitative effect on several relevant aspects of the population genetic architecture of a complex trait. In particular, the genetic model impacts genetic variance component partitioning across the allele frequency spectrum and the power of statistical tests. Models with partial recessivity closely match the minor allele frequency distribution of significant hits from empirical genome-wide association studies without requiring homozygous effect sizes to be small. We highlight a particular gene-based model of incomplete recessivity that is appealing from first principles. Under that model, deleterious mutations in a genomic region partially fail to complement one another. This model of gene-based recessivity predicts the empirically observed inconsistency between twin-based and SNP-based estimates of dominance heritability. Furthermore, this model predicts considerable levels of unexplained variance associated with intralocus epistasis. Our results suggest a need for improved statistical tools for region-based genetic association and heritability estimation. PMID:28103232
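The claim that the model of gene action impacts variance component partitioning can be illustrated with the textbook single-locus calculation (Falconer parameterization), which is a simple stand-in for, not a reproduction of, the paper's forward simulations:

```python
def variance_components(p, a=1.0, d=0.0):
    """Single-locus variance partition: genotypic values +a, d, -a for
    2, 1, 0 copies of an allele at frequency p. Returns additive variance
    VA = 2pq*alpha^2 and dominance variance VD = (2pq*d)^2, where
    alpha = a + d(q - p) is the average effect of an allele substitution."""
    q = 1.0 - p
    alpha = a + d * (q - p)
    va = 2.0 * p * q * alpha ** 2
    vd = (2.0 * p * q * d) ** 2
    return va, vd

va_add, vd_add = variance_components(p=0.1, d=0.0)    # purely additive locus
va_rec, vd_rec = variance_components(p=0.1, d=-1.0)   # recessive rare allele
```

At low allele frequency, making the rare allele recessive shrinks the additive variance and moves variance into the dominance component, which is the kind of shift across the allele frequency spectrum that changes the power of association tests.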

  19. A multi-year estimate of methane fluxes in Alaska from CARVE atmospheric observations

    PubMed Central

    Miller, Scot M.; Miller, Charles E.; Commane, Roisin; Chang, Rachel Y.-W.; Dinardo, Steven J.; Henderson, John M.; Karion, Anna; Lindaas, Jakob; Melton, Joe R.; Miller, John B.; Sweeney, Colm; Wofsy, Steven C.; Michalak, Anna M.

    2016-01-01

    Methane (CH4) fluxes from Alaska and other arctic regions may be sensitive to thawing permafrost and future climate change, but estimates of both current and future fluxes from the region are uncertain. This study estimates CH4 fluxes across Alaska for 2012–2014 using aircraft observations from the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) and a geostatistical inverse model (GIM). We find that a simple flux model based on a daily soil temperature map and a static map of wetland extent reproduces the atmospheric CH4 observations at the state-wide, multi-year scale more effectively than global-scale, state-of-the-art process-based models. This result points to a simple and effective way of representing CH4 flux patterns across Alaska. It further suggests that contemporary process-based models can improve their representation of key processes that control fluxes at regional scales, and that more complex processes included in these models cannot be evaluated given the information content of available atmospheric CH4 observations. In addition, we find that CH4 emissions from the North Slope of Alaska account for 24% of the total statewide flux of 1.74 ± 0.44 Tg CH4 (for May–Oct.). Contemporary global-scale process models only attribute an average of 3% of the total flux to this region. This mismatch occurs for two reasons: process models likely underestimate wetland area in regions without visible surface water, and these models prematurely shut down CH4 fluxes at soil temperatures near 0°C. As a consequence, wetlands covered by vegetation and wetlands with persistently cold soils could be larger contributors to natural CH4 fluxes than in process estimates. 
Lastly, we find that the seasonality of CH4 fluxes varied during 2012–2014, but that total emissions did not differ significantly among years, despite substantial differences in soil temperature and precipitation; year-to-year variability in these environmental conditions did not produce obvious changes in total CH4 fluxes from the state. PMID:28066129
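The "simple flux model" described above, emissions scaled by a static wetland fraction and driven by daily soil temperature, can be sketched as follows. All parameter values are illustrative placeholders, not the fitted ones; the cold cutoff below 0°C reflects the paper's point that process models which shut fluxes off near 0°C miss cold-soil emissions:

```python
def ch4_flux(soil_temp_c, wetland_frac, q10=2.0, base_flux=1.0, cutoff_c=-5.0):
    """Toy CH4 flux: static wetland fraction times an exponential (Q10)
    response to daily soil temperature, with emissions continuing below
    0 degC down to a cold cutoff rather than shutting off at freezing."""
    if soil_temp_c <= cutoff_c:
        return 0.0
    return wetland_frac * base_flux * q10 ** (soil_temp_c / 10.0)

# a cold-but-unfrozen soil still emits; a warmer soil emits more
cold = ch4_flux(-1.0, wetland_frac=0.3)
warm = ch4_flux(10.0, wetland_frac=0.3)
frozen = ch4_flux(-10.0, wetland_frac=0.3)
```

A model of this shape, evaluated on gridded soil temperature and wetland maps, is what the geostatistical inversion compared against the global process-based models.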

  20. A multi-year estimate of methane fluxes in Alaska from CARVE atmospheric observations.

    PubMed

    Miller, Scot M; Miller, Charles E; Commane, Roisin; Chang, Rachel Y-W; Dinardo, Steven J; Henderson, John M; Karion, Anna; Lindaas, Jakob; Melton, Joe R; Miller, John B; Sweeney, Colm; Wofsy, Steven C; Michalak, Anna M

    2016-10-01

    Methane (CH4) fluxes from Alaska and other arctic regions may be sensitive to thawing permafrost and future climate change, but estimates of both current and future fluxes from the region are uncertain. This study estimates CH4 fluxes across Alaska for 2012-2014 using aircraft observations from the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) and a geostatistical inverse model (GIM). We find that a simple flux model based on a daily soil temperature map and a static map of wetland extent reproduces the atmospheric CH4 observations at the state-wide, multi-year scale more effectively than global-scale, state-of-the-art process-based models. This result points to a simple and effective way of representing CH4 flux patterns across Alaska. It further suggests that contemporary process-based models can improve their representation of key processes that control fluxes at regional scales, and that more complex processes included in these models cannot be evaluated given the information content of available atmospheric CH4 observations. In addition, we find that CH4 emissions from the North Slope of Alaska account for 24% of the total statewide flux of 1.74 ± 0.44 Tg CH4 (for May-Oct.). Contemporary global-scale process models only attribute an average of 3% of the total flux to this region. This mismatch occurs for two reasons: process models likely underestimate wetland area in regions without visible surface water, and these models prematurely shut down CH4 fluxes at soil temperatures near 0°C. As a consequence, wetlands covered by vegetation and wetlands with persistently cold soils could be larger contributors to natural CH4 fluxes than in process estimates. 
Lastly, we find that the seasonality of CH4 fluxes varied during 2012-2014, but that total emissions did not differ significantly among years, despite substantial differences in soil temperature and precipitation; year-to-year variability in these environmental conditions did not produce obvious changes in total CH4 fluxes from the state.

  1. Processes of change in a school-based mindfulness programme: cognitive reactivity and self-coldness as mediators.

    PubMed

    Van der Gucht, Katleen; Takano, Keisuke; Raes, Filip; Kuppens, Peter

    2018-05-01

    The underlying mechanisms of the effectiveness of mindfulness-based interventions for emotional well-being remain poorly understood. Here, we examined the potential mediating effects of cognitive reactivity and self-compassion on symptoms of depression, anxiety and stress using data from an earlier randomised controlled school trial. A moderated time-lagged mediation model based on multilevel modelling was used to analyse the data. The findings showed that post-treatment changes in cognitive reactivity and self-coldness, an aspect of self-compassion, mediated subsequent changes in symptoms of depression, anxiety and stress. These results suggest that cognitive reactivity and self-coldness may be considered as transdiagnostic mechanisms of change of a mindfulness-based intervention programme for youth.

  2. F-centers mechanism of long-term relaxation in lead zirconate-titanate based piezoelectric ceramics. 2. After-field relaxation

    NASA Astrophysics Data System (ADS)

    Ishchuk, V. M.; Kuzenko, D. V.

    2016-08-01

    The paper presents the results of an experimental study of dielectric-constant relaxation during the aging process in Pb(Zr,Ti)O3-based solid solutions (PZT) after the action of an external DC electric field. This process is long-term and is described by a logarithmic function of time. Reversible or irreversible relaxation takes place depending on the field intensity. The relaxation rate also depends on the field strength, and this dependence is nonlinear and nonmonotonic if the external field leads to domain disordering. An oxygen-vacancy-based model for describing the long-term relaxation processes is suggested. The model takes into account oxygen vacancies at the sample's surface ends, their conversion into F+- and F0-centers under external effects, and the subsequent relaxation of these centers back into simple oxygen vacancies after the action terminates. F-center formation violates the sample's original electroneutrality and generates an intrinsic DC electric field in the sample. Relaxation of the F-centers is accompanied by a reduction of the electric field they induce and, as a consequence, by relaxation of the dielectric constant.
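The logarithmic-in-time relaxation law mentioned above has a simple functional form. The constants below are placeholders chosen for illustration, not the measured PZT values:

```python
import math

def dielectric_relaxation(t, eps0=1500.0, amp=40.0, tau=10.0):
    """Logarithmic aging law eps(t) = eps0 - A*ln(1 + t/tau): the dielectric
    constant decays by (asymptotically) equal amounts per decade of time,
    the signature of long-term logarithmic relaxation."""
    return eps0 - amp * math.log(1.0 + t / tau)

# for t >> tau, the drop per decade approaches A*ln(10)
drop_late = dielectric_relaxation(1000.0) - dielectric_relaxation(10000.0)
```
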

  3. Evaluation of the whole body physiologically based pharmacokinetic (WB-PBPK) modeling of drugs.

    PubMed

    Munir, Anum; Azam, Shumaila; Fazal, Sahar; Bhatti, A I

    2018-08-14

    Physiologically based pharmacokinetic (PBPK) modeling is a supporting tool in drug discovery and development. Simulations produced by these models help to save time and aid in examining the effects of different variables on the pharmacokinetics of drugs. For this purpose, Sheila and Peters suggested a PBPK model capable of performing simulations to study a given drug's absorption. There is a need to extend this model to the whole body, entailing all the other processes, such as distribution, metabolism, and elimination, besides absorption. The aim of this scientific study is to hypothesize a WB-PBPK model by integrating absorption, distribution, metabolism, and elimination processes with the existing PBPK model. Absorption, distribution, metabolism, and elimination models are designed, integrated with the PBPK model, and validated. For validation purposes, clinical records of a few drugs were collected from the literature. The developed WB-PBPK model is affirmed by comparing the simulations produced by the model against the collected clinical data. It is proposed that the WB-PBPK model may be used in the pharmaceutical industry to create pharmacokinetic profiles of drug candidates for better outcomes, as it advances the PBPK model and creates comprehensive PK profiles for drug ADME in concentration-time plots. Copyright © 2018 Elsevier Ltd. All rights reserved.
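The concentration-time profiles such models produce can be illustrated with the simplest possible building block: a one-compartment oral-absorption model integrated by forward Euler. A whole-body PBPK model chains many such compartments (organs) linked by blood flow; the rate constants and dose below are hypothetical, not taken from the study:

```python
def simulate_pk(dose_mg, ka, ke, v_l, t_end_h=24.0, dt=0.01):
    """Forward-Euler integration of a one-compartment oral model:
    dA_gut/dt = -ka*A_gut ;  dC/dt = ka*A_gut/V - ke*C.
    Returns the concentration-time profile as (times, concentrations)."""
    a_gut, conc = dose_mg, 0.0
    times, concs = [0.0], [0.0]
    t = 0.0
    while t < t_end_h:
        da = -ka * a_gut * dt
        dc = (ka * a_gut / v_l - ke * conc) * dt
        a_gut += da
        conc += dc
        t += dt
        times.append(t)
        concs.append(conc)
    return times, concs

times, concs = simulate_pk(dose_mg=100.0, ka=1.0, ke=0.2, v_l=40.0)
cmax = max(concs)                       # peak plasma concentration
tmax = times[concs.index(cmax)]         # analytically ln(ka/ke)/(ka-ke) ~ 2.01 h
```

Comparing such simulated profiles against observed clinical concentration-time data is the validation step the abstract describes.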

  4. Dream interpretation, affect, and the theory of neuronal group selection: Freud, Winnicott, Bion, and Modell.

    PubMed

    Shields, Walker

    2006-12-01

    The author uses a dream specimen as interpreted during psychoanalysis to illustrate Modell's hypothesis that Edelman's theory of neuronal group selection (TNGS) may provide a valuable neurobiological model for Freud's dynamic unconscious, imaginative processes in the mind, the retranscription of memory in psychoanalysis, and intersubjective processes in the analytic relationship. He draws parallels between the interpretation of the dream material with keen attention to affect-laden meanings in the evolving analytic relationship in the domain of psychoanalysis and the principles of Edelman's TNGS in the domain of neurobiology. The author notes how this correlation may underscore the importance of dream interpretation in psychoanalysis. He also suggests areas for further investigation in both realms based on study of their interplay.

  5. Microbes as engines of ecosystem function: When does community structure enhance predictions of ecosystem processes?

    DOE PAGES

    Graham, Emily B.; Knelman, Joseph E.; Schindlbacher, Andreas; ...

    2016-02-24

    Microorganisms are vital in mediating the earth’s biogeochemical cycles; yet, despite our rapidly increasing ability to explore complex environmental microbial communities, the relationship between microbial community structure and ecosystem processes remains poorly understood. Here, we address a fundamental and unanswered question in microbial ecology: ‘When do we need to understand microbial community structure to accurately predict function?’ We present a statistical analysis investigating the value of environmental data and microbial community structure independently and in combination for explaining rates of carbon and nitrogen cycling processes within 82 global datasets. Environmental variables were the strongest predictors of process rates but left 44% of variation unexplained on average, suggesting the potential for microbial data to increase model accuracy. Although only 29% of our datasets were significantly improved by adding information on microbial community structure, we observed improvement in models of processes mediated by narrow phylogenetic guilds via functional gene data, and conversely, improvement in models of facultative microbial processes via community diversity metrics. Our results also suggest that microbial diversity can strengthen predictions of respiration rates beyond microbial biomass parameters, as 53% of models were improved by incorporating both sets of predictors compared to 35% by microbial biomass alone. Our analysis represents the first comprehensive analysis of research examining links between microbial community structure and ecosystem function. Taken together, our results indicate that a greater understanding of microbial communities informed by ecological principles may enhance our ability to predict ecosystem process rates relative to assessments based on environmental variables and microbial physiology.

  6. Microbes as engines of ecosystem function: When does community structure enhance predictions of ecosystem processes?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Emily B.; Knelman, Joseph E.; Schindlbacher, Andreas

    Microorganisms are vital in mediating the earth’s biogeochemical cycles; yet, despite our rapidly increasing ability to explore complex environmental microbial communities, the relationship between microbial community structure and ecosystem processes remains poorly understood. Here, we address a fundamental and unanswered question in microbial ecology: ‘When do we need to understand microbial community structure to accurately predict function?’ We present a statistical analysis investigating the value of environmental data and microbial community structure independently and in combination for explaining rates of carbon and nitrogen cycling processes within 82 global datasets. Environmental variables were the strongest predictors of process rates but left 44% of variation unexplained on average, suggesting the potential for microbial data to increase model accuracy. Although only 29% of our datasets were significantly improved by adding information on microbial community structure, we observed improvement in models of processes mediated by narrow phylogenetic guilds via functional gene data, and conversely, improvement in models of facultative microbial processes via community diversity metrics. Our results also suggest that microbial diversity can strengthen predictions of respiration rates beyond microbial biomass parameters, as 53% of models were improved by incorporating both sets of predictors compared to 35% by microbial biomass alone. Our analysis represents the first comprehensive analysis of research examining links between microbial community structure and ecosystem function. Taken together, our results indicate that a greater understanding of microbial communities informed by ecological principles may enhance our ability to predict ecosystem process rates relative to assessments based on environmental variables and microbial physiology.

  7. Feedback and feedforward adaptation to visuomotor delay during reaching and slicing movements.

    PubMed

    Botzer, Lior; Karniel, Amir

    2013-07-01

    It has been suggested that the brain and in particular the cerebellum and motor cortex adapt to represent the environment during reaching movements under various visuomotor perturbations. It is well known that significant delay is present in neural conductance and processing; however, the possible representation of delay and adaptation to delayed visual feedback has been largely overlooked. Here we investigated the control of reaching movements in human subjects during an imposed visuomotor delay in a virtual reality environment. In the first experiment, when visual feedback was unexpectedly delayed, the hand movement overshot the end-point target, indicating a vision-based feedback control. Over the ensuing trials, movements gradually adapted and became accurate. When the delay was removed unexpectedly, movements systematically undershot the target, demonstrating that adaptation occurred within the vision-based feedback control mechanism. In a second experiment designed to broaden our understanding of the underlying mechanisms, we revealed similar after-effects for rhythmic reversal (out-and-back) movements. We present a computational model accounting for these results based on two adapted forward models, each tuned for a specific modality delay (proprioception or vision), and a third feedforward controller. The computational model, along with the experimental results, refutes delay representation in a pure forward vision-based predictor and suggests that adaptation occurred in the forward vision-based predictor, and concurrently in the state-based feedforward controller. Understanding how the brain compensates for conductance and processing delays is essential for understanding certain impairments concerning these neural delays as well as for the development of brain-machine interfaces. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  8. Symbolic healing of early psychosis: psychoeducation and sociocultural processes of recovery.

    PubMed

    Larsen, John Aggergaard

    2007-09-01

    This article analyzes sociocultural processes of recovery in a Danish mental health service providing two years of integrated biopsychosocial treatment following first-episode psychosis. The study is based on ethnographic research in the service and person-centered involvement with 15 clients. The analysis applies Dow's [1986 American Anthropologist 88:56-69] model of universal components of symbolic healing to elucidate sociocultural aspects of therapeutic efficacy that are otherwise disregarded as placebo or nonspecific effects. It is demonstrated how staff engaged with clients to deliver "psychoeducation" that provided scientific and biomedical theories about mental illness, constituting a shared "mythic world" that was accepted as an experiential truth and used to explain clients' illness experiences. The analysis highlights the need to supplement attention in Dow's model to the healing procedure with consideration of variability in the healing process. Depending on individual responses to the intervention, the staff's professional backgrounds, and staff-client relationships, different recovery models were applied. One suggested "episodic psychosis" and full recovery, and the other suggested "chronic schizophrenia" and the necessity of comprehensive life adjustments to the mental illness. The recovery models influenced clients' perspectives on illness and self as they engaged in identity work, negotiating future plans and individual life projects by including also alternative systems of explanation from the wider cultural repertoire.

  9. Decomposing decision components in the Stop-signal task: A model-based approach to individual differences in inhibitory control

    PubMed Central

    White, Corey N.; Congdon, Eliza; Mumford, Jeanette A.; Karlsgodt, Katherine H.; Sabb, Fred W.; Freimer, Nelson B.; London, Edythe D.; Cannon, Tyrone D.; Bilder, Robert M.; Poldrack, Russell A.

    2014-01-01

    The Stop-signal task (SST), in which participants must inhibit prepotent responses, has been used to identify neural systems that vary with individual differences in inhibitory control. To explore how these differences relate to other aspects of decision-making, a drift diffusion model of simple decisions was fitted to SST data from Go trials to extract measures of caution, motor execution time, and stimulus processing speed for each of 123 participants. These values were used to probe fMRI data to explore individual differences in neural activation. Faster processing of the Go stimulus correlated with greater activation in the right frontal pole for both Go and Stop trials. On Stop trials stimulus processing speed also correlated with regions implicated in inhibitory control, including the right inferior frontal gyrus, medial frontal gyrus, and basal ganglia. Individual differences in motor execution time correlated with activation of the right parietal cortex. These findings suggest a robust relationship between the speed of stimulus processing and inhibitory processing at the neural level. This model-based approach provides novel insight into the interrelationships among decision components involved in inhibitory control, and raises interesting questions about strategic adjustments in performance and inhibitory deficits associated with psychopathology. PMID:24405185
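The drift diffusion model fitted to the Go trials decomposes responses into caution (boundary separation), motor/encoding time (non-decision time), and stimulus processing speed (drift rate). A single simulated trial can be sketched as follows; the parameter values are generic textbook choices, not the values fitted to the 123 participants:

```python
import random

def ddm_trial(drift, boundary=1.0, ndt=0.3, dt=0.001, noise=1.0, rng=random):
    """One drift-diffusion trial: evidence accumulates from 0 with mean rate
    `drift` plus Gaussian noise until it hits +boundary or -boundary;
    `ndt` is the non-decision (encoding + motor execution) time."""
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5
    while abs(x) < boundary:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return ("upper" if x > 0 else "lower", ndt + t)

rng = random.Random(1)
trials = [ddm_trial(drift=2.0, rng=rng) for _ in range(200)]
acc = sum(1 for choice, _ in trials if choice == "upper") / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
```

Fitting these parameters to each participant's Go-trial choices and response times, then correlating the per-participant estimates with fMRI activation, is the model-based individual-differences approach the abstract describes.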

  10. An Institutional Mechanism for Assortment in an Ecology of Games

    PubMed Central

    Smaldino, Paul E.; Lubell, Mark

    2011-01-01

    Recent research has revived Long's “ecology of games” model to analyze how social actors cooperate in the context of multiple political and social games. However, there is still a paucity of theoretical work that considers the mechanisms by which large-scale cooperation can be promoted in a dynamic institutional landscape, in which actors can join new games and leave old ones. This paper develops an agent-based model of an ecology of games where agents participate in multiple public goods games. In addition to contribution decisions, the agents can leave and join different games, and these processes are de-coupled. We show that the payoff for cooperation is greater than for defection when limits to the number of actors per game (“capacity constraints”) structure the population in ways that allow cooperators to cluster, independent of any complex individual-level mechanisms such as reputation or punishment. Our model suggests that capacity constraints are one effective mechanism for producing positive assortment and increasing cooperation in an ecology of games. The results suggest an important trade-off between the inclusiveness of policy processes and cooperation: Fully inclusive policy processes reduce the chances of cooperation. PMID:21850249
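The core logic, that positive assortment lets cooperators cluster and outperform defectors, can be shown with a deterministic payoff calculation over multiple public goods games. This is a toy version of the paper's agent-based model; the multiplier, cost, and group layouts below are illustrative:

```python
def payoffs(games, r=1.6, cost=1.0):
    """Payoff of each agent summed across the public-goods games it joined.
    Cooperators (ids starting with 'C') pay `cost` into each game; each
    game's pot is multiplied by r and split equally among its members."""
    pay = {}
    for members in games.values():
        pot = r * cost * sum(1 for a in members if a[0] == "C")
        share = pot / len(members)
        for a in members:
            pay[a] = pay.get(a, 0.0) + share - (cost if a[0] == "C" else 0.0)
    return pay

# full assortment: capacity constraints have clustered cooperators together
clustered = payoffs({"g1": ["C1", "C2", "C3"], "g2": ["D1", "D2", "D3"]})
# no assortment: every game mixes one cooperator with one defector
mixed = payoffs({"g1": ["C1", "D1"], "g2": ["C2", "D2"], "g3": ["C3", "D3"]})
```

Under full assortment cooperators earn (r - 1) x cost > 0 while defectors earn nothing; in mixed games defection pays more, which is why a structural mechanism producing assortment, such as capacity constraints, can sustain cooperation without reputation or punishment.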

  11. Neuroscientific Model of Motivational Process

    PubMed Central

    Kim, Sung-il

    2013-01-01

    Considering the neuroscientific findings on reward, learning, value, decision-making, and cognitive control, motivation can be parsed into three sub processes: a process of generating motivation, a process of maintaining motivation, and a process of regulating motivation. I propose a tentative neuroscientific model of motivational processes which consists of three distinct but continuous sub processes, namely reward-driven approach, value-based decision-making, and goal-directed control. Reward-driven approach is the process in which motivation is generated by reward anticipation and selective approach behaviors toward reward. This process recruits the ventral striatum (reward area) in which basic stimulus-action association is formed, and is classified as an automatic motivation to which relatively less attention is assigned. By contrast, value-based decision-making is the process of evaluating various outcomes of actions, learning through positive prediction error, and calculating the value continuously. The striatum and the orbitofrontal cortex (valuation area) play crucial roles in sustaining motivation. Lastly, goal-directed control is the process of regulating motivation through cognitive control to achieve goals. This consciously controlled motivation is associated with higher-level cognitive functions such as planning, retaining the goal, monitoring the performance, and regulating action. The anterior cingulate cortex (attention area) and the dorsolateral prefrontal cortex (cognitive control area) are the main neural circuits related to regulation of motivation. These three sub processes interact with each other by sending reward prediction error signals through the dopaminergic pathway from the striatum to the prefrontal cortex. The neuroscientific model of motivational process suggests several educational implications with regard to the generation, maintenance, and regulation of motivation to learn in the learning environment. PMID:23459598

  12. Neuroscientific model of motivational process.

    PubMed

    Kim, Sung-Il

    2013-01-01

    Considering the neuroscientific findings on reward, learning, value, decision-making, and cognitive control, motivation can be parsed into three sub processes: a process of generating motivation, a process of maintaining motivation, and a process of regulating motivation. I propose a tentative neuroscientific model of motivational processes which consists of three distinct but continuous sub processes, namely reward-driven approach, value-based decision-making, and goal-directed control. Reward-driven approach is the process in which motivation is generated by reward anticipation and selective approach behaviors toward reward. This process recruits the ventral striatum (reward area) in which basic stimulus-action association is formed, and is classified as an automatic motivation to which relatively less attention is assigned. By contrast, value-based decision-making is the process of evaluating various outcomes of actions, learning through positive prediction error, and calculating the value continuously. The striatum and the orbitofrontal cortex (valuation area) play crucial roles in sustaining motivation. Lastly, goal-directed control is the process of regulating motivation through cognitive control to achieve goals. This consciously controlled motivation is associated with higher-level cognitive functions such as planning, retaining the goal, monitoring the performance, and regulating action. The anterior cingulate cortex (attention area) and the dorsolateral prefrontal cortex (cognitive control area) are the main neural circuits related to regulation of motivation. These three sub processes interact with each other by sending reward prediction error signals through the dopaminergic pathway from the striatum to the prefrontal cortex. The neuroscientific model of motivational process suggests several educational implications with regard to the generation, maintenance, and regulation of motivation to learn in the learning environment.

  13. The Living Cell as a Multi-agent Organisation: A Compositional Organisation Model of Intracellular Dynamics

    NASA Astrophysics Data System (ADS)

    Jonker, C. M.; Snoep, J. L.; Treur, J.; Westerhoff, H. V.; Wijngaards, W. C. A.

    Within the areas of Computational Organisation Theory and Artificial Intelligence, techniques have been developed to simulate and analyse dynamics within organisations in society. Usually these modelling techniques are applied to factories and to the internal organisation of their process flows, thus obtaining models of complex organisations at various levels of aggregation. The dynamics in living cells are often interpreted in terms of well-organised processes, a bacterium being considered a (micro)factory. This suggests that organisation modelling techniques may also benefit their analysis. Using the example of Escherichia coli it is shown how indeed agent-based organisational modelling techniques can be used to simulate and analyse E.coli's intracellular dynamics. Exploiting the abstraction levels entailed by this perspective, a concise model is obtained that is readily simulated and analysed at the various levels of aggregation, yet shows the cell's essential dynamic patterns.

  14. NoSQL Based 3D City Model Management System

    NASA Astrophysics Data System (ADS)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on a NoSQL database is proposed in this paper. The framework supports import and export of 3D city models according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization within the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area and price) are stored and processed separately. We use a Map-Reduce method to process the 3D geometry data, since it is more complex, while the semantic analysis is mainly based on database query operations. For visualization, a multiple-representation structure for 3D cities, CityTree, is implemented within the framework to support dynamic LODs based on the user viewpoint. The proposed framework is also easily extensible and supports geo-indexes to speed up querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.
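    The separation described above — Map-Reduce aggregation for bulky geometry versus query operations for semantics — can be illustrated with a toy aggregation (a minimal sketch; the building IDs and surface areas are hypothetical, not from the paper):

```python
from functools import reduce

# Hypothetical partitioned 3D geometry data: each partition holds
# (building_id, triangle_surface_area) pairs.
partitions = [
    [("b1", 12.5), ("b2", 8.0)],
    [("b1", 4.5), ("b3", 20.0)],
]

def map_phase(partition):
    # Map step: emit (key, value) pairs per building.
    return [(bid, area) for bid, area in partition]

def reduce_phase(acc, pair):
    # Reduce step: sum per-building surface area.
    bid, area = pair
    acc[bid] = acc.get(bid, 0.0) + area
    return acc

mapped = [pair for p in partitions for pair in map_phase(p)]
totals = reduce(reduce_phase, mapped, {})
print(totals)  # {'b1': 17.0, 'b2': 8.0, 'b3': 20.0}
```

    The semantic side (name, height, price) would instead be answered directly by database queries, which is why the two kinds of data are stored separately.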

  15. Relapse Model among Iranian Drug Users: A Qualitative Study.

    PubMed

    Jalali, Amir; Seyedfatemi, Naiemeh; Peyrovi, Hamid

    2015-01-01

    Relapse is a common problem in drug users' rehabilitation programs and is reported all over the country. An in-depth study of patients' experiences can be used to explore the relapse process among drug users. Therefore, this study suggests a model of the relapse process among Iranian drug users. In this qualitative study with a grounded theory approach, 22 participants with rich information about the phenomenon under study were selected using purposive, snowball and theoretical sampling methods. After obtaining informed consent, data were collected through face-to-face, in-depth, semi-structured interviews. All interviews were analyzed in three stages of axial, selective and open coding. Nine main categories emerged: avoidance of drugs, concerns about being accepted, family atmosphere, social conditions, mental challenge, self-management, self-deception, use and remorse, and feeling of loss, the last serving as the core variable. Mental challenge has two subcategories, evoking pleasure and craving. The relapse model is a dynamic and systematic process comprising cycles from drug avoidance to remorse, with feeling of loss as the core variable. The relapse process is a dynamic, systematic process that needs effective control, and determining a relapse model as a clear process could be helpful in clinical sessions. The results of this research depict the relapse process among Iranian drug users in a conceptual model.

  16. Computational data sciences for assessment and prediction of climate extremes

    NASA Astrophysics Data System (ADS)

    Ganguly, A. R.

    2011-12-01

    Climate extremes may be defined inclusively as severe weather events or large shifts in global or regional weather patterns which may be caused or exacerbated by natural climate variability or climate change. This area of research arguably represents one of the largest knowledge gaps in climate science which is relevant for informing resource managers and policy makers. While physics-based climate models are essential in view of non-stationary and nonlinear dynamical processes, their current pace of uncertainty reduction may not be adequate for urgent stakeholder needs. The structure of the models may in some cases preclude reduction of uncertainty for critical processes at scales or for the extremes of interest. On the other hand, methods based on complex networks, extreme value statistics, machine learning, and space-time data mining have demonstrated significant promise to improve scientific understanding and generate enhanced predictions. When combined with conceptual process understanding at multiple spatiotemporal scales and designed to handle massive data, interdisciplinary data science methods and algorithms may complement or supplement physics-based models. Specific examples from the prior literature and our ongoing work suggest how data-guided improvements may be possible, for example, in the context of ocean meteorology, climate oscillators, teleconnections, and atmospheric process understanding, which in turn can improve projections of regional climate, precipitation extremes and tropical cyclones in a useful and interpretable fashion. A community-wide effort is motivated to develop and adapt computational data science tools for translating climate model simulations to information relevant for adaptation and policy, as well as for improving our scientific understanding of climate extremes from both observed and model-simulated data.

  17. Mathematical modeling of the MHD stability dependence on the interpole distance in the multianode aluminium electrolyser

    NASA Astrophysics Data System (ADS)

    Kuzmin, R. N.; Savenkova, N. P.; Shobukhov, A. V.; Kalmykov, A. V.

    2018-03-01

    The paper investigates the dependence of MHD stability on the depth of anode immersion in the process of aluminium electrolysis. The proposed 3D three-phase mathematical model is based on the Navier-Stokes and Maxwell equation systems. This model makes it possible to simulate the distributions of the main physical fields in both horizontal and vertical planes. The suggested approach also allows us to study the dynamics of the border between aluminium and electrolyte and the shape of the back oxidation zone.

  18. Mathematical modeling of the fermentation of acid-hydrolyzed pyrolytic sugars to ethanol by the engineered strain Escherichia coli ACCC 11177.

    PubMed

    Chang, Dongdong; Yu, Zhisheng; Islam, Zia Ul; Zhang, Hongxun

    2015-05-01

    Pyrolysate from waste cotton was acid-hydrolyzed and detoxified to yield pyrolytic sugars, which were fermented to ethanol by the strain Escherichia coli ACCC 11177. Mathematical models based on the fermentation data were also constructed. Pyrolysate containing an initial levoglucosan concentration of 146.34 g/L gave a glucose yield of 150% after hydrolysis, suggesting that other compounds were also hydrolyzed to glucose. Ethyl acetate-based extraction of bacterial growth inhibitors with an ethyl acetate/hydrolysate ratio of 1:0.5 enabled hydrolysate fermentation by E. coli ACCC 11177 without a standard absorption treatment. Batch processing in a fermenter exhibited a maximum ethanol yield and productivity of 0.41 g/g and 0.93 g/(L·h), respectively. The cell growth rate (r_x) was consistent with a logistic equation [Formula: see text], determined as a function of cell growth (X). The glucose consumption rate (r_s) and ethanol formation rate (r_p) were accurately validated by the equations [Formula: see text] and [Formula: see text], respectively. Together, our results suggest that combining mathematical models with fermenter fermentation processes can enable optimized ethanol production from cellulosic pyrolysate with E. coli. Similar approaches may facilitate the production of other commercially important organic substances.
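    The logistic growth law referred to above, dX/dt = mu_max * X * (1 - X/X_max), can be sketched numerically (the parameter values here are illustrative assumptions, not the fitted values from the study):

```python
# Forward-Euler integration of a logistic cell-growth law
# dX/dt = mu_max * X * (1 - X / X_max); parameter values are assumed.
mu_max, X_max = 0.5, 10.0   # 1/h, g/L (illustrative)
X, dt = 0.1, 0.01           # initial biomass (g/L) and step size (h)

for _ in range(2000):       # simulate 20 h
    X += dt * mu_max * X * (1 - X / X_max)

print(round(X, 2))          # biomass saturates near X_max
```

    The same trajectory, once fitted to measured biomass, supplies X(t) to the consumption-rate (r_s) and formation-rate (r_p) equations, which is how the three rates were linked in the study.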

  19. Multivoxel neurofeedback selectively modulates confidence without changing perceptual performance

    PubMed Central

    Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Kawato, Mitsuo; Lau, Hakwan

    2016-01-01

    A central controversy in metacognition studies concerns whether subjective confidence directly reflects the reliability of perceptual or cognitive processes, as suggested by normative models based on the assumption that neural computations are generally optimal. This view enjoys popularity in the computational and animal literatures, but it has also been suggested that confidence may depend on a late-stage estimation dissociable from perceptual processes. Yet, at least in humans, experimental tools have lacked the power to resolve these issues convincingly. Here, we overcome this difficulty by using the recently developed method of decoded neurofeedback (DecNef) to systematically manipulate multivoxel correlates of confidence in a frontoparietal network. We report that bi-directional changes in confidence do not affect perceptual accuracy. Further psychophysical analyses rule out accounts based on simple shifts in reporting strategy. Our results provide clear neuroscientific evidence for the systematic dissociation between confidence and perceptual performance, and thereby challenge current theoretical thinking. PMID:27976739

  20. A multi-site cognitive task analysis for biomedical query mediation.

    PubMed

    Hruby, Gregory W; Rasmussen, Luke V; Hanauer, David; Patel, Vimla L; Cimino, James J; Weng, Chunhua

    2016-09-01

    To apply cognitive task analyses of the Biomedical query mediation (BQM) processes for EHR data retrieval at multiple sites towards the development of a generic BQM process model. We conducted semi-structured interviews with eleven data analysts from five academic institutions and one government agency, and performed cognitive task analyses on their BQM processes. A coding schema was developed through iterative refinement and used to annotate the interview transcripts. The annotated dataset was used to reconstruct and verify each BQM process and to develop a harmonized BQM process model. A survey was conducted to evaluate the face and content validity of this harmonized model. The harmonized process model is hierarchical, encompassing tasks, activities, and steps. The face validity evaluation concluded the model to be representative of the BQM process. In the content validity evaluation, out of the 27 tasks for BQM, 19 meet the threshold for semi-valid, including 3 fully valid: "Identify potential index phenotype," "If needed, request EHR database access rights," and "Perform query and present output to medical researcher", and 8 are invalid. We aligned the goals of the tasks within the BQM model with the five components of the reference interview. The similarity between the process of BQM and the reference interview is promising and suggests the BQM tasks are powerful for eliciting implicit information needs. We contribute a BQM process model based on a multi-site study. This model promises to inform the standardization of the BQM process towards improved communication efficiency and accuracy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. A Multi-Site Cognitive Task Analysis for Biomedical Query Mediation

    PubMed Central

    Hruby, Gregory W.; Rasmussen, Luke V.; Hanauer, David; Patel, Vimla; Cimino, James J.; Weng, Chunhua

    2016-01-01

    Objective To apply cognitive task analyses of the Biomedical query mediation (BQM) processes for EHR data retrieval at multiple sites towards the development of a generic BQM process model. Materials and Methods We conducted semi-structured interviews with eleven data analysts from five academic institutions and one government agency, and performed cognitive task analyses on their BQM processes. A coding schema was developed through iterative refinement and used to annotate the interview transcripts. The annotated dataset was used to reconstruct and verify each BQM process and to develop a harmonized BQM process model. A survey was conducted to evaluate the face and content validity of this harmonized model. Results The harmonized process model is hierarchical, encompassing tasks, activities, and steps. The face validity evaluation concluded the model to be representative of the BQM process. In the content validity evaluation, out of the 27 tasks for BQM, 19 meet the threshold for semi-valid, including 3 fully valid: “Identify potential index phenotype,” “If needed, request EHR database access rights,” and “Perform query and present output to medical researcher”, and 8 are invalid. Discussion We aligned the goals of the tasks within the BQM model with the five components of the reference interview. The similarity between the process of BQM and the reference interview is promising and suggests the BQM tasks are powerful for eliciting implicit information needs. Conclusions We contribute a BQM process model based on a multi-site study. This model promises to inform the standardization of the BQM process towards improved communication efficiency and accuracy. PMID:27435950

  2. The effect of the use of android-based application in learning together to improve students' academic performance

    NASA Astrophysics Data System (ADS)

    Ulfa, Andi Maria; Sugiyarto, Kristian H.; Ikhsan, Jaslin

    2017-05-01

    Poor student performance in Chemistry may result from unfavourable learning processes; therefore, innovation in the learning process must be created. Given the fast development of mobile technology, the learning process cannot ignore the crucial role of this technology. This research and development (R&D) study was conducted to develop an Android-based application and to study the effect of integrating it into Learning Together (LT) on the improvement of students' learning creativity and cognitive achievement. The development of the application was carried out by adapting the Borg & Gall and Dick & Carey models. The developed product was reviewed by a chemist, learning media practitioners, peer reviewers, and educators. After revision based on the reviews, the application was used in the LT model on the topic of stoichiometry in a senior high school. The instruments were a questionnaire to collect comments and suggestions from the reviewers about the application, another questionnaire to collect data on learning creativity, and a test by which data on students' achievement were collected. The results showed that the use of the mobile-based application in Learning Together can bring about significant improvement in students' performance, including creativity and cognitive achievement.

  3. MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data.

    PubMed

    Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela

    2013-05-01

    The present study provides a novel MATLAB-based parameter estimation procedure for individual assessment of the hepatic insulin degradation (HID) process from standard frequently sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of the Gauss-Newton and Levenberg-Marquardt algorithms, which assures full convergence of the process and contains computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was warranted by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the parameter worst estimated by SAAM II and maintained all model-parameter CV% <20%. In conclusion, our MATLAB-based procedure is suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
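    The damped Gauss-Newton/Levenberg-Marquardt idea behind the estimation procedure can be illustrated on a one-parameter toy problem (a Python sketch rather than MATLAB; the exponential-decay model and synthetic data are assumptions for illustration, not the HID kinetic model):

```python
import numpy as np

# Fit y = exp(-b * t) to synthetic data via a Gauss-Newton step with a
# Levenberg-Marquardt damping term on the normal equations.
t = np.linspace(0.0, 5.0, 50)
y = np.exp(-0.8 * t)          # synthetic "measurements", true b = 0.8

b, lam = 0.1, 1e-3            # initial guess and damping factor
for _ in range(50):
    f = np.exp(-b * t)
    r = y - f                 # residuals
    J = -t * f                # derivative of the model w.r.t. b
    # damped normal-equation step: (J^T J + lam) * db = J^T r
    db = (J @ r) / (J @ J + lam)
    b += db

print(round(b, 3))            # converges to 0.8
```

    In practice, alternating between an undamped Gauss-Newton step (fast near the solution) and a damped Levenberg-Marquardt step (robust far from it) trades speed against stability, which is the rationale the abstract points to.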

  4. Effect of cognitive load on working memory forgetting in aging.

    PubMed

    Baumans, Christine; Adam, Stephane; Seron, Xavier

    2012-01-01

    Functional approaches to working memory (WM) have been proposed recently to better investigate "maintenance" and "processing" mechanisms. The cognitive load (CL) hypothesis presented in the "Time-Based Resource-Sharing" model (Barrouillet & Camos, 2007) suggests that forgetting from WM (maintenance) can be investigated by varying the presentation rate and processing speed (processing). In this study, young and elderly participants were compared on WM tasks in which the difference in processing speed was controlled by CL manipulations. Two main results were found. First, when time constraints (CL) were matched for the two groups, no aging effect was observed. Second, whereas a large variation in CL affected WM performance, a small CL manipulation had no effect on the elderly. This suggests that WM forgetting cannot be completely accounted for by the CL hypothesis. Rather, it highlights the need to explore restoration times in particular, and the nature of the refreshment mechanisms within maintenance.

  5. Process simulation and dynamic control for marine oily wastewater treatment using UV irradiation.

    PubMed

    Jing, Liang; Chen, Bing; Zhang, Baiyu; Li, Pu

    2015-09-15

    UV irradiation and advanced oxidation processes have recently been regarded as promising solutions for removing polycyclic aromatic hydrocarbons (PAHs) from marine oily wastewater. However, such treatment methods are generally not sufficiently understood in terms of reaction mechanisms, process simulation and process control. These deficiencies can drastically hinder their application in the shipping and offshore petroleum industries, which produce bilge/ballast water and produced water as the main streams of marine oily wastewater. In this study, a factorial design of experiment was carried out to investigate the degradation mechanism of a typical PAH, naphthalene, under UV irradiation in seawater. Based on the experimental results, a three-layer feed-forward artificial neural network simulation model was developed to simulate the treatment process and forecast removal performance. A simulation-based dynamic mixed integer nonlinear programming (SDMINP) approach was then proposed to intelligently control the treatment process by integrating the developed simulation model, a genetic algorithm and multi-stage programming. The applicability and effectiveness of the developed approach were further tested through a case study. The experimental results showed that the influences of fluence rate and temperature on the removal of naphthalene were greater than those of salinity and initial concentration. The developed simulation model could well predict the UV-induced removal process under varying conditions. The case study suggested that the SDMINP approach, with the aid of the multi-stage control strategy, was able to significantly reduce treatment cost compared to traditional single-stage process optimization. The developed approach and its concept/framework are highly applicable in other environmental fields where a treatment process is involved and experimentation and modeling are used for process simulation and control. Copyright © 2015 Elsevier Ltd. All rights reserved.
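    A three-layer feed-forward network of the kind used for the removal-performance simulation can be sketched as a bare forward pass (the weights below are random placeholders, not the trained treatment model; the four inputs stand in for fluence rate, temperature, salinity and initial concentration):

```python
import math
import random

# Minimal three-layer feed-forward network: input -> hidden -> output.
random.seed(1)
n_in, n_hid, n_out = 4, 5, 1
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
W2 = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    # Each layer applies a weighted sum followed by a sigmoid squashing.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in W2]

y = forward([0.5, 0.3, 0.2, 0.7])   # normalized inputs
print(y)                            # predicted (normalized) removal
```

    Training such a network on the factorial-design data, and then wrapping it inside an optimizer such as a genetic algorithm, is the pattern the SDMINP approach describes.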

  6. Aging and the neuroeconomics of decision making: A review.

    PubMed

    Brown, Stephen B R E; Ridderinkhof, K Richard

    2009-12-01

    Neuroeconomics refers to a combination of paradigms derived from neuroscience, psychology, and economics for the study of decision making and is an area that has received considerable scientific attention in the recent literature. Using realistic laboratory tasks, researchers seek to study the neurocognitive processes underlying economic decision making and outcome-based decision learning, as well as individual differences in these processes and the social and affective factors that modulate them. To this point, one question has remained largely unanswered: What happens to decision-making processes and their neural substrates during aging? After all, aging is associated with neurocognitive change, which may affect outcome-based decision making. In our study, we use the subjective expected utility model-a well-established decision-making model in economics-as a descriptive framework. After a short survey of the brain areas and neurotransmitter systems associated with outcome-based decision making-and of the effects of aging thereon-we review a number of decision-making studies. Their general data pattern indicates that the decision-making process is changed by age: The elderly perform less efficiently than younger participants, as demonstrated, for instance, by the smaller total rewards that the elderly acquire in lab tasks. These findings are accounted for in terms of age-related deficiencies in the probability and value parameters of the subjective expected utility model. Finally, we discuss some implications and suggestions for future research.

  7. Extended Kalman Doppler tracking and model determination for multi-sensor short-range radar

    NASA Astrophysics Data System (ADS)

    Mittermaier, Thomas J.; Siart, Uwe; Eibert, Thomas F.; Bonerz, Stefan

    2016-09-01

    A tracking solution for collision avoidance in industrial machine tools based on short-range millimeter-wave radar Doppler observations is presented. At the core of the tracking algorithm there is an Extended Kalman Filter (EKF) that provides dynamic estimation and localization in real-time. The underlying sensor platform consists of several homodyne continuous wave (CW) radar modules. Based on in-phase-quadrature (IQ) processing and down-conversion, they provide only Doppler shift information about the observed target. Localization with Doppler shift estimates is a nonlinear problem that needs to be linearized before the linear KF can be applied. The accuracy of state estimation depends highly on the introduced linearization errors, the initialization, and the models that represent the true physics as well as the stochastic properties. The important issue of filter consistency is addressed and an initialization procedure based on data fitting and maximum likelihood estimation is suggested. Models for both measurement and process noise are developed. Tracking results from typical three-dimensional courses of movement at short distances in front of a multi-sensor radar platform are presented.
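    The predict/update cycle of an EKF driven by Doppler-only measurements can be sketched as follows (a single noise-free sensor at the origin, a planar constant-velocity target, and illustrative noise covariances; this is a toy sketch, not the paper's multi-sensor implementation):

```python
import numpy as np

# State s = [x, y, vx, vy]; the sensor observes only the radial (Doppler)
# velocity h(s) = (p . v) / |p|, which is nonlinear and must be linearized.
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1.0]])      # constant-velocity process model
Q = 1e-3 * np.eye(4)                # process noise (assumed)
R = np.array([[1e-2]])              # Doppler measurement noise (assumed)

def h(s):
    p, v = s[:2], s[2:]
    return np.array([p @ v / np.linalg.norm(p)])

def H_jac(s, eps=1e-6):
    # Numeric Jacobian of h: the EKF's linearization step.
    base = h(s)
    J = np.zeros((1, 4))
    for i in range(4):
        d = np.zeros(4)
        d[i] = eps
        J[0, i] = (h(s + d) - base)[0] / eps
    return J

s = np.array([4.5, 2.5, -0.4, 0.5])      # initial state estimate
P = np.eye(4)                            # initial covariance
truth = np.array([4.0, 3.0, -0.5, 0.4])  # hypothetical true target

for _ in range(50):
    truth = F @ truth
    z = h(truth)                 # noise-free measurement for brevity
    # predict
    s = F @ s
    P = F @ P @ F.T + Q
    # update
    Hj = H_jac(s)
    S = Hj @ P @ Hj.T + R        # innovation covariance
    K = P @ Hj.T @ np.linalg.inv(S)
    s = s + K @ (z - h(s))
    P = (np.eye(4) - K @ Hj) @ P

print(h(s) - h(truth))           # Doppler residual shrinks toward 0
```

    With a single Doppler sensor the full state is not observable; the multi-sensor geometry in the paper is what makes localization possible, but the filter mechanics per sensor are as above.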

  8. A neural network model for transference and repetition compulsion based on pattern completion.

    PubMed

    Javanbakht, Arash; Ragan, Charles L

    2008-01-01

    In recent years, because of the fascinating growth of the body of neuroscientific knowledge, psychoanalytic scientists have worked on models for the neurological substrates of key psychoanalytic concepts. Transference is an important example. In this article, the psychological process of transference is described in terms of the neurological function of pattern completion in hippocampal and thalamo-cortical pathways. Similarly, repetition compulsion is seen as another instance of this neurological function; however, it is understood as an attempt at mastery of the unknown, rather than simply mastery of past experiences and perceptions. Based on this suggested model of neurological function, the myth of the psychoanalyst as a blank screen is seen as impossible and ineffective. The mutative effect of psychoanalytic therapy, correcting patterns of pathological relatedness, is described briefly from conscious and unconscious perspectives. While cognitive understanding (insight) helps to modify transferentially restored, maladaptive patterns of relatedness, the development of more adaptive patterns is also contingent upon an affective experience (working through), which alters the neurological substrates of unconscious, pathological affective patterns and their neurological functional correlates.

  9. Research on strategy marine noise map based on i4ocean platform: Constructing flow and key approach

    NASA Astrophysics Data System (ADS)

    Huang, Baoxiang; Chen, Ge; Han, Yong

    2016-02-01

    Noise levels in the marine environment have raised extensive concern in the scientific community. The research is carried out on the i4Ocean platform, following the process of integrating an ocean noise model; extracting, processing, visualizing, and interpreting noise data; and constructing and publishing an ocean noise map. For convenience of numerical computation, a hybrid propagation model that depends on spatial location is suggested, based on the characteristics of the ocean noise field: the normal-mode K/I model is used for the far field and the ray-based CANARY model for the near field. Visualizing marine ambient noise data is critical to understanding and predicting marine noise for relevant decision making. The marine noise map is constructed on a virtual ocean scene. The systematic marine noise visualization framework includes preprocessing, coordinate transformation and interpolation, and rendering. The simulation of ocean noise depends on a realistic sea surface, so the dynamic water simulation grid was improved with GPU fusion to achieve a seamless combination with the visualization of ocean noise. Profile and spherical visualizations, covering both space and time dimensions, were also provided for the vertical field characteristics of ocean ambient noise. Finally, the marine noise map can be published with grid pre-processing and multistage cache technology to better serve the public.

  10. BRAIN MYELINATION IN PREVALENT NEUROPSYCHIATRIC DEVELOPMENTAL DISORDERS

    PubMed Central

    BARTZOKIS, GEORGE

    2008-01-01

    Current concepts of addiction focus on neuronal neurocircuitry and neurotransmitters and are largely based on animal model data, but the human brain is unique in its high myelin content and extended developmental (myelination) phase that continues until middle age. The biology of our exceptional myelination process and factors that influence it have been synthesized into a recently published myelin model of human brain evolution and normal development that cuts across the current symptom-based classification of neuropsychiatric disorders. The developmental perspective of the model suggests that dysregulations in the myelination process contribute to prevalent early-life neuropsychiatric disorders, as well as to addictions. These disorders share deficits in inhibitory control functions that likely contribute to their high rates of comorbidity with addiction and other impulsive behaviors. The model posits that substances such as alcohol and psychostimulants are toxic to the extremely vulnerable myelination process and contribute to the poor outcomes of primary and comorbid addictive disorders in susceptible individuals. By increasing the scientific focus on myelination, the model provides a rational biological framework for the development of novel, myelin-centered treatments that may have widespread efficacy across multiple disease states and could potentially be used in treating, delaying, or even preventing some of the most prevalent and devastating neuropsychiatric disorders. PMID:18668184

  11. Climate and dengue transmission: evidence and implications.

    PubMed

    Morin, Cory W; Comrie, Andrew C; Ernst, Kacey

    2013-01-01

    Climate influences dengue ecology by affecting vector dynamics, agent development, and mosquito/human interactions. Although these relationships are known, the impact climate change will have on transmission is unclear. Climate-driven statistical and process-based models are being used to refine our knowledge of these relationships and predict the effects of projected climate change on dengue fever occurrence, but results have been inconsistent. We sought to identify major climatic influences on dengue virus ecology and to evaluate the ability of climate-based dengue models to describe associations between climate and dengue, simulate outbreaks, and project the impacts of climate change. We reviewed the evidence for direct and indirect relationships between climate and dengue generated from laboratory studies, field studies, and statistical analyses of associations between vectors, dengue fever incidence, and climate conditions. We assessed the potential contribution of climate-driven, process-based dengue models and provide suggestions to improve their performance. Relationships between climate variables and factors that influence dengue transmission are complex. A climate variable may increase dengue transmission potential through one aspect of the system while simultaneously decreasing transmission potential through another. This complexity may at least partly explain inconsistencies in statistical associations between dengue and climate. Process-based models can account for the complex dynamics but often omit important aspects of dengue ecology, notably virus development and host-species interactions. Synthesizing and applying current knowledge of climatic effects on all aspects of dengue virus ecology will help direct future research and enable better projections of climate change effects on dengue incidence.

  12. Lower- Versus Higher-Income Populations In The Alternative Quality Contract: Improved Quality And Similar Spending

    PubMed Central

    Song, Zirui; Rose, Sherri; Chernew, Michael E.; Safran, Dana Gelb

    2018-01-01

    As population-based payment models become increasingly common, it is crucial to understand how such payment models affect health disparities. We evaluated health care quality and spending among enrollees in areas with lower versus higher socioeconomic status in Massachusetts before and after providers entered into the Alternative Quality Contract, a two-sided population-based payment model with substantial incentives tied to quality. We compared changes in process measures, outcome measures, and spending between enrollees in areas with lower and higher socioeconomic status from 2006 to 2012 (outcome measures were measured after the intervention only). Quality improved for all enrollees in the Alternative Quality Contract after their provider organizations entered the contract. Process measures improved 1.2 percentage points per year more among enrollees in areas with lower socioeconomic status than among those in areas with higher socioeconomic status. Outcome measure improvement was no different between the subgroups; neither were changes in spending. Larger or comparable improvements in quality among enrollees in areas with lower socioeconomic status suggest a potential narrowing of disparities. Strong pay-for-performance incentives within a population-based payment model could encourage providers to focus on improving quality for more disadvantaged populations. PMID:28069849

  13. Endpoints in medical communication research, proposing a framework of functions and outcomes.

    PubMed

    de Haes, Hanneke; Bensing, Jozien

    2009-03-01

    The evidence base of medical communication has been underdeveloped, and the field was felt to be in need of thorough empirical investigation. Studying medical communication can help to clarify what happens during medical encounters and, subsequently, whether the behavior displayed is effective. However, before effectiveness can be established, one should argue what functions or goals the communication has and what outcomes are relevant in medical communication research. In the present paper, we first suggest the six-function model of medical communication, based on the integration of earlier models. The model distinguishes (1) fostering the relationship, (2) gathering information, (3) information provision, (4) decision making, (5) enabling disease- and treatment-related behavior, and (6) responding to emotions. Secondly, a framework for endpoints in such research is presented. Immediate, intermediate and long-term outcomes are distinguished on the one hand, and patient-, provider- and process- or context-related outcomes on the other. Based on this framework, priorities can be defined and a tentative hierarchy proposed. Health is suggested to be the primary goal of medical communication, as are patient-related outcomes. Dilemmas are described. Finally, in medical communication research, theory is advocated to link health care provider behavior or skills to outcomes and to connect intermediate outcomes to long-term ones. By linking specific communication elements to concrete endpoints within the six-function model of medical communication, communication will become better integrated within the process of medical care. This is helpful to medical teachers and motivational to medical students. This approach can give medical communication the place it deserves at the center of medical care.

  14. A Fuzzy Cognitive Model of aeolian instability across the South Texas Sandsheet

    NASA Astrophysics Data System (ADS)

    Houser, C.; Bishop, M. P.; Barrineau, C. P.

    2014-12-01

    Characterization of aeolian systems is complicated by rapidly changing surface-process regimes, spatio-temporal scale dependencies, and subjective interpretation of imagery and spatial data. This paper describes the development and application of analytical reasoning to quantify instability of an aeolian environment using scale-dependent information coupled with conceptual knowledge of process and feedback mechanisms. Specifically, a simple Fuzzy Cognitive Model (FCM) for aeolian landscape instability was developed that represents conceptual knowledge of key biophysical processes and feedbacks. Model inputs include satellite-derived surface biophysical and geomorphometric parameters. FCMs are a knowledge-based Artificial Intelligence (AI) technique that merges fuzzy logic and neural computing in which knowledge or concepts are structured as a web of relationships that is similar to both human reasoning and the human decision-making process. Given simple process-form relationships, the analytical reasoning model is able to map the influence of land management practices and the geomorphology of the inherited surface on aeolian instability within the South Texas Sandsheet. Results suggest that FCMs can be used to formalize process-form relationships and information integration analogous to human cognition with future iterations accounting for the spatial interactions and temporal lags across the sand sheets.
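The core of an FCM is a small iterative update: concept activations are pushed through a weighted web of causal links and squashed back into [0, 1]. A minimal sketch of that mechanism follows; the concepts, link weights, and squashing steepness below are hypothetical illustrations, not the calibrated model from the study:

```python
import numpy as np

# Hypothetical concepts and causal weights for an aeolian-instability FCM.
concepts = ["vegetation_cover", "sand_supply", "wind_exposure", "instability"]
W = np.array([  # W[i, j]: causal influence of concept i on concept j
    [0.0, 0.0, 0.0, -0.7],   # more vegetation -> less instability
    [0.0, 0.0, 0.0,  0.5],   # more sand supply -> more instability
    [0.0, 0.0, 0.0,  0.6],   # more wind exposure -> more instability
    [-0.4, 0.0, 0.0, 0.0],   # instability feeds back, suppressing vegetation
])

def step(a, W, lam=2.0):
    """One FCM update: prior state plus incoming influence, sigmoid-squashed."""
    return 1.0 / (1.0 + np.exp(-lam * (a + a @ W)))

a = np.array([0.8, 0.6, 0.7, 0.5])  # initial activations, e.g. from imagery-derived inputs
for _ in range(50):                 # iterate toward a fixed point
    a_next = step(a, W)
    if np.allclose(a, a_next, atol=1e-6):
        break
    a = a_next

print({c: round(float(v), 3) for c, v in zip(concepts, a)})
```

The fixed-point activations can then be mapped per pixel to produce an instability surface, which is the spatial use the paper describes.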

  15. Characterizing and reducing equifinality by constraining a distributed catchment model with regional signatures, local observations, and process understanding

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten

    2017-07-01

Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the Distributed Hydrology Soil Vegetation Model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal time series of snow water equivalent measurements reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and support more careful application of distributed models to simulate spatiotemporal processes at the catchment scale.
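The hierarchical filtering idea reduces to applying acceptance thresholds in sequence, with a parameter set remaining behavioral only if it passes every constraint applied so far. A toy sketch with synthetic skill scores and made-up thresholds (not DHSVM output or the study's criteria):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-criterion skill scores for 10 000 hypothetical parameter sets.
n_sets = 10_000
scores = {
    "regional_signatures": rng.uniform(0, 1, n_sets),
    "hydrograph_fit": rng.uniform(0, 1, n_sets),
    "snow_water_equivalent": rng.uniform(0, 1, n_sets),
    "water_table_pattern": rng.uniform(0, 1, n_sets),
}
thresholds = {  # hypothetical acceptance thresholds per constraint class
    "regional_signatures": 0.5,
    "hydrograph_fit": 0.7,
    "snow_water_equivalent": 0.6,
    "water_table_pattern": 0.8,
}

behavioral = np.ones(n_sets, dtype=bool)
remaining = []
for name, score in scores.items():        # apply constraints hierarchically
    behavioral &= score >= thresholds[name]
    remaining.append(int(behavioral.sum()))
    print(f"after {name}: {remaining[-1]} behavioral sets remain")
```

Because each constraint can only shrink the behavioral set, the counts are monotonically non-increasing, mirroring the progressive reduction the abstract describes.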

  16. High-Quality 3d Models and Their Use in a Cultural Heritage Conservation Project

    NASA Astrophysics Data System (ADS)

    Tucci, G.; Bonora, V.; Conti, A.; Fiorini, L.

    2017-08-01

Cultural heritage digitization and 3D modelling processes are mainly based on laser scanning and digital photogrammetry techniques to produce complete, detailed and photorealistic three-dimensional surveys: both geometric and chromatic aspects, which in turn bear witness to materials, working techniques, state of preservation, etc., are documented through the digitization process. The paper explores the topic of 3D documentation for conservation purposes; it analyses how geomatics contributes to the different steps of a restoration process and presents an overview of the different uses of 3D models for the conservation and enhancement of cultural heritage. The paper reports on the project to digitize the earthenware frieze of the Ospedale del Ceppo in Pistoia (Italy) for 3D documentation, restoration work support, and digital and physical reconstruction and integration purposes. The intent to design an exhibition area suggests new ways to take advantage of 3D data originally acquired for documentation and scientific purposes.

  17. The Role of Mindfulness in Positive Reappraisal

    PubMed Central

    Garland, Eric; Gaylord, Susan; Park, Jongbae

    2009-01-01

    Mindfulness meditation is increasingly well known for therapeutic efficacy in a variety of illnesses and conditions, but its mechanism of action is still under debate in scientific circles. In this paper we propose a hypothetical causal model that argues for the role of mindfulness in positive reappraisal coping. Positive reappraisal is a critical component of meaning-based coping that enables individuals to adapt successfully to stressful life events. Mindfulness, as a metacognitive form of awareness, involves the process of decentering, a shifting of cognitive sets that enables alternate appraisals of life events. We review the concept of positive reappraisal in transactional stress and coping theory; then describe research and traditional literature related to mindfulness and cognitive reappraisal, and detail the central role of mindfulness in the reappraisal process. With this understanding, we present a causal model explicating the proposed mechanism. The discussion has implications for clinical practice, suggesting how mindfulness-based integrative medicine interventions can be designed to support adaptive coping processes. PMID:19114262

  18. Modeling the internal dynamics of energy and mass transfer in an imperfectly mixed ventilated airspace.

    PubMed

    Janssens, K; Van Brecht, A; Zerihun Desta, T; Boonen, C; Berckmans, D

    2004-06-01

The present paper outlines a modeling approach developed to describe the internal dynamics of heat and moisture transfer in an imperfectly mixed ventilated airspace. The approach, which combines the classical heat and moisture balance differential equations with experimental time-series data, provides a physically meaningful description of the process and is well suited to model-based control. The paper illustrates how the approach was applied to a ventilated laboratory test room with internal heat and moisture production. The results are evaluated and some suggestions for future research are put forward. The modeling approach outlined in this study provides an ideal form for advanced model-based control system design: the relatively low number of parameters means that a limited number of identification experiments is sufficient to determine them. The model concept provides information about the air quality and airflow pattern in an arbitrary building. By using the model as a simulation tool, the indoor air quality and airflow pattern can be optimized.
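The classical well-mixed heat balance such an approach starts from is V·ρ·c_p·dT/dt = Q + ṁ·c_p·(T_in − T), with T relaxing toward T_in + Q/(ṁ·c_p). A minimal simulation with illustrative parameter values (not the test-room identification results):

```python
# Well-mixed zone heat balance, explicit Euler integration.
V, rho, c = 50.0, 1.2, 1005.0   # room volume m^3, air density kg/m^3, c_p J/(kg K)
m_dot = 0.06                    # ventilation mass flow, kg/s
Q = 500.0                       # internal sensible heat production, W
T_in, T = 15.0, 15.0            # supply air and initial room temperature, deg C

dt = 1.0                        # time step, s
for _ in range(7200):           # simulate two hours
    dTdt = (Q + m_dot * c * (T_in - T)) / (V * rho * c)
    T += dt * dTdt

print(f"room temperature after 2 h: {T:.2f} degC")
```

With these numbers the time constant is V·ρ/ṁ ≈ 1000 s, so two hours is enough to approach the steady state of roughly T_in + Q/(ṁ·c_p).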

  19. Modeling regeneration responses of big sagebrush (Artemisia tridentata) to abiotic conditions

    USGS Publications Warehouse

    Schlaepfer, Daniel R.; Lauenroth, William K.; Bradford, John B.

    2014-01-01

Ecosystems dominated by big sagebrush, Artemisia tridentata Nuttall (Asteraceae), which are the most widespread ecosystems in semiarid western North America, have been affected by land use practices and invasive species. Loss of big sagebrush and the decline of associated species, such as greater sage-grouse, are a concern to land managers and conservationists. However, big sagebrush regeneration remains difficult to achieve by restoration and reclamation efforts, and no regeneration simulation model has been available. We present here the first process-based, daily time-step simulation model to predict yearly big sagebrush regeneration, including relevant germination and seedling responses to abiotic factors. We estimated values, uncertainty, and importance of 27 model parameters using a total of 1435 site-years of observation. Our model explained 74% of the variability in the number of years with successful regeneration at 46 sites. It also achieved 60% overall accuracy in predicting yearly regeneration success/failure. Our results identify specific future research needed to improve our understanding of big sagebrush regeneration, including data at the subspecies level and improved parameter estimates for the start of seed dispersal, a modified wet thermal-time model of germination, and soil water potential influences. We found that relationships between big sagebrush regeneration and climate conditions were site specific, varying across the distribution of big sagebrush. This indicates that statistical models based on climate are unsuitable for understanding range-wide regeneration patterns or for assessing the potential consequences of changing climate on sagebrush regeneration, and it underscores the value of this process-based model. We used our model to predict potential regeneration across the range of sagebrush ecosystems in the western United States, which confirmed that seedling survival is a limiting factor, whereas germination is not. Our results also suggested that modeled regeneration suitability is necessary but not sufficient to explain sagebrush presence. We conclude that future assessment of big sagebrush responses to climate change will need to account for responses of regenerative stages using a process-based understanding, such as provided by our model.
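A wet thermal-time germination criterion of the kind mentioned above can be sketched as a simple daily accumulator: degree-days above a base temperature count only on days when the seedbed is wet, and germination is predicted once a thermal-time threshold is reached. The base temperature, threshold, and inputs below are placeholder values, not the parameters estimated in the study:

```python
def wet_thermal_time_germination(daily_mean_temp, daily_soil_wet,
                                 t_base=0.0, theta_deg_days=50.0):
    """Return the day index on which germination is predicted, or None.

    Degree-days above t_base accumulate only on wet days; germination
    occurs once the accumulated thermal time reaches theta_deg_days.
    """
    accumulated = 0.0
    for day, (temp, wet) in enumerate(zip(daily_mean_temp, daily_soil_wet)):
        if wet and temp > t_base:
            accumulated += temp - t_base
        if accumulated >= theta_deg_days:
            return day
    return None

# A wet, mild week reaches the threshold; the same week without moisture does not.
temps = [5.0, 8.0, 10.0, 12.0, 11.0, 9.0, 10.0]
print(wet_thermal_time_germination(temps, [True] * 7))
print(wet_thermal_time_germination(temps, [False] * 7))
```

This kind of rule makes the abstract's point concrete: moisture availability, not temperature alone, gates whether thermal time accumulates at all.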

  20. A model to predict accommodations needed by disabled persons.

    PubMed

    Babski-Reeves, Kari; Williams, Sabrina; Waters, Tzer Nan; Crumpton-Young, Lesia L; McCauley-Bell, Pamela

    2005-09-01

    In this paper, several approaches to assist employers in the accommodation process for disabled employees are discussed and a mathematical model is proposed to assist employers in predicting the accommodation level needed by an individual with a mobility-related disability. This study investigates the validity and reliability of this model in assessing the accommodation level needed by individuals utilizing data collected from twelve individuals with mobility-related disabilities. Based on the results of the statistical analyses, this proposed model produces a feasible preliminary measure for assessing the accommodation level needed for persons with mobility-related disabilities. Suggestions for practical application of this model in an industrial setting are addressed.

  1. Functional response and capture timing in an individual-based model: predation by northern squawfish (Ptychocheilus oregonensis) on juvenile salmonids in the Columbia River

    USGS Publications Warehouse

    Petersen, James H.; DeAngelis, Donald L.

    1992-01-01

The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
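The two functional-response forms and the random-versus-contagious timing contrast can both be sketched numerically. The parameter values here are arbitrary illustrations, not the fitted Columbia River estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

def holling_type2(N, a, h):
    """Type 2: captures rise with prey density N but saturate at 1/h."""
    return a * N / (1.0 + a * h * N)

def holling_type3(N, a, h):
    """Type 3: sigmoid response, with the same asymptote 1/h."""
    return a * N ** 2 / (1.0 + a * h * N ** 2)

# Random vs. contagious capture timing: a compound Poisson ("feeding bout")
# process is overdispersed, so its variance-to-mean ratio of daily captures
# exceeds the Poisson renewal value of ~1.
n_days = 10_000
poisson_captures = rng.poisson(3.0, n_days)          # simple Poisson renewal
bouts = rng.poisson(1.0, n_days)                     # bouts per day
bout_captures = np.array([rng.poisson(3.0, b).sum() for b in bouts])

ratio_random = poisson_captures.var() / poisson_captures.mean()
ratio_bouts = bout_captures.var() / bout_captures.mean()
print(f"variance/mean, Poisson renewal: {ratio_random:.2f}")
print(f"variance/mean, feeding bouts:   {ratio_bouts:.2f}")
```

Comparing such dispersion statistics between simulation and field data is one way to see why the clustered (contagious) model was favored.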

  2. The Livingstone Model of a Main Propulsion System

    NASA Technical Reports Server (NTRS)

    Bajwa, Anupa; Sweet, Adam; Korsmeyer, David (Technical Monitor)

    2003-01-01

Livingstone is a discrete, propositional logic-based inference engine that has been used for diagnosis of physical systems. We present a component-based model of a Main Propulsion System (MPS) and describe how it is used with Livingstone (L2) in order to implement a diagnostic system for integrated vehicle health management (IVHM) for the Propulsion IVHM Technology Experiment (PITEX). We start by discussing the process of conceptualizing such a model. We describe graphical tools that facilitated the generation of the model. The model is composed of components (which map onto physical components), connections between components and constraints. A component is specified by variables, with a set of discrete, qualitative values for each variable in its local nominal and failure modes. For each mode, the model specifies the component's behavior and transitions. We describe the MPS components' nominal and fault modes and associated Livingstone variables and data structures. Given this model, and observed external commands and observations from the system, Livingstone tracks the state of the MPS over discrete time-steps by choosing trajectories that are consistent with observations. We briefly discuss how the compiled model fits into the overall PITEX architecture. Finally we summarize our modeling experience, discuss advantages and disadvantages of our approach, and suggest enhancements to the modeling process.

  3. Harmonization of reimbursement and regulatory approval processes: a systematic review of international experiences.

    PubMed

    Tsoi, Bernice; Masucci, Lisa; Campbell, Kaitryn; Drummond, Michael; O'Reilly, Daria; Goeree, Ron

    2013-08-01

A considerable degree of overlap exists between reimbursement and regulatory approval of health technologies, and harmonization of certain aspects is both possible and feasible. Various models of harmonization have been suggested, and a number of practical attempts have drawn on them. Based on a review of the literature, approaches can be categorized into those focused on reducing uncertainty and developing economies of scale in the evidentiary requirements, and those aligning timeframes and logistical aspects of the review process. These strategies can further be classified by the expected level of structural and organizational change required to implement them within existing processes. Passive processes require less modification, whereas active processes are associated with greater restructuring. Attempts at harmonization have so far raised numerous legal and practical issues, and these must be considered when introducing a more harmonized framework into existing regulatory and reimbursement arrangements.

  4. Two-step infiltration of aluminum melts into Al-Ti-B4C-CuO powder mixture pellets

    NASA Astrophysics Data System (ADS)

    Zhang, Jingjing; Lee, Jung-Moo; Cho, Young-Hee; Kim, Su-Hyeon; Yu, Huashun

    2016-03-01

Aluminum matrix composites with a high volume fraction of B4C and TiB2 were fabricated by a novel processing technique, a quick spontaneous infiltration process. The process combines pressureless infiltration with the combustion reaction of Al-Ti-B4C-CuO in molten aluminum. The process is simple and economical: the whole procedure is performed in air within a few minutes. To verify the rapidity of the process, the infiltration kinetics was calculated based on the Washburn equation, in which melt flows into a porous skeleton. However, the calculated results deviated noticeably from the experimental results. Considering the cross-sections of samples at different processing times, a new infiltration model (two-step infiltration), consisting of macro-infiltration and micro-infiltration, is suggested. The kinetics calculated in light of the proposed model agree well with the experimental results.
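The Washburn kinetics used as the baseline comparison predict an infiltration depth growing as the square root of time, l(t) = sqrt(r·γ·cosθ·t / (2·μ)). A sketch with rough, illustrative property values for molten aluminum (not the parameters fitted in the study):

```python
import math

def washburn_depth(t, r, gamma, theta_deg, mu):
    """Washburn infiltration depth (m) after time t (s) into capillaries of radius r (m)."""
    return math.sqrt(r * gamma * math.cos(math.radians(theta_deg)) * t / (2.0 * mu))

r = 1e-6        # effective capillary radius of the powder skeleton, m (illustrative)
gamma = 0.9     # surface tension of molten Al, N/m (approximate)
theta = 60.0    # contact angle, degrees (wetting assumed)
mu = 1.3e-3     # viscosity of molten Al, Pa*s (approximate)

for t in (1.0, 10.0, 60.0):
    depth_mm = washburn_depth(t, r, gamma, theta, mu) * 1000.0
    print(f"t = {t:5.1f} s -> depth ~ {depth_mm:.1f} mm")
```

The sqrt(t) scaling (quadrupling the time doubles the depth) is the signature that the measured cross-sections deviated from, motivating the two-step macro/micro-infiltration model.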

  5. Temporal pattern processing in songbirds.

    PubMed

    Comins, Jordan A; Gentner, Timothy Q

    2014-10-01

Understanding how the brain perceives, organizes and uses patterned information is directly related to the neurobiology of language. Given the present limitations, such knowledge at the scale of neurons, neural circuits and neural populations can only come from non-human models, focusing on shared capacities that are relevant to language processing. Here we review recent advances in the behavioral and neural basis of temporal pattern processing of natural auditory communication signals in songbirds, focusing on European starlings. We suggest a general inhibitory circuit for contextual modulation that can act to control sensory representations based on patterning rules.

  6. Investigating impacts of natural and human-induced environmental changes on hydrological processes and flood hazards using a GIS-based hydrological/hydraulic model and remote sensing data

    NASA Astrophysics Data System (ADS)

    Wang, Lei

Natural and human-induced environmental changes have been altering the earth's surface and hydrological processes, and thus directly contribute to the severity of flood hazards. To understand these changes and their impacts, this research developed a GIS-based hydrological and hydraulic modeling system, which incorporates state-of-the-art remote sensing data to simulate floods under various scenarios. The conceptual framework and technical issues of incorporating multi-scale remote sensing data have been addressed. This research develops an object-oriented hydrological modeling framework. Compared with traditional lumped or cell-based distributed hydrological modeling frameworks, the object-oriented framework allows basic spatial hydrologic units to have various sizes and irregular shapes. This framework is capable of assimilating various GIS and remotely sensed data with different spatial resolutions. It ensures computational efficiency while preserving sufficient spatial detail in input data and model outputs. Sensitivity analysis and comparison of a high-resolution LiDAR DEM with a traditional USGS 30 m resolution DEM suggest that the use of LiDAR DEMs can greatly reduce uncertainty in the calibration of flow parameters in the hydrologic model and hence increase the reliability of modeling results. In addition, subtle topographic features and hydrologic objects like surface depressions and detention basins can be extracted from high-resolution LiDAR DEMs. An innovative algorithm has been developed to efficiently delineate surface depressions and detention basins from LiDAR DEMs. Using a time series of Landsat images, a retrospective analysis of surface imperviousness has been conducted to assess the hydrologic impact of urbanization. The analysis reveals that with rapid urbanization the impervious surface increased from 10.1% to 38.4% in the case study area during 1974-2002. As a result, the peak flow for a 100-year flood event has increased by 20% and the floodplain extent has expanded by about 21.6%. The quantitative analysis suggests that the large regional detention basins have effectively offset the adverse effect of increased impervious surface during the urbanization process. Based on the simulation and scenario analyses of land subsidence and potential climate changes, planning measures and policy implications have been derived for guiding smart urban growth and sustainable resource development and management to minimize flood hazards.

  7. Physician-based activity counseling: intervention effects on mediators of motivational readiness for physical activity.

    PubMed

    Pinto, B M; Lynn, H; Marcus, B H; DePue, J; Goldstein, M G

    2001-01-01

    In theory-based interventions for behavior change, there is a need to examine the effects of interventions on the underlying theoretical constructs and the mediating role of such constructs. These two questions are addressed in the Physically Active for Life study, a randomized trial of physician-based exercise counseling for older adults. Three hundred fifty-five patients participated (intervention n = 181, control n = 174; mean age = 65.6 years). The underlying theories used were the Transtheoretical Model, Social Cognitive Theory and the constructs of decisional balance (benefits and barriers), self-efficacy, and behavioral and cognitive processes of change. Motivational readiness for physical activity and related constructs were assessed at baseline, 6 weeks, and 8 months. Linear or logistic mixed effects models were used to examine intervention effects on the constructs, and logistic mixed effects models were used for mediator analyses. At 6 weeks, the intervention had significant effects on decisional balance, self-efficacy, and behavioral processes, but these effects were not maintained at 8 months. At 6 weeks, only decisional balance and behavioral processes were identified as mediators of motivational readiness outcomes. Results suggest that interventions of greater intensity and duration may be needed for sustained changes in mediators and motivational readiness for physical activity among older adults.

  8. In vivo quantitative bioluminescence tomography using heterogeneous and homogeneous mouse models.

    PubMed

    Liu, Junting; Wang, Yabin; Qu, Xiaochao; Li, Xiangsi; Ma, Xiaopeng; Han, Runqiang; Hu, Zhenhua; Chen, Xueli; Sun, Dongdong; Zhang, Rongqing; Chen, Duofang; Chen, Dan; Chen, Xiaoyuan; Liang, Jimin; Cao, Feng; Tian, Jie

    2010-06-07

Bioluminescence tomography (BLT) is a new optical molecular imaging modality that can monitor both physiological and pathological processes by using bioluminescent light-emitting probes in small living animals. In particular, this technology possesses great potential for drug development, early detection, and therapy monitoring in preclinical settings. In the present study, we developed a dual-modality BLT prototype system with a micro-computed tomography (MicroCT) registration approach, and improved the quantitative reconstruction algorithm based on an adaptive hp finite element method (hp-FEM). Detailed comparisons of source reconstruction between heterogeneous and homogeneous mouse models were performed. The models include mice with an implanted luminescence source and tumor-bearing mice with a firefly luciferase reporter gene. Our data suggest that reconstruction based on the heterogeneous mouse model is more accurate in localization and quantification than the homogeneous mouse model with appropriate optical parameters, and that BLT allows super-early tumor detection in vivo based on tomographic reconstruction of the heterogeneous mouse model signal.

  9. Technical and economic feasibility of integrated video service by satellite

    NASA Technical Reports Server (NTRS)

    Price, Kent M.; Garlow, R. K.; Henderson, T. R.; Kwan, Robert K.; White, L. W.

    1992-01-01

    The trends and roles of satellite based video services in the year 2010 time frame are examined based on an overall network and service model for that period. Emphasis is placed on point to point and multipoint service, but broadcast could also be accommodated. An estimate of the video traffic is made and the service and general network requirements are identified. User charges are then estimated based on several usage scenarios. In order to accommodate these traffic needs, a 28 spot beam satellite architecture with on-board processing and signal mixing is suggested.

  10. Microcomputer-based classification of environmental data in municipal areas

    NASA Astrophysics Data System (ADS)

    Thiergärtner, H.

    1995-10-01

Multivariate data-processing methods used in mineral resource identification can be used to classify urban regions. Using elements of expert systems, geographical information systems, and known classification and prognosis systems, it is possible to outline a single model that consists of resistant and temporary parts of a knowledge base, including graphical input and output treatment, and of resistant and temporary elements of a bank of methods and algorithms. Whereas decision rules created by experts are stored directly in expert systems, powerful classification rules in the form of resistant but latent (implicit) decision algorithms may be implemented in the suggested model. The latent functions are transformed into temporary explicit decision rules by learning processes depending on the actual task(s), parameter set(s), pixel selection(s), and expert control(s). This takes place in both supervised and unsupervised classification of multivariately described pixel sets representing municipal subareas. The model is outlined briefly and illustrated by results obtained in a target area covering part of the city of Berlin (Germany).

  11. Lignocellulosic biorefinery as a model for sustainable development of biofuels and value added products.

    PubMed

    De Bhowmick, Goldy; Sarmah, Ajit K; Sen, Ramkrishna

    2018-01-01

A steady shift of society's dependence from petroleum-based energy resources towards renewable, biomass-based ones has been key to tackling greenhouse gas emissions. Effective use of biomass feedstock, particularly lignocellulosic, has gained worldwide attention lately. Lignocellulosic biomass as a potent bioresource, however, cannot be a sustainable alternative if the production cost is too high and/or the availability is limited. Recycling lignocellulosic biomass from various sources into value added products such as bio-oil, biochar or other biobased chemicals in a biorefinery model is a sensible idea. A combination of integrated conversion techniques along with process integration is suggested as a sustainable approach. A 'series concept' coupling intermittent dark/photo fermentation with co-cultivation of microalgae is conceptualised. While the cost of downstream processing for a single type of feedstock would be high, combining different feedstocks and integrating them in a biorefinery model would lessen the production cost and reduce CO2 emissions.

  12. Emergent neutrality drives phytoplankton species coexistence

    PubMed Central

    Segura, Angel M.; Calliari, Danilo; Kruk, Carla; Conde, Daniel; Bonilla, Sylvia; Fort, Hugo

    2011-01-01

    The mechanisms that drive species coexistence and community dynamics have long puzzled ecologists. Here, we explain species coexistence, size structure and diversity patterns in a phytoplankton community using a combination of four fundamental factors: organism traits, size-based constraints, hydrology and species competition. Using a ‘microscopic’ Lotka–Volterra competition (MLVC) model (i.e. with explicit recipes to compute its parameters), we provide a mechanistic explanation of species coexistence along a niche axis (i.e. organismic volume). We based our model on empirically measured quantities, minimal ecological assumptions and stochastic processes. In nature, we found aggregated patterns of species biovolume (i.e. clumps) along the volume axis and a peak in species richness. Both patterns were reproduced by the MLVC model. Observed clumps corresponded to niche zones (volumes) where species fitness was highest, or where fitness was equal among competing species. The latter implies the action of equalizing processes, which would suggest emergent neutrality as a plausible mechanism to explain community patterns. PMID:21177680
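The emergence of clumps along a niche axis can be reproduced with a generic Lotka-Volterra competition sketch: species at nearby niche positions (similar volumes) compete strongly through a Gaussian kernel. The MLVC model computes its parameters from measured traits; the analytic carrying capacity and kernel below are simple stand-ins, not the paper's recipes:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 40
x = np.linspace(0.0, 1.0, n)                 # niche position (e.g. log organism volume)
K = 1.0 - 0.3 * (x - 0.5) ** 2               # carrying capacity along the axis
sigma = 0.15                                 # competition kernel width
alpha = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * sigma ** 2))

N = rng.uniform(0.001, 0.01, n)              # small initial abundances
dt = 0.05
for _ in range(40_000):                      # dN_i/dt = N_i (1 - sum_j alpha_ij N_j / K_i)
    N *= np.exp(dt * (1.0 - (alpha @ N) / K))  # positivity-preserving update

survivors = np.flatnonzero(N > 1e-3)
print(f"{survivors.size} of {n} species persist; positions: {np.round(x[survivors], 2)}")
```

With strong competition between near neighbors on the niche axis, abundances concentrate into groups of similar-sized species, the aggregated "clump" pattern the study reports in nature.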

  13. The Scope of Usage-Based Theory

    PubMed Central

    Ibbotson, Paul

    2013-01-01

    Usage-based approaches typically draw on a relatively small set of cognitive processes, such as categorization, analogy, and chunking to explain language structure and function. The goal of this paper is to first review the extent to which the “cognitive commitment” of usage-based theory has had success in explaining empirical findings across domains, including language acquisition, processing, and typology. We then look at the overall strengths and weaknesses of usage-based theory and highlight where there are significant debates. Finally, we draw special attention to a set of culturally generated structural patterns that seem to lie beyond the explanation of core usage-based cognitive processes. In this context we draw a distinction between cognition permitting language structure vs. cognition entailing language structure. As well as addressing the need for greater clarity on the mechanisms of generalizations and the fundamental units of grammar, we suggest that integrating culturally generated structures within existing cognitive models of use will generate tighter predictions about how language works. PMID:23658552

  14. Impacts of Subgrid Heterogeneous Mixing between Cloud Liquid and Ice on the Wegener-Bergeron-Findeisen Process and Mixed-phase Clouds in NCAR CAM5

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, M.; Zhang, D.; Wang, Z.; Wang, Y.

    2017-12-01

Mixed-phase clouds are persistently observed over the Arctic, and the phase partitioning between cloud liquid and ice hydrometeors in mixed-phase clouds has important impacts on the surface energy budget and Arctic climate. In this study, we test the NCAR Community Atmosphere Model Version 5 (CAM5) with single-column and weather forecast configurations and evaluate the model performance against observational data from the DOE Atmospheric Radiation Measurement (ARM) Program's M-PACE field campaign in October 2004 and long-term ground-based multi-sensor remote sensing measurements. We find that, like most global climate models, CAM5 poorly simulates the phase partitioning in mixed-phase clouds, significantly underestimating the cloud liquid water content. Assuming pocket structures in the distribution of cloud liquid and ice in mixed-phase clouds, as suggested by in situ observations, provides a plausible way to improve the model performance by reducing the Wegener-Bergeron-Findeisen (WBF) process rate. In this study, the modification of the WBF process in CAM5 is achieved by applying a stochastic perturbation to the time scale of the WBF process for both ice and snow, to account for the heterogeneous mixture of cloud liquid and ice. Our results show that this modification of the WBF process improves the modeled phase partitioning in mixed-phase clouds. The seasonal variation of mixed-phase cloud properties is also better reproduced in the model in comparison with the long-term ground-based remote sensing observations. Furthermore, the phase partitioning is insensitive to the reassignment time step of the perturbations.

  15. The Meeting Point: Where Language Production and Working Memory Share Resources.

    PubMed

    Ishkhanyan, Byurakn; Boye, Kasper; Mogensen, Jesper

    2018-06-07

The interaction between working memory and language processing is widely discussed in cognitive research. However, those studies often explore the relationship between language comprehension and working memory (WM). The role of WM is rarely considered in language production, despite some evidence suggesting a relationship between the two cognitive systems. This study attempts to fill that gap by using a complex span task during language production. We make our predictions based on the reorganization of elementary functions (REF) neurocognitive model, a usage-based theory of grammatical status, and language production models. In accordance with these theories, we expect an overlap between language production and WM at one or more levels of language planning. Our results show that WM is involved at the phonological encoding level of language production and that adding WM load facilitates language production, which leads us to suggest that an extra task-specific storage is created while the task is performed.

  16. Loss Aversion Reflects Information Accumulation, Not Bias: A Drift-Diffusion Model Study.

    PubMed

    Clay, Summer N; Clithero, John A; Harris, Alison M; Reed, Catherine L

    2017-01-01

    Defined as increased sensitivity to losses, loss aversion is often conceptualized as a cognitive bias. However, findings that loss aversion has an attentional or emotional regulation component suggest that it may instead reflect differences in information processing. To distinguish these alternatives, we applied the drift-diffusion model (DDM) to choice and response time (RT) data in a card gambling task with unknown risk distributions. Loss aversion was measured separately for each participant. Dividing the participants into terciles based on loss aversion estimates, we found that the most loss-averse group showed a significantly lower drift rate than the other two groups, indicating overall slower uptake of information. In contrast, neither the starting bias nor the threshold separation (barrier) varied by group, suggesting that decision thresholds are not affected by loss aversion. These results shed new light on the cognitive mechanisms underlying loss aversion, consistent with an account based on information accumulation.
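    The three DDM quantities compared across terciles (drift rate, starting bias, threshold separation) map directly onto a simple simulation. A minimal sketch, with assumed parameter values rather than values fitted to the study's data:

```python
import random

def ddm_trial(drift, threshold=1.0, start=0.0, dt=0.001, noise=1.0, rng=None):
    """One drift-diffusion trial: evidence accumulates with the given drift
    plus Gaussian noise until it crosses +threshold or -threshold.
    Returns (choice, response_time)."""
    rng = rng or random.Random()
    x, t = start, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else -1), t

def mean_rt(drift, n=200, seed=0):
    """Mean response time over n simulated trials."""
    rng = random.Random(seed)
    return sum(ddm_trial(drift, rng=rng)[1] for _ in range(n)) / n
```

    A lower drift rate produces slower mean RTs, the pattern the study reports for the most loss-averse tercile, without any change to the starting point or the threshold.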

  17. Loss Aversion Reflects Information Accumulation, Not Bias: A Drift-Diffusion Model Study

    PubMed Central

    Clay, Summer N.; Clithero, John A.; Harris, Alison M.; Reed, Catherine L.

    2017-01-01

    Defined as increased sensitivity to losses, loss aversion is often conceptualized as a cognitive bias. However, findings that loss aversion has an attentional or emotional regulation component suggest that it may instead reflect differences in information processing. To distinguish these alternatives, we applied the drift-diffusion model (DDM) to choice and response time (RT) data in a card gambling task with unknown risk distributions. Loss aversion was measured separately for each participant. Dividing the participants into terciles based on loss aversion estimates, we found that the most loss-averse group showed a significantly lower drift rate than the other two groups, indicating overall slower uptake of information. In contrast, neither the starting bias nor the threshold separation (barrier) varied by group, suggesting that decision thresholds are not affected by loss aversion. These results shed new light on the cognitive mechanisms underlying loss aversion, consistent with an account based on information accumulation. PMID:29066987

  18. Pitch and Plasticity: Insights from the Pitch Matching of Chords by Musicians with Absolute and Relative Pitch

    PubMed Central

    McLachlan, Neil M.; Marco, David J. T.; Wilson, Sarah J.

    2013-01-01

    Absolute pitch (AP) is a form of sound recognition in which musical note names are associated with discrete musical pitch categories. The accuracy of pitch matching by non-AP musicians for chords has recently been shown to depend on stimulus familiarity, pointing to a role of spectral recognition mechanisms in the early stages of pitch processing. Here we show that pitch matching accuracy by AP musicians was also dependent on their familiarity with the chord stimulus. This suggests that the pitch matching abilities of both AP and non-AP musicians for concurrently presented pitches are dependent on initial recognition of the chord. The dual mechanism model of pitch perception previously proposed by the authors suggests that spectral processing associated with sound recognition primes waveform processing to extract stimulus periodicity and refine pitch perception. The findings presented in this paper are consistent with the dual mechanism model of pitch, and in the case of AP musicians, the formation of nominal pitch categories based on both spectral and periodicity information. PMID:24961624

  19. Distinct Processes Drive Diversification in Different Clades of Gesneriaceae.

    PubMed

    Roalson, Eric H; Roberts, Wade R

    2016-07-01

    Using a time-calibrated phylogenetic hypothesis including 768 Gesneriaceae species (out of approximately 3300 species) and more than 29,000 aligned bases from 26 gene regions, we test Gesneriaceae for diversification rate shifts and the possible proximal drivers of these shifts: geographic distributions, growth forms, and pollination syndromes. Bayesian Analysis of Macroevolutionary Mixtures analyses found five significant rate shifts in Beslerieae, core Nematanthus, core Columneinae, core Streptocarpus, and Pacific Cyrtandra. These rate shifts correspond with shifts in diversification rates, as inferred by the Binary State Speciation and Extinction model and the Geographic State Speciation and Extinction model, associated with hummingbird pollination, epiphytism, unifoliate growth, and geographic area. Our results suggest that diversification processes are extremely variable across Gesneriaceae clades, with different combinations of characters influencing diversification rates in different clades. Diversification patterns between New and Old World lineages show dramatic differences, suggesting that the processes of diversification in Gesneriaceae are very different in these two geographic regions. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. A neural model of figure-ground organization.

    PubMed

    Craft, Edward; Schütze, Hartmut; Niebur, Ernst; von der Heydt, Rüdiger

    2007-06-01

    Psychophysical studies suggest that figure-ground organization is a largely autonomous process that guides--and thus precedes--allocation of attention and object recognition. The discovery of border-ownership representation in single neurons of early visual cortex has confirmed this view. Recent theoretical studies have demonstrated that border-ownership assignment can be modeled as a process of self-organization by lateral interactions within V2 cortex. However, the mechanism proposed relies on propagation of signals through horizontal fibers, which would result in increasing delays of the border-ownership signal with increasing size of the visual stimulus, in contradiction with experimental findings. It also remains unclear how the resulting border-ownership representation would interact with attention mechanisms to guide further processing. Here we present a model of border-ownership coding based on dedicated neural circuits for contour grouping that produce border-ownership assignment and also provide handles for mechanisms of selective attention. The results are consistent with neurophysiological and psychophysical findings. The model makes predictions about the hypothetical grouping circuits and the role of feedback between cortical areas.

  1. Gene tree rooting methods give distributions that mimic the coalescent process.

    PubMed

    Tian, Yuan; Kubatko, Laura S

    2014-01-01

    Multi-locus phylogenetic inference is commonly carried out via models that incorporate the coalescent process to model the possibility that incomplete lineage sorting leads to incongruence between gene trees and the species tree. An interesting question that arises in this context is whether data "fit" the coalescent model. Previous work (Rosenfeld et al., 2012) has suggested that rooting of gene trees may account for variation in empirical data that has been previously attributed to the coalescent process. We examine this possibility using simulated data. We show that, in the case of four taxa, the distribution of gene trees observed from rooting estimated gene trees with either the molecular clock or with outgroup rooting can be closely matched by the distribution predicted by the coalescent model with specific choices of species tree branch lengths. We apply commonly-used coalescent-based methods of species tree inference to assess their performance in these situations. Copyright © 2013 Elsevier Inc. All rights reserved.
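    The coalescent gene-tree distribution being mimicked has a closed form in the classic three-taxon case (the study itself works with four taxa; the smaller case is shown only for illustration). For species tree ((A,B),C) with internal branch length T in coalescent units:

```python
import math

def gene_tree_probs(T):
    """Probabilities of the three rooted gene-tree topologies under the
    coalescent for species tree ((A,B),C) with internal branch length T
    (coalescent units): the matching topology plus two discordant ones."""
    p_discord = math.exp(-T) / 3.0     # each discordant topology
    p_concord = 1.0 - 2.0 * p_discord  # topology matching the species tree
    return p_concord, p_discord, p_discord
```

    As T shrinks toward zero the three topologies approach equal frequency, so suitable branch-length choices can reproduce a wide range of observed gene-tree distributions, which is why variation introduced by rooting can be mistaken for coalescent variation.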

  2. Dynamic effects of root system architecture improve root water uptake in 1-D process-based soil-root hydrodynamics

    NASA Astrophysics Data System (ADS)

    Bouda, Martin; Saiers, James E.

    2017-12-01

    Root system architecture (RSA) can significantly affect plant access to water, total transpiration, as well as its partitioning by soil depth, with implications for surface heat, water, and carbon budgets. Despite recent advances in land surface model (LSM) descriptions of plant hydraulics, descriptions of RSA have not been included because of their three-dimensional complexity, which makes them generally too computationally costly. Here we demonstrate a new, process-based 1D layered model that captures the dynamic shifts in water potential gradients of 3D RSA under different soil moisture conditions: the RSA stencil. Using root systems calibrated to the rooting profiles of four plant functional types (PFT) of the Community Land Model, we show that the RSA stencil predicts plant water potentials to within 2% of the outputs of a full 3D model, under the same assumptions on soil moisture heterogeneity, despite its trivial computational cost, resulting in improved predictions of water uptake and soil moisture compared to a model without RSA in a transient simulation. Our results suggest that LSM predictions of soil moisture dynamics and dependent variables can be improved by the implementation of this model, calibrated for individual PFTs using field observations.
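    As a minimal analogue (not the authors' RSA stencil), root water uptake from a layered soil can be sketched as conductances coupling each layer to a single root water potential, solved so that total uptake equals transpiration demand; all conductance and potential values below are assumptions for illustration.

```python
def root_uptake(psi_soil, k_layer, transpiration):
    """Steady-state uptake per soil layer for one effective root axis:
    layer i couples to a common root potential psi_root through
    conductance k_layer[i]; total uptake equals transpiration demand."""
    k_tot = sum(k_layer)
    psi_root = (sum(k * p for k, p in zip(k_layer, psi_soil))
                - transpiration) / k_tot
    uptake = [k * (p - psi_root) for k, p in zip(k_layer, psi_soil)]
    return psi_root, uptake
```

    Wetter (less negative) layers supply a larger share of the demand, and a sufficiently dry layer can even receive water from the roots; this is the kind of dynamic shift with soil moisture that a 1D representation of RSA is designed to capture.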

  3. A Carbon Cycle Model for the Social-Ecological Process in Coastal Wetland: A Case Study on Gouqi Island, East China

    PubMed Central

    Xiong, Lihu; Zhu, Wenjia

    2017-01-01

    Coastal wetlands offer many important ecosystem services in both natural and social systems. How to simultaneously decrease the destructive effects of human activities and maintain the sustainability of regional wetland ecosystems is an important issue for coastal wetland zones. We use carbon credits as the basis for regional sustainable-development policy-making. Taking the case of Gouqi Island, a typical coastal wetland zone located in the East China Sea, a carbon cycle model was developed to illustrate the complex social-ecological processes. Carbon-related processes in the natural ecosystem, primary industry, secondary industry, tertiary industry, and residents on the island were identified in the model. The model showed that 36780 tons of carbon were released to the atmosphere in the form of CO2 and 51240 tons of carbon were captured by the ecosystem in 2014, and that the three major sources of carbon emissions are transportation, tourism development, and seawater desalination. Based on the carbon-related processes and carbon balance, we propose suggestions for the sustainable development strategy of Gouqi Island as a coastal wetland zone. PMID:28286690
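    The reported fluxes imply a net carbon balance for 2014, which a one-line check confirms:

```python
emitted_t = 36780   # tons of carbon released to the atmosphere as CO2 (2014)
captured_t = 51240  # tons of carbon captured by the ecosystem (2014)
net_sink_t = captured_t - emitted_t  # net uptake: the island is a carbon sink
```

    On these figures the island ecosystem was a net sink of 14460 tons of carbon in 2014.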

  4. Assessing the utility of the willingness/prototype model in predicting help-seeking decisions.

    PubMed

    Hammer, Joseph H; Vogel, David L

    2013-01-01

    Prior research on professional psychological help-seeking behavior has operated on the assumption that the decision to seek help is based on intentional and reasoned processes. However, research on the dual-process prototype/willingness model (PWM; Gerrard, Gibbons, Houlihan, Stock, & Pomery, 2008) suggests health-related decisions may also involve social reaction processes that influence one's spontaneous willingness (rather than planned intention) to seek help, given conducive circumstances. The present study used structural equation modeling to evaluate the ability of these 2 information-processing pathways (i.e., the reasoned pathway and the social reaction pathway) to predict help-seeking decisions among 182 college students currently experiencing clinical levels of psychological distress. Results indicated that when both pathways were modeled simultaneously, only the social reaction pathway independently accounted for significant variance in help-seeking decisions. These findings argue for the utility of the PWM framework in the context of professional psychological help seeking and hold implications for future counseling psychology research, prevention, and practice. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  5. A Carbon Cycle Model for the Social-Ecological Process in Coastal Wetland: A Case Study on Gouqi Island, East China.

    PubMed

    Li, Yanxia; Xiong, Lihu; Zhu, Wenjia

    2017-01-01

    Coastal wetlands offer many important ecosystem services in both natural and social systems. How to simultaneously decrease the destructive effects of human activities and maintain the sustainability of regional wetland ecosystems is an important issue for coastal wetland zones. We use carbon credits as the basis for regional sustainable-development policy-making. Taking the case of Gouqi Island, a typical coastal wetland zone located in the East China Sea, a carbon cycle model was developed to illustrate the complex social-ecological processes. Carbon-related processes in the natural ecosystem, primary industry, secondary industry, tertiary industry, and residents on the island were identified in the model. The model showed that 36780 tons of carbon were released to the atmosphere in the form of CO2 and 51240 tons of carbon were captured by the ecosystem in 2014, and that the three major sources of carbon emissions are transportation, tourism development, and seawater desalination. Based on the carbon-related processes and carbon balance, we propose suggestions for the sustainable development strategy of Gouqi Island as a coastal wetland zone.

  6. Comparative evaluation of urban storm water quality models

    NASA Astrophysics Data System (ADS)

    Vaze, J.; Chiew, Francis H. S.

    2003-10-01

    The estimation of urban storm water pollutant loads is required for the development of mitigation and management strategies to minimize impacts to receiving environments. Event pollutant loads are typically estimated using either regression equations or "process-based" water quality models. The relative merit of using regression models compared to process-based models is not clear. A modeling study is carried out here to evaluate the comparative ability of the regression equations and process-based water quality models to estimate event diffuse pollutant loads from impervious surfaces. The results indicate that, once calibrated, both the regression equations and the process-based model can estimate event pollutant loads satisfactorily. In fact, the loads estimated using the regression equation as a function of rainfall intensity and runoff rate are better than the loads estimated using the process-based model. Therefore, if only estimates of event loads are required, regression models should be used because they are simpler and require less data compared to process-based models.
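    The regression alternative can be sketched with an assumed linear form, load = b0 + b1*I + b2*Q, fitted by ordinary least squares via the normal equations; both the functional form and the synthetic calibration data are illustrative, not from the study.

```python
def fit_load_regression(I, Q, L):
    """Least-squares fit of event pollutant load L = b0 + b1*I + b2*Q,
    where I is rainfall intensity and Q is runoff rate."""
    n = len(L)
    cols = [[1.0] * n, list(I), list(Q)]  # design-matrix columns [1, I, Q]
    # normal equations: (X^T X) b = X^T y
    A = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
    y = [sum(c * l for c, l in zip(ci, L)) for ci in cols]
    # Gauss-Jordan elimination on the 3x3 system
    for i in range(3):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        y[i] /= piv
        for j in range(3):
            if j != i:
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
                y[j] -= f * y[i]
    return y  # [b0, b1, b2]
```

    Once calibrated on monitored events, such an equation predicts event loads from two readily available inputs, which is the simplicity argument the study makes in favor of regression models.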

  7. Statistical synthesis of contextual knowledge to increase the effectiveness of theory-based behaviour change interventions.

    PubMed

    Hanbury, Andria; Thompson, Carl; Mannion, Russell

    2011-07-01

    Tailored implementation strategies targeting health professionals' adoption of evidence-based recommendations are currently being developed. Research has focused on how to select an appropriate theoretical base, how to use that theoretical base to explore the local context, and how to translate theoretical constructs associated with the key factors found to influence innovation adoption into feasible and tailored implementation strategies. The reasons why an intervention is thought not to have worked are often cited as being: inappropriate choice of theoretical base; unsystematic development of the implementation strategies; and a poor evidence base to guide the process. One area of implementation research that is commonly overlooked is how to synthesize the data collected in a local context in order to identify what factors to target with the implementation strategies. This is suggested to be a critical process in the development of a theory-based intervention. The potential of multilevel modelling techniques to synthesize data collected at different hierarchical levels, for example, individual attitudes and team level variables, is discussed. Future research is needed to explore further the potential of multilevel modelling for synthesizing contextual data in implementation studies, as well as techniques for synthesizing qualitative and quantitative data.

  8. Self-organizing map models of language acquisition

    PubMed Central

    Li, Ping; Zhao, Xiaowei

    2013-01-01

    Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic parallel distributed processing architecture, recent models of language have explored various other types of models, including self-organizing models for language acquisition. In this paper, we aim at providing a review of the latter type of models, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development. We suggest future directions in which these models can be extended, to better connect with behavioral and neural data, and to make clear predictions in testing relevant psycholinguistic theories. PMID:24312061
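    A self-organizing map can be sketched in a few lines; this toy version uses a 1-D chain of units with 2-D inputs and assumed learning parameters, and is not any specific model from the review.

```python
import math
import random

def train_som(data, n_units=10, epochs=50, lr0=0.5, radius0=3.0, seed=0):
    """Minimal 1-D self-organizing map for 2-D inputs: the best-matching
    unit and its neighbours on the chain move toward each input, with
    learning rate and neighbourhood radius decaying over epochs."""
    rng = random.Random(seed)
    w = [[rng.random(), rng.random()] for _ in range(n_units)]
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)
        radius = max(radius0 * (1.0 - e / epochs), 0.5)
        for x in data:
            bmu = min(range(n_units),
                      key=lambda i: (w[i][0] - x[0]) ** 2 + (w[i][1] - x[1]) ** 2)
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                w[i][0] += lr * h * (x[0] - w[i][0])
                w[i][1] += lr * h * (x[1] - w[i][1])
    return w
```

    After training on two well-separated input clusters, units self-organize so that different parts of the map specialize on different inputs, the basic mechanism that self-organizing models of lexical development build on.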

  9. Efficient Testing Combining Design of Experiment and Learn-to-Fly Strategies

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Brandon, Jay M.

    2017-01-01

    Rapid modeling and efficient testing methods are important in a number of aerospace applications. In this study efficient testing strategies were evaluated in a wind tunnel test environment and combined to suggest a promising approach for both ground-based and flight-based experiments. Benefits of using Design of Experiment techniques, well established in scientific, military, and manufacturing applications are evaluated in combination with newly developing methods for global nonlinear modeling. The nonlinear modeling methods, referred to as Learn-to-Fly methods, utilize fuzzy logic and multivariate orthogonal function techniques that have been successfully demonstrated in flight test. The blended approach presented has a focus on experiment design and identifies a sequential testing process with clearly defined completion metrics that produce increased testing efficiency.

  10. Semantics-Based Interoperability Framework for the Geosciences

    NASA Astrophysics Data System (ADS)

    Sinha, A.; Malik, Z.; Raskin, R.; Barnes, C.; Fox, P.; McGuinness, D.; Lin, K.

    2008-12-01

    Interoperability between heterogeneous data, tools and services is required to transform data to knowledge. To meet geoscience-oriented societal challenges such as the forcing of climate change induced by volcanic eruptions, we suggest the need to develop semantic interoperability for data, services, and processes. Because such scientific endeavors require integration of multiple databases associated with global enterprises, implicit semantic-based integration is impossible. Instead, explicit semantics are needed to facilitate interoperability and integration. Although different types of integration models are available (syntactic or semantic), we suggest that semantic interoperability is likely to be the most successful pathway. Clearly, the geoscience community would benefit from utilization of existing XML-based data models, such as GeoSciML, WaterML, etc., to rapidly advance semantic interoperability and integration. We recognize that such integration will require "meanings-based search, reasoning and information brokering", which will be facilitated through inter-ontology relationships (ontologies defined for each discipline). We suggest that markup languages (MLs) and ontologies can be seen as "data integration facilitators", working at different abstraction levels. Therefore, we propose to use an ontology-based data registration and discovery approach to complement markup languages through semantic data enrichment. Ontologies allow the use of formal and descriptive logic statements, which permits expressive query capabilities for data integration through reasoning. We have developed domain ontologies (EPONT) to capture the concepts behind data. EPONT ontologies are associated with existing ontologies such as SUMO, DOLCE and SWEET. Although significant efforts have gone into developing data (object) ontologies, we advance the idea of developing semantic frameworks for additional ontologies that deal with processes and services.
This evolutionary step will facilitate the integrative capabilities of scientists as we examine the relationships between data and external factors such as processes that may influence our understanding of "why" certain events happen. We emphasize the need to go from analysis of data to concepts related to scientific principles of thermodynamics, kinetics, heat flow, mass transfer, etc. Towards meeting these objectives, we report on a pair of related service engines: DIA (Discovery, integration and analysis), and SEDRE (Semantically-Enabled Data Registration Engine) that utilize ontologies for semantic interoperability and integration.

  11. Stimulus-response compatibility and psychological refractory period effects: implications for response selection

    NASA Technical Reports Server (NTRS)

    Lien, Mei-Ching; Proctor, Robert W.

    2002-01-01

    The purpose of this paper was to provide insight into the nature of response selection by reviewing the literature on stimulus-response compatibility (SRC) effects and the psychological refractory period (PRP) effect individually and jointly. The empirical findings and theoretical explanations of SRC effects that have been studied within a single-task context suggest that there are two response-selection routes: automatic activation and intentional translation. In contrast, all major PRP models reviewed in this paper have treated response selection as a single processing stage. In particular, the response-selection bottleneck (RSB) model assumes that the processing of Task 1 and Task 2 comprises two separate streams and that the PRP effect is due to a bottleneck located at response selection. Yet, considerable evidence from studies of SRC in the PRP paradigm shows that the processing of the two tasks is more interactive than is suggested by the RSB model and by most other models of the PRP effect. The major implication drawn from the studies of SRC effects in the PRP context is that response activation is a distinct process from final response selection. Response activation is based on both long-term and short-term task-defined S-R associations and occurs automatically and in parallel for the two tasks. The final response selection is an intentional act required even for highly compatible and practiced tasks and is restricted to processing one task at a time. Investigations of SRC effects and response-selection variables in dual-task contexts should be conducted more systematically because they provide significant insight into the nature of response-selection mechanisms.

  12. Prospect theory reflects selective allocation of attention.

    PubMed

    Pachur, Thorsten; Schulte-Mecklenbeck, Michael; Murphy, Ryan O; Hertwig, Ralph

    2018-02-01

    There is a disconnect in the literature between analyses of risky choice based on cumulative prospect theory (CPT) and work on predecisional information processing. One likely reason is that for expectation models (e.g., CPT), it is often assumed that people behaved only as if they conducted the computations leading to the predicted choice and that the models are thus mute regarding information processing. We suggest that key psychological constructs in CPT, such as loss aversion and outcome and probability sensitivity, can be interpreted in terms of attention allocation. In two experiments, we tested hypotheses about specific links between CPT parameters and attentional regularities. Experiment 1 used process tracing to monitor participants' predecisional attention allocation to outcome and probability information. As hypothesized, individual differences in CPT's loss-aversion, outcome-sensitivity, and probability-sensitivity parameters (estimated from participants' choices) were systematically associated with individual differences in attention allocation to outcome and probability information. For instance, loss aversion was associated with the relative attention allocated to loss and gain outcomes, and a more strongly curved weighting function was associated with less attention allocated to probabilities. Experiment 2 manipulated participants' attention to losses or gains, causing systematic differences in CPT's loss-aversion parameter. This result indicates that attention allocation can to some extent cause choice regularities that are captured by CPT. Our findings demonstrate an as-if model's capacity to reflect characteristics of information processing. We suggest that the observed CPT-attention links can be harnessed to inform the development of process models of risky choice. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
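    The CPT constructs the study links to attention have standard functional forms; a sketch using the well-known Tversky-Kahneman median parameter estimates (the alpha, beta, lambda, and gamma defaults are those published estimates, used here only for illustration):

```python
def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and
    steeper (loss-averse, lam > 1) for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def cpt_weight(p, gamma=0.61):
    """Inverse-S probability weighting function: overweights small
    probabilities and underweights large ones."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))
```

    Here lam > 1 makes losses loom larger than gains, and a smaller gamma bends the weighting function more strongly, the parameter the study found associated with less attention allocated to probabilities.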

  13. Bioplausible multiscale filtering in retino-cortical processing as a mechanism in perceptual grouping.

    PubMed

    Nematzadeh, Nasim; Powers, David M W; Lewis, Trent W

    2017-12-01

    Why does our visual system fail to reconstruct reality when we look at certain patterns? Where in the visual pathway do geometrical illusions start to emerge? How far can computational models of vision go toward detecting illusions the way we do? This study addresses these questions by focusing on a specific underlying neural mechanism involved in our visual experiences that affects our final perception. Among many types of visual illusion, 'Geometrical' and, in particular, 'Tilt Illusions' are rather important, being characterized by misperception of geometric patterns involving lines and tiles in combination with contrasting orientation, size or position. Over the last decade, many new neurophysiological experiments have led to new insights as to how, when and where retinal processing takes place, and the encoding nature of the retinal representation that is sent to the cortex for further processing. Based on these neurobiological discoveries, we provide computer simulation evidence from modelling retinal ganglion cell responses to some complex Tilt Illusions, suggesting that the emergence of tilt in these illusions is partially related to the interaction of multiscale visual processing performed in the retina. The output of our low-level filtering model is presented for several types of Tilt Illusion, predicting that the final tilt percept arises from multiple-scale processing of the Differences of Gaussians and the perceptual interaction of foreground and background elements. The model is a variation of the classical receptive field implementation for simple cells in early stages of vision, with the scales tuned to the object/texture sizes in the pattern. Our results suggest that this model has high potential for revealing the underlying mechanism connecting low-level filtering approaches to mid- and high-level explanations such as 'Anchoring theory' and 'Perceptual grouping'.
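    The Difference-of-Gaussians filtering at the core of the model can be sketched in one dimension; the scales, kernel radius, and step-edge stimulus below are assumptions for illustration.

```python
import math

def dog_kernel(sigma_c, sigma_s, radius):
    """1-D Difference-of-Gaussians kernel: a narrow excitatory centre minus
    a broader inhibitory surround, as in retinal ganglion cell models."""
    def g(sigma):
        k = [math.exp(-(i * i) / (2 * sigma * sigma))
             for i in range(-radius, radius + 1)]
        s = sum(k)
        return [v / s for v in k]  # normalize each Gaussian to sum to 1
    return [c - s for c, s in zip(g(sigma_c), g(sigma_s))]

def convolve(signal, kernel):
    """Direct convolution with clamp-to-edge border handling."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += kv * signal[idx]
        out.append(acc)
    return out
```

    Because the kernel sums to zero, uniform regions produce no response while the band-pass response peaks near a luminance edge; the full model applies such filters at multiple scales tuned to the sizes of pattern elements.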

  14. From service provision to function based performance - perspectives on public health systems from the USA and Israel

    PubMed Central

    2012-01-01

    If public health agencies are to fulfill their overall mission, they need to have defined measurable targets and should structure services to reach these targets, rather than offer a combination of ill-targeted programs. In order to do this, it is essential that there be a clear definition of what public health should do: a definition that does not ebb and flow based upon the prevailing political winds, but rather is based upon professional standards and measurements. The establishment of the Essential Public Health Services framework in the U.S.A. was a major move in that direction, and the model, or revisions of the model, have been adopted beyond the borders of the U.S. This article reviews the U.S. public health system, the needs and processes which brought about the development of the 10 Essential Public Health Services (EPHS), and historical and contemporary applications of the model. It highlights the value of establishing a common delineation of public health activities such as those contained in the EPHS, and explores the validity of using the same process in other countries through a discussion of the development in Israel of a similar model, the 10 Public Health Essential Functions (PHEF), that describes the activities of Israel’s public health system. The use of the same process and framework to develop similar yet distinct frameworks suggests that the process has wide applicability, and may be beneficial to any public health system. Once a model is developed, it can be used to measure public health performance and improve the quality of services delivered through the development of standards and measures based upon the model, which could, ultimately, improve the health of the communities that depend upon public health agencies to protect their well-being. PMID:23181452

  15. Dynamic molecular confinement in the plasma membrane by microdomains and the cytoskeleton meshwork

    PubMed Central

    Lenne, Pierre-François; Wawrezinieck, Laure; Conchonaud, Fabien; Wurtz, Olivier; Boned, Annie; Guo, Xiao-Jun; Rigneault, Hervé; He, Hai-Tao; Marguet, Didier

    2006-01-01

    It is by now widely recognized that cell membranes show complex patterns of lateral organization. Two mechanisms involving either a lipid-dependent (microdomain model) or cytoskeleton-based (meshwork model) process are thought to be responsible for these plasma membrane organizations. In the present study, fluorescence correlation spectroscopy measurements on various spatial scales were performed in order to directly identify and characterize these two processes in live cells with a high temporal resolution, without any loss of spatial information. Putative raft markers were found to be dynamically compartmented within tens of milliseconds into small microdomains (∅<120 nm) that are sensitive to the cholesterol and sphingomyelin levels, whereas actin-based cytoskeleton barriers are responsible for the confinement of the transferrin receptor protein. A free-like diffusion was observed when both the lipid-dependent and cytoskeleton-based organizations were disrupted, which suggests that these are two main compartmentalizing forces at work in the plasma membrane. PMID:16858413

  16. Method Development for Clinical Comprehensive Evaluation of Pediatric Drugs Based on Multi-Criteria Decision Analysis: Application to Inhaled Corticosteroids for Children with Asthma.

    PubMed

    Yu, Yuncui; Jia, Lulu; Meng, Yao; Hu, Lihua; Liu, Yiwei; Nie, Xiaolu; Zhang, Meng; Zhang, Xuan; Han, Sheng; Peng, Xiaoxia; Wang, Xiaoling

    2018-04-01

    Establishing a comprehensive clinical evaluation system is critical in enacting national drug policy and promoting rational drug use. In China, the 'Clinical Comprehensive Evaluation System for Pediatric Drugs' (CCES-P) project, which aims to compare drugs based on clinical efficacy and cost effectiveness to help decision makers, was recently proposed; therefore, a systematic and objective method is required to guide the process. An evidence-based multi-criteria decision analysis model that involved an analytic hierarchy process (AHP) was developed, consisting of nine steps: (1) select the drugs to be reviewed; (2) establish the evaluation criterion system; (3) determine the criterion weight based on the AHP; (4) construct the evidence body for each drug under evaluation; (5) select comparative measures and calculate the original utility score; (6) place a common utility scale and calculate the standardized utility score; (7) calculate the comprehensive utility score; (8) rank the drugs; and (9) perform a sensitivity analysis. The model was applied to the evaluation of three different inhaled corticosteroids (ICSs) used for asthma management in children (a total of 16 drugs with different dosage forms and strengths or different manufacturers). By applying the drug analysis model, the 16 ICSs under review were successfully scored and evaluated. Budesonide suspension for inhalation (drug ID number: 7) ranked the highest, with comprehensive utility score of 80.23, followed by fluticasone propionate inhaled aerosol (drug ID number: 16), with a score of 79.59, and budesonide inhalation powder (drug ID number: 6), with a score of 78.98. In the sensitivity analysis, the ranking of the top five and lowest five drugs remains unchanged, suggesting this model is generally robust. An evidence-based drug evaluation model based on AHP was successfully developed. 
The model incorporates sufficient utility and flexibility for aiding the decision-making process, and can be a useful tool for the CCES-P.
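
    As a minimal sketch of steps (3) and (7) above, assuming Saaty-style pairwise comparisons and a simple weighted sum for the comprehensive utility score (the CCES-P's actual criteria, scales, and weights are not specified here):

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive criterion weights from a pairwise comparison matrix
    via the principal eigenvector (Saaty's AHP method)."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum()          # normalize weights to sum to 1

def comprehensive_utility(std_scores, weights):
    """Weighted sum of standardized per-criterion utility scores."""
    return float(np.dot(std_scores, weights))

# Illustrative 3-criterion matrix: criterion 1 judged twice as
# important as criterion 2 and four times as important as criterion 3.
A = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
w = ahp_weights(A)                              # ≈ [0.571, 0.286, 0.143]
score = comprehensive_utility([80, 75, 90], w)  # comprehensive utility
```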

  17. Lava channel formation during the 2001 eruption on Mount Etna: evidence for mechanical erosion.

    PubMed

    Ferlito, Carmelo; Siewert, Jens

    2006-01-20

    We report the direct observation of a peculiar lava channel that was formed near the base of a parasitic cone during the 2001 eruption on Mount Etna. Erosive processes by flowing lava are commonly attributed to thermal erosion. However, field evidence strongly suggests that models of thermal erosion cannot explain the formation of this channel. Here, we put forward the idea that the essential erosion mechanism was abrasive wear. By applying a simple model from tribology we demonstrate that the available data agree favorably with our hypothesis. Consequently, we propose that erosional processes resembling the wear phenomena in glacial erosion are possible in a volcanic environment.

  18. Effect of plasma spray processing variations on particle melting and splat spreading of hydroxylapatite and alumina

    NASA Astrophysics Data System (ADS)

    Yankee, S. J.; Pletka, B. J.

    1993-09-01

    Splats of hydroxylapatite (HA) and alumina were obtained via plasma spraying using systematically varied combinations of plasma velocity and temperature, which were achieved by altering the primary plasma gas flow rate and plasma gas composition. Particle size was also varied in the case of alumina. Splat spreading was quantified via computer-aided image analysis as a function of processing variations. A comparison of the predicted splat dimensions from a model developed by Madejski with experimental observations of HA and alumina splats was performed. The model tended to underestimate the HA splat sizes, suggesting that evaporation of smaller particles occurred under the chosen experimental conditions, and to overestimate the observed alumina splat dimensions. Based on this latter result and on the surface appearance of the substrates, incomplete melting appeared to take place in all but the smaller alumina particles. Analysis of the spreading data as a function of the processing variations indicated that the particle size as well as the plasma temperature and velocity influenced the extent of particle melting. Based on these data and other considerations, a physical model was developed that described the degree of particle melting in terms of material and processing parameters. The physical model correctly predicted the relative splat spreading behavior of HA and alumina, assuming that spreading was directly linked to the extent of particle melting.
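
    The Madejski comparison can be illustrated with the commonly cited viscous-dissipation limit of his splat model, xi = D/d ≈ 1.2941 Re^0.2; the material properties below are illustrative placeholders, not the study's measured values:

```python
def madejski_flattening_ratio(density, velocity, diameter, viscosity):
    """Madejski's viscous-dissipation limit for splat flattening:
    xi = D/d = 1.2941 * Re**0.2, with Re = rho * v * d / mu.
    This simplification neglects surface tension and solidification."""
    re = density * velocity * diameter / viscosity
    return 1.2941 * re ** 0.2

# Illustrative molten-droplet properties (placeholders, not measured data)
d = 30e-6                                  # particle diameter, m
xi = madejski_flattening_ratio(density=3000.0,   # kg/m^3
                               velocity=150.0,   # m/s
                               diameter=d,
                               viscosity=0.03)   # Pa*s
splat_diameter = xi * d                    # predicted splat diameter, m
```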

  19. Examining the Relationships Between Education, Social Networks and Democratic Support With ABM

    NASA Technical Reports Server (NTRS)

    Drucker, Nick; Campbell, Kenyth

    2011-01-01

    This paper introduces an agent-based model that explores the relationships between education, social networks, and support for democratic ideals. The study examines two factors that affect democratic support: education and social networks. Current theory concerning these two variables suggests that positive relationships exist between education and democratic support and between social networks and the spread of ideas. The model contains multiple variables of democratic support, two of which are evaluated through experimentation. The model allows individual entities within the system to make "decisions" about their democratic support independently of one another. The agent-based approach also allows entities to use their social networks to spread ideas. The experimentation results are consistent with current theory. In addition, these results show the model is capable of reproducing real-world outcomes. This paper addresses the model creation process and the experimentation procedure, as well as future research avenues and potential shortcomings of the model.
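
    A toy sketch of the kind of agent-based dynamics described, with assumed update weights and a random social network (not the authors' model):

```python
import random

def simulate(n_agents=100, steps=50, seed=1):
    """Toy agent-based sketch: each agent holds a democratic-support
    value in [0, 1]; education nudges support upward, and agents
    partially adopt the mean support of their network contacts."""
    rng = random.Random(seed)
    support = [rng.random() for _ in range(n_agents)]
    education = [rng.random() for _ in range(n_agents)]
    # random social network: each agent gets 4 random contacts
    network = [rng.sample(range(n_agents), 4) for _ in range(n_agents)]
    for _ in range(steps):
        new = []
        for i in range(n_agents):
            peer_mean = sum(support[j] for j in network[i]) / len(network[i])
            # independent "decision": blend own view, peers, and education
            new.append(0.6 * support[i] + 0.3 * peer_mean + 0.1 * education[i])
        support = new
    return support

final = simulate()
```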

  20. Modeling plankton ecosystem functioning and nitrogen fluxes in the oligotrophic waters of the Beaufort Sea, Arctic Ocean: a focus on light-driven processes

    NASA Astrophysics Data System (ADS)

    Le Fouest, V.; Zakardjian, B.; Xie, H.; Raimbault, P.; Joux, F.; Babin, M.

    2013-07-01

    The Arctic Ocean (AO) undergoes profound changes of its physical and biotic environments due to climate change. In some areas of the Beaufort Sea, the stronger haline stratification observed in summer alters plankton ecosystem structure, functioning, and productivity, promoting oligotrophy. A one-dimensional (1-D) physical-biological coupled model based on the large multiparametric database of the Malina project in the Beaufort Sea was used (i) to infer plankton ecosystem functioning and related nitrogen fluxes and (ii) to assess the model's sensitivity to key light-driven processes involved in nutrient recycling and phytoplankton growth. The coupled model suggested that ammonium photochemically produced from photosensitive dissolved organic nitrogen (i.e., the photoammonification process) was a necessary nitrogen source to achieve the observed levels of microbial biomass and production. Directly and indirectly (by stimulating microbial food web activity), photoammonification contributed 70% and 18.5% of simulated primary production in the 0-10 m layer and the whole water column, respectively (66% and 16%, respectively, for bacterial production). The model also suggested that variable carbon-to-chlorophyll ratios were required to simulate the observed herbivorous versus microbial food web competition and realistic nitrogen fluxes in the oligotrophic waters of the Beaufort Sea. In the face of accelerating Arctic warming, more attention should be paid to the mechanistic processes involved in food webs and functional group competition, nutrient recycling, and primary production in the poorly productive waters of the AO, as these waters are expected to expand rapidly.

  1. High-throughput micro-scale cultivations and chromatography modeling: Powerful tools for integrated process development.

    PubMed

    Baumann, Pascal; Hahn, Tobias; Hubbuch, Jürgen

    2015-10-01

    Upstream processes are rather complex to design, and the productivity of cells under suitable cultivation conditions is hard to predict. The method of choice for examining the design space is to execute high-throughput cultivation screenings in micro-scale format. Various predictive in silico models have been developed for many downstream processes, leading to a reduction of time and material costs. This paper presents a combined optimization approach based on high-throughput micro-scale cultivation experiments and chromatography modeling. The overall optimal system need not be the one with the highest product titers, but the one resulting in superior overall process performance in up- and downstream. The methodology is presented in a case study for the Cherry-tagged enzyme Glutathione-S-Transferase from Escherichia coli SE1. The Cherry-Tag™ (Delphi Genetics, Belgium), which can be fused to any target protein, allows for direct product analytics by simple VIS absorption measurements. High-throughput cultivations were carried out in a 48-well format in a BioLector micro-scale cultivation system (m2p-Labs, Germany). The downstream process optimization for a set of randomly picked upstream conditions producing high yields was performed in silico using chromatography modeling software developed in-house (ChromX). The suggested in silico-optimized operational modes for product capturing were subsequently validated. The overall best system was chosen based on a combination of excellent up- and downstream performance. © 2015 Wiley Periodicals, Inc.

  2. Crop monitoring & yield forecasting system based on Synthetic Aperture Radar (SAR) and process-based crop growth model: Development and validation in South and South East Asian Countries

    NASA Astrophysics Data System (ADS)

    Setiyono, T. D.

    2014-12-01

    Accurate and timely information on rice crop growth and yield helps governments and other stakeholders adapt their economic policies and enables relief organizations to better anticipate and coordinate relief efforts in the wake of a natural catastrophe. Such delivery of rice growth and yield information is made possible by regular earth observation using space-borne Synthetic Aperture Radar (SAR) technology combined with a crop modeling approach to estimate yield. Radar-based remote sensing is capable of observing rice vegetation growth irrespective of cloud coverage, an important feature given that in incidences of flooding the sky is often cloud-covered. The system allows rapid damage assessment over the area of interest. Rice yield monitoring is based on a crop growth simulation and SAR-derived key information, particularly start of season and leaf growth rate. Results from pilot study sites in South and South East Asian countries suggest that incorporating SAR data into the crop model improves the estimation of actual yields. Remote-sensing data assimilation into the crop model effectively captures the responses of rice crops to environmental conditions over large spatial coverage, which is otherwise practically impossible to achieve. Such improvement of actual yield estimates offers practical applications, for example in crop insurance programs. A process-based crop simulation model is used in the system to ensure climate information is adequately captured and to enable mid-season yield forecasts.

  3. Craving's place in addiction theory: contributions of the major models.

    PubMed

    Skinner, Marilyn D; Aubin, Henri-Jean

    2010-03-01

    We examine in this paper the unfolding of craving concepts within 18 models that span roughly 60 years (1948-2009). The amassed evidence suggests that craving is an indispensable construct, useful as a research area because it has continued to destabilize patients seeking treatment for substances. The models fall into four categories: the conditioning-based models, the cognitive models, the psychobiological models, and the motivation models. In the conditioning models, craving is assumed to be an automatic, unconscious reaction to a stimulus. In the cognitive models, craving arises from the operation of information processing systems. In the psychobiological models, craving can be explained at least in part by biological factors with an emphasis on motivational components. Finally, in the motivation models, craving is viewed as a component of a larger decision-making framework. It is well accepted that no single model explains craving completely, suggesting that a solid understanding of the phenomenon will only occur with consideration from multiple angles. A reformulated definition of craving is proposed. (c) 2009 Elsevier Ltd. All rights reserved.

  4. Big data learning and suggestions in modern apps

    NASA Astrophysics Data System (ADS)

    Sharma, G.; Nadesh, R. K.; ArivuSelvan, K.

    2017-11-01

    Among the many tasks involved in emergent location-based applications, such as recommending touring places and advertising based on destination, destination prediction is vital. Destination prediction involves determining the probability of a location (destination) given historical trajectories. In this paper, a destination prediction approach based on a probabilistic machine-learning model (feed-forward neural networks) is presented, which works by observing a driver's habits. Some individuals drive to the same locations, such as work, along the same route every day of the working week. Here, real-time driving data is streamed through a Kafka queue into Apache Storm for real-time processing, with the data finally stored in MongoDB.
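
    As an illustrative sketch of the feed-forward classification step, assuming toy trip features (scaled hour and weekday) and hypothetical work/home labels; the paper's Kafka/Storm pipeline and actual feature set are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy trips: features [hour/24, weekday/6]; labels 0 = work, 1 = home
X = np.array([[8/24, d/6] for d in range(5)] + [[18/24, d/6] for d in range(5)])
y = np.array([0] * 5 + [1] * 5)
onehot = np.eye(2)[y]

# One-hidden-layer feed-forward network trained by gradient descent
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)               # hidden layer
    p = softmax(h @ W2 + b2)               # destination probabilities
    dz2 = (p - onehot) / len(X)            # cross-entropy gradient
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh = (dz2 @ W2.T) * (1 - h ** 2)       # backprop through tanh
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    W1 -= dW1; b1 -= db1
    W2 -= dW2; b2 -= db2

pred = softmax(np.tanh(X @ W1 + b1) @ W2 + b2).argmax(axis=1)
```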

  5. Community-based Participatory Research

    PubMed Central

    Holkup, Patricia A.; Tripp-Reimer, Toni; Salois, Emily Matt; Weinert, Clarann

    2009-01-01

    Community-based participatory research (CBPR), with its emphasis on joining with the community as full and equal partners in all phases of the research process, is an appealing model for research with vulnerable populations. However, the CBPR approach is not without special challenges relating to ethical, cultural, and scientific issues. In this article, we describe how we managed the challenges we encountered while conducting a CBPR project with a Native American community. We also suggest criteria that will enable evaluation of the project. PMID:15455579

  6. Recurrent V1-V2 interaction in early visual boundary processing.

    PubMed

    Neumann, H; Sepp, W

    1999-11-01

    A majority of cortical areas are connected via feedforward and feedback fiber projections. In feedforward pathways we mainly observe stages of feature detection and integration. The computational role of the descending pathways at different stages of processing remains mainly unknown. Based on empirical findings we suggest that the top-down feedback pathways subserve a context-dependent gain control mechanism. We propose a new computational model for recurrent contour processing in which normalized activities of orientation selective contrast cells are fed forward to the next processing stage. There, the arrangement of input activation is matched against local patterns of contour shape. The resulting activities are subsequently fed back to the previous stage to locally enhance those initial measurements that are consistent with the top-down generated responses. In all, we suggest a computational theory for recurrent processing in the visual cortex in which the significance of local measurements is evaluated on the basis of a broader visual context that is represented in terms of contour code patterns. The model serves as a framework to link physiological with perceptual data gathered in psychophysical experiments. It handles a variety of perceptual phenomena, such as the local grouping of fragmented shape outline, texture surround and density effects, and the interpolation of illusory contours.

  7. Nitrogen deposition and its effect on carbon storage in Chinese forests during 1981-2010

    NASA Astrophysics Data System (ADS)

    Gu, Fengxue; Zhang, Yuandong; Huang, Mei; Tao, Bo; Yan, Huimin; Guo, Rui; Li, Jie

    2015-12-01

    Human activities have resulted in dramatically increased nitrogen (N) deposition worldwide, which is closely linked to carbon (C)-cycle processes and is considered to facilitate terrestrial C sinks. In this study, we first estimated the spatial and temporal variations of N deposition during 1981-2010 based on a new algorithm; we then used a newly improved process-based ecosystem model, CEVSA2, to examine the effects of N deposition on C storage in Chinese forests. The results show that the rate of N deposition increased by 0.058 g N m-2 yr-1 between 1981 and 2010. The N deposition rate in 2010 was 2.32 g N m-2 yr-1, with a large spatial variation from 0 to 0.25 g N m-2 yr-1 on the northwestern Qinghai-Tibet Plateau to over 4.5 g N m-2 yr-1 in southeastern China. The model simulations suggest that N deposition induced a 4.78% increase in the total C storage of Chinese forests, most of which accumulated in vegetation. C storage increased together with the increase in N deposition, in both space and time. However, N use efficiency was highest when N deposition was 0.4-1.0 g N m-2 yr-1. We suggest conducting more manipulation experiments and observations in different vegetation types, which would greatly help to incorporate additional processes and mechanisms into ecosystem modeling. Further development of ecosystem models and identification of C-N interactions will be important for determining the effects of N input on C cycles at both regional and global scales.

  8. Modeling nutrient retention at the watershed scale: Does small stream research apply to the whole river network?

    NASA Astrophysics Data System (ADS)

    Aguilera, Rosana; Marcé, Rafael; Sabater, Sergi

    2013-06-01

    Nutrients are conveyed from terrestrial and upstream sources through drainage networks. Streams and rivers help regulate the material exported downstream by means of transformation, storage, and removal of nutrients. It has recently been suggested that the efficiency of process rates relative to the available nutrient concentration in streams eventually declines, following efficiency loss (EL) dynamics. However, most of these predictions are based on reach-scale studies in pristine streams and fail to describe the role of entire river networks. Models provide the means to study nutrient cycling from the stream-network perspective by upscaling to the watershed the key mechanisms occurring at the reach scale. We applied a hybrid process-based and statistical model (SPARROW, Spatially Referenced Regression on Watershed Attributes) as a heuristic approach to describe in-stream nutrient processes in a highly impaired, high-stream-order watershed (the Llobregat River Basin, NE Spain). The in-stream decay specifications of the model were modified to include a partial saturation effect in uptake efficiency (expressed as a power law) and thus better capture biological nutrient retention in river systems under high anthropogenic stress. The stream decay coefficients were statistically significant in both the nitrate and phosphate models, indicating the potential role of in-stream processing in limiting nutrient export. However, the EL concept did not reliably describe the patterns of nutrient uptake efficiency across the concentration gradient and streamflow values found in the Llobregat River Basin, calling into question its applicability to nutrient retention processes in stream networks comprising highly impaired rivers.
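
    The partial-saturation (power-law) decay modification can be sketched as follows; the coefficients a and b below are illustrative, not the fitted SPARROW parameters:

```python
import math

def instream_loss_fraction(conc_mg_l, travel_time_d, a=0.4, b=0.7):
    """Partial-saturation (power-law) in-stream decay sketch: the
    effective first-order rate is k = a * C**(b - 1), so uptake
    efficiency declines as concentration rises when b < 1
    (efficiency loss); b = 1 recovers plain first-order decay.
    Returns the fraction of the load removed over the reach."""
    k = a * conc_mg_l ** (b - 1.0)
    return 1.0 - math.exp(-k * travel_time_d)

frac_dilute = instream_loss_fraction(0.1, 1.0)   # dilute reach
frac_rich = instream_loss_fraction(10.0, 1.0)    # nutrient-rich reach
```

With b < 1, the dilute reach removes a larger fraction of its load than the nutrient-rich reach, which is the EL pattern the abstract describes.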

  9. Dataset of surface plasmon resonance based on photonic crystal fiber for chemical sensing applications.

    PubMed

    Khalek, Md Abdul; Chakma, Sujan; Paul, Bikash Kumar; Ahmed, Kawsar

    2018-08-01

    In this research work, a perfectly circular-lattice photonic crystal fiber (PCF) based surface plasmon resonance (SPR) sensor has been proposed. The investigation was carried out using the finite element method (FEM) in the commercially available software package COMSOL Multiphysics version 4.2. The investigation covers a wide optical spectrum ranging from 0.48 µm to 1.10 µm. Using the wavelength interrogation method, the proposed model shows a maximum sensitivity of 9000 nm/RIU (refractive index unit), and using the amplitude interrogation method it obtains a maximum sensitivity of 318 RIU-1. Moreover, the sensor achieves a maximum resolution of 1.11×10-5 in the sensing range between 1.34 and 1.37. The suggested sensor model may have great impact in biological areas such as bio-imaging.
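
    The reported figures are mutually consistent under the standard wavelength-interrogation definitions. As a sketch, assuming a 0.1 nm instrument wavelength resolution (an assumption, not stated in the record):

```python
def wavelength_sensitivity(d_lambda_peak_nm, d_n):
    """Wavelength interrogation: S = d(lambda_peak)/dn, in nm per
    refractive index unit (RIU)."""
    return d_lambda_peak_nm / d_n

def sensor_resolution(min_detectable_shift_nm, sensitivity_nm_per_riu):
    """Smallest detectable index change: R = d(lambda)_min / S."""
    return min_detectable_shift_nm / sensitivity_nm_per_riu

# A 90 nm peak shift per 0.01 RIU gives S = 9000 nm/RIU; with an
# assumed 0.1 nm instrument resolution, R = 0.1 / 9000 ≈ 1.11e-5 RIU,
# matching the resolution reported in the record.
S = wavelength_sensitivity(90.0, 0.01)
R = sensor_resolution(0.1, S)
```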

  10. Evidence integration in model-based tree search

    PubMed Central

    Solway, Alec; Botvinick, Matthew M.

    2015-01-01

    Research on the dynamics of reward-based, goal-directed decision making has largely focused on simple choice, where participants decide among a set of unitary, mutually exclusive options. Recent work suggests that the deliberation process underlying simple choice can be understood in terms of evidence integration: Noisy evidence in favor of each option accrues over time, until the evidence in favor of one option is significantly greater than the rest. However, real-life decisions often involve not one, but several steps of action, requiring a consideration of cumulative rewards and a sensitivity to recursive decision structure. We present results from two experiments that leveraged techniques previously applied to simple choice to shed light on the deliberation process underlying multistep choice. We interpret the results from these experiments in terms of a new computational model, which extends the evidence accumulation perspective to multiple steps of action. PMID:26324932
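
    A minimal race-accumulator sketch of the evidence-integration idea for simple choice (parameters are illustrative; the authors' model extends this perspective to multiple steps of action):

```python
import random

def race_accumulators(drifts, threshold=1.0, noise=0.1, dt=0.01, seed=0):
    """Noisy evidence race: each option accumulates evidence at its
    drift rate plus Gaussian noise; the first accumulator to reach
    the threshold determines the choice and the decision time."""
    rng = random.Random(seed)
    x = [0.0] * len(drifts)
    t = 0.0
    while True:
        t += dt
        for i, v in enumerate(drifts):
            x[i] += v * dt + noise * rng.gauss(0, 1) * dt ** 0.5
            if x[i] >= threshold:
                return i, t  # (chosen option, decision time)

choice, rt = race_accumulators([0.8, 0.3])
```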

  11. Cavitation erosion - scale effect and model investigations

    NASA Astrophysics Data System (ADS)

    Geiger, F.; Rutschmann, P.

    2015-12-01

    The experimental work presented here contributes to the clarification of the erosive effects of hydrodynamic cavitation. Comprehensive cavitation erosion test series were conducted for transient cloud cavitation in the shear layer of prismatic bodies. The erosion patterns and erosion rates were determined competitively with a mineral-based volume-loss technique and a metal-based pit-count system. The results clarified the underlying scale effects and revealed a strong non-linear material dependency, which indicated significantly different damage processes for the two material types. Furthermore, the size and dynamics of the cavitation clouds were assessed by optical detection. The fluctuations of the cloud sizes showed a maximum for those cavitation numbers related to maximum erosive aggressiveness. This finding suggests the suitability of a model approach that relates the erosion process to cavitation cloud dynamics. An enhanced experimental setup is planned to further clarify these issues.

  12. Response to Intervention: Preventing and Remediating Academic Difficulties

    PubMed Central

    Fletcher, Jack M.; Vaughn, Sharon

    2009-01-01

    We address the advantages and challenges of service delivery models based on student response to intervention (RTI) for preventing and remediating academic difficulties and as data sources for identification for special education services. The primary goal of RTI models is improved academic and behavioral outcomes for all students. We review evidence for the processes underlying RTI, including screening and progress monitoring assessments, evidence-based interventions, and schoolwide coordination of multitiered instruction. We also discuss the secondary goal of RTI, which is to provide data for identification of learning disabilities (LDs). Incorporating instructional response into identification represents a controversial shift away from discrepancies in cognitive skills that have traditionally been a primary basis for LD identification. RTI processes potentially integrate general and special education and suggest new directions for research and public policy related to LDs, but the scaling issues in schools are significant and more research is needed on the use of RTI data for identification. PMID:21765862

  13. Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.

    PubMed

    Kärkkäinen, Salme; Lantuéjoul, Christian

    2007-10-01

    We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximate, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further tested on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.
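
    The full scaled variogram of the shot-noise process involves the modified Bessel function; as a simplified sketch of the underlying measurement, the plain empirical variogram of grey values along sampling lines in two directions:

```python
def line_variogram(values, lag):
    """Empirical variogram of grey values along one sampling line:
    gamma(h) = 0.5 * mean((Z(x + h) - Z(x))**2)."""
    diffs = [(values[i + lag] - values[i]) ** 2
             for i in range(len(values) - lag)]
    return 0.5 * sum(diffs) / len(diffs)

def directional_variograms(image, lag):
    """Average the line variograms over rows (0 degrees) and over
    columns (90 degrees) of a greyscale image given as a 2-D list."""
    rows = [line_variogram(row, lag) for row in image]
    cols = [line_variogram(col, lag) for col in zip(*image)]
    return sum(rows) / len(rows), sum(cols) / len(cols)

# Anisotropic toy image: grey value varies along rows only, so the
# 0-degree variogram dominates the 90-degree one.
img = [[0, 2, 0, 2, 0, 2] for _ in range(4)]
g0, g90 = directional_variograms(img, lag=1)
```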

  14. BAIAP2 is related to emotional modulation of human memory strength.

    PubMed

    Luksys, Gediminas; Ackermann, Sandra; Coynel, David; Fastenrath, Matthias; Gschwind, Leo; Heck, Angela; Rasch, Bjoern; Spalek, Klara; Vogler, Christian; Papassotiropoulos, Andreas; de Quervain, Dominique

    2014-01-01

    Memory performance is the result of many distinct mental processes, such as memory encoding, forgetting, and modulation of memory strength by emotional arousal. These processes, which are subserved by partly distinct molecular profiles, are not always amenable to direct observation. Therefore, computational models can be used to make inferences about specific mental processes and to study their genetic underpinnings. Here we combined a computational model-based analysis of memory-related processes with high density genetic information derived from a genome-wide study in healthy young adults. After identifying the best-fitting model for a verbal memory task and estimating the best-fitting individual cognitive parameters, we found a common variant in the gene encoding the brain-specific angiogenesis inhibitor 1-associated protein 2 (BAIAP2) that was related to the model parameter reflecting modulation of verbal memory strength by negative valence. We also observed an association between the same genetic variant and a similar emotional modulation phenotype in a different population performing a picture memory task. Furthermore, using functional neuroimaging we found robust genotype-dependent differences in activity of the parahippocampal cortex that were specifically related to successful memory encoding of negative versus neutral information. Finally, we analyzed cortical gene expression data of 193 deceased subjects and detected significant BAIAP2 genotype-dependent differences in BAIAP2 mRNA levels. Our findings suggest that model-based dissociation of specific cognitive parameters can improve the understanding of genetic underpinnings of human learning and memory.

  15. A multi-objective model for closed-loop supply chain optimization and efficient supplier selection in a competitive environment considering quantity discount policy

    NASA Astrophysics Data System (ADS)

    Jahangoshai Rezaee, Mustafa; Yousefi, Samuel; Hayati, Jamileh

    2017-06-01

    Supplier selection and allocation of optimal order quantities are two of the most important processes in closed-loop supply chains (CLSC) and reverse logistics (RL), since providing high-quality raw material is a basic requirement for a manufacturer to produce popular products and achieve greater market share. On the other hand, in a competitive environment, suppliers have to offer customers incentives such as discounts and enhance the quality of their products to compete with other manufacturers. Therefore, in this study, a model is presented for CLSC optimization, efficient supplier selection, and order allocation considering a quantity discount policy. It is modeled using multi-objective programming based on an integrated simultaneous data envelopment analysis-Nash bargaining game. The objectives are maximizing profit and efficiency and minimizing the defect rate and delivery delay rate. Besides supplier selection, the suggested model selects refurbishing sites and determines the number of products and parts in each sector of the network. The suggested model is solved using the global criteria method. Furthermore, based on related studies, a numerical example is examined to validate it.
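
    The global criteria method used to solve such multi-objective models can be sketched as an L_p distance to the ideal point, here over a discrete candidate set (the candidates and objective values are made up for illustration):

```python
def global_criterion(solutions, objectives, ideals, p=2):
    """Global criteria method: pick the solution minimizing the L_p
    distance of its objective values from the ideal point, with each
    deviation normalized by the corresponding ideal value."""
    def distance(x):
        return sum(abs((ideal - f(x)) / ideal) ** p
                   for f, ideal in zip(objectives, ideals)) ** (1 / p)
    return min(solutions, key=distance)

# Illustrative 2-objective trade-off over three candidate order plans
# (both objectives to be maximized; ideals are the best attainable).
profit = {"A": 90.0, "B": 100.0, "C": 70.0}
efficiency = {"A": 0.9, "B": 0.6, "C": 1.0}
best = global_criterion(
    solutions=["A", "B", "C"],
    objectives=[profit.get, efficiency.get],
    ideals=[100.0, 1.0],
)
```

Plan "A" wins because its small deviations on both objectives beat the other plans' single-objective optima, which is the compromise behavior the method is designed to produce.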

  16. A network-based approach for resistance transmission in bacterial populations.

    PubMed

    Gehring, Ronette; Schumm, Phillip; Youssef, Mina; Scoglio, Caterina

    2010-01-07

    Horizontal transfer of mobile genetic elements (conjugation) is an important mechanism whereby resistance is spread through bacterial populations. The aim of our work is to develop a mathematical model that quantitatively describes this process, and to use this model to optimize antimicrobial dosage regimens to minimize resistance development. The bacterial population is conceptualized as a compartmental mathematical model to describe changes in susceptible, resistant, and transconjugant bacteria over time. This model is combined with a compartmental pharmacokinetic model to explore the effect of different plasma drug concentration profiles. An agent-based simulation tool is used to account for resistance transfer occurring when two bacteria are adjacent or in close proximity. In addition, a non-linear programming optimal control problem is introduced to minimize bacterial populations as well as the drug dose. Simulation and optimization results suggest that the rapid death of susceptible individuals in the population is pivotal in minimizing the number of transconjugants in a population. This supports the use of potent antimicrobials that rapidly kill susceptible individuals and development of dosage regimens that maintain effective antimicrobial drug concentrations for as long as needed to kill off the susceptible population. Suggestions are made for experiments to test the hypotheses generated by these simulations.
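
    A minimal Euler-integrated sketch of a susceptible/resistant/transconjugant compartmental model (the rate forms and constants are illustrative assumptions, not the authors' fitted model); note that a higher kill rate leaves fewer transconjugants, consistent with the stated conclusion:

```python
def simulate(S0=1e6, R0=1e3, T0=0.0, kill=0.5, growth=0.3,
             conj=1e-9, dt=0.01, t_end=24.0):
    """Forward-Euler integration of a toy conjugation model:
    susceptibles S grow and are killed by the drug; resistants R and
    transconjugants T grow; conjugation converts S to T at a rate
    proportional to S * (R + T)."""
    S, R, T = S0, R0, T0
    t = 0.0
    while t < t_end:
        transfer = conj * S * (R + T)      # plasmid transfer S -> T
        dS = growth * S - kill * S - transfer
        dR = growth * R
        dT = growth * T + transfer
        S += dS * dt; R += dR * dt; T += dT * dt
        t += dt
    return S, R, T

S, R, T = simulate()
```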

  17. Working memory differences in long-distance dependency resolution

    PubMed Central

    Nicenboim, Bruno; Vasishth, Shravan; Gattei, Carolina; Sigman, Mariano; Kliegl, Reinhold

    2015-01-01

    There is a wealth of evidence showing that increasing the distance between an argument and its head leads to more processing effort, namely, locality effects; these are usually associated with constraints in working memory (DLT: Gibson, 2000; activation-based model: Lewis and Vasishth, 2005). In SOV languages, however, the opposite effect has been found: antilocality (see discussion in Levy et al., 2013). Antilocality effects can be explained by the expectation-based approach as proposed by Levy (2008) or by the activation-based model of sentence processing as proposed by Lewis and Vasishth (2005). We report an eye-tracking and a self-paced reading study with sentences in Spanish together with measures of individual differences to examine the distinction between expectation- and memory-based accounts, and within memory-based accounts the further distinction between DLT and the activation-based model. The experiments show that (i) antilocality effects as predicted by the expectation account appear only for high-capacity readers; (ii) increasing dependency length by interposing material that modifies the head of the dependency (the verb) produces stronger facilitation than increasing dependency length with material that does not modify the head; this is in agreement with the activation-based model but not with the expectation account; and (iii) a possible outcome of memory load on low-capacity readers is the increase in regressive saccades (locality effects as predicted by memory-based accounts) or, surprisingly, a speedup in the self-paced reading task; the latter consistent with good-enough parsing (Ferreira et al., 2002). In sum, the study suggests that individual differences in working memory capacity play a role in dependency resolution, and that some of the aspects of dependency resolution can be best explained with the activation-based model together with a prediction component. PMID:25852623

  19. Methods to Improve the Selection and Tailoring of Implementation Strategies

    PubMed Central

    Powell, Byron J.; Beidas, Rinad S.; Lewis, Cara C.; Aarons, Gregory A.; McMillen, J. Curtis; Proctor, Enola K.; Khinduka, Shanti K.; Mandell, David S.

    2015-01-01

    Implementing behavioral health interventions is a complicated process. It has been suggested that implementation strategies should be selected and tailored to address the contextual needs of a given change effort; however, there is limited guidance as to how to do this. This article proposes four methods (concept mapping, group model building, conjoint analysis, and intervention mapping) that could be used to match implementation strategies to identified barriers and facilitators for a particular evidence-based practice or process change being implemented in a given setting. Each method is reviewed, examples of their use are provided, and their strengths and weaknesses are discussed. The discussion includes suggestions for future research pertaining to implementation strategies and highlights these methods' relevance to behavioral health services and research. PMID:26289563

  20. ProvenCare perinatal: a model for delivering evidence/ guideline-based care for perinatal populations.

    PubMed

    Berry, Scott A; Laam, Leslie A; Wary, Andrea A; Mateer, Harry O; Cassagnol, Hans P; McKinley, Karen E; Nolan, Ruth A

    2011-05-01

Geisinger Health System (GHS) has applied its ProvenCare model to demonstrate that a large integrated health care delivery system, enabled by an electronic health record (EHR), could reengineer a complicated clinical process, reduce unwarranted variation, and provide evidence-based care for patients with a specified clinical condition. In 2007 GHS began to apply the model to a more complicated, longer-term condition of "wellness"--perinatal care. ADAPTING PROVENCARE TO PERINATAL CARE: The ProvenCare Perinatal initiative was more complex than the five previous ProvenCare endeavors in terms of breadth, scope, and duration. Each of the 22 sites created a process flow map to depict the current, real-time process at each location. The local practice site providers (physicians and mid-level practitioners) reached consensus on 103 unique best practice measures (BPMs), which would be tracked for every patient. These maps were then used to create a single standardized pathway that included the BPMs but also preserved some unique care offerings that reflected the needs of the local context. A nine-phase methodology, expanded from the previous six-phase model, was implemented on schedule. Pre- to postimplementation improvement occurred for all seven BPMs or BPM bundles that were considered the most clinically relevant, five of which were statistically significant. In addition, the rate of primary cesarean sections decreased by 32%, and birth trauma remained unchanged as the number of vaginal births increased. Preliminary experience suggests that integrating evidence/guideline-based best practices into work flows in inpatient and outpatient settings can achieve improvements in daily patient care processes and outcomes.

  1. Shared decision-making in mental health care-A user perspective on decisional needs in community-based services.

    PubMed

    Grim, Katarina; Rosenberg, David; Svedberg, Petra; Schön, Ulla-Karin

    2016-01-01

    Shared decision-making (SDM) is an emergent research topic in the field of mental health care and is considered to be a central component of a recovery-oriented system. Despite the evidence suggesting the benefits of this change in the power relationship between users and practitioners, the method has not been widely implemented in clinical practice. The objective of this study was to investigate decisional and information needs among users with mental illness as a prerequisite for the development of a decision support tool aimed at supporting SDM in community-based mental health services in Sweden. Three semi-structured focus group interviews were conducted with 22 adult users with mental illness. The transcribed interviews were analyzed using a directed content analysis. This method was used to develop an in-depth understanding of the decisional process as well as to validate and conceptually extend Elwyn et al.'s model of SDM. The model Elwyn et al. have created for SDM in somatic care fits well for mental health services, both in terms of process and content. However, the results also suggest an extension of the model because decisions related to mental illness are often complex and involve a number of life domains. Issues related to social context and individual recovery point to the need for a preparation phase focused on establishing cooperation and mutual understanding as well as a clear follow-up phase that allows for feedback and adjustments to the decision-making process. The current study contributes to a deeper understanding of decisional and information needs among users of community-based mental health services that may reduce barriers to participation in decision-making. The results also shed light on attitudinal, relationship-based, and cognitive factors that are important to consider in adapting SDM in the mental health system.

  2. Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?

    PubMed Central

    Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610
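The normalized Poisson mixture model described above can be sketched with a small EM loop in which the M-step resembles a Hebbian update followed by divisive normalization (synaptic scaling). All rates, sizes, and the learning setup below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical input "patterns": Poisson firing rates per input channel.
true_rates = np.array([[8.0, 1.0, 1.0],
                       [1.0, 1.0, 8.0]])
z = rng.integers(0, 2, size=400)           # hidden component per sample
X = rng.poisson(true_rates[z])             # observed spike counts

K, D = 2, X.shape[1]
W = rng.uniform(1.0, 5.0, size=(K, D))     # learned rates ("weights")
pi = np.full(K, 1.0 / K)

for _ in range(50):
    # E-step: responsibilities under the Poisson likelihood (x! terms cancel).
    logp = X @ np.log(W).T - W.sum(axis=1) + np.log(pi)
    logp -= logp.max(axis=1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: Hebbian-like co-activation update with divisive normalization.
    W = (r.T @ X) / r.sum(axis=0)[:, None]
    W = np.maximum(W, 1e-3)                # keep rates strictly positive
    pi = r.mean(axis=0)

pred = r.argmax(axis=1)
acc = max((pred == z).mean(), (pred != z).mean())   # up to label permutation
```

With two well-separated components the responsibilities recover the hidden assignments almost perfectly, which is the maximum-likelihood behaviour the paper relates to the interplay of inhibition-driven competition and scaling.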

  3. Linking Nurse Leadership and Work Characteristics to Nurse Burnout and Engagement.

    PubMed

    Lewis, Heather Smith; Cunningham, Christopher J L

    2016-01-01

Burnout and engagement are critical conditions affecting patient safety and the functioning of healthcare organizations; the areas of worklife model suggests that work environment characteristics may impact employee burnout and general worklife quality. The purpose of this study was to present and test a conditional process model linking perceived transformational nurse leadership to nurse staff burnout and engagement via important work environment characteristics. Working nurses (N = 120) provided perceptions of the core study variables via an Internet- or paper-based survey. The hypothesized model was tested using the PROCESS analysis tool, which enables simultaneous testing of multiple, parallel, indirect effects within the SPSS statistical package. Findings support the areas of worklife model and suggest that transformational leadership is strongly associated with work environment characteristics that are further linked to nurse burnout and engagement. Interestingly, different work characteristics appear to be critical channels through which transformational leadership impacts nurse burnout and engagement. There are several methodological and practical implications of this work for researchers and practitioners interested in preventing burnout and promoting occupational health within healthcare organizations. These implications are tied to the connections observed between transformational leadership, specific work environment characteristics, and burnout and engagement outcomes.
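A PROCESS-style indirect effect of the kind tested here can be illustrated with ordinary least squares on synthetic data; the variable names and path coefficients below are hypothetical stand-ins, not the study's measures:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical standardized scores (illustrative names, not the study's items).
leadership = rng.normal(size=n)                                   # predictor X
work_env = 0.6 * leadership + 0.5 * rng.normal(size=n)            # mediator M
engagement = 0.5 * work_env + 0.1 * leadership + 0.5 * rng.normal(size=n)  # Y

def ols(X, y):
    """Least-squares coefficients, with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

a = ols(leadership[:, None], work_env)[1]                         # X -> M path
b = ols(np.column_stack([leadership, work_env]), engagement)[2]   # M -> Y path
indirect = a * b   # mediated (indirect) effect, as in a simple mediation model
```

Here the true indirect effect is 0.6 x 0.5 = 0.3, and the product-of-paths estimate recovers it; PROCESS additionally bootstraps confidence intervals for such products.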

  4. Feedforward inhibition and synaptic scaling--two sides of the same coin?

    PubMed

    Keck, Christian; Savin, Cristina; Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.

  5. Bone modeling and remodeling: potential as therapeutic targets for the treatment of osteoporosis.

    PubMed

    Langdahl, Bente; Ferrari, Serge; Dempster, David W

    2016-12-01

The adult skeleton is renewed by remodeling throughout life. Bone remodeling is a process where osteoclasts and osteoblasts work sequentially in the same bone remodeling unit. After the attainment of peak bone mass, bone remodeling is balanced and bone mass is stable for one or two decades until age-related bone loss begins. Age-related bone loss is caused by increases in resorptive activity and reduced bone formation. The relative importance of cortical remodeling increases with age as cancellous bone is lost and remodeling activity in both compartments increases. Bone modeling describes the process whereby bones are shaped or reshaped by the independent action of osteoblasts and osteoclasts. The activities of osteoblasts and osteoclasts are not necessarily coupled anatomically or temporally. Bone modeling defines skeletal development and growth but continues throughout life. Modeling-based bone formation contributes to the periosteal expansion, just as remodeling-based resorption is responsible for the medullary expansion seen at the long bones with aging. Existing and upcoming treatments affect remodeling as well as modeling. Teriparatide stimulates bone formation, 70% of which is remodeling based and 20-30% of which is modeling based. The vast majority of modeling represents overflow from remodeling units rather than de novo modeling. Denosumab inhibits bone remodeling but is permissive for modeling at the cortex. Odanacatib inhibits bone resorption by inhibiting cathepsin K activity, whereas modeling-based bone formation is stimulated at periosteal surfaces. Inhibition of sclerostin stimulates bone formation, and histomorphometric analysis demonstrated that this bone formation is predominantly modeling based. The bone-mass response to some osteoporosis treatments in humans certainly suggests that nonremodeling mechanisms contribute to this response, and bone modeling may be such a mechanism. To date, this has only been demonstrated for teriparatide; however, it is clear that rediscovering a phenomenon that was first observed more than half a century ago will have an important impact on our understanding of how new antifracture treatments work.

  6. Simultaneous modeling of visual saliency and value computation improves predictions of economic choice.

    PubMed

    Towal, R Blythe; Mormann, Milica; Koch, Christof

    2013-10-01

    Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift-diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions.
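The additive saliency-value race (model iii above) can be sketched as a set of independent drift-diffusion accumulators whose drifts mix the two quantities in the reported one-third/two-thirds ratio. All parameter values below are illustrative, not fitted:

```python
import numpy as np

rng = np.random.default_rng(2)

def ddm_race(drifts, noise=1.0, thresh=3.0, dt=0.02, max_steps=2000):
    """Independent DDM race: the first accumulator to reach threshold wins."""
    x = np.zeros(len(drifts))
    for _ in range(max_steps):
        x += drifts * dt + noise * np.sqrt(dt) * rng.normal(size=len(drifts))
        if (x >= thresh).any():
            break
    return int(np.argmax(x))

saliency = np.array([0.2, 0.8, 0.5])   # illustrative low-level salience
value    = np.array([1.0, 0.2, 0.6])   # illustrative subjective value
drifts = saliency / 3.0 + 2.0 * value / 3.0   # one-third saliency, two-thirds value

choices = np.array([ddm_race(drifts) for _ in range(300)])
counts = np.bincount(choices, minlength=3)
```

With this weighting the high-value item wins most races even though it is the least salient, illustrating the paper's point that value dominates but saliency still shifts choice frequencies.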

  7. Simultaneous modeling of visual saliency and value computation improves predictions of economic choice

    PubMed Central

    Towal, R. Blythe; Mormann, Milica; Koch, Christof

    2013-01-01

    Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift–diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions. PMID:24019496

  8. A modified artificial neural network based prediction technique for tropospheric radio refractivity

    PubMed Central

    Javeed, Shumaila; Javed, Wajahat; Atif, M.; Uddin, Mueen

    2018-01-01

Radio refractivity plays a significant role in the development and design of radio systems for attaining the best level of performance. Refractivity in the troposphere is one of the factors affecting electromagnetic waves and can therefore disrupt communication systems. In this work, a modified artificial neural network (ANN) based model is applied to predict the refractivity. The suggested ANN model comprises three modules: the data preparation module, the feature selection module, and the forecast module. The first module applies pre-processing to make the data compatible for the feature selection module. The second module discards irrelevant and redundant data from the input set. The third module uses ANN for prediction. The ANN model applies a sigmoid activation function and a multi-variate auto-regressive model to update the weights during the training process. In this work, the refractivity is predicted and estimated based on ten years (2002–2011) of meteorological data, such as the temperature, pressure, and humidity, obtained from the Pakistan Meteorological Department (PMD), Islamabad. The refractivity is estimated using the method suggested by the International Telecommunication Union (ITU). The refractivity is predicted for the year 2012 using the database of the previous ten years, with the help of the ANN. The ANN model is implemented in MATLAB. Next, the estimated and predicted refractivity levels are validated against each other. The predicted and actual values (PMD data) of the atmospheric parameters agree with each other well, demonstrating the accuracy of the proposed ANN method. It was further found that all parameters have a strong relationship with refractivity, in particular the temperature and humidity. The refractivity values are higher during the rainy season owing to a strong association with the relative humidity. Therefore, it is important to properly design the signal communication system for hot and humid weather. Based on the results, the proposed ANN method can be used to develop a refractivity database, which is highly important in a radio communication system. PMID:29494609
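The ITU refractivity estimate mentioned above has a closed form. The sketch below uses the ITU-R P.453 expression for N, with a Magnus-type (Buck) approximation for saturation vapour pressure; the coefficients are common published values, not taken from this paper:

```python
import math

def refractivity(T_kelvin, P_hpa, rh_percent):
    """ITU-R P.453 radio refractivity N (N-units) from temperature,
    total pressure, and relative humidity."""
    t_c = T_kelvin - 273.15
    # Buck/Magnus-type saturation vapour pressure over water (hPa); an
    # approximation, adequate for illustration.
    e_s = 6.1121 * math.exp(17.502 * t_c / (t_c + 240.97))
    e = rh_percent / 100.0 * e_s              # water vapour pressure (hPa)
    return 77.6 / T_kelvin * (P_hpa + 4810.0 * e / T_kelvin)

n_humid = refractivity(300.0, 1013.0, 60.0)   # hot, humid conditions
n_dry   = refractivity(300.0, 1013.0, 10.0)   # same temperature, dry air
```

The humid case yields a markedly higher N, consistent with the rainy-season behaviour reported above.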

  9. An object-oriented description method of EPMM process

    NASA Astrophysics Data System (ADS)

    Jiang, Zuo; Yang, Fan

    2017-06-01

In order to use mature object-oriented tools and languages in software process modelling, and to bring the software process model closer to industrial standards, it is necessary to study object-oriented modelling of the software process. Based on the formal process definition in EPMM, and considering that Petri nets are primarily a formal modelling tool, this paper combines Petri net modelling with object-oriented modelling ideas and provides an implementation method for converting Petri-net-based EPMM models into object-oriented models.
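A minimal object-oriented rendering of a Petri net, of the kind such a conversion would produce, might look like this; the class, place, and transition names are invented for illustration:

```python
class Place:
    """A Petri net place holding a token count."""
    def __init__(self, name, tokens=0):
        self.name, self.tokens = name, tokens

class Transition:
    """A transition consuming one token from each input place and
    producing one token in each output place."""
    def __init__(self, name, inputs, outputs):
        self.name, self.inputs, self.outputs = name, inputs, outputs

    def enabled(self):
        return all(p.tokens > 0 for p in self.inputs)

    def fire(self):
        if not self.enabled():
            raise RuntimeError(f"{self.name} is not enabled")
        for p in self.inputs:
            p.tokens -= 1
        for p in self.outputs:
            p.tokens += 1

# A two-place, one-transition net: a process step moving from
# "defined" to "executing".
defined, executing = Place("defined", tokens=1), Place("executing")
start = Transition("start", inputs=[defined], outputs=[executing])
start.fire()
```

The formal enabling and firing rules of the net map directly onto methods, which is the essence of the object-oriented description.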

  10. Kinetic Modeling of a Silicon Refining Process in a Moist Hydrogen Atmosphere

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Morita, Kazuki

    2018-03-01

We developed a kinetic model that considers both silicon loss and boron removal in a metallurgical grade silicon refining process. This model was based on the hypothesis of reversible reactions; the reaction rate coefficient retains the same form as in irreversible-reaction treatments, but assuming irreversible reactions can introduce error in the terminal boron concentration. Experimental data from published studies were used to develop a model that fit the existing data. At 1500 °C, our kinetic analysis suggested that refining silicon in a moist hydrogen atmosphere generates several primary volatile species, including SiO, SiH, HBO, and HBO2. Using the experimental data and the kinetic analysis of volatile species, we developed a model that predicts a linear relationship between the reaction rate coefficient k and both the quadratic function of p(H2O) and the square root of p(H2). Moreover, the model predicted the partial pressure values for the predominant volatile species and the prediction was confirmed by the thermodynamic calculations, indicating the reliability of the model. We believe this model provides a foundation for designing a silicon refining process with a fast boron removal rate and low silicon loss.
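Assuming a multiplicative form for the reported dependence of k on p(H2O)^2 and sqrt(p(H2)), and reversible first-order kinetics relaxing toward an equilibrium boron level rather than zero, a sketch looks like this (the prefactor and equilibrium fraction are hypothetical, not fitted values):

```python
import math

def rate_coefficient(p_h2o, p_h2, c=1.0e-3):
    """Hypothetical rate law consistent with the reported dependence:
    k scales with p(H2O)^2 and with sqrt(p(H2)).
    The prefactor c is illustrative, not a fitted value."""
    return c * p_h2o**2 * math.sqrt(p_h2)

def boron_fraction(t, k, c_eq_frac=0.05):
    """Reversible first-order removal: B/B0 relaxes toward an equilibrium
    fraction c_eq_frac instead of toward zero."""
    return c_eq_frac + (1.0 - c_eq_frac) * math.exp(-k * t)

k1 = rate_coefficient(0.1, 0.9)
k2 = rate_coefficient(0.2, 0.9)   # doubling p(H2O) quadruples k
```

The reversible form makes explicit why treating the reaction as irreversible mis-estimates the terminal boron concentration: the true curve flattens at c_eq_frac, not at zero.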

  11. Kinetic Modeling of a Silicon Refining Process in a Moist Hydrogen Atmosphere

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Morita, Kazuki

    2018-06-01

We developed a kinetic model that considers both silicon loss and boron removal in a metallurgical grade silicon refining process. This model was based on the hypothesis of reversible reactions; the reaction rate coefficient retains the same form as in irreversible-reaction treatments, but assuming irreversible reactions can introduce error in the terminal boron concentration. Experimental data from published studies were used to develop a model that fit the existing data. At 1500 °C, our kinetic analysis suggested that refining silicon in a moist hydrogen atmosphere generates several primary volatile species, including SiO, SiH, HBO, and HBO2. Using the experimental data and the kinetic analysis of volatile species, we developed a model that predicts a linear relationship between the reaction rate coefficient k and both the quadratic function of p(H2O) and the square root of p(H2). Moreover, the model predicted the partial pressure values for the predominant volatile species and the prediction was confirmed by the thermodynamic calculations, indicating the reliability of the model. We believe this model provides a foundation for designing a silicon refining process with a fast boron removal rate and low silicon loss.

  12. Similarity-Dissimilarity Competition in Disjunctive Classification Tasks

    PubMed Central

    Mathy, Fabien; Haladjian, Harry H.; Laurent, Eric; Goldstone, Robert L.

    2013-01-01

Typical disjunctive artificial classification tasks require participants to sort stimuli according to rules such as “x likes cars only when black and coupe OR white and SUV.” For categories like this, increasing the salience of the diagnostic dimensions has two simultaneous effects: increasing the distance between members of the same category and increasing the distance between members of opposite categories. Potentially, these two effects respectively hinder and facilitate classification learning, leading to competing predictions for learning. Increasing saliency may lead members of the same category to be considered less similar, while the members of separate categories might be considered more dissimilar. This implies a similarity-dissimilarity competition between two basic classification processes. When focusing on sub-category similarity, one would expect more difficult classification when members of the same category become less similar (disregarding the increase of between-category dissimilarity); however, the between-category dissimilarity increase predicts a less difficult classification. Our categorization study suggests that participants rely more on using dissimilarities between opposite categories than finding similarities between sub-categories. We connect our results to rule- and exemplar-based classification models. The pattern of influences of within- and between-category similarities is challenging for simple single-process categorization systems based on rules or exemplars. Instead, our results suggest that either these processes should be integrated in a hybrid model, or that category learning operates by forming clusters within each category. PMID:23403979
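An exemplar-based account of such tasks is commonly formalized as the Generalized Context Model, where classification probability comes from summed exponential similarity to stored exemplars. A minimal sketch (the stimuli and sensitivity parameter are illustrative):

```python
import numpy as np

def gcm_prob_a(stimulus, exemplars_a, exemplars_b, c=2.0):
    """Generalized Context Model: P(category A | stimulus) from summed
    exponential-decay similarity to each category's exemplars.
    c is a sensitivity (salience) parameter."""
    sim = lambda ex: np.exp(-c * np.abs(ex - stimulus).sum(axis=1)).sum()
    s_a, s_b = sim(exemplars_a), sim(exemplars_b)
    return s_a / (s_a + s_b)

# Illustrative stimuli on two continuous dimensions (e.g., colour x body style).
cat_a = np.array([[0.0, 0.0], [0.1, 0.1]])
cat_b = np.array([[1.0, 1.0], [0.9, 0.9]])
p_near_a = gcm_prob_a(np.array([0.05, 0.05]), cat_a, cat_b)
```

Raising c in this model simultaneously shrinks within-category similarity and between-category similarity, which is exactly the competition the abstract describes.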

  13. Epistatic role of base excision repair and mismatch repair pathways in mediating cisplatin cytotoxicity

    PubMed Central

    Kothandapani, Anbarasi; Sawant, Akshada; Dangeti, Venkata Srinivas Mohan Nimai; Sobol, Robert W.; Patrick, Steve M.

    2013-01-01

    Base excision repair (BER) and mismatch repair (MMR) pathways play an important role in modulating cis-Diamminedichloroplatinum (II) (cisplatin) cytotoxicity. In this article, we identified a novel mechanistic role of both BER and MMR pathways in mediating cellular responses to cisplatin treatment. Cells defective in BER or MMR display a cisplatin-resistant phenotype. Targeting both BER and MMR pathways resulted in no additional resistance to cisplatin, suggesting that BER and MMR play epistatic roles in mediating cisplatin cytotoxicity. Using a DNA Polymerase β (Polβ) variant deficient in polymerase activity (D256A), we demonstrate that MMR acts downstream of BER and is dependent on the polymerase activity of Polβ in mediating cisplatin cytotoxicity. MSH2 preferentially binds a cisplatin interstrand cross-link (ICL) DNA substrate containing a mismatch compared with a cisplatin ICL substrate without a mismatch, suggesting a novel mutagenic role of Polβ in activating MMR in response to cisplatin. Collectively, these results provide the first mechanistic model for BER and MMR functioning within the same pathway to mediate cisplatin sensitivity via non-productive ICL processing. In this model, MMR participation in non-productive cisplatin ICL processing is downstream of BER processing and dependent on Polβ misincorporation at cisplatin ICL sites, which results in persistent cisplatin ICLs and sensitivity to cisplatin. PMID:23761438

  14. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…

  15. A detailed model for simulation of catchment scale subsurface hydrologic processes

    NASA Technical Reports Server (NTRS)

    Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    A catchment scale numerical model is developed based on the three-dimensional transient Richards equation describing fluid flow in variably saturated porous media. The model is designed to take advantage of digital elevation data bases and of information extracted from these data bases by topographic analysis. The practical application of the model is demonstrated in simulations of a small subcatchment of the Konza Prairie reserve near Manhattan, Kansas. In a preliminary investigation of computational issues related to model resolution, we obtain satisfactory numerical results using large aspect ratios, suggesting that horizontal grid dimensions may not be unreasonably constrained by the typically much smaller vertical length scale of a catchment and by vertical discretization requirements. Additional tests are needed to examine the effects of numerical constraints and parameter heterogeneity in determining acceptable grid aspect ratios. In other simulations we attempt to match the observed streamflow response of the catchment, and we point out the small contribution of the streamflow component to the overall water balance of the catchment.
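A drastically simplified, one-dimensional, constant-diffusivity version of the moisture transport such a model solves can be sketched as follows; the actual model solves the full three-dimensional, nonlinear Richards equation, so this is only a structural illustration:

```python
import numpy as np

def diffuse_moisture(theta, d=1.0e-6, dz=0.05, dt=1.0, steps=1000):
    """Explicit step for the moisture-diffusivity form of a 1-D vertical
    Richards equation with constant diffusivity D (a strong simplification;
    real diffusivity depends nonlinearly on saturation) and no-flux
    boundaries at top and bottom."""
    assert dt <= dz**2 / (2 * d), "explicit scheme stability limit"
    theta = theta.copy()
    for _ in range(steps):
        flux = -d * np.diff(theta) / dz      # Darcy-type internal fluxes
        theta[:-1] -= dt / dz * flux         # conservative update,
        theta[1:] += dt / dz * flux          # no flux through boundaries
    return theta

theta0 = np.full(20, 0.20)                   # uniform background moisture
theta0[:5] = 0.35                            # a wetted surface layer
theta1 = diffuse_moisture(theta0)
```

The conservative flux form guarantees that total water content is preserved while the wetted layer spreads downward, the basic behaviour any vertical discretization of the catchment model must reproduce.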

  16. Evidence-informed policy formulation and implementation: a comparative case study of two national policies for improving health and social care in Sweden.

    PubMed

    Strehlenert, H; Richter-Sundberg, L; Nyström, M E; Hasson, H

    2015-12-08

Evidence has come to play a central role in health policymaking. However, policymakers tend to use other types of information besides research evidence. Most prior studies on evidence-informed policy have focused on the policy formulation phase without a systematic analysis of its implementation. It has been suggested that in order to fully understand the policy process, the analysis should include both policy formulation and implementation. The purpose of the study was to explore and compare two policies aiming to improve health and social care in Sweden and to empirically test a new conceptual model for evidence-informed policy formulation and implementation. Two concurrent national policies were studied during the entire policy process using a longitudinal, comparative case study approach. Data was collected through interviews, observations, and documents. A Conceptual Model for Evidence-Informed Policy Formulation and Implementation was developed based on prior frameworks for evidence-informed policymaking and policy dissemination and implementation. The conceptual model was used to organize and analyze the data. The policies differed regarding the use of evidence in the policy formulation and the extent to which the policy formulation and implementation phases overlapped. Similarities between the cases were an emphasis on capacity assessment, modified activities based on the assessment, and a highly active implementation approach relying on networks of stakeholders. The Conceptual Model for Evidence-Informed Policy Formulation and Implementation was empirically useful for organizing the data. The policy actors' roles and functions were found to have a great influence on the choices of strategies and collaborators in all policy phases. However, the model provided insufficient guidance for analyzing actors involved in the policy process, capacity-building strategies, and overlapping policy phases; a revised version of the model that includes these aspects is suggested.

  17. Patterns of Alloy Deformation by Pulsed Pressure

    NASA Astrophysics Data System (ADS)

    Chebotnyagin, L. M.; Potapov, V. V.; Lopatin, V. V.

    2015-06-01

Patterns of alloy deformation for optimization of a welding regime are studied by the method of modeling, and deformation profiles providing high deformation quality are determined. A model of stepwise kinetics of the alloy deformation by pulsed pressure from the expanding plasma channel inside a deformable cylinder is suggested. The model is based on the analogy between acoustic and electromagnetic wave processes in long lines. The shock wave pattern of alloy deformation in the presence of multiple reflections of pulsed pressure waves in the gap between the plasma channel and the cylinder wall, and the influence of unloading waves from free surfaces, are confirmed.
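The long-line analogy rests on the reflection coefficient having the same form for acoustic and transmission-line impedances. A sketch of the reverberating-pulse amplitudes follows, assuming lossless normal incidence and total (sign-inverting) reflection at the channel side; both are simplifications, and the impedance values are illustrative:

```python
def reflection_coefficient(z1, z2):
    """Pressure reflection coefficient at an interface; identical in form
    for acoustic impedances and transmission-line impedances."""
    return (z2 - z1) / (z2 + z1)

def transmitted_pulses(z_gap, z_wall, incident=1.0, round_trips=5):
    """Amplitudes successively transmitted into the cylinder wall as a
    pulse reverberates in the gap (lossless normal-incidence sketch,
    assuming total sign-inverting reflection at the channel boundary)."""
    g = reflection_coefficient(z_gap, z_wall)   # gap -> wall interface
    tau = 1.0 + g                               # transmitted pressure fraction
    # each round trip adds one wall reflection (g) and one channel
    # reflection (-1), so the nth transmitted pulse carries (-g)^n
    return [incident * tau * (-g) ** n for n in range(round_trips)]

pulses = transmitted_pulses(z_gap=1.0, z_wall=40.0)
```

The alternating, decaying pulse train is the stepwise loading the model attributes to multiple reflections in the gap.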

  18. Method of Harmonic Balance in Full-Scale-Model Tests of Electrical Devices

    NASA Astrophysics Data System (ADS)

    Gorbatenko, N. I.; Lankin, A. M.; Lankin, M. V.

    2017-01-01

Methods for determining the weber-ampere characteristics of electrical devices are suggested, one based on the solution of the direct problem of harmonic balance and the other on the solution of the inverse problem of harmonic balance by the method of full-scale-model tests. The mathematical model of the device is constructed using the describing-function and simplex-optimization methods. The presented results of experimental applications of the method show its efficiency. The advantage of the method is the possibility of its application for nondestructive inspection of electrical devices during their production and operation.
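The harmonic-balance/describing-function idea can be illustrated on a cubic nonlinearity standing in for a weber-ampere characteristic: for a sinusoidal input, the fundamental-harmonic gain extracted numerically should match the analytic describing function. The characteristic and amplitude below are invented for illustration:

```python
import numpy as np

# Cubic characteristic y = a1*x + a3*x^3 as a stand-in nonlinearity.
# For input A*sin(wt), the describing function (fundamental-harmonic gain)
# is N(A) = a1 + (3/4)*a3*A^2, since sin^3 = (3*sin - sin(3wt))/4.
a1, a3, A = 1.0, 0.4, 1.5
t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
x = A * np.sin(t)
y = a1 * x + a3 * x**3

# Fundamental sine coefficient via the discrete Fourier integral
# (the balance of the first harmonic).
b1 = 2.0 / len(t) * np.sum(y * np.sin(t))
gain_numeric = b1 / A
gain_analytic = a1 + 0.75 * a3 * A**2
```

The agreement is exact up to floating-point error because the cubic output contains only the first and third harmonics, both well below the sampling rate.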

  19. Mathematical Modeling the Geometric Regularity in Proteus Mirabilis Colonies

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Jiang, Yi; Minsu Kim Collaboration

The Proteus mirabilis colony exhibits striking spatiotemporal regularity: concentric ring patterns of alternating high and low bacterial density in space, and temporal periodicity in the repeated growth-and-swarming process. We present a simple mathematical model to explain the spatiotemporal regularity of P. mirabilis colonies. We study a one-dimensional system. Using a reaction-diffusion model with thresholds in cell density and nutrient concentration, we recreated periodic growth and spread patterns, suggesting that nutrient constraint and cell-density regulation might be sufficient to explain the spatiotemporal periodicity of P. mirabilis colonies. We further verify this result using a cell-based model.
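A threshold reaction-diffusion rule of the kind described can be sketched in one dimension: cells grow while nutrient lasts, and swarm (move) only where both density and nutrient exceed thresholds. All parameters are illustrative and the mass-conserving movement rule is a simplification of true diffusion:

```python
import numpy as np

def simulate_colony(size=300, steps=600, d=0.2, growth=0.02,
                    u_thresh=0.25, n_thresh=0.15):
    """1-D threshold reaction-diffusion sketch of colony expansion."""
    u = np.zeros(size)
    u[size // 2] = 0.5                          # inoculum at the centre
    n = np.ones(size)                           # nutrient field
    for _ in range(steps):
        eat = growth * u * n                    # growth consumes nutrient
        u, n = u + eat, n - eat
        mobile = (u > u_thresh) & (n > n_thresh)        # swarming condition
        move = np.where(mobile, d * u, 0.0)             # only dense, fed cells move
        u = u - move + 0.5 * (np.roll(move, 1) + np.roll(move, -1))
    return u, n

u, n = simulate_colony()
occupied = int((u > 1e-3).sum())
```

Because movement switches off below the thresholds, the front alternates between consolidation (growth in place) and swarming (spread), the mechanism the model proposes for the concentric rings.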

  20. Process spectroscopy in microemulsions—Raman spectroscopy for online monitoring of a homogeneous hydroformylation process

    NASA Astrophysics Data System (ADS)

    Paul, Andrea; Meyer, Klas; Ruiken, Jan-Paul; Illner, Markus; Müller, David-Nicolas; Esche, Erik; Wozny, Günther; Westad, Frank; Maiwald, Michael

    2017-03-01

    A major industrial reaction based on homogeneous catalysis is hydroformylation for the production of aldehydes from alkenes and syngas. Hydroformylation in microemulsions, which is currently under investigation at Technische Universität Berlin on a mini-plant scale, was identified as a cost-efficient approach which also enhances product selectivity. Herein, we present the application of online Raman spectroscopy to the reaction of 1-dodecene to 1-tridecanal within a microemulsion. To achieve a good representation of the operating range of the mini-plant with regard to reactant concentrations, a design of experiments was used. Based on initial Raman spectra, partial least squares regression (PLSR) models were calibrated for the prediction of 1-dodecene and 1-tridecanal. Limits of prediction arise from nonlinear correlations between Raman intensity and mass fractions of compounds in the microemulsion system. Furthermore, the predictive power of the PLSR models becomes limited due to unexpected by-product formation. Application of the lab-scale-derived calibration spectra and PLSR models to online spectra from a mini-plant operation yielded promising estimations of 1-tridecanal and acceptable predictions of 1-dodecene mass fractions, suggesting Raman spectroscopy is a suitable technique for process analytics in microemulsions.

  1. Neural coordination can be enhanced by occasional interruption of normal firing patterns: a self-optimizing spiking neural network model.

    PubMed

    Woodward, Alexander; Froese, Tom; Ikegami, Takashi

    2015-02-01

    The state space of a conventional Hopfield network typically exhibits many different attractors of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal-coding-based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent of the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains. Copyright © 2014 Elsevier Ltd. All rights reserved.
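The self-optimization scheme that this study transfers to spiking networks can be sketched in its original rate-based Hopfield form. This is a minimal sketch under assumed values (network size, learning rate, number of reset episodes), not the paper's spiking implementation: the network is occasionally reset to a random state (the "interruption of normal firing"), relaxed to an attractor, and that attractor is Hebbian-reinforced.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
W = rng.standard_normal((N, N))
W = (W + W.T) / 2                     # symmetric random constraints
np.fill_diagonal(W, 0)
W0 = W.copy()                         # frozen copy: the "true" constraint energy

def energy(s, w):
    return -0.5 * s @ w @ s

def relax(s, w, steps=2000):
    """Asynchronous Hopfield updates toward a local attractor."""
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if w[i] @ s >= 0 else -1
    return s

def mean_settled_energy(w_dyn, samples=20):
    """Average W0-energy of attractors reached from random states."""
    return np.mean([energy(relax(rng.choice([-1, 1], size=N), w_dyn), W0)
                    for _ in range(samples)])

before = mean_settled_energy(W)
alpha = 0.0005                        # Hebbian learning rate (assumed value)
for _ in range(300):                  # occasional interruption + relearning
    s = relax(rng.choice([-1, 1], size=N), W)
    W += alpha * np.outer(s, s)       # reinforce the visited attractor
    np.fill_diagonal(W, 0)
after = mean_settled_energy(W)
print("mean constraint energy before/after learning:", before, after)
```

The reported effect is that learning enlarges the basins of the better attractors, so the average settled energy measured against the original weights W0 tends to improve; the exact outcome depends on the assumed learning rate and schedule.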

  2. Some Behavioral and Neurobiological Constraints on Theories of Audiovisual Speech Integration: A Review and Suggestions for New Directions

    PubMed Central

    Altieri, Nicholas; Pisoni, David B.; Townsend, James T.

    2012-01-01

    Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield’s feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration. PMID:21968081

  3. Simulation of talking faces in the human brain improves auditory speech recognition

    PubMed Central

    von Kriegstein, Katharina; Dogan, Özgür; Grüter, Martina; Giraud, Anne-Lise; Kell, Christian A.; Grüter, Thomas; Kleinschmidt, Andreas; Kiebel, Stefan J.

    2008-01-01

    Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face. PMID:18436648

  4. Some behavioral and neurobiological constraints on theories of audiovisual speech integration: a review and suggestions for new directions.

    PubMed

    Altieri, Nicholas; Pisoni, David B; Townsend, James T

    2011-01-01

    Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield's feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration.

  5. Estimating the Relevance of World Disturbances to Explain Savings, Interference and Long-Term Motor Adaptation Effects

    PubMed Central

    Berniker, Max; Kording, Konrad P.

    2011-01-01

    Recent studies suggest that motor adaptation is the result of multiple, perhaps linear, processes, each with distinct time scales. While these models are consistent with some motor phenomena, they can neither explain the relatively fast re-adaptation after a long washout period, nor savings on a subsequent day. Here we examined whether these effects can be explained by assuming that the CNS stores and retrieves movement parameters based on their possible relevance. We formalize this idea with a model that infers not only the sources of potential motor errors, but also their relevance to the current motor circumstances. In our model, adaptation is the process of re-estimating parameters that represent the body and the world. The likelihood of a world parameter being relevant is then based on the mismatch between an observed movement and that predicted when not compensating for the estimated world disturbance. As such, adapting to large motor errors in a laboratory setting should alert subjects that disturbances are being imposed on them, even after motor performance has returned to baseline. Estimates of this external disturbance should be relevant both now and in future laboratory settings. Estimated properties of our bodies, on the other hand, should always be relevant. Our model demonstrates savings, interference, spontaneous rebound, and differences between adaptation to sudden and gradual disturbances. We suggest that many issues concerning savings and interference can be understood when adaptation is conditioned on the relevance of parameters. PMID:21998574

  6. A reward-centred model of anorexia nervosa: a focussed narrative review of the neurological and psychophysiological literature.

    PubMed

    O'Hara, Caitlin B; Campbell, Iain C; Schmidt, Ulrike

    2015-05-01

    This focussed narrative review examines neurobiological and psychophysiological evidence supporting a role for altered reward processes in the development and maintenance of anorexia nervosa (AN). In AN, there does not appear to be a generalised inability to experience reward. Rather, data suggest that a reluctance to gain weight leads to an aversive appraisal of food- and taste-related stimuli. As a result, cues compatible with this aberrant mode of thinking become rewarding for the individual. Evidence also suggests that attribution of motivational salience to such cues promotes anorectic behaviours. These findings are consistent with models in which interactions between cognition and reward are important in eliciting the anorectic "habit". A model is proposed which is consistent with elements of other theoretical frameworks, but differs in that its emphasis is towards neural overlaps between AN and addiction. It is consistent with AN being a reward-based learned behaviour in which aberrant cognitions related to eating and shape alter functioning of central reward systems. It proposes that the primary neural problem responsible for the development, maintenance, and treatment resistance is centred in the striatal reward system. This helps shift the emphasis of aetiological models towards reward processing, particularly in the context of illness-compatible cues. Furthermore, it suggests that continuing to explore the utility and valued nature of AN in the patient's life would be a useful inclusion in treatment and prevention models. Copyright © 2015. Published by Elsevier Ltd.

  7. Development and Application of a Three-Dimensional Finite Element Vapor Intrusion Model

    PubMed Central

    Pennell, Kelly G.; Bozkurt, Ozgur; Suuberg, Eric M.

    2010-01-01

    Details of a three-dimensional finite element model of soil vapor intrusion, including the overall modeling process and the stepwise approach, are provided. The model is a quantitative modeling tool that can help guide vapor intrusion characterization efforts. It solves the soil gas continuity equation coupled with the chemical transport equation, allowing for both advective and diffusive transport. Three-dimensional pressure, velocity, and chemical concentration fields are produced from the model. Results from simulations involving common site features, such as impervious surfaces, porous foundation sub-base material, and adjacent structures are summarized herein. The results suggest that site-specific features are important to consider when characterizing vapor intrusion risks. More importantly, the results suggest that soil gas or subslab gas samples taken without proper regard for particular site features may not be suitable for evaluating vapor intrusion risks; rather, careful attention needs to be given to the many factors that affect chemical transport into and around buildings. PMID:19418819

  8. Southwestern Pine Forests Likely to Disappear

    ScienceCinema

    McDowell, Nathan

    2018-01-16

    A new study, led by Los Alamos National Laboratory's Nathan McDowell, suggests that a major forest type, the pine-juniper woodlands of the Southwestern U.S., could be wiped out by the end of this century due to climate change, and that conifers throughout much of the Northern Hemisphere may be on a similar trajectory. New results, reported in the journal Nature Climate Change, suggest that global models may underestimate predictions of forest death. McDowell and his large international team strove to provide the missing pieces in the understanding of tree death at three levels: plant, regional and global. The team rigorously developed and evaluated multiple process-based and empirical models against experimental results, and then compared these models to results from global vegetation models in independent simulations. They discovered that the global models simulated mortality throughout the Northern Hemisphere that was of similar magnitude to, but much broader spatial scale than, what the evaluated ecosystem models predicted for the Southwest.

  9. Southwestern Pine Forests Likely to Disappear

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDowell, Nathan

    A new study, led by Los Alamos National Laboratory's Nathan McDowell, suggests that a major forest type, the pine-juniper woodlands of the Southwestern U.S., could be wiped out by the end of this century due to climate change, and that conifers throughout much of the Northern Hemisphere may be on a similar trajectory. New results, reported in the journal Nature Climate Change, suggest that global models may underestimate predictions of forest death. McDowell and his large international team strove to provide the missing pieces in the understanding of tree death at three levels: plant, regional and global. The team rigorously developed and evaluated multiple process-based and empirical models against experimental results, and then compared these models to results from global vegetation models in independent simulations. They discovered that the global models simulated mortality throughout the Northern Hemisphere that was of similar magnitude to, but much broader spatial scale than, what the evaluated ecosystem models predicted for the Southwest.

  10. Spatiotemporal patterns of terrestrial gross primary production: A review

    NASA Astrophysics Data System (ADS)

    Anav, Alessandro; Friedlingstein, Pierre; Beer, Christian; Ciais, Philippe; Harper, Anna; Jones, Chris; Murray-Tortarolo, Guillermo; Papale, Dario; Parazoo, Nicholas C.; Peylin, Philippe; Piao, Shilong; Sitch, Stephen; Viovy, Nicolas; Wiltshire, Andy; Zhao, Maosheng

    2015-09-01

    Great advances have been made in the last decade in quantifying and understanding the spatiotemporal patterns of terrestrial gross primary production (GPP) with ground, atmospheric, and space observations. However, although global GPP estimates exist, each data set relies upon assumptions and none of the available data are based only on measurements. Consequently, there is no consensus on the global total GPP and large uncertainties exist in its benchmarking. The objective of this review is to assess how the different available data sets predict the spatiotemporal patterns of GPP, identify the differences among data sets, and highlight the main advantages/disadvantages of each data set. We compare GPP estimates for the historical period (1990-2009) from two observation-based data sets (Model Tree Ensemble and Moderate Resolution Imaging Spectroradiometer) to coupled carbon-climate models and terrestrial carbon cycle models from the Fifth Climate Model Intercomparison Project and TRENDY projects and to a new hybrid data set (CARBONES). Results show a large range in the mean global GPP estimates. The different data sets broadly agree on the phasing of the GPP seasonal cycle, while there is still discrepancy on its amplitude. For interannual variability (IAV) and trends, there is a clear separation between the observation-based data sets, which show little IAV and trend, and the process-based models, which have large GPP variability and significant trends. These results suggest that there is an urgent need to improve the observation-based data sets and to develop carbon cycle modeling with processes that are currently treated only very simplistically, in order to correctly estimate present GPP and better quantify the future uptake of carbon dioxide by the world's vegetation.

  11. Representation of ocean-atmosphere processes associated with extended monsoon episodes over South Asia in CFSv2

    NASA Astrophysics Data System (ADS)

    Mohan, T. S.; Annamalai, H.; Marx, Larry; Huang, Bohua; Kinter, James

    2018-02-01

    In the present study, we analyze 30 years of output from free-run solutions of the CFSv2 coupled model to assess the model's representation of extended (>7 days) active and break monsoon episodes over South Asia. Process-based diagnostics are applied to the individual and composite events to identify precursor signals in both oceanic and atmospheric variables. Our examination suggests that CFSv2, like most coupled models, exhibits systematic biases in variables important for ocean-atmosphere interactions. Nevertheless, the model solutions capture many aspects of extended monsoon break and active episodes realistically, encouraging us to apply process-based diagnostics. The diagnostics reveal that sea surface temperature (SST) variations over the northern Bay of Bengal, where the climatological mixed layer is thin, lead the in-situ precipitation anomalies by about 8 (10) days during extended active (break) episodes, and the precipitation anomalies over central India by 10-14 days. Mixed-layer heat budget analysis indicates a close correspondence between the SST tendency and the net surface heat flux (Q_net). MSE budgets indicate that horizontal moisture advection is a coherent precursor signal (~10 days) during both extended break (dry advection) and active (moist advection) events. The lead times of these precursor signals in the CFSv2 solutions will be of potential use for monitoring and predicting extended monsoon episodes. The diagnostics, however, also indicate that for about 1/3 of the identified extended break and active episodes, inconsistencies in the budget terms suggest the precursor signals could lead to false alarms. Moreover, compared to observations, CFSv2 systematically simulates a greater number of extended active monsoon episodes.

  12. An Integrated Psychosocial Model of Relatives' Decision About Deceased Organ Donation (IMROD): Joining Pieces of the Puzzle

    PubMed Central

    López, Jorge S.; Soria-Oliver, Maria; Aramayona, Begoña; García-Sánchez, Rubén; Martínez, José M.; Martín, María J.

    2018-01-01

    Organ transplantation remains limited because the demand for organs far exceeds the supply. Though organ procurement is a complex process involving social, organizational, and clinical factors, one of the most relevant limitations of organ availability is family refusal to donate organs of a deceased relative. In the past decades, a remarkable corpus of evidence about the factors conditioning relatives' consent has been generated. However, research in the field has been carried out mainly by means of merely empirical approaches, and only partial attempts have been made to integrate the existing empirical evidence within conceptual and theoretically based frameworks. Accordingly, this work articulates the proposal of an Integrated Psychosocial Model of Relatives' Organ Donation (IMROD), which offers a systematic view of the factors and psychosocial processes involved in family decision and their interrelations. Relatives' experience is conceptualized as a decision process about the possibility of vicariously performing an altruistic behavior that takes place under one of the most stressful experiences of one's lifetime and in the context of interaction with different healthcare professionals. Drawing on this, in the proposed model, the influence of the implied factors and their interrelations/interactions is structured and interpreted according to their theoretically based relation with processes like rational/heuristic decision-making, uncertainty, stress, bereavement, emotional reactions, sense of reciprocity, sense of freedom to decide, and attitudes/intentions toward one's own and the deceased's organ donation. Our model also develops a processual perspective and suggests different decisional scenarios that may be reached as a result of the combinations of the considered factors. Each of these scenarios may imply different balances between factors that enhance or hinder donation, such as different levels of uncertainty and potential decisional conflict. Throughout our work, current controversial or inconsistent results are discussed and interpreted on the basis of the relationships that are posited in the proposed model. Finally, we suggest that the structure of the relationships and interactions contained in our model can be used by future research to guide the formulation of hypotheses and the interpretation of results. In this sense, specific guidelines and research questions are also proposed. PMID:29692744

  13. An Integrated Psychosocial Model of Relatives' Decision About Deceased Organ Donation (IMROD): Joining Pieces of the Puzzle.

    PubMed

    López, Jorge S; Soria-Oliver, Maria; Aramayona, Begoña; García-Sánchez, Rubén; Martínez, José M; Martín, María J

    2018-01-01

    Organ transplantation remains limited because the demand for organs far exceeds the supply. Though organ procurement is a complex process involving social, organizational, and clinical factors, one of the most relevant limitations of organ availability is family refusal to donate organs of a deceased relative. In the past decades, a remarkable corpus of evidence about the factors conditioning relatives' consent has been generated. However, research in the field has been carried out mainly by means of merely empirical approaches, and only partial attempts have been made to integrate the existing empirical evidence within conceptual and theoretically based frameworks. Accordingly, this work articulates the proposal of an Integrated Psychosocial Model of Relatives' Organ Donation (IMROD), which offers a systematic view of the factors and psychosocial processes involved in family decision and their interrelations. Relatives' experience is conceptualized as a decision process about the possibility of vicariously performing an altruistic behavior that takes place under one of the most stressful experiences of one's lifetime and in the context of interaction with different healthcare professionals. Drawing on this, in the proposed model, the influence of the implied factors and their interrelations/interactions is structured and interpreted according to their theoretically based relation with processes like rational/heuristic decision-making, uncertainty, stress, bereavement, emotional reactions, sense of reciprocity, sense of freedom to decide, and attitudes/intentions toward one's own and the deceased's organ donation. Our model also develops a processual perspective and suggests different decisional scenarios that may be reached as a result of the combinations of the considered factors. Each of these scenarios may imply different balances between factors that enhance or hinder donation, such as different levels of uncertainty and potential decisional conflict. Throughout our work, current controversial or inconsistent results are discussed and interpreted on the basis of the relationships that are posited in the proposed model. Finally, we suggest that the structure of the relationships and interactions contained in our model can be used by future research to guide the formulation of hypotheses and the interpretation of results. In this sense, specific guidelines and research questions are also proposed.

  14. Processing of angular motion and gravity information through an internal model.

    PubMed

    Laurens, Jean; Straumann, Dominik; Hess, Bernhard J M

    2010-09-01

    The vestibular organs in the base of the skull provide important information about head orientation and motion in space. Previous studies have suggested that both angular velocity information from the semicircular canals and information about head orientation and translation from the otolith organs are centrally processed in an internal model of head motion, using the principles of optimal estimation. This concept has been successfully applied to model behavioral responses to classical vestibular motion paradigms. This study measured the dynamics of the vestibulo-ocular reflex (VOR) during postrotatory tilt, tilt during optokinetic afternystagmus, and off-vertical-axis rotation. The influence of the otolith signal on the VOR was systematically varied by using a series of tilt angles. We found that the time constants of the responses varied almost identically as a function of gravity across these paradigms. We show that Bayesian modeling could predict the experimental results in an accurate and consistent manner. In contrast to other approaches, the Bayesian model also provides a plausible explanation of why these vestibulo-oculomotor responses occur as a consequence of an internal process of optimal motion estimation.

  15. Space Shuttle processing - A case study in artificial intelligence

    NASA Technical Reports Server (NTRS)

    Mollikarimi, Cindy; Gargan, Robert; Zweben, Monte

    1991-01-01

    A scheduling system incorporating AI is described and applied to the automated processing of the Space Shuttle. The unique problem of addressing the temporal, resource, and orbiter-configuration requirements of shuttle processing is described, with comparisons to traditional project management for manufacturing processes. The present scheduling system is developed to handle the late inputs and complex programs that characterize shuttle processing by incorporating fixed preemptive scheduling, constraint-based simulated annealing, and the characteristics of an 'anytime' algorithm. The Space Shuttle processing environment is modeled with 500 activities broken down into 4000 subtasks and with 1600 temporal constraints, 8000 resource constraints, and 3900 state requirements. The algorithm is shown to scale to very large problems and maintain anytime characteristics, suggesting that an automated scheduling process is achievable and potentially cost-effective.
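The constraint-based simulated annealing component can be illustrated on a toy instance. The task set, constraints, and cooling schedule below are hypothetical and far smaller than the 500-activity shuttle model; the cost function simply counts temporal- and resource-constraint violations, and annealing perturbs start times to drive that count toward zero.

```python
import math
import random

random.seed(0)
# Toy instance (hypothetical): task durations, resource demands,
# precedence pairs (a must finish before b starts), one shared resource.
durations = [3, 2, 4, 2, 3, 1, 2, 4]
demand    = [1, 2, 1, 1, 2, 1, 2, 1]
precedence = [(0, 2), (1, 2), (2, 5), (3, 4), (4, 6), (5, 7)]
capacity, horizon = 3, 20

def violations(start):
    """Count temporal- and resource-constraint violations of a schedule."""
    v = 0
    for a, b in precedence:                    # temporal constraints
        if start[a] + durations[a] > start[b]:
            v += 1
    for t in range(horizon):                   # capacity at each time step
        load = sum(demand[i] for i, s in enumerate(start)
                   if s <= t < s + durations[i])
        if load > capacity:
            v += load - capacity
    return v

start = [random.randrange(horizon) for _ in durations]
best, best_v = start[:], violations(start)
T = 5.0
for _ in range(20000):
    i = random.randrange(len(start))           # perturb one start time
    cand = start[:]
    cand[i] = random.randrange(horizon)
    dv = violations(cand) - violations(start)
    if dv <= 0 or random.random() < math.exp(-dv / T):
        start = cand                           # Metropolis acceptance
        if violations(start) < best_v:
            best, best_v = start[:], violations(start)
    T *= 0.9995                                # geometric cooling
print("best violation count:", best_v)
```

The "anytime" property falls out naturally: `best` always holds the least-violating schedule found so far, so the search can be interrupted at any point and still return a usable answer.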

  16. Aggregation Trade Offs in Family Based Recommendations

    NASA Astrophysics Data System (ADS)

    Berkovsky, Shlomo; Freyne, Jill; Coombe, Mac

    Personalized information access tools are frequently based on collaborative filtering recommendation algorithms. Collaborative filtering recommender systems typically suffer from a data sparsity problem, where systems do not have sufficient user data to generate accurate and reliable predictions. Prior research suggested using group-based user data in the collaborative filtering recommendation process to generate group-based predictions and partially resolve the sparsity problem. Although group recommendations are less accurate than personalized recommendations, they are more accurate than general non-personalized recommendations, which are the natural fallback when personalized recommendations cannot be generated. In this work we present initial results of a study that exploits the browsing logs of real families of users, gathered in an eHealth portal. The browsing logs allowed us to experimentally compare the accuracy of two group-based recommendation strategies: aggregated group models and aggregated predictions. Our results showed that aggregating individual models into group models resulted in more accurate predictions than aggregating individual predictions into group predictions.
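The two aggregation strategies compared in the study can be sketched side by side. The rating matrix and the per-profile predictor below are deliberately simple stand-ins (a mean-fill predictor, hypothetical ratings) for the collaborative filtering algorithm applied to the portal logs; what matters is where the averaging happens.

```python
import numpy as np

# Toy rating matrix for one family: 4 members x 6 items, NaN = unrated.
R = np.array([
    [5, 4, np.nan, 1, np.nan, 2],
    [4, np.nan, 5, 2, 1, np.nan],
    [np.nan, 4, 4, np.nan, 2, 1],
    [5, 5, np.nan, 1, np.nan, np.nan],
])

def predict(profile):
    """Placeholder item-score predictor: observed rating where present,
    otherwise the profile's own mean (a simple stand-in for the
    collaborative filtering predictor used in the paper)."""
    filled = profile.copy()
    filled[np.isnan(filled)] = np.nanmean(profile)
    return filled

# Strategy 1: aggregate MODELS, then predict from the merged profile.
group_profile = np.nanmean(R, axis=0)          # family-level pseudo-user
pred_from_model = predict(group_profile)

# Strategy 2: predict per member, then aggregate the PREDICTIONS.
pred_from_preds = np.mean([predict(u) for u in R], axis=0)

print("aggregated group model:", pred_from_model)
print("aggregated predictions:", pred_from_preds)
```

The two strategies generally disagree because strategy 1 pools the observed ratings before any gaps are filled, whereas strategy 2 lets each member's own mean fill their gaps first; the study's finding is that the model-first variant was the more accurate one.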

  17. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    NASA Astrophysics Data System (ADS)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the greatest influence on the output are identified, the causes of its variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, it facilitates the interpretation of the results, and it provides information that allows exploration of uncertainty at the process level and of how that uncertainty might affect model output. We present an example using the vegetation model BIOME-BGC.
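The process-level idea can be sketched with a toy model. The three "processes" and their equations below are illustrative only, not BIOME-BGC's: each process output is scaled by a perturbation factor, and the normalized change in the final output ranks the processes by influence.

```python
import numpy as np

# Minimal stand-in for a process-based model: three "processes"
# (light response, respiration, allocation) composed into one output.
def model(forcing, scale=None):
    s = {"photo": 1.0, "resp": 1.0, "alloc": 1.0}
    if scale:
        s.update(scale)
    gpp = s["photo"] * 10 * (1 - np.exp(-0.05 * forcing))   # light response
    resp = s["resp"] * 0.3 * gpp                            # respiration
    npp = s["alloc"] * (gpp - resp)                         # allocation
    return npp

forcing = np.linspace(0, 100, 50)
base = model(forcing)

# Process-level sensitivity: perturb each process by +/-10% and record
# the output change, normalized by the baseline and the perturbation size.
sens = {}
for proc in ["photo", "resp", "alloc"]:
    hi = model(forcing, {proc: 1.1})
    lo = model(forcing, {proc: 0.9})
    sens[proc] = np.mean(np.abs(hi - lo)) / np.mean(np.abs(base)) / 0.2
print(sens)
```

A sensitivity near 1 means the output responds one-for-one to that process (here the light-response and allocation processes), while the respiration process is damped by its smaller share of the balance; the same ranking logic is what lets the method shrink the search space relative to parameter-by-parameter analysis.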

  18. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  19. Catchments as non-linear filters: evaluating data-driven approaches for spatio-temporal predictions in ungauged basins

    NASA Astrophysics Data System (ADS)

    Bellugi, D. G.; Tennant, C.; Larsen, L.

    2016-12-01

    Catchment and climate heterogeneity complicate prediction of runoff across time and space, and the resulting parameter uncertainty can lead to large accumulated errors in hydrologic models, particularly in ungauged basins. Recently, data-driven modeling approaches have been shown to avoid the accumulated uncertainty associated with many physically-based models, providing an appealing alternative for hydrologic prediction. However, the effectiveness of different methods in hydrologically and geomorphically distinct catchments, and the robustness of these methods to changing climate and changing hydrologic processes, remain to be tested. Here, we evaluate the use of machine learning techniques to predict daily runoff across time and space using only essential climatic forcing (e.g. precipitation, temperature, and potential evapotranspiration) time series as model input. Model training and testing were done using a high-quality dataset of daily runoff and climate forcing data spanning 25+ years for 600+ minimally-disturbed catchments (drainage area range 5-25,000 km2, median size 336 km2) that cover a wide range of climatic and physical characteristics. Preliminary results using Support Vector Regression (SVR) suggest that in some catchments this nonlinear regression technique can accurately predict daily runoff, while the same approach fails in other catchments, indicating that the representation of climate inputs and/or catchment filter characteristics in the model structure needs further refinement to increase performance. We bolster this analysis by using Sparse Identification of Nonlinear Dynamics (a sparse symbolic regression technique) to uncover the governing equations that describe runoff processes in catchments where SVR performed well and in ones where it performed poorly, thereby enabling inference about governing processes. 
This provides a robust means of examining how catchment complexity influences runoff prediction skill, and represents a contribution towards the integration of data-driven inference and physically-based models.
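The forcing-only SVR setup can be sketched on synthetic data. The catchment response below (an antecedent-precipitation memory term feeding a log-nonlinearity) and the SVR hyperparameters are invented for illustration; this is not the study's 600-catchment dataset.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic daily forcing for one hypothetical catchment: precipitation,
# temperature, potential ET. Runoff is a nonlinear filter of the forcing.
n = 2000
precip = rng.gamma(0.6, 4.0, n)
temp = 10 + 8 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
pet = np.clip(0.3 * temp, 0, None)
# antecedent precipitation index as a simple catchment memory term
api = np.convolve(precip, 0.9 ** np.arange(30), mode="full")[:n]
runoff = np.log1p(api) - 0.05 * pet + rng.normal(0, 0.1, n)

X = np.column_stack([precip, temp, pet, api])
split = 1500
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.05))
model.fit(X[:split], runoff[:split])
pred = model.predict(X[split:])

# Nash-Sutcliffe efficiency on the held-out period
obs = runoff[split:]
nse = 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
print(f"test NSE = {nse:.2f}")
```

Dropping the `api` column from `X` degrades the fit sharply, mirroring the paper's point that how the catchment filter is represented in the inputs governs prediction skill.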

  20. Identifying Hydrogeological Controls of Catchment Low-Flow Dynamics Using Physically Based Modelling

    NASA Astrophysics Data System (ADS)

    Cochand, F.; Carlier, C.; Staudinger, M.; Seibert, J.; Hunkeler, D.; Brunner, P.

    2017-12-01

    Identifying the key catchment characteristics and processes that control the hydrological response under low-flow conditions is important for assessing catchments' vulnerability to dry periods. In the context of a Swiss Federal Office for the Environment (FOEN) project, the low-flow behaviours of two mountainous catchments were investigated. These neighboring catchments are characterized by the same meteorological conditions but feature completely different river flow dynamics. The Roethenbach is characterized by high peak flows and low mean flows. Conversely, the Langete is characterized by relatively low peak flows and high mean flow rates. To understand the fundamentally different behaviour of the two catchments, a physically-based surface-subsurface flow HydroGeoSphere (HGS) model was developed for each catchment. The main advantage of a physically-based model is its ability to realistically reproduce processes that play a key role during low-flow periods, such as surface-subsurface interactions and evapotranspiration. Both models were calibrated to reproduce measured groundwater heads and the surface flow dynamics. Subsequently, the calibrated models were used to explore the fundamental physics that control hydrological processes during low-flow periods. To achieve this, a comparative sensitivity analysis of the model parameters of both catchments was carried out. Results show that the hydraulic conductivity of the bedrock (and weathered bedrock) controls the catchment water dynamics in both models. Conversely, the properties of other geological formations, such as the hydraulic conductivity or porosity of the alluvial aquifer and soil layer, play a less important role. These results significantly change our perception of catchment streamflow dynamics and, more specifically, of how to assess catchment vulnerability to dry periods. 
This study suggests that by analysing catchment-scale bedrock properties, the catchment dynamics and the vulnerability to dry periods may be assessed.

  1. Space-time dynamics of Stem Cell Niches: a unified approach for Plants.

    PubMed

    Pérez, Maria Del Carmen; López, Alejandro; Padilla, Pablo

    2013-06-01

    Many complex systems cannot be analyzed using traditional mathematical tools, due to their irreducible nature. This makes it necessary to develop models that can be implemented computationally to simulate their evolution. Examples of such models are cellular automata, evolutionary algorithms, complex networks, agent-based models, symbolic dynamics and dynamical systems techniques. We review some representative approaches to modeling the stem cell niche in Arabidopsis thaliana and the basic biological mechanisms that underlie its formation and maintenance. We propose a mathematical model based on cellular automata for describing the space-time dynamics of the stem cell niche in the root. By making minimal assumptions about the cell communication process documented in experiments, we classify the basic developmental features of the stem cell niche, including its basic structural architecture, and suggest that they could be understood as the result of generic mechanisms given by short- and long-range signals. This could be a first step in understanding why different stem cell niches share similar topologies, not only in plants. We also suggest that this organization is a robust consequence of the way information is processed by the cells and is to some extent independent of the detailed features of the signaling mechanism.

  2. Space-time dynamics of stem cell niches: a unified approach for plants.

    PubMed

    Pérez, Maria del Carmen; López, Alejandro; Padilla, Pablo

    2013-04-02

    Many complex systems cannot be analyzed using traditional mathematical tools, due to their irreducible nature. This makes it necessary to develop models that can be implemented computationally to simulate their evolution. Examples of such models are cellular automata, evolutionary algorithms, complex networks, agent-based models, symbolic dynamics and dynamical systems techniques. We review some representative approaches to modeling the stem cell niche in Arabidopsis thaliana and the basic biological mechanisms that underlie its formation and maintenance. We propose a mathematical model based on cellular automata for describing the space-time dynamics of the stem cell niche in the root. By making minimal assumptions about the cell communication process documented in experiments, we classify the basic developmental features of the stem cell niche, including its basic structural architecture, and suggest that they could be understood as the result of generic mechanisms given by short- and long-range signals. This could be a first step in understanding why different stem cell niches share similar topologies, not only in plants. We also suggest that this organization is a robust consequence of the way information is processed by the cells and is to some extent independent of the detailed features of the signaling mechanism.
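The combination of a long-range maintenance signal and short-range lateral inhibition can be made concrete with a minimal one-dimensional cellular automaton. The rule, decay constant, and threshold below are invented for illustration and are not the paper's model.

```python
import numpy as np

# Minimal 1-D cellular automaton sketch of a niche: a cell stays a stem
# cell (state 1) if it receives a long-range maintenance signal from an
# organizer at position 0 AND is not laterally inhibited by too many
# stem-cell neighbours. Rule and parameters are illustrative only.
def step(state, decay=0.5, threshold=0.15):
    n = len(state)
    long_range = np.exp(-decay * np.arange(n))                # organizer signal
    short_range = np.convolve(state, [1, 0, 1], mode="same")  # neighbour count
    return ((long_range > threshold) & (short_range < 2)).astype(int)

state = np.zeros(20, dtype=int)
state[0] = 1
for _ in range(10):
    state = step(state)
print(state)
```

The long-range term confines stem identity to a compact region near the organizer while the short-range term shapes the pattern inside it; distant cells never acquire stem identity regardless of their neighbours, which is the robustness-to-signalling-detail point made above.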

  3. Multi-model comparison on the effects of climate change on tree species in the eastern U.S.: results from an enhanced niche model and process-based ecosystem and landscape models

    Treesearch

    Louis R. Iverson; Frank R. Thompson; Stephen Matthews; Matthew Peters; Anantha Prasad; William D. Dijak; Jacob Fraser; Wen J. Wang; Brice Hanberry; Hong He; Maria Janowiak; Patricia Butler; Leslie Brandt; Chris Swanston

    2016-01-01

    Context. Species distribution models (SDM) establish statistical relationships between the current distribution of species and key attributes whereas process-based models simulate ecosystem and tree species dynamics based on representations of physical and biological processes. TreeAtlas, which uses DISTRIB SDM, and Linkages and LANDIS PRO, process...

  4. Does the concept of resilience contribute to understanding good quality of life in the context of epilepsy?

    PubMed

    Ring, Adele; Jacoby, Ann; Baker, Gus A; Marson, Anthony; Whitehead, Margaret M

    2016-03-01

    A significant body of research highlights negative impacts of epilepsy for individual quality of life (QOL). Poor seizure control is frequently associated with reporting of poor QOL and good seizure control with good QOL; however, this is not a universal finding. Evidence suggests that some people enjoy good QOL despite ongoing seizures while others report poor QOL despite good seizure control. Understanding the factors that influence QOL for people with epilepsy and the processes via which such factors exert their influence is central to the development of interventions to support people with epilepsy to experience the best possible QOL. We present findings of a qualitative investigation exploring influences and processes on QOL for people with epilepsy. We describe the clinical, psychological, and social factors contributing to QOL. In particular, we focus on the value of the concept of resilience for understanding quality of life in epilepsy. Based on our analysis, we propose a model of resilience wherein four key component sets of factors interact to determine QOL. This model reflects the fluid nature of resilience that, we suggest, is subject to change based on shifts within the individual components and the interactions between them. The model offers a representation of the complex influences that act and interact to either mitigate or further compound the negative impacts of epilepsy on individual QOL. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. The large-scale organization of shape processing in the ventral and dorsal pathways

    PubMed Central

    Culham, Jody C; Plaut, David C; Behrmann, Marlene

    2017-01-01

    Although shape perception is considered a function of the ventral visual pathway, evidence suggests that the dorsal pathway also derives shape-based representations. In two psychophysics and neuroimaging experiments, we characterized the response properties, topographical organization and perceptual relevance of these representations. In both pathways, shape sensitivity increased from early visual cortex to extrastriate cortex but then decreased in anterior regions. Moreover, the lateral aspect of the ventral pathway and posterior regions of the dorsal pathway were sensitive to the availability of fundamental shape properties, even for unrecognizable images. This apparent representational similarity between the posterior-dorsal and lateral-ventral regions was corroborated by a multivariate analysis. Finally, as with the ventral pathway, the activation profile of posterior dorsal regions was correlated with recognition performance, suggesting a possible contribution to perception. These findings challenge a strict functional dichotomy between the pathways and suggest a more distributed model of shape processing. PMID:28980938

  6. NEVER forget: negative emotional valence enhances recapitulation.

    PubMed

    Bowen, Holly J; Kark, Sarah M; Kensinger, Elizabeth A

    2018-06-01

    A hallmark feature of episodic memory is that of "mental time travel," whereby an individual feels they have returned to a prior moment in time. Cognitive and behavioral neuroscience methods have revealed a neurobiological counterpart: Successful retrieval often is associated with reactivation of a prior brain state. We review the emerging literature on memory reactivation and recapitulation, and we describe evidence for the effects of emotion on these processes. Based on this review, we propose a new model: Negative Emotional Valence Enhances Recapitulation (NEVER). This model diverges from existing models of emotional memory in three key ways. First, it underscores the effects of emotion during retrieval. Second, it stresses the importance of sensory processing to emotional memory. Third, it emphasizes how emotional valence - whether an event is negative or positive - affects the way that information is remembered. The model specifically proposes that, as compared to positive events, negative events both trigger increased encoding of sensory detail and elicit a closer resemblance between the sensory encoding signature and the sensory retrieval signature. The model also proposes that negative valence enhances the reactivation and storage of sensory details over offline periods, leading to a greater divergence between the sensory recapitulation of negative and positive memories over time. Importantly, the model proposes that these valence-based differences occur even when events are equated for arousal, thus rendering an exclusively arousal-based theory of emotional memory insufficient. We conclude by discussing implications of the model and suggesting directions for future research to test the tenets of the model.

  7. Development of an Online Smartphone-Based eLearning Nutrition Education Program for Low-Income Individuals.

    PubMed

    Stotz, Sarah; Lee, Jung Sun

    2018-01-01

    The objective of this report was to describe the development process of an innovative smartphone-based electronic learning (eLearning) nutrition education program targeted to Supplemental Nutrition Assistance Program-Education-eligible individuals, entitled Food eTalk. Lessons learned from the Food eTalk development process suggest that it is critical to include all key team members from the program's inception using effective inter-team communication systems, understand the unique resources needed, budget ample time for development, and employ an iterative development and evaluation model. These lessons have implications for researchers and funding agencies in developing an innovative evidence-based eLearning nutrition education program to an increasingly technology-savvy, low-income audience. Copyright © 2016 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  8. What motivates people to participate more in community-based coalitions?

    PubMed

    Wells, Rebecca; Ward, Ann J; Feinberg, Mark; Alexander, Jeffrey A

    2008-09-01

    The purpose of this study was to identify potential opportunities for improving member participation in community-based coalitions. We hypothesized that opportunities for influence and process competence would each foster higher levels of individual member participation. We tested these hypotheses in a sample of 818 members within 79 youth-oriented coalitions. Opportunities for influence were measured as members' perceptions of an inclusive board leadership style and members' reported committee roles. Coalition process competence was measured through member perceptions of strategic board directedness and meeting effectiveness. Members reported three types of participation within meetings as well as how much time they devoted to coalition business beyond meetings. Generalized linear models accommodated clustering of individuals within coalitions. Opportunities for influence were associated with individuals' participation both within and beyond meetings. Coalition process competence was not associated with participation. These results suggest that leadership inclusivity rather than process competence may best facilitate member participation.

  9. Interactions between facial emotion and identity in face processing: evidence based on redundancy gains.

    PubMed

    Yankouskaya, Alla; Booth, David A; Humphreys, Glyn

    2012-11-01

    Interactions between the processing of emotion expression and form-based information from faces (facial identity) were investigated using the redundant-target paradigm, in which we specifically tested whether identity and emotional expression are integrated in a superadditive manner (Miller, Cognitive Psychology 14:247-279, 1982). In Experiments 1 and 2, participants performed emotion and face identity judgments on faces with sad or angry emotional expressions. Responses to redundant targets were faster than responses to either single target when a universal emotion was conveyed, and performance violated the predictions from a model assuming independent processing of emotion and face identity. Experiment 4 showed that these effects were not modulated by varying interstimulus and nontarget contingencies, and Experiment 5 demonstrated that the redundancy gains were eliminated when faces were inverted. Taken together, these results suggest that the identification of emotion and facial identity interact in face processing.
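The violation of independent-processing predictions cited above is usually tested with Miller's race-model inequality: under independence, the redundant-target CDF cannot exceed the sum of the single-target CDFs. A sketch on synthetic reaction times (all means and spreads are invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reaction-time samples (ms) for the redundant-target
# paradigm. Coactivation is mimicked by making redundant trials faster
# than an independent race between the single targets would allow.
rt_emotion = rng.normal(520, 60, 400)    # single target: emotion
rt_identity = rng.normal(540, 60, 400)   # single target: identity
rt_redundant = rng.normal(450, 55, 400)  # redundant target

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at each time in `t`."""
    return np.mean(sample[:, None] <= t, axis=0)

t = np.linspace(300, 700, 81)
# Race-model inequality: any positive value (redundant CDF exceeding the
# capped sum of single-target CDFs) argues against independent processing.
violation = ecdf(rt_redundant, t) - np.minimum(
    ecdf(rt_emotion, t) + ecdf(rt_identity, t), 1.0)
print("max violation:", violation.max())
```

A positive maximum violation is the "superadditive" signature the abstract refers to; inverting the faces (Experiment 5) would correspond to redundant RTs slow enough that `violation` stays at or below zero.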

  10. Agile science: creating useful products for behavior change in the real world.

    PubMed

    Hekler, Eric B; Klasnja, Predrag; Riley, William T; Buman, Matthew P; Huberty, Jennifer; Rivera, Daniel E; Martin, Cesar A

    2016-06-01

    Evidence-based practice is important for behavioral interventions, but there is debate on how best to support real-world behavior change. The purpose of this paper is to define products and a preliminary process for efficiently and adaptively creating and curating a knowledge base for behavior change for real-world implementation. We look to evidence-based practice suggestions and draw parallels to software development. We argue to target three products: (1) the smallest, meaningful, self-contained, and repurposable behavior change modules of an intervention; (2) "computational models" that define the interaction between modules, individuals, and context; and (3) "personalization" algorithms, which are decision rules for intervention adaptation. The "agile science" process includes a generation phase, whereby contender operational definitions and constructs of the three products are created and assessed for feasibility, and an evaluation phase, whereby effect size estimates/causal inferences are created. The process emphasizes early-and-often sharing. If correct, agile science could enable a more robust knowledge base for behavior change.

  11. Electroencephalography Based Fusion Two-Dimensional (2D)-Convolution Neural Networks (CNN) Model for Emotion Recognition System.

    PubMed

    Kwon, Yea-Hoon; Shin, Sae-Byuk; Kim, Shin-Dug

    2018-04-30

    The purpose of this study is to improve human emotion classification accuracy using a convolutional neural network (CNN) model and to suggest an overall method to classify emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through CNN. Therefore, we propose a suitable CNN model for feature extraction by tuning hyperparameters in convolution filters. The EEG signal is preprocessed prior to convolution by a wavelet transform while considering time and frequency simultaneously. We use the Database for Emotion Analysis Using Physiological Signals open dataset to verify the proposed process, achieving 73.4% accuracy, a significant performance improvement over current best-practice models.
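The zero-crossing-rate preprocessing step can be sketched as framing the signal and counting sign changes per frame. The window and hop lengths below are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of the GSR preprocessing step: frame the mean-removed signal
# and compute the zero-crossing rate per frame.
def zero_crossing_rate(signal, frame_len=128, hop=64):
    x = signal - signal.mean()
    n_frames = 1 + (len(x) - frame_len) // hop
    zcr = np.empty(n_frames)
    for i in range(n_frames):
        frame = x[i * hop : i * hop + frame_len]
        # fraction of adjacent sample pairs whose sign differs
        zcr[i] = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
    return zcr

slow = np.cumsum(rng.normal(0, 1, 4096))  # drifting stand-in (GSR-like)
fast = rng.normal(0, 1, 4096)             # broadband stand-in
print(zero_crossing_rate(slow).mean(), zero_crossing_rate(fast).mean())
```

The rate separates slow, tonic activity from fast fluctuations: the drifting signal crosses its mean rarely, while the broadband one does so on roughly half of the sample pairs.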

  12. A Thermal Runaway Failure Model for Low-Voltage BME Ceramic Capacitors with Defects

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2017-01-01

    The reliability of base metal electrode (BME) multilayer ceramic capacitors (MLCCs), which until recently were used mostly in commercial applications, has been improved substantially by using new materials and processes. Currently, the time to inception of intrinsic wear-out failures in high-quality capacitors is much greater than the mission duration in most high-reliability applications. However, in capacitors with defects, degradation processes might accelerate substantially and cause infant-mortality failures. In this work, a physical model that relates the presence of defects to reduced breakdown voltages and decreased times to failure is suggested. The effect of defect size is analyzed using a thermal runaway model of failures. The adequacy of highly accelerated life testing (HALT) for predicting reliability at normal operating conditions and the limitations of voltage acceleration are considered. The applicability of the model to BME capacitors with cracks is discussed and validated experimentally.
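The thermal runaway mechanism can be sketched as a heat balance: leakage power V^2/R(T) grows exponentially with temperature while heat removal grows only linearly, so above some voltage no stable operating temperature exists. Modelling a defect as a reduced baseline leakage resistance then lowers that runaway voltage. All parameter values below are invented for illustration, not measured BME MLCC data.

```python
import numpy as np

# Thermal runaway sketch: leakage power V^2/R(T) vs. heat removal
# k*(T - T_amb), with leakage resistance falling exponentially with
# temperature. A defect is modelled as a reduced baseline resistance.
def runaway_voltage(R0, b=0.04, k=0.003, T_amb=25.0):
    """Largest voltage for which a stable steady-state temperature exists."""
    T = np.linspace(T_amb, T_amb + 400, 4001)
    for V in np.arange(1.0, 200.0, 0.5):
        heat_in = V**2 / (R0 * np.exp(-b * (T - T_amb)))
        heat_out = k * (T - T_amb)
        if not np.any(heat_in <= heat_out):   # no intersection -> runaway
            return V - 0.5
    return np.inf

v_good = runaway_voltage(R0=1e5)    # defect-free part
v_defect = runaway_voltage(R0=1e4)  # defect: 10x lower leakage resistance
print(v_good, v_defect)
```

The tenfold drop in leakage resistance lowers the runaway voltage by roughly the square root of ten, illustrating how defect severity maps onto reduced breakdown voltage in a model of this form.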

  13. A model of oil-generation in a waterlogged and closed system

    NASA Astrophysics Data System (ADS)

    Zhigao, He

    This paper presents a new model of synthetic effects on oil generation in a waterlogged and closed system. It is suggested based on information about oil in high-pressure layers (including gas dissolved in oil), marsh gas and its fermentative solution, fermentation processes and mechanisms, gaseous hydrocarbons released from carbonate rocks by acid treatment, oil-field water, recent and ancient sediments, and simulation experiments on artificial marsh gas and biological action. The model differs completely from the theory of oil generation by thermal degradation of kerogen; instead it stresses the synthetic effects of oil generation in special waterlogged and closed geological systems, the importance of pressure in oil-forming processes, and direct oil generation by micro-organisms, which constitutes a particular biochemical reaction. Another feature of this model is that the generation, migration and accumulation of petroleum are considered as a whole.

  14. A Bayesian methodological framework for accommodating interannual variability of nutrient loading with the SPARROW model

    NASA Astrophysics Data System (ADS)

    Wellen, Christopher; Arhonditsis, George B.; Labencki, Tanya; Boyd, Duncan

    2012-10-01

    Regression-type, hybrid empirical/process-based models (e.g., SPARROW, PolFlow) have assumed a prominent role in efforts to estimate the sources and transport of nutrient pollution at river basin scales. However, almost no attempts have been made to explicitly accommodate interannual nutrient loading variability in their structure, despite empirical and theoretical evidence indicating that the associated source/sink processes are quite variable at annual timescales. In this study, we present two methodological approaches to accommodate interannual variability with the Spatially Referenced Regressions on Watershed attributes (SPARROW) nonlinear regression model. The first strategy uses the SPARROW model to estimate a static baseline load and climatic variables (e.g., precipitation) to drive the interannual variability. The second approach allows the source/sink processes within the SPARROW model to vary at annual timescales using dynamic parameter estimation techniques akin to those used in dynamic linear models. Model parameterization is founded upon Bayesian inference techniques that explicitly consider calibration data and model uncertainty. Our case study is the Hamilton Harbor watershed, a mixed agricultural and urban residential area located at the western end of Lake Ontario, Canada. Our analysis suggests that dynamic parameter estimation is the more parsimonious of the two strategies tested and can offer insights into the temporal structural changes associated with watershed functioning. Consistent with empirical and theoretical work, model estimated annual in-stream attenuation rates varied inversely with annual discharge. Estimated phosphorus source areas were concentrated near the receiving water body during years of high in-stream attenuation and dispersed along the main stems of the streams during years of low attenuation, suggesting that nutrient source areas are subject to interannual variability.
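The dynamic-parameter idea can be sketched with a SPARROW-like load equation L = S·exp(-k·tau) in which the in-stream attenuation rate k is allowed to vary by year, here inversely with annual discharge. The sources, travel times, and the k-discharge link are synthetic illustrations, not the Hamilton Harbor calibration.

```python
import numpy as np

rng = np.random.default_rng(3)

# SPARROW-like in-stream attenuation with a yearly-varying rate k.
years, reaches = 10, 40
discharge = rng.uniform(0.5, 2.0, years)   # annual discharge index
k_true = 0.3 / discharge                   # attenuation falls as flow rises
tau = rng.uniform(0.2, 3.0, reaches)       # reach travel times (days)
S = rng.uniform(50, 150, reaches)          # reach source loads

k_hat = np.empty(years)
for y in range(years):
    # observed delivered loads with multiplicative noise
    load = S * np.exp(-k_true[y] * tau) * np.exp(rng.normal(0, 0.05, reaches))
    # log-linear fit per year: log(load / S) = -k * tau
    k_hat[y] = -np.polyfit(tau, np.log(load / S), 1)[0]

corr = np.corrcoef(k_hat, discharge)[0, 1]
print("recovered k vs discharge correlation:", corr)
```

A static model would force a single k on all ten years; letting k vary recovers the inverse attenuation-discharge relationship the study reports, at the cost of one extra parameter per year (which the Bayesian random-walk formulation regularizes).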

  15. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    PubMed

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
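The model's generative form (variance drawn from an inverse gamma distribution, signal zero-mean Gaussian given that variance) can be sketched as below. The shape/scale values and the moment-matching estimator are illustrative stand-ins, not the paper's marginal-likelihood procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Generative sketch: per-window variance ~ InvGamma(alpha, beta), and the
# EMG samples are zero-mean Gaussian conditional on that variance.
alpha, beta = 5.0, 8.0
n_windows, win = 2000, 64
true_var = stats.invgamma.rvs(alpha, scale=beta, size=n_windows, random_state=rng)
emg = rng.normal(0.0, np.sqrt(np.repeat(true_var, win)))

# Crude stand-in for the estimator: local variance from squared samples,
# then moment-match an inverse gamma to those local estimates
# (InvGamma: mean = b/(a-1), var = b^2 / ((a-1)^2 (a-2))).
local_var = (emg ** 2).reshape(n_windows, win).mean(axis=1)
m, v = local_var.mean(), local_var.var()
alpha_hat = m ** 2 / v + 2
beta_hat = m * (alpha_hat - 1)
print(alpha_hat, beta_hat)
```

The recovered shape is biased slightly low because the windowed variance estimates carry sampling noise on top of the true variance spread, which is precisely the superimposed noise the inverse gamma formulation is designed to absorb.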

  16. Modeling Markov switching ARMA-GARCH neural networks models and an application to forecasting stock returns.

    PubMed

    Bildirici, Melike; Ersin, Özgür

    2014-01-01

    The study has two aims. The first aim is to propose a family of nonlinear GARCH models that incorporate fractional integration and asymmetric power properties to MS-GARCH processes. The second purpose of the study is to augment the MS-GARCH type models with artificial neural networks to benefit from the universal approximation properties to achieve improved forecasting accuracy. Therefore, the proposed Markov-switching MS-ARMA-FIGARCH, APGARCH, and FIAPGARCH processes are further augmented with MLP, Recurrent NN, and Hybrid NN type neural networks. The MS-ARMA-GARCH family and MS-ARMA-GARCH-NN family are utilized for modeling the daily stock returns in an emerging market, the Istanbul Stock Index (ISE100). Forecast accuracy is evaluated in terms of MAE, MSE, and RMSE error criteria and Diebold-Mariano equal forecast accuracy tests. The results suggest that the fractionally integrated and asymmetric power counterparts of Gray's MS-GARCH model provided promising results, while the best results are obtained for their neural network based counterparts. Further, among the models analyzed, the models based on the Hybrid-MLP and Recurrent-NN, the MS-ARMA-FIAPGARCH-HybridMLP, and MS-ARMA-FIAPGARCH-RNN provided the best forecast performances over the baseline single regime GARCH models and further, over the Gray's MS-GARCH model. Therefore, the models are promising for various economic applications.

  17. Modeling Markov Switching ARMA-GARCH Neural Networks Models and an Application to Forecasting Stock Returns

    PubMed Central

    Bildirici, Melike; Ersin, Özgür

    2014-01-01

    The study has two aims. The first aim is to propose a family of nonlinear GARCH models that incorporate fractional integration and asymmetric power properties to MS-GARCH processes. The second purpose of the study is to augment the MS-GARCH type models with artificial neural networks to benefit from the universal approximation properties to achieve improved forecasting accuracy. Therefore, the proposed Markov-switching MS-ARMA-FIGARCH, APGARCH, and FIAPGARCH processes are further augmented with MLP, Recurrent NN, and Hybrid NN type neural networks. The MS-ARMA-GARCH family and MS-ARMA-GARCH-NN family are utilized for modeling the daily stock returns in an emerging market, the Istanbul Stock Index (ISE100). Forecast accuracy is evaluated in terms of MAE, MSE, and RMSE error criteria and Diebold-Mariano equal forecast accuracy tests. The results suggest that the fractionally integrated and asymmetric power counterparts of Gray's MS-GARCH model provided promising results, while the best results are obtained for their neural network based counterparts. Further, among the models analyzed, the models based on the Hybrid-MLP and Recurrent-NN, the MS-ARMA-FIAPGARCH-HybridMLP, and MS-ARMA-FIAPGARCH-RNN provided the best forecast performances over the baseline single regime GARCH models and further, over the Gray's MS-GARCH model. Therefore, the models are promising for various economic applications. PMID:24977200
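The regime-switching building block that the MS-ARMA-GARCH-NN family extends can be simulated in a few lines: a two-state Markov chain selects the GARCH(1,1) parameters each day. Transition probabilities and regime parameters below are illustrative, not fitted ISE100 values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-regime Markov-switching GARCH(1,1) return simulation.
P = np.array([[0.98, 0.02],    # calm -> calm / turbulent
              [0.05, 0.95]])   # turbulent -> calm / turbulent
omega = np.array([0.01, 0.20])
alpha = np.array([0.05, 0.15])
beta_ = np.array([0.90, 0.80])

n = 5000
state = 0
h = omega[0] / (1 - alpha[0] - beta_[0])  # start at calm unconditional variance
r = np.empty(n)
states = np.empty(n, dtype=int)
for t in range(n):
    state = rng.choice(2, p=P[state])
    h = omega[state] + alpha[state] * (r[t - 1] ** 2 if t else 0.0) + beta_[state] * h
    r[t] = np.sqrt(h) * rng.normal()
    states[t] = state

print("vol calm:", r[states == 0].std(), "vol turbulent:", r[states == 1].std())
```

The persistent regimes produce the volatility clustering that a single-regime GARCH struggles with; the paper's neural-network variants then replace or augment the parametric conditional-variance map with an MLP, recurrent, or hybrid approximator.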

  18. Simulating single word processing in the classic aphasia syndromes based on the Wernicke-Lichtheim-Geschwind theory.

    PubMed

    Weems, Scott A; Reggia, James A

    2006-09-01

    The Wernicke-Lichtheim-Geschwind (WLG) theory of the neurobiological basis of language is of great historical importance, and it continues to exert a substantial influence on most contemporary theories of language in spite of its widely recognized limitations. Here, we suggest that neurobiologically grounded computational models based on the WLG theory can provide a deeper understanding of which of its features are plausible and where the theory fails. As a first step in this direction, we created a model of the interconnected left and right neocortical areas that are most relevant to the WLG theory, and used it to study visual-confrontation naming, auditory repetition, and auditory comprehension performance. No specific functionality is assigned a priori to model cortical regions, other than that implicitly present due to their locations in the cortical network and a higher learning rate in left hemisphere regions. Following learning, the model successfully simulates confrontation naming and word repetition, and acquires a unique internal representation in parietal regions for each named object. Simulated lesions to the language-dominant cortical regions produce patterns of single word processing impairment reminiscent of those postulated historically in the classic aphasia syndromes. These results indicate that WLG theory, instantiated as a simple interconnected network of model neocortical regions familiar to any neuropsychologist/neurologist, captures several fundamental "low-level" aspects of neurobiological word processing and their impairment in aphasia.

  19. Dynamics of cross-bridge cycling, ATP hydrolysis, force generation, and deformation in cardiac muscle

    PubMed Central

    Tewari, Shivendra G.; Bugenhagen, Scott M.; Palmer, Bradley M.; Beard, Daniel A.

    2015-01-01

    Despite extensive study over the past six decades, the coupling of chemical reactions and mechanical processes in muscle dynamics is not well understood. We lack a theoretical description of how chemical processes (metabolite binding, ATP hydrolysis) influence and are influenced by mechanical processes (deformation and force generation). To address this need, a mathematical model of the muscle cross-bridge (XB) cycle based on Huxley’s sliding filament theory is developed that explicitly accounts for the chemical transformation events and the influence of strain on state transitions. The model is identified based on elastic and viscous moduli data from mouse and rat myocardial strips over a range of perturbation frequencies and MgATP and inorganic phosphate (Pi) concentrations. Simulations of the identified model reproduce the observed effects of MgATP and MgADP on the rate of force development. Furthermore, simulations reveal that the rate of force re-development measured in slack-restretch experiments is not directly proportional to the rate of XB cycling. For these experiments, the model predicts that the observed increase in the rate of force generation with increased Pi concentration is due to inhibition of cycle turnover by Pi. Finally, the model captures the observed phenomenon of force yielding, suggesting that it is a result of rapid detachment of stretched attached myosin heads. PMID:25681584

  20. An approach to developing an integrated pyroprocessing simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyo Jik; Ko, Won Il; Choi, Sung Yeol

    Pyroprocessing has been studied for a decade as one of the promising fuel recycling options in Korea. We have built a pyroprocessing integrated inactive demonstration facility (PRIDE) to assess the feasibility of integrated pyroprocessing technology and the scale-up issues of the processing equipment. Even though such a facility cannot substitute for a real integrated facility using spent nuclear fuel (SF), many insights can be obtained from the world's largest integrated pyroprocessing operation. In order to complement or overcome such limited test-based research, a pyroprocessing modelling and simulation study began in 2011. The Korea Atomic Energy Research Institute (KAERI) suggested a modelling architecture for the development of a multi-purpose pyroprocessing simulator consisting of three tiers of models: unit process, operation, and plant level. The unit process model can be addressed using governing equations or empirical equations as a continuous system (CS). In contrast, the operation model describes operational behaviors as a discrete event system (DES). The plant-level model integrates the unit process and operation models with various analysis modules. An interface with different systems, the incorporation of different codes, a process-centered database design, and a dynamic material flow are discussed as necessary components for building a framework for the plant-level model. A sample model embodying these engineering issues was thoroughly reviewed, verifying the architecture for building the plant-level model. By analyzing a combined process-and-operation model, we show that the suggested approach is effective for comprehensively understanding an integrated dynamic material flow. This paper addresses the current status of pyroprocessing modelling and simulation activity at KAERI and its path forward.

  1. Collaboration or negotiation: two ways of interacting suggest how shared thinking develops.

    PubMed

    Mejía-Arauz, Rebeca; Rogoff, Barbara; Dayton, Andrew; Henne-Ochoa, Richard

    2018-03-09

    This paper contrasts two ways that shared thinking can be conceptualized: as negotiation, where individuals join their separate ideas, or collaboration, as people mutually engage together in a unified process, as an ensemble. We argue that these paradigms are culturally based, with the negotiation model fitting within an assumption system of separate entities-an assumption system we believe to be common in psychology and in middle-class European American society-and the collaboration model fitting within a holistic worldview that appears to be common in Indigenous-heritage communities of the Americas. We discuss cultural differences in children's interactions-as negotiation or collaboration-that suggest how these distinct paradigms develop. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. In-vitro development of vitrified-warmed bovine oocytes after activation may be predicted based on mathematical modelling of cooling and warming rates during vitrification, storage and sample removal.

    PubMed

    Sansinena, Marina; Santos, Maria Victoria; Chirife, Jorge; Zaritzky, Noemi

    2018-05-01

    Heat transfer during cooling and warming is difficult to measure in cryo-devices; mathematical modelling is an alternative method that can describe these processes. In this study, we tested the validity of one such model by assessing in-vitro development of vitrified and warmed bovine oocytes after parthenogenetic activation and culture. The viability of oocytes vitrified in four different cryo-devices was assessed. Consistent with modelling predictions, oocytes vitrified using cryo-devices with the highest modelled cooling rates had significantly (P < 0.05) better cleavage and blastocyst formation rates. We then evaluated a two-step sample removal process, in which oocytes were held in nitrogen vapour for 15 s to simulate sample identification during clinical application, before being removed completely and warmed. Oocytes exposed to this procedure showed reduced developmental potential, according to the model, owing to thermodynamic instability and devitrification at relatively low temperatures. These findings suggest that cryo-device selection and handling, including method of removal from nitrogen storage, are critical to survival of vitrified oocytes. Limitations of the study include use of parthenogenetically activated rather than fertilized ova and lack of physical measurement of recrystallization. We suggest mathematical modelling could be used to predict the effect of critical steps in cryopreservation. Copyright © 2018 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  3. Intensity dependent spread theory

    NASA Technical Reports Server (NTRS)

    Holben, Richard

    1990-01-01

    The Intensity Dependent Spread (IDS) procedure is an image-processing technique based on a model of the processing which occurs in the human visual system. IDS processing is relevant to many aspects of machine vision and image processing. For quantum limited images, it produces an ideal trade-off between spatial resolution and noise averaging, performs edge enhancement thus requiring only mean-crossing detection for the subsequent extraction of scene edges, and yields edge responses whose amplitudes are independent of scene illumination, depending only upon the ratio of the reflectance on the two sides of the edge. These properties suggest that the IDS process may provide significant bandwidth reduction while losing only minimal scene information when used as a preprocessor at or near the image plane.

  4. Early Attachment-Figure Separation and Increased Risk for Later Depression: Potential Mediation by Proinflammatory Processes

    PubMed Central

    Hennessy, Michael B.; Deak, Terrence; Schiml-Webb, Patricia A.

    2009-01-01

    Early maternal separation and other disruptions of attachment relations are known to increase risk for the later onset of depressive illness in vulnerable individuals. It is suggested here that sensitization involving proinflammatory processes may contribute to this effect. This argument is based on: (1) current notions of the role of proinflammatory cytokines in depressive illness; (2) evidence that proinflammatory cytokines mediate depressive-like behavior during separation in a rodent model of infant attachment; and (3) comparisons of the effects of early proinflammatory activation versus maternal separation on later proinflammatory activity and biobehavioral processes related to depression. The possible interaction of proinflammatory processes and corticotropin-releasing factor in the sensitization process is discussed. PMID:20359585

  5. Double dynamic scaling in human communication dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Shengfeng; Feng, Xin; Wu, Ye; Xiao, Jinhua

    2017-05-01

    In recent decades, human behavior has come to be understood far more deeply, owing to the huge quantities of behavioral data available for study. The main finding in human dynamics is that temporal processes consist of high-activity bursty intervals alternating with long low-activity periods. A model assuming that the initiation of bursts follows a Poisson process is widely used in the modeling of human behavior. Here, we provide further evidence for the hypothesis that different bursty intervals are independent. Furthermore, we introduce a threshold to quantitatively distinguish the time scales of the complex dynamics based on this hypothesis. Our results suggest that human communication behavior is a composite process of double dynamics with midrange memory length. The method for calculating memory length could enhance the performance of many sequence-dependent systems, such as server operation and topic identification.
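
    The idea of a threshold separating the two time scales can be illustrated with a toy routine (our own sketch, not the authors' method): consecutive events whose inter-event time falls below the threshold are grouped into one bursty train, and larger gaps separate trains:

```python
def burst_split(times, threshold):
    """Split a sorted sequence of event times into bursty trains:
    consecutive events closer than `threshold` share a burst."""
    bursts, cur = [], [times[0]]
    for t0, t1 in zip(times, times[1:]):
        if t1 - t0 <= threshold:
            cur.append(t1)          # still inside the current burst
        else:
            bursts.append(cur)      # gap exceeded: close the burst
            cur = [t1]
    bursts.append(cur)
    return bursts

# e.g. burst_split([0, 1, 2, 50, 51, 120], 5)
# -> [[0, 1, 2], [50, 51], [120]]
```

    Sweeping the threshold and watching how burst statistics change is one simple way to locate the crossover between the two dynamical regimes.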

  6. PubMed related articles: a probabilistic topic-based model for content similarity

    PubMed Central

    Lin, Jimmy; Wilbur, W John

    2007-01-01

    Background We present a probabilistic topic-based model for content similarity called pmra that underlies the related-article search feature in PubMed. Whether or not a document is about a particular topic is computed from term frequencies, modeled as Poisson distributions. Unlike previous probabilistic retrieval models, we do not attempt to estimate relevance; rather, our focus is "relatedness", the probability that a user would want to examine a particular document given known interest in another. We also describe a novel technique for estimating parameters that does not require human relevance judgments; instead, the process is based on the existence of MeSH® in MEDLINE®. Results The pmra retrieval model was compared against bm25, a competitive probabilistic model that shares theoretical similarities. Experiments using the test collection from the TREC 2005 genomics track show a small but statistically significant improvement of pmra over bm25 in terms of precision. Conclusion Our experiments suggest that the pmra model provides an effective ranking algorithm for related-article search. PMID:17971238
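
    The core of such a Poisson term-frequency model can be sketched as a log-likelihood ratio: for each term, compare the probability of its observed count under a topic-specific ("elite") Poisson rate against a background rate. The function name, rates, and smoothing floor below are illustrative assumptions, not the parameters pmra actually estimates:

```python
import math

def poisson_topic_score(doc_tf, doc_len, lam_elite, lam_back, floor=0.01):
    """Log-likelihood ratio that a document is 'about' a topic, with
    per-term counts tf ~ Poisson(lambda * doc_len). Positive scores
    favor the topic-specific (elite) rates over the background."""
    score = 0.0
    for term, tf in doc_tf.items():
        le = lam_elite.get(term, floor) * doc_len
        lb = lam_back.get(term, floor) * doc_len
        # log P(tf | elite) - log P(tf | background); the log(tf!)
        # terms cancel in the ratio
        score += tf * math.log(le / lb) - (le - lb)
    return score
```

    A document mentioning "gene" three times scores positively when the elite rate for "gene" exceeds the background rate, and negatively otherwise.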

  7. An Anatomically Constrained Model for Path Integration in the Bee Brain.

    PubMed

    Stone, Thomas; Webb, Barbara; Adden, Andrea; Weddig, Nicolai Ben; Honkanen, Anna; Templin, Rachel; Wcislo, William; Scimeca, Luca; Warrant, Eric; Heinze, Stanley

    2017-10-23

    Path integration is a widespread navigational strategy in which directional changes and distance covered are continuously integrated on an outward journey, enabling a straight-line return to home. Bees use vision for this task-a celestial-cue-based visual compass and an optic-flow-based visual odometer-but the underlying neural integration mechanisms are unknown. Using intracellular electrophysiology, we show that polarized-light-based compass neurons and optic-flow-based speed-encoding neurons converge in the central complex of the bee brain, and through block-face electron microscopy, we identify potential integrator cells. Based on plausible output targets for these cells, we propose a complete circuit for path integration and steering in the central complex, with anatomically identified neurons suggested for each processing step. The resulting model circuit is thus fully constrained biologically and provides a functional interpretation for many previously unexplained architectural features of the central complex. Moreover, we show that the receptive fields of the newly discovered speed neurons can support path integration for the holonomic motion (i.e., a ground velocity that is not precisely aligned with body orientation) typical of bee flight, a feature not captured in any previously proposed model of path integration. In a broader context, the model circuit presented provides a general mechanism for producing steering signals by comparing current and desired headings-suggesting a more basic function for central complex connectivity, from which path integration may have evolved. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. In Situ μGISAXS: II. Thaumatin Crystal Growth Kinetic

    PubMed Central

    Gebhardt, Ronald; Pechkova, Eugenia; Riekel, Christian; Nicolini, Claudio

    2010-01-01

    The formation of thaumatin crystals by Langmuir-Blodgett (LB) film nanotemplates was studied by the hanging-drop technique in a flow-through cell by synchrotron radiation micrograzing-incidence small-angle x-ray scattering. The kinetics of crystallization was measured directly at the interface of the LB film crystallization nanotemplate. The evolution of the micrograzing-incidence small-angle x-ray scattering patterns suggests that the increase in intensity in the Yoneda region is due to protein incorporation into the LB film. The intensity variation suggests several steps, which were modeled by system dynamics based on first-order differential equations. The kinetic data can be described by two processes that take place on the LB film: a first, fast process attributed to crystal growth and its detachment from the LB film, and a second, slower process attributed to unordered association and conversion of protein on the LB film. PMID:20713011
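
    Two parallel first-order processes with well-separated rates give intensity curves of exactly this two-step form; a minimal sketch, where the amplitudes and rate constants are illustrative placeholders rather than fitted values from the paper:

```python
import math

def intensity(t, a_fast=1.0, k_fast=0.5, a_slow=0.4, k_slow=0.02):
    """Sum of a fast and a slow first-order relaxation, the closed-form
    solution of two uncoupled ODEs dI_i/dt = k_i * (a_i - I_i) with
    I_i(0) = 0; the total saturates at a_fast + a_slow."""
    return (a_fast * (1 - math.exp(-k_fast * t))
            + a_slow * (1 - math.exp(-k_slow * t)))
```

    Fitting the two rate constants to the measured Yoneda-region intensity is what separates the fast (growth/detachment) and slow (association/conversion) contributions.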

  9. Putting the psychology back into psychological models: mechanistic versus rational approaches.

    PubMed

    Sakamoto, Yasuaki; Jones, Matt; Love, Bradley C

    2008-09-01

    Two basic approaches to explaining the nature of the mind are the rational and the mechanistic approaches. Rational analyses attempt to characterize the environment and the behavioral outcomes that humans seek to optimize, whereas mechanistic models attempt to simulate human behavior using processes and representations analogous to those used by humans. We compared these approaches with regard to their accounts of how humans learn the variability of categories. The mechanistic model departs in subtle ways from rational principles. In particular, the mechanistic model incrementally updates its estimates of category means and variances through error-driven learning, based on discrepancies between new category members and the current representation of each category. The model yields a prediction, which we verify, regarding the effects of order manipulations that the rational approach does not anticipate. Although both rational and mechanistic models can successfully postdict known findings, we suggest that psychological advances are driven primarily by consideration of process and representation and that rational accounts trail these breakthroughs.

  10. Pursuing realistic hydrologic model under SUPERFLEX framework in a semi-humid catchment in China

    NASA Astrophysics Data System (ADS)

    Wei, Lingna; Savenije, Hubert H. G.; Gao, Hongkai; Chen, Xi

    2016-04-01

    Model realism is pursued perpetually by hydrologists for flood and drought prediction, integrated water resources management, and decision support for water security. Physically based distributed hydrologic models are being developed rapidly, but they also face non-negligible challenges, for instance, low computational efficiency and parameter uncertainty. This study tested, step by step, four conceptual hydrologic models under the SUPERFLEX framework in a small semi-humid catchment in the southern Huai River basin of China. The original lumped FLEXL hypothesizes a model structure of four reservoirs representing canopy interception, the unsaturated zone, fast and slow components of subsurface flow, and base-flow storage. To account for spatially uneven rainfall, the second model (FLEXD) applies the same parameter set to separate units controlled by different rain gauges. To reveal the effect of topography, the terrain descriptor height above the nearest drainage (HAND), combined with slope, is applied to classify the experimental catchment into two landscapes. The third model (FLEXTOPO) then builds different model blocks reflecting the dominant hydrologic process in each topographic setting. The fourth, named FLEXTOPOD, integrates the parallel framework of FLEXTOPO in the four controlled units to capture the spatial variability of rainfall patterns and topographic features. Through pairwise comparison, our results suggest that (1) the semi-distributed models (FLEXD and FLEXTOPOD), which take the spatial heterogeneity of precipitation into account, improve model performance with a parsimonious parameter set, and (2) a hydrologic model architecture flexible enough to reflect the perceived dominant hydrologic processes can accommodate the local terrain of each landscape. The modeling choices thereby coincide with catchment behaviour and come closer to "reality".
    The presented methodology regards the hydrologic model as a tool to test hypotheses and deepen our understanding of hydrologic processes, which will help improve modeling realism.
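
    The reservoir structure described for FLEXL can be sketched as a daily water-balance loop. The function name, parameter names, and values below are illustrative placeholders of our own, not calibrated SUPERFLEX parameters:

```python
def daily_flex_sketch(precip, pet, smax=100.0, beta=2.0,
                      kf=0.3, ks=0.02, split=0.7):
    """Toy daily conceptual model in the spirit of FLEX-type structures:
    an unsaturated-zone bucket partitions rain into runoff and
    evaporation, and runoff is routed through fast and slow linear
    reservoirs. Returns the daily discharge series."""
    su = sf = ss = 0.0                      # soil, fast, slow storages
    q = []
    for p, e in zip(precip, pet):
        cr = (su / smax) ** beta            # wetter soil -> more runoff
        runoff = cr * p
        su = min(smax, su + p - runoff)     # fill the soil bucket
        aet = min(su, e * su / smax)        # moisture-limited evaporation
        su -= aet
        sf += split * runoff                # route to fast/slow reservoirs
        ss += (1 - split) * runoff
        qf, qs = kf * sf, ks * ss           # linear-reservoir outflows
        sf -= qf
        ss -= qs
        q.append(qf + qs)
    return q
```

    Running different structural variants of such a loop over sub-units with different forcing is essentially what distinguishes FLEXL, FLEXD, FLEXTOPO, and FLEXTOPOD.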

  11. Analyzing the Language of Therapist Empathy in Motivational Interview based Psychotherapy

    PubMed Central

    Xiao, Bo; Can, Dogan; Georgiou, Panayiotis G.; Atkins, David; Narayanan, Shrikanth S.

    2016-01-01

    Empathy is an important aspect of social communication, especially in medical and psychotherapy applications. Measures of empathy can offer insights into the quality of therapy. We use an N-gram language model based maximum likelihood strategy to classify empathic versus non-empathic utterances and report the precision and recall of classification for various parameters. High recall is obtained with unigram while bigram features achieved the highest F1-score. Based on the utterance level models, a group of lexical features are extracted at the therapy session level. The effectiveness of these features in modeling session level annotator perceptions of empathy is evaluated through correlation with expert-coded session level empathy scores. Our combined feature set achieved a correlation of 0.558 between predicted and expert-coded empathy scores. Results also suggest that the longer term empathy perception process may be more related to isolated empathic salient events. PMID:27602411
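
    A stripped-down version of an N-gram maximum-likelihood classifier (our own sketch, with add-one smoothing standing in for the paper's actual estimation details) looks like:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All length-n contiguous token tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

class NgramClassifier:
    """Maximum-likelihood utterance classifier over add-one smoothed
    n-gram counts; a simplified stand-in for the paper's model."""
    def __init__(self, n=2):
        self.n, self.counts, self.totals = n, {}, {}

    def fit(self, texts, label):
        c = Counter()
        for t in texts:
            c.update(ngrams(t.lower().split(), self.n))
        self.counts[label], self.totals[label] = c, sum(c.values())

    def score(self, text, label):
        vocab = {g for c in self.counts.values() for g in c}
        c, tot = self.counts[label], self.totals[label]
        return sum(math.log((c[g] + 1) / (tot + len(vocab)))
                   for g in ngrams(text.lower().split(), self.n))

    def predict(self, text):
        return max(self.counts, key=lambda lab: self.score(text, lab))
```

    Trained on utterances labeled empathic versus non-empathic, `predict` returns whichever class assigns the higher smoothed n-gram likelihood to a new utterance.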

  12. Lower- Versus Higher-Income Populations In The Alternative Quality Contract: Improved Quality And Similar Spending.

    PubMed

    Song, Zirui; Rose, Sherri; Chernew, Michael E; Safran, Dana Gelb

    2017-01-01

    As population-based payment models become increasingly common, it is crucial to understand how such payment models affect health disparities. We evaluated health care quality and spending among enrollees in areas with lower versus higher socioeconomic status in Massachusetts before and after providers entered into the Alternative Quality Contract, a two-sided population-based payment model with substantial incentives tied to quality. We compared changes in process measures, outcome measures, and spending between enrollees in areas with lower and higher socioeconomic status from 2006 to 2012 (outcome measures were measured after the intervention only). Quality improved for all enrollees in the Alternative Quality Contract after their provider organizations entered the contract. Process measures improved 1.2 percentage points per year more among enrollees in areas with lower socioeconomic status than among those in areas with higher socioeconomic status. Outcome measure improvement was no different between the subgroups; neither were changes in spending. Larger or comparable improvements in quality among enrollees in areas with lower socioeconomic status suggest a potential narrowing of disparities. Strong pay-for-performance incentives within a population-based payment model could encourage providers to focus on improving quality for more disadvantaged populations. Project HOPE—The People-to-People Health Foundation, Inc.

  13. A continuous analog of run length distributions reflecting accumulated fractionation events.

    PubMed

    Yu, Zhe; Sankoff, David

    2016-11-11

    We propose a new, continuous model of the fractionation process (duplicate gene deletion after polyploidization) on the real line. The aim is to infer how much DNA is deleted at a time, based on segment lengths for alternating deleted (invisible) and undeleted (visible) regions. After deriving a number of analytical results for "one-sided" fractionation, we undertake a series of simulations that help us identify the distribution of segment lengths as a gamma with shape and rate parameters evolving over time. This leads to an inference procedure based on observed length distributions for visible and invisible segments. We suggest extensions of this mathematical and simulation work to biologically realistic discrete models, including two-sided fractionation.
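
    The inference idea can be illustrated with a toy discrete simulation (our own construction, not the authors' continuous model): run random deletion events over a gene order, collect the lengths of the invisible (deleted) runs, and fit a gamma distribution by the method of moments. The event count and mean deletion length are arbitrary choices:

```python
import random

def run_lengths(flags):
    """(value, length) pairs for maximal runs of equal values."""
    runs, cur = [], 1
    for a, b in zip(flags, flags[1:]):
        if a == b:
            cur += 1
        else:
            runs.append((a, cur))
            cur = 1
    runs.append((flags[-1], cur))
    return runs

def fit_gamma_moments(xs):
    """Method-of-moments gamma fit: shape = m^2/v, rate = m/v."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m * m / v, m / v

rng = random.Random(42)
alive = [True] * 20000
for _ in range(4000):                            # deletion events
    survivors = [i for i, a in enumerate(alive) if a]
    start = rng.randrange(len(survivors))
    length = 1 + int(rng.expovariate(1 / 2.0))   # run of surviving genes
    for i in survivors[start:start + length]:    # delete them
        alive[i] = False

invisible = [n for val, n in run_lengths(alive) if not val]
shape, rate = fit_gamma_moments(invisible)
```

    Tracking how the fitted shape and rate drift as more deletion events accumulate mirrors the paper's observation that segment-length distributions evolve over time.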

  14. Unified underpinning of human mobility in the real world and cyberspace

    NASA Astrophysics Data System (ADS)

    Zhao, Yi-Ming; Zeng, An; Yan, Xiao-Yong; Wang, Wen-Xu; Lai, Ying-Cheng

    2016-05-01

    Human movements in the real world and in cyberspace affect not only dynamical processes such as epidemic spreading and information diffusion but also social and economic activities such as urban planning and personalized recommendation in online shopping. Despite recent efforts to characterize and model human behaviors in both the real and cyber worlds, the fundamental dynamics underlying human mobility have not been well understood. We develop a minimal, memory-based random walk model in limited space that reproduces, with a single parameter, the key statistical behaviors characterizing human movements in both cases. The model is validated using large-scale data from mobile-phone and online-commerce records, suggesting memory-based random walk dynamics as the unified underpinning for human mobility, regardless of whether it occurs in the real world or in cyberspace.
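
    A minimal version of such a memory-based walk (our own sketch; the parameter rho and the exploration schedule are illustrative, not the paper's calibrated model) alternates between visiting a brand-new site and preferentially returning to an already-visited one:

```python
import random

def memory_walk(steps=5000, rho=0.6, seed=0):
    """With probability ~rho/sqrt(#distinct sites) the walker explores a
    new site (exploration slows as the limited space fills up);
    otherwise it returns to a past site chosen proportionally to its
    visit frequency (preferential return)."""
    rng = random.Random(seed)
    visits, next_site, traj = {0: 1}, 1, [0]
    for _ in range(steps):
        if rng.random() < rho / (len(visits) ** 0.5):
            site, next_site = next_site, next_site + 1
        else:                                   # preferential return
            r = rng.uniform(0, sum(visits.values()))
            for site, count in visits.items():
                r -= count
                if r <= 0:
                    break
        visits[site] = visits.get(site, 0) + 1
        traj.append(site)
    return traj, visits
```

    The memory lives entirely in the visit-frequency table: the number of distinct sites grows sublinearly, and a few sites accumulate most returns, the heavy-tailed pattern seen in empirical mobility data.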

  15. Off-target model based OPC

    NASA Astrophysics Data System (ADS)

    Lu, Mark; Liang, Curtis; King, Dion; Melvin, Lawrence S., III

    2005-11-01

    Model-based Optical Proximity Correction has become an indispensable tool for achieving wafer-pattern-to-design fidelity at current manufacturing process nodes. Most model-based OPC is performed considering the nominal process condition, with limited consideration of through-process manufacturing robustness. This study examines the use of off-target process models - models that represent non-nominal process states such as would occur with a dose or focus variation - to understand and manipulate the final pattern correction into a more process-robust configuration. The study first examines and validates the procedure for generating an off-target model, then examines the quality of the off-target model. Once the off-target model is proven, it is used to demonstrate methods of generating process-robust corrections. The concepts are demonstrated using a 0.13 μm logic gate process. Preliminary indications show success in both off-target model production and process-robust corrections. With these off-target models as tools, mask production cycle times can be reduced.

  16. Bridging process-based and empirical approaches to modeling tree growth

    Treesearch

    Harry T. Valentine; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  17. Testing for context-dependence in a processing chain interaction among detritus-feeding aquatic insects

    PubMed Central

    DAUGHERTY, MATTHEW P.; JULIANO, STEVEN A.

    2008-01-01

    Scirtid beetles may benefit mosquitoes Ochlerotatus triseriatus (Say) by consuming whole leaves and leaving behind fine particles required by mosquito larvae. Such interactions based on the sequential use of a resource that occurs in multiple forms are known as processing chains. Models of processing chains predict that interactions can vary from commensal (0, +) to amensal (0, −), depending on how quickly resource is processed in the absence of consumers. The scirtid-O. triseriatus system was used to test the prediction derived from processing chain models that, as consumer-independent processing increases, scirtids benefit mosquitoes less. Consumer-independent processing rate was manipulated by using different leaf species that vary in decay rate, or by physically crushing a single leaf type to different degrees. Although scirtids increased the production of fine particles, the effects of scirtids on mosquitoes were weak and were not dependent on consumer-independent processing rate. In the leaf manipulation experiment, a correlation between scirtid feeding and consumer-independent processing was detected. Numerical simulations suggest that such a correlation may eliminate shifts from commensal to amensal at equilibrium; because mosquito populations are typically not at equilibrium, however, this correlation may not be important. There was evidence that mosquitoes affected scirtids negatively, which is inconsistent with the structure of processing chain interactions in models. Processing chain models need to incorporate more detail on the biology of scirtids and O. triseriatus, especially alternative mechanisms of interaction, if they are to describe scirtid-O. triseriatus dynamics accurately. PMID:19060960

  18. Phylogeography of speciation: allopatric divergence and secondary contact between outcrossing and selfing Clarkia.

    PubMed

    Pettengill, James B; Moeller, David A

    2012-09-01

    The origins of hybrid zones between parapatric taxa have been of particular interest for understanding the evolution of reproductive isolation and the geographic context of species divergence. One challenge has been to distinguish between allopatric divergence (followed by secondary contact) versus primary intergradation (parapatric speciation) as alternative divergence histories. Here, we use complementary phylogeographic and population genetic analyses to investigate the recent divergence of two subspecies of Clarkia xantiana and the formation of a hybrid zone within the narrow region of sympatry. We tested alternative phylogeographic models of divergence using approximate Bayesian computation (ABC) and found strong support for a secondary contact model and little support for a model allowing for gene flow throughout the divergence process (i.e. primary intergradation). Two independent methods for inferring the ancestral geography of each subspecies, one based on probabilistic character state reconstructions and the other on palaeo-distribution modelling, also support a model of divergence in allopatry and range expansion leading to secondary contact. The membership of individuals to genetic clusters suggests geographic substructure within each taxon where allopatric and sympatric samples are primarily found in separate clusters. We also observed coincidence and concordance of genetic clines across three types of molecular markers, which suggests that there is a strong barrier to gene flow. Taken together, our results provide evidence for allopatric divergence followed by range expansion leading to secondary contact. The location of refugial populations and the directionality of range expansion are consistent with expectations based on climate change since the last glacial maximum. 
Our approach also illustrates the utility of combining phylogeographic hypothesis testing with species distribution modelling and fine-scale population genetic analyses for inferring the geography of the divergence process. © 2012 Blackwell Publishing Ltd.
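
    The model testing here rests on approximate Bayesian computation; the simplest variant, rejection-sampling ABC, can be sketched generically. The toy setting below (estimating a location parameter, with a prior, noise level, and tolerance of our own choosing) is illustrative only:

```python
import random

def abc_rejection(observed, simulate, prior_sample, n=20000, eps=0.05):
    """Rejection-sampling ABC: draw parameters from the prior, simulate
    a summary statistic, and keep draws whose simulated summary lands
    within eps of the observed one. The kept draws approximate the
    posterior distribution of the parameter."""
    kept = []
    for _ in range(n):
        theta = prior_sample()
        if abs(simulate(theta) - observed) < eps:
            kept.append(theta)
    return kept
```

    Comparing divergence histories then amounts to running the same rejection step with simulators for each model (e.g. secondary contact versus primary intergradation) and asking which model's simulations most often reproduce the observed summaries.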

  19. Coupled Modeling and Field Approach to Explore Patterns of Barrier Ridge and Swale Development

    NASA Astrophysics Data System (ADS)

    Ciarletta, D. J.; Lorenzo-Trueba, J.; Shawler, J. L.; Hein, C. J.

    2017-12-01

    Previous work has suggested the morphologies of barrier ridge and swale systems potentially reflect the environmental conditions under which they developed, especially in response to sediment budget. We use this inference to examine progradational dune systems on barriers along the USA Mid-Atlantic coast, constructing a simple morphodynamic model to capture the magnitude of changes in key processes affecting the pattern of ridge and swale development. Based on our initial investigation, we demonstrate a range of potential morphological patterns generated by the interaction of longshore transport, accommodation, overwash, aeolian sand flux, and vegetation controls. The patterns are based on three basic cross-sectional morphologies describing the spacing and width of ridges. Regularly spaced ridges of roughly equal width are defined as washboards; wide platform-like ridges or complex multi-ridge dunes are described as tables; and wide swaths of open sand or poorly developed dunes are identified as pans. The inclusion of overwash, in competition with the other processes, further allows the creation of infilled swales, or baffled structures, as well as inter-ridge and backbarrier fans/flats. Model outcomes are validated via comparison to observations from barriers in Virginia, Maryland, and New Jersey. In particular, historical (post-1850) mapping of the evolution of the Fishing Point spit (Assateague Island) reveals the ability of the model to approximate the growth of structures seen in the field. We then apply the model to the development of a prehistoric progradational system on Parramore Island, VA, using field stratigraphic/chronologic data to supply input parameters and begin predictively quantifying past changes in longshore transport and accommodation. 
Our investigations suggest that modeling patterns of ridge and swale development preserved on modern coasts could result in novel approaches to employ barriers as archives of past environmental/climate forcing.

  20. Family-based behavioural intervention for obese children.

    PubMed

    Epstein, L H

    1996-02-01

    The family environment can contribute to the development of obesity. Parenting styles may influence the development of food preferences and the ability of a child to regulate intake. Parents and other family members arrange a common, shared environment that may be conducive to overeating or a sedentary lifestyle. Family members serve as models, and reinforce and support the acquisition and maintenance of eating and exercise behaviours. Family-based interventions are needed to modify these variables in treating obese children. We have made significant progress in developing interventions that target obese 8-12 year-old children, completing four 10-year follow-up studies that provide support for two factors that are useful in childhood obesity treatment. First, our research suggests that the direct involvement of at least one parent as an active participant in the weight loss process improves short- and long-term weight regulation. Second, our research suggests that increasing activity is important for maintenance of long-term weight control. Correlational analyses on the 10-year database suggest that family and friend support for behaviour change are related to long-term outcome. Family-based obesity treatment provides interventions for both children and their parents, but children benefit more from treatment than their parents. These positive results provide an encouraging basis for optimism that further development of interventions, based on newer research on family processes and behaviour changes, can be useful in treating childhood obesity.
