Sample records for alternative theoretical model

  1. An alternative theoretical model for an anomalous hollow beam.

    PubMed

    Cai, Yangjian; Wang, Zhaoying; Lin, Qiang

    2008-09-15

    An alternative and convenient theoretical model is proposed to describe a flexible anomalous hollow beam of elliptical symmetry with an elliptical solid core, which was observed in experiment recently (Phys. Rev. Lett. 94 (2005) 134802). In this model, the electric field of the anomalous hollow beam is expressed as a finite sum of elliptical Gaussian modes. Flat-topped beams, dark hollow beams and Gaussian beams are special cases of our model. Analytical propagation formulae for coherent and partially coherent anomalous hollow beams passing through astigmatic ABCD optical systems are derived. Some numerical examples are calculated to show the propagation and focusing properties of coherent and partially coherent anomalous hollow beams.
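
    For orientation (an illustrative sketch, not quoted from the paper): finite sums of elliptical Gaussian modes of the kind described above are commonly written in the form

        E(x, y, 0) = \sum_{n=1}^{N} c_n \exp\!\left( -\frac{n x^{2}}{w_{0x}^{2}} - \frac{n y^{2}}{w_{0y}^{2}} \right),

    where w_{0x} and w_{0y} set the elliptical widths; the specific coefficients c_n that yield the anomalous hollow profile (and the flat-topped, dark-hollow and Gaussian special cases) are given in the paper and are not reproduced here.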

  2. Theoretical Models of Comprehension Skills Tested through a Comprehension Assessment Battery for Primary School Children

    ERIC Educational Resources Information Center

    Tobia, Valentina; Ciancaleoni, Matteo; Bonifacci, Paola

    2017-01-01

    In this study, two alternative theoretical models were compared, in order to analyze which of them best explains primary school children's text comprehension skills. The first one was based on the distinction between two types of answers requested by the comprehension test: local or global. The second model involved texts' input modality: written…

  3. Theoretical models of parental HIV disclosure: a critical review.

    PubMed

    Qiao, Shan; Li, Xiaoming; Stanton, Bonita

    2013-01-01

    This study critically examined three major theoretical models related to parental HIV disclosure (i.e., the Four-Phase Model [FPM], the Disclosure Decision Making Model [DDMM], and the Disclosure Process Model [DPM]), and the existing studies that could provide empirical support to these models or their components. For each model, we briefly reviewed its theoretical background, described its components and/or mechanisms, and discussed its strengths and limitations. The existing empirical studies supported most theoretical components in these models. However, hypotheses related to the mechanisms proposed in the models have not yet been tested due to a lack of empirical evidence. This study also synthesized alternative theoretical perspectives and new issues in disclosure research and clinical practice that may challenge the existing models. The current study underscores the importance of including components related to social and cultural contexts in theoretical frameworks, and calls for more adequately designed empirical studies in order to test and refine existing theories and to develop new ones.

  4. Probability model for analyzing fire management alternatives: theory and structure

    Treesearch

    Frederick W. Bratten

    1982-01-01

    A theoretical probability model has been developed for analyzing program alternatives in fire management. It includes submodels or modules for predicting probabilities of fire behavior, fire occurrence, fire suppression, effects of fire on land resources, and financial effects of fire. Generalized "fire management situations" are used to represent actual fire...

  5. Utilities and the Issue of Fairness in a Decision Theoretic Model for Selection

    ERIC Educational Resources Information Center

    Sawyer, Richard L.; And Others

    1976-01-01

    This article examines some of the values that might be considered in a selection situation within the context of a decision theoretic model also described here. Several alternate expressions of fair selection are suggested in the form of utility statements in which these values can be understood and compared. (Author/DEP)

  6. Providing Alternatives to Nursing Home Care: An Interorganizational Analysis.

    ERIC Educational Resources Information Center

    Austin, Carol D.

    The development of alternatives to institutionally based long-term care requires the creation of greater interdependence among community-based agencies and the provision of appropriate interdependence as the foundation for coordinated service delivery. Theoretical models of interorganizational interdependence are examined and assessed for their…

  7. An examination of fuel particle heating during fire spread

    Treesearch

    Jack D. Cohen; Mark A. Finney

    2010-01-01

    Recent high intensity wildfires and our demonstrated inability to control extreme fire behavior suggest a need for alternative approaches for preventing wildfire disasters. Current fire spread models are not sufficiently based on a basic understanding of fire spread processes to provide more effective management alternatives. An experimental and theoretical approach...

  8. Conditional Versus Unconditional Procedures for Sample-Free Item Analysis

    ERIC Educational Resources Information Center

    Wright, Benjamin D.; Douglas, Graham A.

    1977-01-01

    Procedures for Rasch model sample-free item calibration are reviewed and compared for accuracy. The theoretically ideal procedure is shown to have practical limitations. Two alternatives to the ideal are presented and discussed. A correction for bias in the most widely used alternative is presented. (Author/JKS)
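
    For context (standard background, not part of the ERIC abstract), the dichotomous Rasch model whose item parameters these calibration procedures estimate is

        P(X_{vi} = 1 \mid \theta_v, b_i) = \frac{\exp(\theta_v - b_i)}{1 + \exp(\theta_v - b_i)},

    where \theta_v is the ability of person v and b_i the difficulty of item i. Conditional procedures eliminate \theta_v by conditioning on raw scores, whereas unconditional (joint) estimation fits person and item parameters simultaneously and is known to yield biased item estimates, motivating the correction mentioned above.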

  9. Teacher Agency in the Performance of Inquiry-Oriented Science Curriculum Reform

    ERIC Educational Resources Information Center

    Oliveira, Alandeom W.

    2012-01-01

    In this commentary, I consider several theoretical and analytical aspects of Tang Wee Teo and Margery Osborne's case study. I begin by identifying structuralist and cultural themes in Tang Wee and Margery's theoretical model of human activity. Next, I offer an alternative interpretation for Tang Wee and Margery's reported findings in terms of the…

  10. Reward Rate Optimization in Two-Alternative Decision Making: Empirical Tests of Theoretical Predictions

    ERIC Educational Resources Information Center

    Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D.

    2009-01-01

    The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response…
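
    As a hedged illustration of the model named in this abstract, the drift-diffusion process for a two-alternative choice is commonly written as

        dx = A\,dt + c\,dW, \qquad \text{respond when } x \text{ first reaches } +z \text{ or } -z,

    where A is the drift rate, c the noise amplitude, and the threshold z sets the speed-accuracy tradeoff: raising z slows responses but makes them more accurate.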

  11. Diagnosing and dealing with multicollinearity.

    PubMed

    Schroeder, M A

    1990-04-01

    The purpose of this article was to increase nurse researchers' awareness of the effects of collinear data in developing theoretical models for nursing practice. Collinear data distort the true value of the estimates generated from ordinary least-squares analysis. Theoretical models developed to provide the underpinnings of nursing practice need not be abandoned, however, because they fail to produce consistent estimates over repeated applications. It is also important to realize that multicollinearity is a data problem, not a problem associated with misspecification of a theoretical model. An investigator must first be aware of the problem, and then it is possible to develop an educated solution based on the degree of multicollinearity, theoretical considerations, and sources of error associated with alternative, biased, least-squares regression techniques. Decisions based on theoretical and statistical considerations will further the development of theory-based nursing practice.

  12. A Universal Rank-Size Law

    PubMed Central

    2016-01-01

    A mere hyperbolic law, like the Zipf's law power function, is often inadequate to describe rank-size relationships. An alternative theoretical distribution is proposed based on theoretical physics arguments starting from the Yule-Simon distribution. A modeling is proposed leading to a universal form. A theoretical suggestion for the “best (or optimal) distribution” is provided through an entropy argument. The ranking of areas through the number of cities in various countries and some sports competition rankings serve for the present illustrations. PMID:27812192
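
    For reference, the hyperbolic baseline that the abstract calls inadequate is the Zipf-type power law

        f(r) = C\, r^{-\alpha},

    where r is the rank and C, \alpha are fitted constants; the universal form proposed in the paper generalizes this with additional parameters motivated by the Yule-Simon distribution (the exact expression is given in the paper and is not reproduced here).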

  13. An Alternative to the Stay/Switch Equation Assessed When Using a Changeover-Delay

    PubMed Central

    MacDonall, James S.

    2015-01-01

    An alternative to the generalized matching equation for understanding concurrent performances is the stay/switch model. For the stay/switch model, the important events are the contingencies and behaviors at each alternative. The current experiment compares the descriptions by two stay/switch equations, the original, empirically derived stay/switch equation and a more theoretically derived equation based on ratios of stay to switch responses matching ratios of stay to switch reinforcers. The present experiment compared descriptions by the original stay/switch equation when using and not using a changeover delay. It also compared descriptions by the more theoretical equation with and without a changeover delay. Finally, it compared descriptions of the concurrent performances by these two equations. Rats were trained in 15 conditions on identical concurrent random-interval schedules in each component of a multiple schedule. A COD operated in only one component. There were no consistent differences in the variance accounted for by each equation of concurrent performances whether or not a COD was used. The simpler equation found greater sensitivity to stay than to switch reinforcers. It also found a COD eliminated the influence of switch reinforcers. Because estimates of parameters were more meaningful when using the more theoretical stay/switch equation it is preferred. PMID:26299548
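
    For context (standard background, not quoted from the abstract), the generalized matching equation to which the stay/switch model is offered as an alternative relates response and reinforcer ratios at the two alternatives as

        \frac{B_1}{B_2} = b \left( \frac{R_1}{R_2} \right)^{a},

    where B_1, B_2 are response rates, R_1, R_2 reinforcer rates, a is sensitivity and b is bias; the more theoretical stay/switch equation described above instead matches ratios of stay to switch responses to ratios of stay to switch reinforcers.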

  14. An alternative to the stay/switch equation assessed when using a changeover-delay.

    PubMed

    MacDonall, James S

    2015-11-01

    An alternative to the generalized matching equation for understanding concurrent performances is the stay/switch model. For the stay/switch model, the important events are the contingencies and behaviors at each alternative. The current experiment compares the descriptions by two stay/switch equations, the original, empirically derived stay/switch equation and a more theoretically derived equation based on ratios of stay to switch responses matching ratios of stay to switch reinforcers. The present experiment compared descriptions by the original stay/switch equation when using and not using a changeover delay. It also compared descriptions by the more theoretical equation with and without a changeover delay. Finally, it compared descriptions of the concurrent performances by these two equations. Rats were trained in 15 conditions on identical concurrent random-interval schedules in each component of a multiple schedule. A COD operated in only one component. There were no consistent differences in the variance accounted for by each equation of concurrent performances whether or not a COD was used. The simpler equation found greater sensitivity to stay than to switch reinforcers. It also found a COD eliminated the influence of switch reinforcers. Because estimates of parameters were more meaningful when using the more theoretical stay/switch equation it is preferred. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. A Behavioral Perspective of Childhood Trauma and Attachment Issues: Toward Alternative Treatment Approaches for Children with a History of Abuse

    ERIC Educational Resources Information Center

    Prather, Walter; Golden, Jeannie A.

    2009-01-01

    Attachment theory provides a useful conceptual framework for understanding trauma and the treatment of children who have been abused. This article examines childhood trauma and attachment issues from the perspective of behavior analysis, and provides a theoretical basis for two alternative treatment models for previously abused children and their…

  16. Numerical and theoretical evaluations of AC losses for single and infinite numbers of superconductor strips with direct and alternating transport currents in external AC magnetic field

    NASA Astrophysics Data System (ADS)

    Kajikawa, K.; Funaki, K.; Shikimachi, K.; Hirano, N.; Nagaya, S.

    2010-11-01

    AC losses in a superconductor strip are numerically evaluated by means of a finite element method formulated with a current vector potential. The expressions of AC losses in an infinite slab that corresponds to a simple model of infinitely stacked strips are also derived theoretically. It is assumed that the voltage-current characteristics of the superconductors are represented by Bean's critical state model. The typical operation pattern of a Superconducting Magnetic Energy Storage (SMES) coil with direct and alternating transport currents in an external AC magnetic field is taken into account as the electromagnetic environment for both the single strip and the infinite slab. By using the obtained results of AC losses, the influences of the transport currents on the total losses are discussed quantitatively.
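
    For reference, Bean's critical state model assumed here for the superconductors' voltage-current characteristics takes the local current density to be either zero or at the critical value,

        |J| = J_c \ \text{in flux-penetrated regions}, \qquad J = 0 \ \text{elsewhere},

    with J_c treated as independent of the local magnetic field, which makes the resulting AC loss per cycle hysteretic (independent of frequency).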

  17. Detecting isotopic ratio outliers

    NASA Astrophysics Data System (ADS)

    Bayne, C. K.; Smith, D. H.

    An alternative method is proposed for improving isotopic ratio estimates. This method mathematically models pulse-count data and uses iterative reweighted Poisson regression to estimate model parameters to calculate the isotopic ratios. This computer-oriented approach provides theoretically better methods than conventional techniques to establish error limits and to identify outliers.
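
    A minimal sketch of the general approach described here (not the authors' exact formulation): fit the pulse-count data with a Poisson regression, which statsmodels estimates by iteratively reweighted least squares, and flag observations with unusually large standardized residuals as candidate outliers. The example data, predictor, and residual cutoff are illustrative assumptions.

      import numpy as np
      import statsmodels.api as sm

      # Illustrative pulse counts y observed at known settings x (e.g., scan step).
      x = np.arange(1.0, 9.0)
      y = np.array([102, 118, 131, 155, 170, 420, 210, 232])  # one suspicious count

      # Poisson GLM with a log link; statsmodels fits it via iteratively reweighted least squares.
      X = sm.add_constant(x)
      fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

      # Flag counts whose Pearson residuals are implausibly large (cutoff of 3 is an assumption).
      outliers = np.where(np.abs(fit.resid_pearson) > 3.0)[0]
      print("candidate outlier indices:", outliers)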

  18. Alignment-free sequence comparison (II): theoretical power of comparison statistics.

    PubMed

    Wan, Lin; Reinert, Gesine; Sun, Fengzhu; Waterman, Michael S

    2010-11-01

    Rapid methods for alignment-free sequence comparison make large-scale comparisons between sequences increasingly feasible. Here we study the power of the statistic D2, which counts the number of matching k-tuples between two sequences, as well as D2*, which uses centralized counts, and D2S, which is a self-standardized version, both from a theoretical viewpoint and numerically, providing an easy to use program. The power is assessed under two alternative hidden Markov models; the first one assumes that the two sequences share a common motif, whereas the second model is a pattern transfer model; the null model is that the two sequences are composed of independent and identically distributed letters and they are independent. Under the first alternative model, the means of the tuple counts in the individual sequences change, whereas under the second alternative model, the marginal means are the same as under the null model. Using the limit distributions of the count statistics under the null and the alternative models, we find that generally, asymptotically D2S has the largest power, followed by D2*, whereas the power of D2 can even be zero in some cases. In contrast, even for sequences of length 140,000 bp, in simulations D2* generally has the largest power. Under the first alternative model of a shared motif, the power of D2* approaches 100% when sufficiently many motifs are shared, and we recommend the use of D2* for such practical applications. Under the second alternative model of pattern transfer, the power for all three count statistics does not increase with sequence length when the sequence is sufficiently long, and hence none of the three statistics under consideration can be recommended in such a situation. We illustrate the approach on 323 transcription factor binding motifs with length at most 10 from JASPAR CORE (October 12, 2009 version), verifying that D2* is generally more powerful than D2. The program to calculate the power of D2, D2* and D2S can be downloaded from http://meta.cmb.usc.edu/d2. Supplementary Material is available at www.liebertonline.com/cmb.
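
    A minimal sketch of the D2 statistic as defined in this abstract, i.e. the number of matching k-tuples between two sequences (equivalently, the sum over k-words of the product of their counts in the two sequences). The example sequences and the choice k = 3 are arbitrary; the centralized (D2*) and self-standardized (D2S) variants are not shown.

      from collections import Counter

      def d2(seq_a: str, seq_b: str, k: int) -> int:
          """D2 = sum over all k-words w of count_a(w) * count_b(w)."""
          count_a = Counter(seq_a[i:i + k] for i in range(len(seq_a) - k + 1))
          count_b = Counter(seq_b[i:i + k] for i in range(len(seq_b) - k + 1))
          return sum(n * count_b[w] for w, n in count_a.items())

      print(d2("ACGTACGTGG", "TTACGTACAA", k=3))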

  19. Modelling public support for wildland fire policy

    Treesearch

    J.D. Absher; J.J. Vaske

    2007-01-01

    Theoretically grounded explanations of wildland fire policy can be improved by empirically documenting the causal influences of support for (or opposition to) management alternatives. This chapter proposes a model based on the specificity principle (i.e. correspondence between measured variables) to empirically examine four common wildland fire policies in relation to...

  20. Essays on oil and business cycles in Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Aba Alkhail, Bandar A.

    This dissertation consists of three chapters. Chapter one presents a theoretical model using a dynamic stochastic general equilibrium (DSGE) approach to investigate the role of world oil prices in explaining the business cycle in Saudi Arabia. This model incorporates both productivity and oil revenue shocks. The results indicate that productivity shocks are relatively more important to business cycles than oil shocks. However, this model has some unfavorable features that are associated with both investment and labor hours. The second chapter presents a modified theoretical model using a DSGE approach to examine the role of world oil prices versus productivity shocks in explaining business cycles in Saudi Arabia. To overcome the unfavorable features of the baseline model, the alternative model adds friction by incorporating an investment portfolio adjustment cost. Thus, the alternative model produces dynamics similar to those of the baseline model, but the unfavorable characteristics are eliminated. This chapter also conducts a sensitivity analysis. The objective of the third chapter is to empirically investigate how real world oil price and productivity shocks affect output, consumption, investment, labor hours, and the trade balance/output ratio for Saudi Arabia. This chapter complements the theoretical model of the previous chapters. In addition, this study builds a foundation for future studies examining the impact of real world oil price shocks on the economies of key trade partners of Saudi Arabia. The results of the third chapter show that productivity shocks matter more for macroeconomic fluctuations than oil shocks for Saudi Arabia's primary trade partners. Therefore, fears of oil-importing countries appear to be overstated. As a whole, this research is important for the following reasons. First, the empirical model is consistent with the predictions of our theoretical model in that productivity is a driving force of business cycles in Saudi Arabia. Second, the policymakers in Saudi Arabia should be more concerned with increasing productivity through adopting new technologies that increase economic prosperity. Therefore, the policymakers should continue diversifying economic resources and reduce their reliance on oil.

  1. Predicting Commitment in Adult and Traditional-Age Students: Applying Rusbult's Investment Model to the Study of Retention.

    ERIC Educational Resources Information Center

    Cini, Marie A.; Fritz, Janie M. Harden

    Rusbult's Investment Model, a theoretical model of commitment based on notions of social exchange and interdependence theory, was used to predict college commitment in traditional-age and adult college students. A questionnaire assessing rewards, costs, investments, alternatives, and commitment to college was administered to 216 traditional-age…

  2. Cost shifting revisited: the case of service intensity.

    PubMed

    Friesner, Daniel L; Rosenman, Robert

    2002-02-01

    This paper examines whether a health care provider's choice of service intensity for any patient group affects its cost shifting behavior. Our theoretical models indicate that firms may respond to lower prospective payment by decreasing service intensity for all of their patient groups, thereby giving firms an alternative to cost shifting. Additionally, the conditions under which cost shifting and lower service intensity occur are identical, regardless of profit status. Using a panel of California hospitals, we found that nonprofit hospitals do cost shift, while profit-maximizing hospitals do not. However, both types of firms respond to lower prospective payment by decreasing service intensity, thus supporting our theoretical conclusion that lower service intensity can be used as an alternative to cost shifting.

  3. Animal Metacognition: A Tale of Two Comparative Psychologies

    PubMed Central

    Smith, J. David; Couchman, Justin J.; Beran, Michael J.

    2014-01-01

    A growing literature considers whether animals have capacities that are akin to human metacognition (i.e., humans’ capacity to monitor their states of uncertainty and knowing). Comparative psychologists have approached this question by testing a dolphin, pigeons, rats, monkeys and apes using perception, memory and food-concealment paradigms. As part of this consideration, some associative modelers have attempted to describe animals’ “metacognitive” performances in low-level, associative terms—an important goal if achievable. The authors summarize the empirical and theoretical situation regarding these associative descriptions. The associative descriptions in the animal-metacognition literature fail to encompass important phenomena. The sharp focus on abstract, mathematical associative models creates serious interpretative problems. The authors compare these failed associative descriptions to an alternative theoretical approach within contemporary comparative psychology. The alternative approach has the potential to strengthen comparative psychology as an empirical science and integrate it more fully within the mainstream of experimental psychology and cognitive science. PMID:23957740

  4. An alternative model for a partially coherent elliptical dark hollow beam

    NASA Astrophysics Data System (ADS)

    Li, Xu; Wang, Fei; Cai, Yangjian

    2011-04-01

    An alternative theoretical model named partially coherent hollow elliptical Gaussian beam (HEGB) is proposed to describe a partially coherent beam with an elliptical dark hollow profile. Explicit expression for the propagation factors of a partially coherent HEGB is derived. Based on the generalized Collins formula, analytical formulae for the cross-spectral density and mean-squared beam width of a partially coherent HEGB, propagating through a paraxial ABCD optical system, are derived. Propagation properties of a partially coherent HEGB in free space are studied as a numerical example.

  5. Large scale surface flow generation in driven suspensions of magnetic microparticles: Experiment, theoretical model and simulations

    NASA Astrophysics Data System (ADS)

    Belkin, Maxim; Snezhko, Alexey; Aranson, Igor

    2007-03-01

    Nontrivially ordered dynamic self-assembled snake-like structures are formed in an ensemble of magnetic microparticles suspended over a fluid surface and energized by an external alternating magnetic field. Formation and existence of such structures are always accompanied by flows which form vortices. These large-scale vortices can be very fast and are crucial for snake formation/destruction. We introduce a theoretical model based on a Ginzburg-Landau equation for parametrically excited surface waves coupled to a conservation law for the particle density and the Navier-Stokes equation for water flows. The developed model successfully describes snake generation, accounts for flows and reproduces most experimental results observed.

  6. [Critical analysis of the immunological self/non-self model and of its implicit metaphysical foundations].

    PubMed

    Pradeu, Thomas; Carosella, Edgardo D

    2004-05-01

    An examination of the concepts used in immunology prompts us to wonder about the origins and the legitimacy of the notions of self and non-self, which constitute the core of the dominant theoretical model in this science. All theoretical reflection concerning immunology must aim at determining a criterion of immunogenicity, that is, an operational definition of the conditions in which an immune reaction occurs or does not occur. By criticizing both conceptually and experimentally the self/non-self vocabulary, we can demonstrate the inaccuracy and even the inadequacy of the dichotomy of self/non-self. Accordingly, the self/non-self model must be reexamined, or even rejected. On the basis of this critique, we can suggest an alternative theoretical hypothesis for immunology, based on the notion of continuity. The 'continuity hypothesis' developed here attempts to give a criterion of immunogenicity that avoids the reproaches leveled at the self model.

  7. The Association between Parent Early Adult Drug Use Disorder and Later Observed Parenting Practices and Child Behavior Problems: Testing Alternate Models

    ERIC Educational Resources Information Center

    Bailey, Jennifer A.; Hill, Karl G.; Guttmannova, Katarina; Oesterle, Sabrina; Hawkins, J. David; Catalano, Richard F.; McMahon, Robert J.

    2013-01-01

    This study tested the association between parent illicit drug use disorder (DUD) in early adulthood and observed parenting practices at ages 27-28 and examined the following 3 theoretically derived models explaining this link: (a) a disrupted parent adult functioning model, (b) a preexisting parent personality factor model, and (c) a disrupted…

  8. An Alternative Theoretical Model: Examining Psychosocial Identity Development of International Students in the United States

    ERIC Educational Resources Information Center

    Kim, Eunyoung

    2012-01-01

    Despite the plethora of college student identity development research, very little attention has been paid to the identity formation of international students. Rather than adopting existing identity theories in college student development, this exploratory qualitative study proposes a new psychosocial identity development model for international…

  9. Objectives of the Airline Firm: Theory

    NASA Technical Reports Server (NTRS)

    Kneafsey, J. T.

    1972-01-01

    Theoretical models are formulated for airline firm operations that revolve around alternative formulations of managerial goals which these firms are pursuing in practice. Consideration is given to the different objective functions which the companies are following in lieu of profit maximization.

  10. Aircraft interior noise reduction by alternate resonance tuning

    NASA Technical Reports Server (NTRS)

    Gottwald, James A.; Bliss, Donald B.

    1990-01-01

    The focus is on a noise control method which considers aircraft fuselages lined with panels alternately tuned to frequencies above and below the frequency that must be attenuated. An interior noise reduction approach called alternate resonance tuning (ART) is described both theoretically and experimentally. Problems dealing with tuning single-paneled wall structures for optimum noise reduction using the ART methodology are presented, and three theoretical problems are analyzed. The first analysis is a three dimensional, full acoustic solution for tuning a panel wall composed of repeating sections with four different panel tunings within that section, where the panels are modeled as idealized spring-mass-damper systems. The second analysis is a two dimensional, full acoustic solution for a panel geometry influenced by the effect of a propagating external pressure field such as that which might be associated with propeller passage by a fuselage. To reduce the analysis complexity, idealized spring-mass-damper panels are again employed. The final theoretical analysis presents the general four panel problem with real panel sections, where the effect of higher structural modes is discussed. Results from an experimental program highlight real applications of the ART concept and show the effectiveness of the tuning on real structures.

  11. Theoretical nozzle performance of a microwave electrothermal thruster using experimental data

    NASA Technical Reports Server (NTRS)

    Haraburda, Scott S.; Hawley, Martin C.

    1992-01-01

    Research aimed at developing a fundamental understanding of the plasma processes as applied to spacecraft propulsion is presented. Calorimetric, photographic, and spectrophotometric measurements based on the TM011 and TM012 modes in the resonance cavity have been performed. The efficiency of a thruster has been calculated using a theoretical model for predicting temperature, velocity, and species density within the propellant. It is concluded that the microwave electrothermal thruster is a viable alternative to electrode thrusters.

  12. A review of the solar array manufacturing industry costing standards

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The solar array manufacturing industry costing standards model is designed to compare the cost of producing solar arrays using alternative manufacturing processes. Constructive criticism of the methodology used is intended to enhance its implementation as a practical design tool. Three main elements of the procedure include workbook format and presentation, theoretical model validity and standard financial parameters.

  13. Alternative Methods for Assessing Mediation in Multilevel Data: The Advantages of Multilevel SEM

    ERIC Educational Resources Information Center

    Preacher, Kristopher J.; Zhang, Zhen; Zyphur, Michael J.

    2011-01-01

    Multilevel modeling (MLM) is a popular way of assessing mediation effects with clustered data. Two important limitations of this approach have been identified in prior research and a theoretical rationale has been provided for why multilevel structural equation modeling (MSEM) should be preferred. However, to date, no empirical evidence of MSEM's…
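
    For context (a standard formulation of the argument, not quoted from the truncated abstract): in MSEM each observed variable in a clustered mediation design is decomposed into latent between- and within-cluster parts,

        x_{ij} = x_{Bj} + x_{Wij},

    so the indirect effect can be estimated separately at each level (a_B b_B between clusters, a_W b_W within clusters), whereas conventional MLM approaches that rely on observed group means can conflate the two components.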

  14. The Effect of Expected Value on Attraction Effect Preference Reversals

    PubMed Central

    Warren, Paul A.; El‐Deredy, Wael; Howes, Andrew

    2016-01-01

    Abstract The attraction effect shows that adding a third alternative to a choice set can alter preference between the original two options. For over 30 years, this simple demonstration of context dependence has been taken as strong evidence against a class of parsimonious value‐maximising models that evaluate alternatives independently from one another. Significantly, however, in previous demonstrations of the attraction effect alternatives are approximately equally valuable, so there was little consequence to the decision maker irrespective of which alternative was selected. Here we vary the difference in expected value between alternatives and provide the first demonstration that, although extinguished with large differences, this theoretically important effect persists when choice between alternatives has a consequence. We use this result to clarify the implications of the attraction effect, arguing that although it robustly violates the assumptions of value‐maximising models, it does not eliminate the possibility that human decision making is optimal. © 2016 The Authors Journal of Behavioral Decision Making Published by John Wiley & Sons Ltd. PMID:29081595

  15. The Effect of Expected Value on Attraction Effect Preference Reversals.

    PubMed

    Farmer, George D; Warren, Paul A; El-Deredy, Wael; Howes, Andrew

    2017-10-01

    The attraction effect shows that adding a third alternative to a choice set can alter preference between the original two options. For over 30 years, this simple demonstration of context dependence has been taken as strong evidence against a class of parsimonious value-maximising models that evaluate alternatives independently from one another. Significantly, however, in previous demonstrations of the attraction effect alternatives are approximately equally valuable, so there was little consequence to the decision maker irrespective of which alternative was selected. Here we vary the difference in expected value between alternatives and provide the first demonstration that, although extinguished with large differences, this theoretically important effect persists when choice between alternatives has a consequence. We use this result to clarify the implications of the attraction effect, arguing that although it robustly violates the assumptions of value-maximising models, it does not eliminate the possibility that human decision making is optimal. © 2016 The Authors Journal of Behavioral Decision Making Published by John Wiley & Sons Ltd.

  16. Self-Assembled Magnetic Surface Swimmers: Theoretical Model

    NASA Astrophysics Data System (ADS)

    Aranson, Igor; Belkin, Maxim; Snezhko, Alexey

    2009-03-01

    The mechanisms of self-propulsion of living microorganisms are a fascinating phenomenon attracting enormous attention in the physics community. A new type of self-assembled micro-swimmers, magnetic snakes, is an excellent tool to model locomotion in a simple table-top experiment. The snakes self-assemble from a dispersion of magnetic microparticles suspended on the liquid-air interface and subjected to an alternating magnetic field. Formation and dynamics of these swimmers are captured in the framework of a theoretical model coupling a paradigm equation for the amplitude of surface waves, a conservation law for the density of particles, and the Navier-Stokes equation for hydrodynamic flows. The results of continuum modeling are supported by hybrid molecular dynamics simulations of magnetic particles floating on the surface of the fluid.

  17. Network structure of production

    PubMed Central

    Atalay, Enghin; Hortaçsu, Ali; Roberts, James; Syverson, Chad

    2011-01-01

    Complex social networks have received increasing attention from researchers. Recent work has focused on mechanisms that produce scale-free networks. We theoretically and empirically characterize the buyer–supplier network of the US economy and find that purely scale-free models have trouble matching key attributes of the network. We construct an alternative model that incorporates realistic features of firms’ buyer–supplier relationships and estimate the model’s parameters using microdata on firms’ self-reported customers. This alternative framework is better able to match the attributes of the actual economic network and aids in further understanding several important economic phenomena. PMID:21402924

  18. Gravitational anti-screening as an alternative to dark matter

    NASA Astrophysics Data System (ADS)

    Penner, A. Raymond

    2016-04-01

    A semiclassical model of the screening of electric charge by virtual electric dipoles, as found in electrodynamic theory, will be presented. This model is then applied to the hypothetical case of an electric force where like charges attract. The resulting anti-screening of the electric charge is found to have the same functional dependence on the field source and observation distance that is found with the Baryonic Tully-Fisher Relationship. This leads to an anti-screening model for the gravitational force which is then used to determine the theoretical rotational curve of the Galaxy and the theoretical velocity dispersions and shear values for the Coma cluster. These theoretical results are found to be in good agreement with the corresponding astronomical observations. The screening of electric charge as found in QED and the larger apparent masses of galaxies and galactic clusters therefore appear to be two sides of the same coin.
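
    For reference, the Baryonic Tully-Fisher Relationship invoked here ties a galaxy's baryonic mass to its asymptotic (flat) rotation velocity as approximately

        M_b \propto v_f^{4},

    and the abstract's claim is that gravitational anti-screening reproduces this same functional dependence on the field source and the observation distance.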

  19. Evidence accumulation as a model for lexical selection.

    PubMed

    Anders, R; Riès, S; van Maanen, L; Alario, F X

    2015-11-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process related to selecting a lexical target from a number of alternatives, which each have varying activations (or signal supports), that are largely resultant of an initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related or combined with conventional psycholinguistic theory and their simulatory instantiations (generally, neural network models). Then with a demonstrative application on a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory, and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Psychological and behavioral consequences of job loss: a covariance structure analysis using Weiner's (1985) attribution model.

    PubMed

    Prussia, G E; Kinicki, A J; Bracker, J S

    1993-06-01

    B. Weiner's (1985) attribution model of achievement motivation and emotion was used as a theoretical foundation to examine the mediating processes between involuntary job loss and employment status. Seventy-nine manufacturing employees were surveyed 1 month prior to permanent displacement, and finding another job was assessed 18 months later. Covariance structure analysis was used to evaluate goodness of fit and to compare the model to alternative measurement and structural representations. Discriminant validity analyses indicated that the causal dimensions underlying the model were not independent. Model predictions were supported in that internal and stable attributions for job loss negatively influenced finding another job through expectations for re-employment. These predictions held up even after controlling for influential unmeasured variables. Practical and theoretical implications are discussed.

  1. The latent structure of personality functioning: Investigating Criterion A from the alternative model for personality disorders in DSM-5.

    PubMed

    Zimmermann, Johannes; Böhnke, Jan R; Eschstruth, Rhea; Mathews, Alessa; Wenzel, Kristin; Leising, Daniel

    2015-08-01

    The alternative model for the classification of personality disorders (PD) in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) Section III comprises 2 major components: impairments in personality functioning (Criterion A) and maladaptive personality traits (Criterion B). In this study, we investigated the latent structure of Criterion A (a) within subdomains, (b) across subdomains, and (c) in conjunction with the Criterion B trait facets. Data were gathered as part of an online study that collected other-ratings by 515 laypersons and 145 therapists. Laypersons were asked to assess 1 of their personal acquaintances, whereas therapists were asked to assess 1 of their patients, using 135 items that captured features of Criteria A and B. We were able to show that (a) the structure within the Criterion A subdomains can be appropriately modeled using generalized graded unfolding models, with results suggesting that the items are indeed related to common underlying constructs but often deviate from their theoretically expected severity level; (b) the structure across subdomains is broadly in line with a model comprising 2 strongly correlated factors of self- and interpersonal functioning, with some notable deviations from the theoretical model; and (c) the joint structure of the Criterion A subdomains and the Criterion B facets broadly resembles the expected model of 2 plus 5 factors, albeit the loading pattern suggests that the distinction between Criteria A and B is somewhat blurry. Our findings provide support for several major assumptions of the alternative DSM-5 model for PD but also highlight aspects of the model that need to be further refined. (c) 2015 APA, all rights reserved).

  2. Household Energy Consumption: Community Context and the Fuelwood Transition*

    PubMed Central

    Link, Cynthia F.; Axinn, William G.; Ghimire, Dirgha J.

    2012-01-01

    We examine the influence of community context on change over time in households’ use of non-wood fuels. Our theoretical framework builds on sociological concepts in order to study energy consumption at the micro-level. The framework emphasizes the importance of nonfamily organizations and services in the local community as determinants of the transition from use of fuelwood to use of alternative fuels. We use multilevel longitudinal data on household fuel choice and community context from rural Nepal to provide empirical tests of our theoretical model. Results reveal that increased exposure to nonfamily organizations in the local community increases the use of alternative fuels. The findings illustrate key features of human impacts on the local environment and motivate greater incorporation of social organization into research on environmental change. PMID:23017795

  3. African American Female Offender's Use of Alternative and Traditional Health Services After Re-Entry: Examining the Behavioral Model for Vulnerable Populations.

    PubMed

    Oser, Carrie B; Bunting, Amanda M; Pullen, Erin; Stevens-Watkins, Danelle

    2016-01-01

    This is the first known study to use the Gelberg-Andersen Behavioral Model for Vulnerable Populations to predict African American women's use of three types of health services (alternative, hospitalization, and ambulatory) in the 18 months after release from prison. In the multivariate models, the most robust predictors of all three types of service utilization were in the vulnerable theoretical domains. Alternative health services were predicted by ethnic community membership, higher religiosity, and HIV/HCV. Hospitalizations were predicted by the lack of barriers to health care and disability. Ambulatory office visits were predicted by more experiences of gendered racism, a greater number of physical health problems, and HIV/HCV. Findings highlight the importance of cultural factors and HIV/HCV in obtaining both alternative and formal health care during community re-entry. Clinicians and policymakers should consider the salient role that the vulnerable domain plays in offender's accessing health services.

  4. African American Female Offender’s Use of Alternative and Traditional Health Services After Re-Entry: Examining the Behavioral Model for Vulnerable Populations

    PubMed Central

    Oser, Carrie B.; Bunting, Amanda M.; Pullen, Erin; Stevens-Watkins, Danelle

    2016-01-01

    This is the first known study to use the Gelberg-Andersen Behavioral Model for Vulnerable Populations to predict African American women’s use of three types of health services (alternative, hospitalization, and ambulatory) in the 18 months after release from prison. In the multivariate models, the most robust predictors of all three types of service utilization were in the vulnerable theoretical domains. Alternative health services were predicted by ethnic community membership, higher religiosity, and HIV/HCV. Hospitalizations were predicted by the lack of barriers to health care and disability. Ambulatory office visits were predicted by more experiences of gendered racism, a greater number of physical health problems, and HIV/HCV. Findings highlight the importance of cultural factors and HIV/HCV in obtaining both alternative and formal health care during community re-entry. Clinicians and policy makers should consider the salient role that the vulnerable domain plays in offender’s accessing health services. PMID:27133515

  5. Solid propellant rocket motor internal ballistics performance variation analysis, phase 3

    NASA Technical Reports Server (NTRS)

    Sforzini, R. H.; Foster, W. A., Jr.; Murph, J. E.; Adams, G. W., Jr.

    1977-01-01

    Results of research aimed at improving the predictability of off nominal internal ballistics performance of solid propellant rocket motors (SRMs), including thrust imbalance between two SRMs firing in parallel, are reported. The potential effects of nozzle throat erosion on internal ballistic performance were studied and a propellant burning rate law postulated. The propellant burning rate model, when coupled with the grain deformation model, permits an excellent match between theoretical results and test data for the Titan IIIC, TU455.02, and the first Space Shuttle SRM (DM-1). Analysis of star grain deformation using an experimental model and a finite element model shows the star grain deformation effects for the Space Shuttle to be small in comparison to those of the circular perforated grain. An alternative technique was developed for predicting thrust imbalance without recourse to the Monte Carlo computer program. A scaling relationship used to relate theoretical results to test results may be applied to the alternative technique of predicting thrust imbalance or to the Monte Carlo evaluation. Extended investigation into the effect of strain rate on propellant burning rate leads to the conclusion that the thermoelastic effect is generally negligible for both steadily increasing pressure loads and oscillatory loads.

  6. Argumentation in Science Education: A Model-based Framework

    NASA Astrophysics Data System (ADS)

    Böttcher, Florian; Meisert, Anke

    2011-02-01

    The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons for the appropriateness of a theoretical model which explains a certain phenomenon. Argumentation is considered to be the process of the critical evaluation of such a model if necessary in relation to alternative models. Secondly, some methodological details are exemplified for the use of a model-based analysis in the concrete classroom context. Third, the application of the approach in comparison with other analytical models will be presented to demonstrate the explicatory power and depth of the model-based perspective. Primarily, the framework of Toulmin to structurally analyse arguments is contrasted with the approach presented here. It will be demonstrated how common methodological and theoretical problems in the context of Toulmin's framework can be overcome through a model-based perspective. Additionally, a second more complex argumentative sequence will also be analysed according to the invented analytical scheme to give a broader impression of its potential in practical use.

  7. From the Big Bang to the Brain.

    ERIC Educational Resources Information Center

    Boliek, Carol A.; Lohmeier, Heather

    1999-01-01

    Summarizes research findings that challenge long-standing theories of infant cognition and motor development and proposes alternative theoretical models to describe skill acquisition during the first several years of life. Findings are discussed with respect to research in the area of infant speech physiology and production. (Author/CR)

  8. Simple, distance-dependent formulation of the Watts-Strogatz model for directed and undirected small-world networks.

    PubMed

    Song, H Francis; Wang, Xiao-Jing

    2014-12-01

    Small-world networks-complex networks characterized by a combination of high clustering and short path lengths-are widely studied using the paradigmatic model of Watts and Strogatz (WS). Although the WS model is already quite minimal and intuitive, we describe an alternative formulation of the WS model in terms of a distance-dependent probability of connection that further simplifies, both practically and theoretically, the generation of directed and undirected WS-type small-world networks. In addition to highlighting an essential feature of the WS model that has previously been overlooked, namely the equivalence to a simple distance-dependent model, this alternative formulation makes it possible to derive exact expressions for quantities such as the degree and motif distributions and global clustering coefficient for both directed and undirected networks in terms of model parameters.
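
    A minimal sketch of the idea described here for the undirected case: connect each pair of nodes on a ring with a probability that depends only on their ring distance, high for near neighbours and small (but nonzero) otherwise. The specific probability function below is an assumption chosen so the mean degree stays close to K; the paper gives its own exact formulation, which this sketch does not reproduce.

      import random
      from itertools import combinations

      def ws_distance_dependent(n: int, k: int, beta: float, seed: int = 0) -> set:
          """Undirected WS-type small-world graph from a distance-dependent connection probability."""
          rng = random.Random(seed)
          edges = set()
          for i, j in combinations(range(n), 2):
              d = min(abs(i - j), n - abs(i - j))   # ring (lattice) distance between nodes i and j
              if d <= k // 2:
                  p = 1.0 - beta                    # near neighbours: connected with high probability
              else:
                  p = beta * k / (n - 1 - k)        # distant pairs: small probability, keeps mean degree ~k
              if rng.random() < p:
                  edges.add((i, j))
          return edges

      g = ws_distance_dependent(n=200, k=6, beta=0.1)
      print(len(g), "edges; expected about", 200 * 6 // 2)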

  9. Simple, distance-dependent formulation of the Watts-Strogatz model for directed and undirected small-world networks

    NASA Astrophysics Data System (ADS)

    Song, H. Francis; Wang, Xiao-Jing

    2014-12-01

    Small-world networks—complex networks characterized by a combination of high clustering and short path lengths—are widely studied using the paradigmatic model of Watts and Strogatz (WS). Although the WS model is already quite minimal and intuitive, we describe an alternative formulation of the WS model in terms of a distance-dependent probability of connection that further simplifies, both practically and theoretically, the generation of directed and undirected WS-type small-world networks. In addition to highlighting an essential feature of the WS model that has previously been overlooked, namely the equivalence to a simple distance-dependent model, this alternative formulation makes it possible to derive exact expressions for quantities such as the degree and motif distributions and global clustering coefficient for both directed and undirected networks in terms of model parameters.

  10. A Multiple-Choice Task with Changes of Mind

    PubMed Central

    Albantakis, Larissa; Branzi, Francesca M.; Costa, Albert; Deco, Gustavo

    2012-01-01

    The role of changes of mind and multiple choices has recently received increased attention in the study of perceptual decision-making. Previously, these extensions to standard two-alternative tasks have been studied separately. Here we explored how changes of mind depend on the number of choice-alternatives. To this end, we tested 14 human subjects on a 2- and 4-alternative direction-discrimination task. Changes of mind in the participants' movement trajectories could be observed for two and for four choice alternatives. With fewer alternatives, participants responded faster and more accurately. The frequency of changes of mind, however, did not significantly differ for the different numbers of choice alternatives. Nevertheless, mind-changing improved the participants' final performance, particularly for intermediate difficulty levels, in both experimental conditions. Moreover, the mean reaction times of individual participants were negatively correlated with their overall tendency to make changes of mind. We further reproduced these findings with a multi-alternative attractor model for decision-making, while a simple race model could not account for the experimental data. Our experiment, combined with the theoretical models allowed us to shed light on: (1) the differences in choice behavior between two and four alternatives, (2) the differences between the data of our human subjects and previous monkey data, (3) individual differences between participants, and (4) the inhibitory interaction between neural representations of choice alternatives. PMID:22916216
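
    To make the contrast concrete, a minimal simulation of the simple race model mentioned above: independent noisy accumulators, one per alternative, and the first to reach a common threshold determines the choice and the reaction time. The drift values, noise level, time step, and threshold are illustrative assumptions; the multi-alternative attractor model used in the paper is not reproduced here.

      import numpy as np

      def race_trial(drifts, threshold=1.0, noise=0.1, dt=0.001, rng=None):
          """One trial: independent accumulators race to a common threshold."""
          rng = rng or np.random.default_rng()
          drifts = np.asarray(drifts, dtype=float)
          x = np.zeros(len(drifts))
          t = 0.0
          while x.max() < threshold:
              x += drifts * dt + noise * np.sqrt(dt) * rng.standard_normal(len(drifts))
              t += dt
          return int(np.argmax(x)), t  # index of chosen alternative, reaction time in seconds

      rng = np.random.default_rng(0)
      trials = [race_trial([0.8, 0.4, 0.4, 0.4], rng=rng) for _ in range(200)]
      choices, rts = zip(*trials)
      print("accuracy:", np.mean(np.array(choices) == 0), " mean RT:", float(np.mean(rts)))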

  11. Alternating-Offers Protocol for Multi-issue Bilateral Negotiation in Semantic-Enabled Marketplaces

    NASA Astrophysics Data System (ADS)

    Ragone, Azzurra; di Noia, Tommaso; di Sciascio, Eugenio; Donini, Francesco M.

    We present a semantic-based approach to multi-issue bilateral negotiation for e-commerce. We use Description Logics to model advertisements, and relations among issues as axioms in a TBox. We then introduce a logic-based alternating-offers protocol, able to handle conflicting information, that merges non-standard reasoning services in Description Logics with utility theory to find the most suitable agreements. We illustrate and motivate the theoretical framework, the logical language, and the negotiation protocol.

  12. Task inhibition, conflict, and the n-2 repetition cost: A combined computational and empirical approach.

    PubMed

    Sexton, Nicholas J; Cooper, Richard P

    2017-05-01

    Task inhibition (also known as backward inhibition) is an hypothesised form of cognitive inhibition evident in multi-task situations, with the role of facilitating switching between multiple, competing tasks. This article presents a novel cognitive computational model of a backward inhibition mechanism. By combining aspects of previous cognitive models in task switching and conflict monitoring, the model instantiates the theoretical proposal that backward inhibition is the direct result of conflict between multiple task representations. In a first simulation, we demonstrate that the model produces two effects widely observed in the empirical literature, specifically, reaction time costs for both (n-1) task switches and n-2 task repeats. Through a systematic search of parameter space, we demonstrate that these effects are a general property of the model's theoretical content, and not specific parameter settings. We further demonstrate that the model captures previously reported empirical effects of inter-trial interval on n-2 switch costs. A final simulation extends the paradigm of switching between tasks of asymmetric difficulty to three tasks, and generates novel predictions for n-2 repetition costs. Specifically, the model predicts that n-2 repetition costs associated with hard-easy-hard alternations are greater than for easy-hard-easy alternations. Finally, we report two behavioural experiments testing this hypothesis, with results consistent with the model predictions. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Neural and hybrid modeling: an alternative route to efficiently predict the behavior of biotechnological processes aimed at biofuels obtainment.

    PubMed

    Curcio, Stefano; Saraceno, Alessandra; Calabrò, Vincenza; Iorio, Gabriele

    2014-01-01

    The present paper was aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, might efficiently predict the behavior of two biotechnological processes designed for the obtainment of second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step for the obtainment of biodiesel, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was proved that the proposed modeling approaches provided very accurate predictions of systems behavior. Both neural network and hybrid modeling definitely represented a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, of the true kinetic mechanisms, and of the transport phenomena involved in biotechnological processes was difficult to be achieved.

  14. Neural and Hybrid Modeling: An Alternative Route to Efficiently Predict the Behavior of Biotechnological Processes Aimed at Biofuels Obtainment

    PubMed Central

    Saraceno, Alessandra; Calabrò, Vincenza; Iorio, Gabriele

    2014-01-01

    The present paper was aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, might efficiently predict the behavior of two biotechnological processes designed for the obtainment of second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step for the obtainment of biodiesel, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was proved that the proposed modeling approaches provided very accurate predictions of systems behavior. Both neural network and hybrid modeling definitely represented a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, of the true kinetic mechanisms, and of the transport phenomena involved in biotechnological processes was difficult to be achieved. PMID:24516363

  15. Theoretical Comparison of Fixed Route Bus and Flexible Route Subscription Bus Feeder Service in Low Density Areas

    DOT National Transportation Integrated Search

    1975-03-01

    Parametric variation of demand density was used to compare service level and cost of two alternative systems for providing low density feeder service. Supply models for fixed route and flexible route service were developed and applied to determine ra...

  16. Learning from Experience. Empowerment or Incorporation?

    ERIC Educational Resources Information Center

    Fraser, Wilma

    Based on a Making Experience Count (MEC) project, this book examines current trends in learning from experience. Chapter 1 discusses key theoretical elements that underpin work in the field of experiential learning and analyzes the contribution of the andragogic approach to adult learning. Chapter 2 offers an alternative model--gynagogy--and…

  17. Theoretical and methodological issues with testing the SCCT and RIASEC models: Comment on Lent, Sheu, and Brown (2010) and Lubinski (2010).

    PubMed

    Armstrong, Patrick Ian; Vogel, David L

    2010-04-01

    The current article replies to comments made by Lent, Sheu, and Brown (2010) and Lubinski (2010) regarding the study "Interpreting the Interest-Efficacy Association From a RIASEC Perspective" (Armstrong & Vogel, 2009). The comments made by Lent et al. and Lubinski highlight a number of important theoretical and methodological issues, including the process of defining and differentiating between constructs, the assumptions underlying Holland's (1959, 1997) RIASEC (Realistic, Investigative, Artistic, Social, Enterprising, and Conventional types) model and interrelations among constructs specified in social cognitive career theory (SCCT), the importance of incremental validity for evaluating constructs, and methodological considerations when quantifying interest-efficacy correlations and for comparing models using multivariate statistical methods. On the basis of these comments and previous research on the SCCT and Holland models, we highlight the importance of considering multiple theoretical perspectives in vocational research and practice. Alternative structural models are outlined for examining the role of interests, self-efficacy, learning experiences, outcome expectations, personality, and cognitive abilities in the career choice and development process. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  18. Game Theoretic Modeling of Water Resources Allocation Under Hydro-Climatic Uncertainty

    NASA Astrophysics Data System (ADS)

    Brown, C.; Lall, U.; Siegfried, T.

    2005-12-01

    Typical hydrologic and economic modeling approaches rely on assumptions of climate stationarity and economic conditions of ideal markets and rational decision-makers. In this study, we incorporate hydroclimatic variability with a game theoretic approach to simulate and evaluate common water allocation paradigms. Game Theory may be particularly appropriate for modeling water allocation decisions. First, a game theoretic approach allows economic analysis in situations where price theory does not apply, which is typically the case in water resources where markets are thin, players are few, and rules of exchange are highly constrained by legal or cultural traditions. Previous studies confirm that game theory is applicable to water resources decision problems, yet applications and modeling based on these principles are only rarely observed in the literature. Second, there are numerous existing theoretical and empirical studies of specific games and human behavior that may be applied in the development of predictive water allocation models. With this framework, one can evaluate alternative orderings and rules regarding the fraction of available water that one is allowed to appropriate. Specific attributes of the players involved in water resources management complicate the determination of solutions to game theory models. While an analytical approach will be useful for providing general insights, the variety of preference structures of individual players in a realistic water scenario will likely require a simulation approach. We propose a simulation approach incorporating the rationality, self-interest and equilibrium concepts of game theory with an agent-based modeling framework that allows the distinct properties of each player to be expressed and allows the performance of the system to manifest the integrative effect of these factors. Underlying this framework, we apply a realistic representation of spatio-temporal hydrologic variability and incorporate the impact of decisions made a priori to hydrologic realizations and those made a posteriori on alternative allocation mechanisms. Outcomes are evaluated in terms of water productivity, net social benefit and equity. The performance of hydro-climate prediction modeling in each allocation mechanism will be assessed. Finally, year-to-year system performance and feedback pathways are explored. In this way, the system can be adaptively managed toward equitable and efficient water use.
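
    As a purely illustrative aside (our sketch, not the authors' agent-based model), the kind of allocation-rule comparison described above can be prototyped in a few lines: hypothetical agents with assumed demands and a diminishing-returns benefit function receive water under either a strict priority ordering or proportional sharing, and each rule is scored on mean benefit and a crude equity index.

```python
# A minimal sketch (illustrative only, not the authors' model): comparing two
# simple allocation rules, strict priority ordering versus proportional
# sharing, for agents with diminishing-returns benefit functions under
# random annual water supply.
import numpy as np

rng = np.random.default_rng(6)
demand = np.array([30.0, 25.0, 20.0, 15.0])            # hypothetical agent demands
years = rng.gamma(shape=4.0, scale=15.0, size=1000)    # random annual available water

def priority(supply):
    """Senior users are served in full before junior users receive anything."""
    alloc = np.zeros_like(demand)
    remaining = supply
    for i in range(len(demand)):
        alloc[i] = min(demand[i], remaining)
        remaining -= alloc[i]
    return alloc

def proportional(supply):
    """Every user receives the same fraction of its demand."""
    return demand * min(1.0, supply / demand.sum())

def benefit(alloc):
    return np.sum(np.sqrt(alloc))                      # diminishing marginal benefit

for name, rule in [("priority", priority), ("proportional", proportional)]:
    allocs = np.array([rule(w) for w in years])
    mean_benefit = np.mean([benefit(a) for a in allocs])
    equity = allocs.min(axis=1).mean() / allocs.max(axis=1).mean()
    print(f"{name:12s} mean benefit {mean_benefit:5.2f}, equity index {equity:.2f}")
```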

  19. An econometric model of the U.S. secondary copper industry: Recycling versus disposal

    USGS Publications Warehouse

    Slade, M.E.

    1980-01-01

    In this paper, a theoretical model of secondary recovery is developed that integrates microeconomic theories of production and cost with a dynamic model of scrap generation and accumulation. The model equations are estimated for the U.S. secondary copper industry and used to assess the impacts that various policies and future events have on copper recycling rates. The alternatives considered are: subsidies for secondary production, differing energy costs, and varying ore quality in primary production. © 1990.

  20. The Role of Government Policies in the Adoption of Conservation Tillage in China: A Theoretical Model

    NASA Astrophysics Data System (ADS)

    Ding, Ya

    2018-01-01

    In recent years, many areas of China have been facing increasing problems of soil erosion and land degradation. Conservation tillage, with both economic and ecological benefits, provides a good avenue for Chinese farmers to conserve land as well as secure food production. However, the adoption rate of conservation tillage systems is very low in China. In this paper, the author constructs a theoretical model to explain a farmer’s decision to adopt conservation tillage. The goal is to investigate potential reasons behind the low adoption rate and to explore alternative policy tools that can help improve a farmer’s incentive to adopt conservation tillage in China.

  1. A diagnosis of conflict: theoretical barriers to integration in mental health services & their philosophical undercurrents

    PubMed Central

    2010-01-01

    This paper examines the philosophical substructure to the theoretical conflicts that permeate contemporary mental health care in the UK. Theoretical conflicts are treated here as those that arise among practitioners holding divergent theoretical orientations towards the phenomena being treated. Such conflicts, although steeped in history, have become revitalized by recent attempts at integrating mental health services that have forced diversely trained practitioners to work collaboratively, often under one roof. Part I of this paper examines how the history of these conflicts can be understood as a tension between, on the one hand, the medical model and its use by the dominant profession of psychiatry, and on the other, those alternative models and practitioners in some way differentiated from the medical model camp. Examples will be given from recent policy and research to highlight the prevalence of this tension in contemporary practice. Part II of this paper explores the deeper commonalities that lie beneath the theoretical conflict outlined in Part I. These commonalities will be shown to be a part of a captivating framework that has continued to grip the conflict since its inception. By exposing this underlying framework--and the motivations inherent therein--the topic of integration appears in a wholly different light, allowing a renewed philosophical basis for integration to emerge. PMID:20132546

  2. Conceptual Commitments of the LIDA Model of Cognition

    NASA Astrophysics Data System (ADS)

    Franklin, Stan; Strain, Steve; McCall, Ryan; Baars, Bernard

    2013-06-01

    Significant debate on fundamental issues remains in the subfields of cognitive science, including perception, memory, attention, action selection, learning, and others. Psychology, neuroscience, and artificial intelligence each contribute alternative and sometimes conflicting perspectives on the supervening problem of artificial general intelligence (AGI). Current efforts toward a broad-based, systems-level model of minds cannot await theoretical convergence in each of the relevant subfields. Such work therefore requires the formulation of tentative hypotheses, based on current knowledge, that serve to connect cognitive functions into a theoretical framework for the study of the mind. We term such hypotheses "conceptual commitments" and describe the hypotheses underlying one such model, the Learning Intelligent Distribution Agent (LIDA) Model. Our intention is to initiate a discussion among AGI researchers about which conceptual commitments are essential, or particularly useful, toward creating AGI agents.

  3. Thinking meta-theoretically about the role of internalization in the development of body dissatisfaction and body change behaviors.

    PubMed

    Karazsia, Bryan T; van Dulmen, Manfred H M; Wong, Kendal; Crowther, Janis H

    2013-09-01

    Internalization of societal standards of physical attractiveness (i.e., internalization of the thin ideal for women and internalization of the mesomorphic ideal for men) is a widely studied and robust risk factor for body dissatisfaction and maladaptive body change behaviors. Substantial empirical research supports internalization as both a mediator and a moderator of the relation between societal influences and body dissatisfaction. In this paper, a primer on mediation and moderation is followed by a review of literature and discussion of the extent to which internalization can theoretically fulfill the roles of both mediation and moderation. The literature review revealed a stark contrast in research design (experimental versus non-experimental design) when alternate conceptualizations of internalization are adopted. A meta-theoretical, moderated mediation model is presented. This model integrates previous research and can inform future empirical and clinical endeavors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Fatigue-life distributions for reaction time data.

    PubMed

    Tejo, Mauricio; Niklitschek-Soto, Sebastián; Marmolejo-Ramos, Fernando

    2018-06-01

    The family of fatigue-life distributions is introduced as an alternative model of reaction time data. This family includes the shifted Wald distribution and a shifted version of the Birnbaum-Saunders distribution. Although the former has been proposed as a way to model reaction time data, the latter has not. Hence, we provide theoretical, mathematical and practical arguments in support of the shifted Birnbaum-Saunders as a suitable model of simple reaction times and associated cognitive mechanisms.
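
    For readers who want to experiment with this distributional claim, the Birnbaum-Saunders family is available in SciPy as the fatiguelife distribution. The sketch below is ours (with hypothetical parameter values), not the authors' code: it simulates shifted Birnbaum-Saunders reaction times and compares maximum-likelihood fits of the shifted Birnbaum-Saunders and shifted Wald (inverse Gaussian) models by AIC.

```python
# A minimal sketch: fitting a shifted Birnbaum-Saunders ("fatigue-life")
# distribution to simulated reaction times with SciPy and comparing it to a
# shifted Wald (inverse Gaussian) fit via AIC. Parameter values are
# hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated reaction times (seconds): shift (non-decision time) plus fatigue-life noise.
shape, shift, scale = 0.6, 0.25, 0.30
rt = stats.fatiguelife.rvs(shape, loc=shift, scale=scale, size=500, random_state=rng)

def fit_and_aic(dist, data):
    """Maximum-likelihood fit of a SciPy distribution; return parameters and AIC."""
    params = dist.fit(data)
    loglik = np.sum(dist.logpdf(data, *params))
    return params, 2 * len(params) - 2 * loglik

bs_params, bs_aic = fit_and_aic(stats.fatiguelife, rt)   # shifted Birnbaum-Saunders
wald_params, wald_aic = fit_and_aic(stats.invgauss, rt)  # shifted Wald

print(f"shifted Birnbaum-Saunders AIC: {bs_aic:.1f}")
print(f"shifted Wald AIC:              {wald_aic:.1f}")
```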

  5. The Wheel of Writing: A Model of the Writing Domain for the Teaching and Assessing of Writing as a Key Competency

    ERIC Educational Resources Information Center

    Berge, Kjell Lars; Evensen, Lars Sigfred; Thygesen, Ragnar

    2016-01-01

    The model presented in this article aspires to represent a theoretically valid and coherent definition and description of writing, as a basis for teaching and assessing writing as a key competency in school. It represents a critique as well as an extension of previous alternatives in that it views writing as a culturally and individually…

  6. Nominalization and Alternations in Biomedical Language

    PubMed Central

    Cohen, K. Bretonnel; Palmer, Martha; Hunter, Lawrence

    2008-01-01

    Background This paper presents data on alternations in the argument structure of common domain-specific verbs and their associated verbal nominalizations in the PennBioIE corpus. Alternation is the term in theoretical linguistics for variations in the surface syntactic form of verbs, e.g. the different forms of stimulate in FSH stimulates follicular development and follicular development is stimulated by FSH. The data is used to assess the implications of alternations for biomedical text mining systems and to test the fit of the sublanguage model to biomedical texts. Methodology/Principal Findings We examined 1,872 tokens of the ten most common domain-specific verbs or their zero-related nouns in the PennBioIE corpus and labelled them for the presence or absence of three alternations. We then annotated the arguments of 746 tokens of the nominalizations related to these verbs and counted alternations related to the presence or absence of arguments and to the syntactic position of non-absent arguments. We found that alternations are quite common both for verbs and for nominalizations. We also found a previously undescribed alternation involving an adjectival present participle. Conclusions/Significance We found that even in this semantically restricted domain, alternations are quite common, and alternations involving nominalizations are exceptionally diverse. Nonetheless, the sublanguage model applies to biomedical language. We also report on a previously undescribed alternation involving an adjectival present participle. PMID:18779866

  7. Quantitative Differences in Retest Effects across Different Methods Used to Construct Alternate Test Forms

    ERIC Educational Resources Information Center

    Arendasy, Martin E.; Sommer, Markus

    2013-01-01

    Allowing respondents to retake a cognitive ability test has shown to increase their test scores. Several theoretical models have been proposed to explain this effect, which make distinct assumptions regarding the measurement invariance of psychometric tests across test administration sessions with regard to narrower cognitive abilities and general…

  8. Alternative models of recreational off-highway vehicle site demand

    Treesearch

    Jeffrey Englin; Thomas Holmes; Rebecca Niell

    2006-01-01

    A controversial recreation activity is off-highway vehicle use. Off-highway vehicle use is controversial because it is incompatible with most other activities and is extremely hard on natural eco-systems. This study estimates utility theoretic incomplete demand systems for four off-highway vehicle sites. Since two sets of restrictions are equally consistent with...

  9. A Theoretical Model for Estimation of Yield Strength of Fiber Metal Laminate

    NASA Astrophysics Data System (ADS)

    Bhat, Sunil; Nagesh, Suresh; Umesh, C. K.; Narayanan, S.

    2017-08-01

    The paper presents a theoretical model for estimation of yield strength of fiber metal laminate. Principles of elasticity and formulation of residual stress are employed to determine the stress state in metal layer of the laminate that is found to be higher than the stress applied over the laminate resulting in reduced yield strength of the laminate in comparison with that of the metal layer. The model is tested over 4A-3/2 Glare laminate comprising three thin aerospace 2014-T6 aluminum alloy layers alternately bonded adhesively with two prepregs, each prepreg built up of three uni-directional glass fiber layers laid in longitudinal and transverse directions. Laminates with prepregs of E-Glass and S-Glass fibers are investigated separately under uni-axial tension. Yield strengths of both the Glare variants are found to be less than that of aluminum alloy with use of S-Glass fiber resulting in higher laminate yield strength than with the use of E-Glass fiber. Results from finite element analysis and tensile tests conducted over the laminates substantiate the theoretical model.
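
    The qualitative argument can be illustrated with a back-of-the-envelope isostrain calculation (our simplification with assumed moduli, volume fractions, and residual stress, not the paper's elasticity and residual-stress formulation): under uniform strain, the stiffer metal layers carry more than the laminate-average stress, so the laminate yields below the yield strength of the alloy alone.

```python
# A rough isostrain estimate (our simplification, not the paper's model):
# under uniform strain the stiffer aluminium layers carry a stress higher
# than the laminate-average stress, so the laminate yields at a lower applied
# stress than the alloy alone. All numerical values are assumed.
E_al, E_prepreg = 72e9, 30e9          # Pa, assumed layer moduli
v_al, v_prepreg = 0.6, 0.4            # assumed volume fractions for a 3/2 lay-up
sigma_y_al = 420e6                    # Pa, assumed 2014-T6 yield strength
sigma_residual = 20e6                 # Pa, assumed tensile residual stress in the metal

E_laminate = v_al * E_al + v_prepreg * E_prepreg
# Applied laminate stress at which the metal layer reaches its yield stress.
sigma_y_laminate = (sigma_y_al - sigma_residual) * E_laminate / E_al
print(f"estimated laminate yield strength: {sigma_y_laminate / 1e6:.0f} MPa")
```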

  10. Portfolio theory and the alternative decision rule of cost-effectiveness analysis: theoretical and practical considerations.

    PubMed

    Sendi, Pedram; Al, Maiwenn J; Gafni, Amiram; Birch, Stephen

    2004-05-01

    Bridges and Terris (Soc. Sci. Med. (2004)) critique our paper on the alternative decision rule of economic evaluation in the presence of uncertainty and constrained resources within the context of a portfolio of health care programs (Sendi et al. Soc. Sci. Med. 57 (2003) 2207). They argue that by not adopting a formal portfolio theory approach we overlook the optimal solution. We show that these arguments stem from a fundamental misunderstanding of the alternative decision rule of economic evaluation. In particular, the portfolio theory approach advocated by Bridges and Terris is based on the same theoretical assumptions that the alternative decision rule set out to relax. Moreover, Bridges and Terris acknowledge that the proposed portfolio theory approach may not identify the optimal solution to resource allocation problems. Hence, it provides neither theoretical nor practical improvements to the proposed alternative decision rule.

  11. A theoretical approach to medication adherence for children and youth with psychiatric disorders.

    PubMed

    Charach, Alice; Volpe, Tiziana; Boydell, Katherine M; Gearing, Robin E

    2008-01-01

    This article provides a theoretical review of treatment adherence for children and youth with psychiatric disorders where pharmacological agents are first-line interventions. Four empirically based models of health behavior are reviewed and applied to the sparse literature about medication adherence for children with attention-deficit/hyperactivity disorder and young people with first-episode psychosis. Three qualitative studies of medication use are summarized, and details from the first-person narratives are used to illustrate the theoretical models. These studies indicate, when taken together, that the clinical approach to addressing poor medication adherence in children and youth with psychiatric disorders should be guided by more than one theoretical model. Mental health experts should clarify beliefs, address misconceptions, and support exploration of alternative treatment options unless contraindicated. Recognizing the larger context of the family, allowing time for parents and children to change their attitudes, and offering opportunities for easy access to medication in the future are important ways of respecting patient preferences, while steering them toward best-evidence interventions. Future research using qualitative methods of inquiry to investigate parent, child, and youth experiences of mental health interventions should identify effective ways to improve treatment adherence.

  12. Beyond ΛCDM: Problems, solutions, and the road ahead

    NASA Astrophysics Data System (ADS)

    Bull, Philip; Akrami, Yashar; Adamek, Julian; Baker, Tessa; Bellini, Emilio; Beltrán Jiménez, Jose; Bentivegna, Eloisa; Camera, Stefano; Clesse, Sébastien; Davis, Jonathan H.; Di Dio, Enea; Enander, Jonas; Heavens, Alan; Heisenberg, Lavinia; Hu, Bin; Llinares, Claudio; Maartens, Roy; Mörtsell, Edvard; Nadathur, Seshadri; Noller, Johannes; Pasechnik, Roman; Pawlowski, Marcel S.; Pereira, Thiago S.; Quartin, Miguel; Ricciardone, Angelo; Riemer-Sørensen, Signe; Rinaldi, Massimiliano; Sakstein, Jeremy; Saltas, Ippocratis D.; Salzano, Vincenzo; Sawicki, Ignacy; Solomon, Adam R.; Spolyar, Douglas; Starkman, Glenn D.; Steer, Danièle; Tereno, Ismael; Verde, Licia; Villaescusa-Navarro, Francisco; von Strauss, Mikael; Winther, Hans A.

    2016-06-01

    Despite its continued observational successes, there is a persistent (and growing) interest in extending cosmology beyond the standard model, ΛCDM. This is motivated by a range of apparently serious theoretical issues, involving such questions as the cosmological constant problem, the particle nature of dark matter, the validity of general relativity on large scales, the existence of anomalies in the CMB and on small scales, and the predictivity and testability of the inflationary paradigm. In this paper, we summarize the current status of ΛCDM as a physical theory, and review investigations into possible alternatives along a number of different lines, with a particular focus on highlighting the most promising directions. While the fundamental problems are proving reluctant to yield, the study of alternative cosmologies has led to considerable progress, with much more to come if hopes about forthcoming high-precision observations and new theoretical ideas are fulfilled.

  13. The use of neurocomputational models as alternatives to animal models in the development of electrical brain stimulation treatments.

    PubMed

    Beuter, Anne

    2017-05-01

    Recent publications call for more animal models to be used and more experiments to be performed, in order to better understand the mechanisms of neurodegenerative disorders, to improve human health, and to develop new brain stimulation treatments. In response to these calls, some limitations of the current animal models are examined by using Deep Brain Stimulation (DBS) in Parkinson's disease as an illustrative example. Without focusing on the arguments for or against animal experimentation, or on the history of DBS, the present paper argues that given recent technological and theoretical advances, the time has come to consider bioinspired computational modelling as a valid alternative to animal models, in order to design the next generation of human brain stimulation treatments. However, before computational neuroscience is fully integrated in the translational process and used as a substitute for animal models, several obstacles need to be overcome. These obstacles are examined in the context of institutional, financial, technological and behavioural lock-in. Recommendations include encouraging agreement to change long-term habitual practices, explaining what alternative models can achieve, considering economic stakes, simplifying administrative and regulatory constraints, and carefully examining possible conflicts of interest. 2017 FRAME.

  14. The Alternative Peer Group: A Developmentally Appropriate Recovery Support Model for Adolescents.

    PubMed

    Nash, Angela; Collier, Crystal

    2016-01-01

    Recovery as the goal for substance use disorder treatment has been a key component of the Substance Abuse and Mental Health Services Administration's mission for the past decade. Consistent with their mission, there is a call for research and development of recovery-oriented systems of care to support affected individuals through all stages of the recovery process. Evidence is emerging to support recovery practice and research for adults, but recovery-oriented models for adolescents are scant. The Alternative Peer Group (APG) is a comprehensive adolescent recovery support model that integrates recovering peers and prosocial activities into evidence-based clinical practice. Employing APG participants' own words, this article will describe the essential elements and three theoretical frameworks underlying the APG model to illustrate how the APG serves as a developmentally appropriate recovery support service for adolescents with substance use disorder.

  15. Non-standard models and the sociology of cosmology

    NASA Astrophysics Data System (ADS)

    López-Corredoira, Martín

    2014-05-01

    I review some theoretical ideas in cosmology different from the standard "Big Bang": the quasi-steady state model, the plasma cosmology model, non-cosmological redshifts, alternatives to non-baryonic dark matter and/or dark energy, and others. Cosmologists do not usually work within the framework of alternative cosmologies because they feel that these are not at present as competitive as the standard model. Certainly, they are not so developed, and they are not so developed because cosmologists do not work on them. It is a vicious circle. The fact that most cosmologists do not pay them any attention and only dedicate their research time to the standard model is to a great extent due to a sociological phenomenon (the "snowball effect" or "groupthink"). We might well wonder whether cosmology, our knowledge of the Universe as a whole, is a science like other fields of physics or a predominant ideology.

  16. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
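
    A minimal sketch of the general idea (hypothetical data and thresholds, not the authors' H. pylori model): bootstrap resampling of patient-level costs and effects replaces assumed theoretical input distributions inside a Monte Carlo probabilistic sensitivity analysis, and the output is the probability that one strategy is cost-effective at a given willingness to pay.

```python
# A minimal sketch (hypothetical data, not the authors' decision-analytic
# model): probabilistic sensitivity analysis in which uncertain inputs are
# drawn by bootstrap resampling of patient-level data rather than from
# assumed theoretical distributions, then propagated through a simple
# two-strategy comparison.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical patient-level observations for two strategies.
cost_a = rng.gamma(shape=2.0, scale=400.0, size=120)   # strategy A (e.g. eradication therapy)
cost_b = rng.gamma(shape=2.0, scale=300.0, size=120)   # comparator
qaly_a = rng.normal(0.80, 0.10, size=120)
qaly_b = rng.normal(0.76, 0.10, size=120)

threshold = 20_000.0      # assumed willingness to pay per QALY
n_sims = 5_000
net_benefit = np.empty(n_sims)

for i in range(n_sims):
    # Bootstrap: resample patients with replacement within each arm.
    ia = rng.integers(0, len(cost_a), len(cost_a))
    ib = rng.integers(0, len(cost_b), len(cost_b))
    d_cost = cost_a[ia].mean() - cost_b[ib].mean()
    d_qaly = qaly_a[ia].mean() - qaly_b[ib].mean()
    net_benefit[i] = threshold * d_qaly - d_cost    # incremental net monetary benefit

print(f"P(strategy A cost-effective at {threshold:,.0f}/QALY): "
      f"{(net_benefit > 0).mean():.2f}")
```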

  17. Unique heating curves generated by radiofrequency electric-field interactions with semi-aqueous solutions

    NASA Astrophysics Data System (ADS)

    Lara, Nadia C.; Haider, Asad A.; Wilson, Lon J.; Curley, Steven A.; Corr, Stuart J.

    2017-01-01

    Aqueous and nanoparticle-based solutions have been reported to heat when exposed to an alternating radiofrequency (RF) electric-field. Although the theoretical models have been developed to accurately model such a behavior given the solution composition as well as the geometrical constraints of the sample holder, these models have not been investigated across a wide-range of solutions where the dielectric properties differ, especially with regard to the real permittivity. In this work, we investigate the RF heating properties of non-aqueous solutions composed of ethanol, propylene glycol, and glycine betaine with and without varying amounts of NaCl and LiCl. This allowed us to modulate the real permittivity across the range 25-132, as well as the imaginary permittivity across the range 37-177. Our results are in excellent agreement with the previously developed theoretical models. We have shown that different materials generate unique RF heating curves that differ from the standard aqueous heating curves. The theoretical model previously described is robust and accounts for the RF heating behavior of materials with a variety of dielectric properties, which may provide applications in non-invasive RF cancer hyperthermia.
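
    For orientation, the dependence on the imaginary permittivity that the abstract refers to enters through the standard expression for the time-averaged volumetric heating of a lossy dielectric in an alternating field (a textbook relation in our notation, not an equation reproduced from the paper):

```latex
% Time-averaged volumetric heating rate of a lossy dielectric in an RF field
% (standard relation; symbols are ours, not the paper's).
P \;=\; \omega\,\varepsilon_0\,\varepsilon''\,E_{\mathrm{rms}}^{2}
  \;=\; 2\pi f\,\varepsilon_0\,\varepsilon''\,E_{\mathrm{rms}}^{2}
```

    where ε'' is the imaginary part of the relative permittivity (including any conductive losses) and E_rms is the local field strength inside the sample.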

  18. A Practical Approach to Address Uncertainty in Stakeholder Deliberations.

    PubMed

    Gregory, Robin; Keeney, Ralph L

    2017-03-01

    This article addresses the difficulties of incorporating uncertainty about consequence estimates as part of stakeholder deliberations involving multiple alternatives. Although every prediction of future consequences necessarily involves uncertainty, a large gap exists between common practices for addressing uncertainty in stakeholder deliberations and the procedures of prescriptive decision-aiding models advanced by risk and decision analysts. We review the treatment of uncertainty at four main phases of the deliberative process: with experts asked to describe possible consequences of competing alternatives, with stakeholders who function both as individuals and as members of coalitions, with the stakeholder committee composed of all stakeholders, and with decisionmakers. We develop and recommend a model that uses certainty equivalents as a theoretically robust and practical approach for helping diverse stakeholders to incorporate uncertainties when evaluating multiple-objective alternatives as part of public policy decisions. © 2017 Society for Risk Analysis.
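
    As a small illustration of the certainty-equivalent idea (our example with hypothetical outcomes and an exponential utility, not the procedure recommended in the article), a distribution of consequence scores can be collapsed into a single number that stakeholders with different risk tolerances can compare across alternatives.

```python
# A minimal sketch (illustrative only): the certainty equivalent of an
# uncertain consequence under an exponential (constant risk aversion)
# utility. Outcome values and probabilities are hypothetical.
import numpy as np

def certainty_equivalent(outcomes, probs, risk_tolerance):
    """CE for exponential utility u(x) = 1 - exp(-x / R)."""
    expected_utility = np.sum(probs * (1.0 - np.exp(-outcomes / risk_tolerance)))
    return -risk_tolerance * np.log(1.0 - expected_utility)

outcomes = np.array([100.0, 50.0, -20.0])   # hypothetical consequence scores
probs = np.array([0.5, 0.3, 0.2])
for R in (50.0, 200.0, 1e6):                # from risk-averse to nearly risk-neutral
    ce = certainty_equivalent(outcomes, probs, R)
    print(f"risk tolerance {R:8.0f}: certainty equivalent = {ce:6.1f}")
```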

  19. Elevated carbon dioxide is predicted to promote coexistence among competing species in a trait-based model

    DOE PAGES

    Ali, Ashehad A.; Medlyn, Belinda E.; Aubier, Thomas G.; ...

    2015-10-06

    Differential species responses to atmospheric CO 2 concentration (C a) could lead to quantitative changes in competition among species and community composition, with flow-on effects for ecosystem function. However, there has been little theoretical analysis of how elevated C a (eC a) will affect plant competition, or how composition of plant communities might change. Such theoretical analysis is needed for developing testable hypotheses to frame experimental research. Here, we investigated theoretically how plant competition might change under eC a by implementing two alternative competition theories, resource use theory and resource capture theory, in a plant carbon and nitrogen cycling model.more » The model makes several novel predictions for the impact of eC a on plant community composition. Using resource use theory, the model predicts that eC a is unlikely to change species dominance in competition, but is likely to increase coexistence among species. Using resource capture theory, the model predicts that eC a may increase community evenness. Collectively, both theories suggest that eC a will favor coexistence and hence that species diversity should increase with eC a. Our theoretical analysis leads to a novel hypothesis for the impact of eC a on plant community composition. In this study, the hypothesis has potential to help guide the design and interpretation of eC a experiments.« less

  20. Role of egg predation by haddock in the decline of an Atlantic herring population

    PubMed Central

    Richardson, David E.; Hare, Jonathan A.; Fogarty, Michael J.; Link, Jason S.

    2011-01-01

    Theoretical studies suggest that the abrupt and substantial changes in the productivity of some fisheries species may be explained by predation-driven alternate stable states in their population levels. With this hypothesis, an increase in fishing or a natural perturbation can drive a population from an upper to a lower stable-equilibrium population level. After fishing is reduced or the perturbation ended, this low population level can persist due to the regulatory effect of the predator. Although established in theoretical studies, there is limited empirical support for predation-driven alternate stable states in exploited marine fish populations. We present evidence that egg predation by haddock (Melanogrammus aeglefinus) can cause alternate stable population levels in Georges Bank Atlantic herring (Clupea harengus). Egg predation by haddock explains a substantial decoupling of herring spawning stock biomass (an index of egg production) from observed larval herring abundance (an index of egg hatching). Estimated egg survival rates ranged from <2–70% from 1971 to 2005. A population model incorporating egg predation and herring fishing explains the major population trends of Georges Bank herring over four decades and predicts that, when the haddock population is high, seemingly conservative levels of fishing can still precipitate a severe decline in the herring population. These findings illustrate how efforts to rebuild fisheries can be undermined by not incorporating ecological interactions into fisheries models and management plans. PMID:21825166

  1. Evaluating model structure adequacy: The case of the Maggia Valley groundwater system, southern Switzerland

    USGS Publications Warehouse

    Hill, Mary C.; L. Foglia,; S. W. Mehl,; P. Burlando,

    2013-01-01

    Model adequacy is evaluated with alternative models rated using model selection criteria (AICc, BIC, and KIC) and three other statistics. Model selection criteria are tested with cross-validation experiments and insights for using alternative models to evaluate model structural adequacy are provided. The study is conducted using the computer codes UCODE_2005 and MMA (MultiModel Analysis). One recharge alternative is simulated using the TOPKAPI hydrological model. The predictions evaluated include eight heads and three flows located where ecological consequences and model precision are of concern. Cross-validation is used to obtain measures of prediction accuracy. Sixty-four models were designed deterministically and differ in representation of river, recharge, bedrock topography, and hydraulic conductivity. Results include: (1) What may seem like inconsequential choices in model construction may be important to predictions. Analysis of predictions from alternative models is advised. (2) None of the model selection criteria consistently identified models with more accurate predictions. This is a disturbing result that suggests reconsidering the utility of model selection criteria and/or the cross-validation measures used in this work to measure model accuracy. (3) KIC displayed poor performance for the present regression problems; theoretical considerations suggest that difficulties are associated with wide variations in the sensitivity term of KIC resulting from the models being nonlinear and the problems being ill-posed due to parameter correlations and insensitivity. The other criteria performed somewhat better, and similarly to each other. (4) Quantities with high leverage are more difficult to predict. The results are expected to be generally applicable to models of environmental systems.
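
    For readers unfamiliar with the criteria, the sketch below (ours, on hypothetical data, not the UCODE_2005/MMA workflow) shows how AICc and BIC trade goodness of fit against parameter count for alternative least-squares models; KIC additionally requires the Fisher-information (sensitivity) term discussed in the abstract and is omitted here.

```python
# A minimal sketch (not the UCODE_2005/MMA workflow): computing AICc and BIC
# for alternative least-squares models of the same data, assuming Gaussian
# errors so the criteria can be written in terms of the residual sum of
# squares. KIC would also need the determinant of the Fisher information
# matrix and is omitted.
import numpy as np

def aicc_bic(residuals, n_params):
    n = residuals.size
    rss = float(np.sum(residuals**2))
    k = n_params + 1                       # model parameters plus error variance
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = n * np.log(rss / n) + k * np.log(n)
    return aicc, bic

# Hypothetical example: polynomial models of increasing complexity.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 40)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, x.size)

for degree in (1, 2, 5):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    aicc, bic = aicc_bic(resid, n_params=degree + 1)
    print(f"degree {degree}: AICc = {aicc:7.1f}, BIC = {bic:7.1f}")
```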

  2. Answering the Questions of Rape Prevention Research: A Response to Tharp et al. (2011)

    ERIC Educational Resources Information Center

    Foubert, John D.

    2011-01-01

    Rape prevention programmers and researchers have long struggled to select the most appropriate theoretical models to frame their work. Questions abound regarding appropriate standards of evidence for success of program interventions. The present article provides an alternative point of view to the one put forward by seven staff members from the…

  3. The Demand for Higher Education in Michigan: Projections to the Year 2000.

    ERIC Educational Resources Information Center

    Moor, James R., Jr.; And Others

    Using data from the 1960-1977 period, this study provides a range of headcount enrollment projections for the Michigan higher education system to the year 2000 by type of institution and by age and sex of student under alternative sets of projection assumptions. The theoretical framework, methodology, and working model developed in this study are…

  4. A Theoretical Model of Segmented Youth Labor Markets and the School to Work Transition.

    ERIC Educational Resources Information Center

    Vrooman, John

    Recurring evidence that workers with similar skills do not necessarily earn the same wages led to the formulation of an alternative to the conventional market theory, namely, the segmented market theory. This theory posits that certain skills are distributed not among prospective employees but among jobs, in relation to the technology of those…

  5. Alternative Theoretical Bases for the Study of Human Communication: The Rules Perspective.

    ERIC Educational Resources Information Center

    Cushman, Donald P.

    Three potentially useful perspectives for the scientific development of human communication theory are the law model, the systems approach, and the rules paradigm. It is the purpose of this paper to indicate the utility of the rules perspective. For the purposes of this analysis, human communication is viewed as the successful transfer of symbolic…

  6. An experimental and theoretical evaluation of increased thermal diffusivity phase change devices

    NASA Technical Reports Server (NTRS)

    White, S. P.; Golden, J. O.; Stermole, F. J.

    1972-01-01

    The purpose of this study was to experimentally evaluate and mathematically model the performance of phase change thermal control devices containing high thermal conductivity metal matrices. Three aluminum honeycomb filters were evaluated at five different heat flux levels using n-octadecane as the test material. The system was mathematically modeled by approximating the partial differential equations with a three-dimensional implicit alternating direction technique. The mathematical model predicts the system quite well. All of the phase change times are predicted. The heating of the solid phase is predicted exactly, while there is some variation between theoretical and experimental results in the liquid phase. This variation in the liquid phase could be accounted for by the fact that there are some heat losses in the cell and there could be some convection in the experimental system.
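
    The alternating-direction-implicit idea mentioned above can be illustrated with a two-dimensional, constant-property sketch (ours, not the authors' three-dimensional phase-change code). Each half step is implicit along one coordinate direction only, so the work reduces to a sequence of tridiagonal solves.

```python
# A minimal sketch of one alternating-direction-implicit (ADI) step for the
# 2-D heat equation on a square grid (Peaceman-Rachford splitting). This
# illustrates the numerical technique named in the abstract, not the authors'
# three-dimensional phase-change model.
import numpy as np
from scipy.linalg import solve_banded

def adi_step(T, alpha, dx, dt):
    """Advance the temperature field T one time step with fixed (Dirichlet) boundaries."""
    n = T.shape[0]
    r = alpha * dt / (2.0 * dx**2)

    # Tridiagonal operator (I - r*D2) for interior nodes, in banded form.
    m = n - 2
    ab = np.zeros((3, m))
    ab[0, 1:] = -r
    ab[1, :] = 1.0 + 2.0 * r
    ab[2, :-1] = -r

    # Half step 1: implicit in x, explicit in y.
    Tn = T.copy()
    for j in range(1, n - 1):
        rhs = T[1:-1, j] + r * (T[1:-1, j + 1] - 2.0 * T[1:-1, j] + T[1:-1, j - 1])
        rhs[0] += r * T[0, j]
        rhs[-1] += r * T[-1, j]
        Tn[1:-1, j] = solve_banded((1, 1), ab, rhs)

    # Half step 2: implicit in y, explicit in x.
    T2 = Tn.copy()
    for i in range(1, n - 1):
        rhs = Tn[i, 1:-1] + r * (Tn[i + 1, 1:-1] - 2.0 * Tn[i, 1:-1] + Tn[i - 1, 1:-1])
        rhs[0] += r * T2[i, 0]
        rhs[-1] += r * T2[i, -1]
        T2[i, 1:-1] = solve_banded((1, 1), ab, rhs)
    return T2

# Hypothetical use: one hot edge on an initially cold plate.
n = 41
T = np.zeros((n, n))
T[0, :] = 100.0                      # fixed hot edge
for _ in range(200):
    T = adi_step(T, alpha=1e-4, dx=0.01, dt=0.5)
print(f"centre temperature after 100 s: {T[n // 2, n // 2]:.1f}")
```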

  7. Optimal policies of non-cross-resistant chemotherapy on Goldie and Coldman's cancer model.

    PubMed

    Chen, Jeng-Huei; Kuo, Ya-Hui; Luh, Hsing Paul

    2013-10-01

    Mathematical models can be used to study the effects of chemotherapy on tumor cells. In particular, in 1979, Goldie and Coldman proposed the first mathematical model to relate the drug sensitivity of tumors to their mutation rates. Many scientists have since referred to this pioneering work because of its simplicity and elegance. Its original idea has also been extended and further investigated in massive follow-up studies of cancer modeling and optimal treatment. Goldie and Coldman, together with Guaduskas, later used their model, with a simulation approach, to explain why an alternating non-cross-resistant chemotherapy is optimal. Subsequently in 1983, Goldie and Coldman proposed an extended stochastic based model and provided a rigorous mathematical proof to their earlier simulation work when the extended model is approximated by its quasi-approximation. However, Goldie and Coldman's analytic study of optimal treatments focused mainly on a process with symmetrical parameter settings and presented few theoretical results for asymmetrical settings. In this paper, we recast and restate Goldie, Coldman, and Guaduskas' model as a multi-stage optimization problem. Under an asymmetrical assumption, the conditions under which a treatment policy can be optimal are derived. The proposed framework enables us to consider some optimal policies on the model analytically. In addition, Goldie, Coldman and Guaduskas' work with symmetrical settings can be treated as a special case of our framework. Based on the derived conditions, this study provides an alternative proof to Goldie and Coldman's work. In addition to the theoretical derivation, numerical results are included to justify the correctness of our work. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Theoretical Noise Analysis on a Position-sensitive Metallic Magnetic Calorimeter

    NASA Technical Reports Server (NTRS)

    Smith, Stephen J.

    2007-01-01

    We report on the theoretical noise analysis for a position-sensitive Metallic Magnetic Calorimeter (MMC), consisting of MMC read-out at both ends of a large X-ray absorber. Such devices are under consideration as alternatives to other cryogenic technologies for future X-ray astronomy missions. We use a finite-element model (FEM) to numerically calculate the signal and noise response at the detector outputs and investigate the correlations between the noise measured at each MMC coupled by the absorber. We then calculate, using the optimal filter concept, the theoretical energy and position resolution across the detector and discuss the trade-offs involved in optimizing the detector design for energy resolution, position resolution and count rate. The results show, theoretically, the position-sensitive MMC concept offers impressive spectral and spatial resolving capabilities compared to pixel arrays and similar position-sensitive cryogenic technologies using Transition Edge Sensor (TES) read-out.

  9. Sleep dynamics: A self-organized critical system

    NASA Astrophysics Data System (ADS)

    Comte, J. C.; Ravassard, P.; Salin, P. A.

    2006-05-01

    In psychiatric and neurological diseases, sleep is often perturbed. Moreover, recent work on humans and animals tends to show that sleep plays a strong role in memory processes. Reciprocally, sleep dynamics following a learning task is modified [Huber et al., Nature (London) 02663, 1 (2004); Peigneux et al., Neuron 44, 535 (2004)]. However, sleep analysis in humans and animals is often limited to quantification of total sleep and wake durations. These two parameters cannot fully characterize sleep dynamics. In mammals, sleep presents a complex organization with an alternation of slow wave sleep (SWS) and paradoxical sleep (PS) episodes. Moreover, it has been shown recently that these sleep episodes are frequently interrupted by micro-arousals (without awakening). We present here a detailed analysis of the basal sleep properties emerging from the mechanisms underlying the alternation of vigilance states in an animal model. These properties present a self-organized critical system signature and reveal the existence of two W, two SWS, and a PS structure exhibiting criticality as seen in sand piles. We propose a theoretical model of sleep dynamics based on several interacting neuronal populations. This new model of sleep dynamics presents the same properties as experimentally observed, and explains the variability of the collected data. This experimental and theoretical study suggests that sleep dynamics shares several common features with critical systems.

  10. "Machine" consciousness and "artificial" thought: an operational architectonics model guided approach.

    PubMed

    Fingelkurts, Andrew A; Fingelkurts, Alexander A; Neves, Carlos F H

    2012-01-05

    Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical operational architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made "machine" consciousness and "artificial" thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. The composition of heterogeneous control laws

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Astrom, Karl

    1991-01-01

    The fuzzy control literature and industrial practice provide certain nonlinear methods for combining heterogeneous control laws, but these methods have been very difficult to analyze theoretically. An alternate formulation and extension of this approach is presented that has several practical and theoretical benefits. An example of heterogeneous control is given and two alternate analysis methods are presented.
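
    A generic illustration of what combining heterogeneous control laws can look like (our toy example, not the formulation proposed in the report): two local control laws are blended with smooth, state-dependent weights, in the spirit of fuzzy blending of local controllers.

```python
# A minimal sketch (generic illustration, not the authors' formulation):
# blending two heterogeneous control laws with smooth, state-dependent
# weights so that one law dominates near the setpoint and the other far away.
import numpy as np

def u_coarse(x):            # e.g. an aggressive proportional law far from the target
    return -2.0 * x

def u_fine(x):              # e.g. a gentler law near the target
    return -0.5 * x

def membership_near(x, width=1.0):
    return np.exp(-(x / width) ** 2)        # weight close to 1 near the setpoint

def blended_control(x):
    w_near = membership_near(x)
    return w_near * u_fine(x) + (1.0 - w_near) * u_coarse(x)

for x in (5.0, 1.0, 0.1):
    print(f"x = {x:4.1f}  ->  u = {blended_control(x):6.2f}")
```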

  12. Health Insurance: The Trade-Off Between Risk Pooling and Moral Hazard.

    DTIC Science & Technology

    1989-12-01

    ...bias comes about because we suppress the intercept term in estimating V. For the power, the test is against 1, -1. With this transform, the risk... dealing with the same utility function. As one test of whether families behave in the way economic theory suggests, we have also fitted a probit model of... nonparametric alternative to test our results' sensitivity to the assumption of a normal error in both the theoretical and empirical models of the...

  13. Theoretical study of the dynamic magnetic response of ferrofluid to static and alternating magnetic fields

    NASA Astrophysics Data System (ADS)

    Batrudinov, Timur M.; Ambarov, Alexander V.; Elfimova, Ekaterina A.; Zverev, Vladimir S.; Ivanov, Alexey O.

    2017-06-01

    The dynamic magnetic response of ferrofluid in a static uniform external magnetic field to a weak, linearly polarized, alternating magnetic field is investigated theoretically. The ferrofluid is modeled as a system of dipolar hard spheres, suspended in a long cylindrical tube whose long axis is parallel to the direction of the static and alternating magnetic fields. The theory is based on the Fokker-Planck-Brown equation formulated for the case when both the static and alternating magnetic fields are applied. The solution of the Fokker-Planck-Brown equation describing the orientational probability density of a randomly chosen dipolar particle is expressed as a series in terms of the spherical Legendre polynomials. The obtained analytical expression connecting three neighboring coefficients of the series makes it possible to determine the probability density with any order of accuracy in terms of Legendre polynomials. The analytical formula for the probability density truncated at the first Legendre polynomial is evaluated and used for the calculation of the magnetization and dynamic susceptibility spectra. In the absence of the static magnetic field the presented theory gives the correct single-particle Debye-theory result, which is the exact solution of the Fokker-Planck-Brown equation for the case of an applied weak alternating magnetic field. The influence of the static magnetic field on the dynamic susceptibility is analyzed in terms of the low-frequency behavior of the real part and the position of the peak in the imaginary part.
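
    For reference, the single-particle Debye limit mentioned at the end of the abstract has the familiar form (standard textbook expressions in our notation, not equations copied from the paper):

```latex
% Debye spectrum recovered in the weak-field, zero-bias limit
% (standard result; notation is ours, not the paper's).
\chi(\omega) = \chi'(\omega) - i\chi''(\omega) = \frac{\chi_0}{1 + i\omega\tau_B},
\qquad
\chi'(\omega) = \frac{\chi_0}{1 + (\omega\tau_B)^2},
\qquad
\chi''(\omega) = \frac{\chi_0\,\omega\tau_B}{1 + (\omega\tau_B)^2}
```

    where τ_B is the Brownian rotational relaxation time and χ_0 the static susceptibility; the static bias field modifies the low-frequency behavior of χ' and the position of the peak in χ'', which is the behavior analyzed in the paper.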

  14. Where is Cultural Astronomy Going?

    NASA Astrophysics Data System (ADS)

    Sims, Lionel

    2015-05-01

    Archaeoastronomy has recently been characterised as 'going round in circles', failing to integrate a rapidly expanding body of data with the interpretive models of anthropology (Ruggles 2011). This paper locates some impediments to disciplinary growth in the legacy of our recent origins, a problematical conceptual vocabulary and a narrow and derivative theoretical base. Proposals are made for an alternative future for the discipline.

  15. The physics of sliding cylinders and curling rocks

    NASA Astrophysics Data System (ADS)

    Penner, A. Raymond

    2001-03-01

    The lateral deflection of a rotating cylindrical shell sliding on one of its ends is considered and both theoretical and experimental results are presented. The coefficient of kinetic friction between a curling rock and an ice surface is then derived and compared with experiment. Current models of the motion of a curling rock are discussed and an alternate hypothesis is presented.

  16. Alternative Theoretical Bases for the Study of Human Communication: The Systems Perspective.

    ERIC Educational Resources Information Center

    Monge, Peter R.

    Three potentially useful perspectives for the scientific development of human communication theory are the law model, the systems approach, and the rules paradigm. It is the purpose of this paper to indicate the utility of the systems approach. The first section of this paper provides a brief account of the systems view of the world. Outlined in…

  17. Wilson Prize Talk

    NASA Astrophysics Data System (ADS)

    Symon, Keith R.

    2005-04-01

    In the late 1950's and the 1960's the MURA (Midwestern Universities Research Association) working group developed fixed field alternating gradient (FFAG) particle accelerators. FFAG accelerators are a natural corollary of the invention of alternating gradient focusing. The fixed guide field accommodates all orbits from the injection to the final energy. For this reason, the transverse motion in the guide field is nearly decoupled from the longitudinal acceleration. This allows a wide variety of acceleration schemes, using betatron or rf accelerating fields, beam stacking, bucket lifts, phase displacement, etc. It also simplifies theoretical and experimental studies of accelerators. Theoretical studies included an extensive analysis of rf acceleration processes, nonlinear orbit dynamics, and collective instabilities. Two FFAG designs, radial sector and spiral sector, were invented. The MURA team built small electron models of each type, and used them to study orbit dynamics, acceleration processes, orbit instabilities, and space charge limits. A practical result of these studies was the invention of the spiral sector cyclotron. Another was beam stacking, which led to the first practical way of achieving colliding beams. A 50 MeV two-way radial sector model was built in which it proved possible to stack a beam of over 10 amperes of electrons.

  18. High monetary reward rates and caloric rewards decrease temporal persistence

    PubMed Central

    Fung, Bowen J; Bode, Stefan; Murawski, Carsten

    2017-01-01

    Temporal persistence refers to an individual's capacity to wait for future rewards, while forgoing possible alternatives. This requires a trade-off between the potential value of delayed rewards and opportunity costs, and is relevant to many real-world decisions, such as dieting. Theoretical models have previously suggested that high monetary reward rates, or positive energy balance, may result in decreased temporal persistence. In our study, 50 fasted participants engaged in a temporal persistence task, incentivised with monetary rewards. In alternating blocks of this task, rewards were delivered at delays drawn randomly from distributions with either a lower or higher maximum reward rate. During some blocks participants received either a caloric drink or water. We used survival analysis to estimate participants' probability of quitting conditional on the delay distribution and the consumed liquid. Participants had a higher probability of quitting in blocks with the higher reward rate. Furthermore, participants who consumed the caloric drink had a higher probability of quitting than those who consumed water. Our results support the predictions from the theoretical models, and importantly, suggest that both higher monetary reward rates and physiologically relevant rewards can decrease temporal persistence, which is a crucial determinant for survival in many species. PMID:28228517

  19. High monetary reward rates and caloric rewards decrease temporal persistence.

    PubMed

    Fung, Bowen J; Bode, Stefan; Murawski, Carsten

    2017-02-22

    Temporal persistence refers to an individual's capacity to wait for future rewards, while forgoing possible alternatives. This requires a trade-off between the potential value of delayed rewards and opportunity costs, and is relevant to many real-world decisions, such as dieting. Theoretical models have previously suggested that high monetary reward rates, or positive energy balance, may result in decreased temporal persistence. In our study, 50 fasted participants engaged in a temporal persistence task, incentivised with monetary rewards. In alternating blocks of this task, rewards were delivered at delays drawn randomly from distributions with either a lower or higher maximum reward rate. During some blocks participants received either a caloric drink or water. We used survival analysis to estimate participants' probability of quitting conditional on the delay distribution and the consumed liquid. Participants had a higher probability of quitting in blocks with the higher reward rate. Furthermore, participants who consumed the caloric drink had a higher probability of quitting than those who consumed water. Our results support the predictions from the theoretical models, and importantly, suggest that both higher monetary reward rates and physiologically relevant rewards can decrease temporal persistence, which is a crucial determinant for survival in many species. © 2017 The Authors.
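
    A minimal sketch of the survival-analysis step (simulated data and a hand-rolled Kaplan-Meier estimator, not the study's analysis code): trials in which the reward arrived before the participant quit are treated as censored, and the probability of still waiting is estimated separately for two hypothetical reward-rate conditions.

```python
# A minimal sketch (simulated data, not the study's analysis code): estimating
# the probability of still waiting as a function of time with a Kaplan-Meier
# estimator, computed separately for two reward-rate conditions. Trials in
# which the reward arrived before the participant quit are censored.
import numpy as np

def kaplan_meier(time, quit_event):
    """Return event times and the estimated probability of still waiting."""
    order = np.argsort(time)
    time, quit_event = time[order], quit_event[order]
    at_risk = len(time)
    times, surv, s = [], [], 1.0
    for t, e in zip(time, quit_event):
        if e:                           # a quit occurred at time t
            s *= (at_risk - 1) / at_risk
            times.append(t)
            surv.append(s)
        at_risk -= 1
    return np.array(times), np.array(surv)

rng = np.random.default_rng(3)
for label, quit_scale in [("high reward rate", 6.0), ("low reward rate", 12.0)]:
    latent_quit = rng.exponential(quit_scale, 200)     # hypothetical quit times (s)
    reward_delay = rng.uniform(0, 20, 200)             # hypothetical reward delays (s)
    observed = np.minimum(latent_quit, reward_delay)
    quit_event = latent_quit <= reward_delay           # False = censored (rewarded)
    t, s = kaplan_meier(observed, quit_event)
    print(f"{label}: P(still waiting at 10 s) ≈ {s[t <= 10][-1]:.2f}")
```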

  20. Collaborative deliberation: a model for patient care.

    PubMed

    Elwyn, Glyn; Lloyd, Amy; May, Carl; van der Weijden, Trudy; Stiggelbout, Anne; Edwards, Adrian; Frosch, Dominick L; Rapley, Tim; Barr, Paul; Walsh, Thom; Grande, Stuart W; Montori, Victor; Epstein, Ronald

    2014-11-01

    Existing theoretical work in decision making and behavior change has focused on how individuals arrive at decisions or form intentions. Less attention has been given to theorizing the requirements that might be necessary for individuals to work collaboratively to address difficult decisions, consider new alternatives, or change behaviors. The goal of this work was to develop, as a forerunner to a middle range theory, a conceptual model that considers the process of supporting patients to consider alternative health care options, in collaboration with clinicians, and others. Theory building among researchers with experience and expertise in clinician-patient communication, using an iterative cycle of discussions. We developed a model composed of five inter-related propositions that serve as a foundation for clinical communication processes that honor the ethical principles of respecting individual agency, autonomy, and an empathic approach to practice. We named the model 'collaborative deliberation.' The propositions describe: (1) constructive interpersonal engagement, (2) recognition of alternative actions, (3) comparative learning, (4) preference construction and elicitation, and (5) preference integration. We believe the model underpins multiple suggested approaches to clinical practice that take the form of patient centered care, motivational interviewing, goal setting, action planning, and shared decision making. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  1. Use of qualitative environmental and phenotypic variables in the context of allele distribution models: detecting signatures of selection in the genome of Lake Victoria cichlids.

    PubMed

    Joost, Stéphane; Kalbermatten, Michael; Bezault, Etienne; Seehausen, Ole

    2012-01-01

    When searching for loci possibly under selection in the genome, an alternative to population genetics theoretical models is to establish allele distribution models (ADM) for each locus to directly correlate allelic frequencies and environmental variables such as precipitation, temperature, or sun radiation. Such an approach, running multiple logistic regression models in parallel, was implemented within a computing program named MATSAM. Recently, this application was improved in order to support qualitative environmental predictors as well as to permit the identification of associations between genomic variation and individual phenotypes, allowing the detection of loci involved in the genetic architecture of polymorphic characters. Here, we present the corresponding methodological developments and compare the results produced by software implementing population genetics theoretical models (DFDIST and BAYESCAN) and ADM (MATSAM) in an empirical context to detect signatures of genomic divergence associated with speciation in Lake Victoria cichlid fishes.
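
    A minimal sketch of the allele-distribution-model idea (simulated data and generic libraries, not MATSAM itself): allele presence/absence is regressed on environmental predictors with logistic regression and screened with a likelihood-ratio test against an intercept-only model.

```python
# A minimal sketch (simulated data, not MATSAM itself): an allele distribution
# model in which allele presence/absence in each individual is regressed on
# environmental predictors with logistic regression, and the association is
# screened with a likelihood-ratio (G) test against a null model.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(4)
n = 300
temperature = rng.normal(20, 3, n)            # hypothetical environmental variable
depth = rng.normal(10, 2, n)                  # hypothetical second predictor
logit_p = -8.0 + 0.4 * temperature            # allele frequency depends on temperature
allele = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([temperature, depth]))
full = sm.Logit(allele, X).fit(disp=False)
null = sm.Logit(allele, np.ones((n, 1))).fit(disp=False)

lr_stat = 2 * (full.llf - null.llf)
p_value = stats.chi2.sf(lr_stat, df=2)
print(f"likelihood-ratio test: G = {lr_stat:.1f}, p = {p_value:.3g}")
```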

  2. Primordial lithium and the standard model(s)

    NASA Technical Reports Server (NTRS)

    Deliyannis, Constantine P.; Demarque, Pierre; Kawaler, Steven D.; Romanelli, Paul; Krauss, Lawrence M.

    1989-01-01

    The results of new theoretical work on surface Li-7 and Li-6 evolution in the oldest halo stars are presented, along with a new and refined analysis of the predicted primordial Li abundance resulting from big-bang nucleosynthesis. This makes it possible to determine the constraints which can be imposed on cosmology using primordial Li and both standard big-bang and stellar-evolution models. This leads to limits on the baryon density today of 0.0044-0.025 (where the Hubble constant is 100h km/s/Mpc) and imposes limitations on alternative nucleosynthesis scenarios.

  3. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1993-01-01

    Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.

  4. Classical Testing in Functional Linear Models.

    PubMed

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.
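
    The FPCA re-expression described above can be sketched in a few lines: project densely observed curves onto a small number of principal components and apply an ordinary F test to the scalar-on-scores regression. The data, grid, and number of retained components below are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of the FPCA re-expression of a functional linear model:
# project densely observed curves onto a few principal components and test the
# scalar-on-scores regression with a standard F test. Simulated data only.
import numpy as np
from sklearn.decomposition import PCA
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, m = 150, 100                          # n subjects, m grid points per curve
t = np.linspace(0, 1, m)
curves = np.array([np.sin(2 * np.pi * t) * rng.normal(1, 0.5)
                   + np.cos(2 * np.pi * t) * rng.normal(0, 0.5)
                   + rng.normal(0, 0.1, m) for _ in range(n)])
beta = np.sin(2 * np.pi * t)             # true coefficient function
y = curves @ beta / m + rng.normal(0, 0.2, n)     # scalar response

scores = PCA(n_components=4).fit_transform(curves)   # truncated FPCA scores
fit = sm.OLS(y, sm.add_constant(scores)).fit()
print("F =", fit.fvalue, " p =", fit.f_pvalue)        # F test of "no association"
```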

  5. Classical Testing in Functional Linear Models

    PubMed Central

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications. PMID:28955155

  6. Darwin without borders? Looking at 'generalised Darwinism' through the prism of the 'hourglass model'.

    PubMed

    Levit, Georgy S; Hossfeld, Uwe

    2011-12-01

    This article critically analyzes the arguments of the 'generalized Darwinism' recently proposed for the analysis of social-economical systems. We argue that 'generalized Darwinism' is both restrictive and empty. It is restrictive because it excludes alternative (non-selectionist) evolutionary mechanisms such as orthogenesis, saltationism and mutationism without any examination of their suitability for modeling socio-economic processes and ignoring their important roles in the development of contemporary evolutionary theory. It is empty, because it reduces Darwinism to an abstract triple-principle scheme (variation, selection and inheritance) thus ignoring the actual structure of Darwinism as a complex and dynamic theoretical structure inseparable from a very detailed system of theoretical constraints. Arguing against 'generalised Darwinism' we present our vision of the history of evolutionary biology with the help of the 'hourglass model' reflecting the internal dynamic of competing theories of evolution.

  7. Shadows and strong gravitational lensing: a brief review

    NASA Astrophysics Data System (ADS)

    Cunha, Pedro V. P.; Herdeiro, Carlos A. R.

    2018-04-01

    For ultra compact objects, light rings and fundamental photon orbits (FPOs) play a pivotal role in the theoretical analysis of strong gravitational lensing effects, and of BH shadows in particular. In this short review, specific models are considered to illustrate how FPOs can be useful in order to understand some non-trivial gravitational lensing effects. This paper aims at briefly overviewing the theoretical foundations of these effects, touching also some of the related phenomenology, both in general relativity and alternative theories of gravity, hopefully providing some intuition and new insights for the underlying physics, which might be critical when testing the Kerr black hole hypothesis.

  8. Modeling of dielectric elastomer as electromechanical resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Bo, E-mail: liboxjtu@mail.xjtu.edu.cn; Liu, Lei; Chen, Hualing

    Dielectric elastomers (DEs) feature nonlinear dynamics resulting from an electromechanical coupling. Under alternating voltage, the DE resonates with tunable performances. We present an analysis of the nonlinear dynamics of a DE as an electromechanical resonator (DEER) configured as a pure shear actuator. A theoretical model is developed to characterize the complex performance under different boundary conditions. Physical mechanisms are presented and discussed. Chaotic behavior is also predicted, illustrating instabilities in the dynamics. The results provide a guide to the design and application of DEER in haptic devices.

  9. A Social Justice Alternative for Framing Post-Compulsory Education: A Human Development Perspective of VET in Times of Economic Dominance

    ERIC Educational Resources Information Center

    Lopez-Fogues, Aurora

    2016-01-01

    The article provides an alternative theoretical framework for evaluating contemporary issues facing education, specifically vocational education and training (VET) in Europe. In order to accomplish this, it draws on the theoretical insights of the capability approach in the work of Amartya Sen; the concept of vulnerability as intrinsic to every…

  10. Extracting the Evaluations of Stereotypes: Bi-factor Model of the Stereotype Content Structure

    PubMed Central

    Sayans-Jiménez, Pablo; Cuadrado, Isabel; Rojas, Antonio J.; Barrada, Juan R.

    2017-01-01

    Stereotype dimensions—competence, morality and sociability—are fundamental to studying the perception of other groups. These dimensions have shown moderate/high positive correlations with each other that do not reflect the theoretical expectations. The explanation for this (e.g., halo effect) undervalues the utility of the shared variance identified. In contrast, in this work we propose that this common variance could represent the global evaluation of the perceived group. Bi-factor models are proposed to improve the internal structure and to take advantage of the information representing the shared variance among dimensions. Bi-factor models were compared with first order models and other alternative models in three large samples (300–309 participants). The relationships among the global and specific bi-factor dimensions with a global evaluation dimension (measured through a semantic differential) were estimated. The results support the use of bi-factor models rather than first order models (and other alternative models). Bi-factor models also show a greater utility to directly and more easily explore the stereotype content including its evaluative content. PMID:29085313

  11. Quantitative structure-activation barrier relationship modeling for Diels-Alder ligations utilizing quantum chemical structural descriptors.

    PubMed

    Nandi, Sisir; Monesi, Alessandro; Drgan, Viktor; Merzel, Franci; Novič, Marjana

    2013-10-30

    In the present study, we show the correlation of quantum chemical structural descriptors with the activation barriers of the Diels-Alder ligations. A set of 72 non-catalysed Diels-Alder reactions were subjected to quantitative structure-activation barrier relationship (QSABR) analysis under the framework of theoretical quantum chemical descriptors calculated solely from the structures of diene and dienophile reactants. Experimental activation barrier data were obtained from the literature. Descriptors were computed at the Hartree-Fock level with the 6-31G(d) basis set as implemented in Gaussian 09 software. Variable selection and model development were carried out by stepwise multiple linear regression methodology. Predictive performance of the quantitative structure-activation barrier relationship (QSABR) model was assessed by the training and test set concept and by calculating leave-one-out cross-validated Q2 and predictive R2 values. The QSABR model can explain and predict 86.5% and 80% of the variances, respectively, in the activation energy barrier training data. Alternatively, a neural network model based on back propagation of errors was developed to assess the nonlinearity of the sought correlations between theoretical descriptors and experimental reaction barriers. A reasonable predictability for the activation barrier of the test set reactions was obtained, which enabled an exploration and interpretation of the significant variables responsible for the Diels-Alder interaction between dienes and dienophiles. Thus, studies in the direction of QSABR modelling that provide efficient and fast prediction of activation barriers of the Diels-Alder reactions turn out to be a meaningful alternative to transition state theory based computation.
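
    The regression-plus-validation workflow (a fitted R2 together with a leave-one-out Q2) can be illustrated schematically; the descriptor matrix and barriers below are random stand-ins rather than the actual quantum chemical dataset.

```python
# Illustrative sketch of a QSABR-style workflow: multiple linear regression of
# activation barriers on a few descriptors, with leave-one-out Q^2. The
# descriptors and barriers are random stand-ins, not the paper's dataset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(size=(72, 4))                     # 72 reactions, 4 selected descriptors
Ea = 20 + X @ np.array([3.0, -2.0, 1.5, 0.5]) + rng.normal(0, 1.0, 72)

model = LinearRegression().fit(X, Ea)
r2 = model.score(X, Ea)                           # fitted R^2

pred = cross_val_predict(LinearRegression(), X, Ea, cv=LeaveOneOut())
q2 = 1 - np.sum((Ea - pred) ** 2) / np.sum((Ea - Ea.mean()) ** 2)
print(f"R2 = {r2:.3f}, LOO Q2 = {q2:.3f}")
```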

  12. Comparison of the predictions of alternative cosmologies to the standard model with cosmic microwave background data

    NASA Astrophysics Data System (ADS)

    Piccirilli, M. P.; Landau, S. J.; León, G.

    2016-08-01

    The cosmic microwave background radiation is one of the most powerful tools to study the early Universe and its evolution, providing also a method to test different cosmological scenarios. We consider alternative inflationary models where the emergence of the seeds of cosmic structure from a perfect isotropic and homogeneous universe can be explained by the self-induced collapse of the inflaton wave function. Some of these alternative models may result indistinguishable from the standard model, while others require to be compared with observational data through statistical analysis. In this article we show results concerning the first Planck release, the Atacama Cosmology Telescope, the South Pole Telescope, the WMAP and Sloan Digital Sky Survey datasets, reaching good agreement between data and theoretical predictions. For future works, we aim to achieve better limits in the cosmological parameters using the last Planck release.

  13. An analysis of possible applications of fuzzy set theory to the actuarial credibility theory

    NASA Technical Reports Server (NTRS)

    Ostaszewski, Krzysztof; Karwowski, Waldemar

    1992-01-01

    In this work, we review the basic concepts of actuarial credibility theory from the point of view of introducing applications of the fuzzy set-theoretic method. We show how the concept of actuarial credibility can be modeled through the fuzzy set membership functions and how fuzzy set methods, especially fuzzy pattern recognition, can provide an alternative tool for estimating credibility.

  14. Examinations of electron temperature calculation methods in Thomson scattering diagnostics.

    PubMed

    Oh, Seungtae; Lee, Jong Ha; Wi, Hanmin

    2012-10-01

    Electron temperature from the Thomson scattering diagnostic is derived through an indirect calculation based on a theoretical model. The χ-square test is commonly used in this calculation, and the reliability of the method depends strongly on the noise level of the input signals. In the simulations, noise effects on the χ-square test are examined, and a scale-factor test is proposed as an alternative method.
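
    A toy version of the χ-square approach is sketched below: a scattered spectrum is fitted by weighted least squares and the temperature is read off the fitted width. The Gaussian spectral model, the width-temperature scaling and the noise model are simplified assumptions, not the diagnostic's actual calibration.

```python
# Toy illustration of the chi-square approach: fit a Gaussian scattered
# spectrum whose width encodes Te. The spectral model and numbers are
# simplified stand-ins, not the diagnostic's actual calibration.
import numpy as np
from scipy.optimize import curve_fit

def spectrum(lam, amp, Te):
    # width grows like sqrt(Te); 1 nm at Te = 1 keV is an assumed scale factor
    sigma = 1.0 * np.sqrt(Te)
    return amp * np.exp(-0.5 * (lam / sigma) ** 2)

lam = np.linspace(-10, 10, 40)                   # wavelength shift (nm)
true = spectrum(lam, 100.0, 2.0)                 # "true" Te = 2 keV
noise = np.sqrt(true + 1.0)                      # photon-statistics-like noise level
data = true + np.random.default_rng(3).normal(0, noise)

popt, pcov = curve_fit(spectrum, lam, data, p0=[80.0, 1.0],
                       sigma=noise, bounds=(0, np.inf))
chi2 = np.sum(((data - spectrum(lam, *popt)) / noise) ** 2)
print("fitted Te (keV):", round(popt[1], 2),
      " reduced chi2:", round(chi2 / (lam.size - 2), 2))
```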

  15. Nursing Home Levels of Care: Reimbursement of Resident Specific Costs

    PubMed Central

    Willemain, Thomas R.

    1980-01-01

    The companion paper on nursing home levels of care (Bishop, Plough and Willemain, 1980) recommended a “split-rate” approach to nursing home reimbursement that would distinguish between fixed and variable costs. This paper examines three alternative treatments of the variable cost component of the rate: a two-level system similar to the distinction between skilled and intermediate care facilities, an individualized (“patient-centered”) system, and a system that assigns a single facility-specific rate that depends on the facility's case-mix (“case-mix reimbursement”). The aim is to better understand the theoretical strengths and weaknesses of these three approaches. The comparison of reimbursement alternatives is framed in terms of minimizing reimbursement error, meaning overpayment and underpayment. We develop a conceptual model of reimbursement error that stresses that the features of the reimbursement scheme are only some of the factors contributing to over- and underpayment. The conceptual model is translated into a computer program for quantitative comparison of the alternatives. PMID:10309330

  16. Nursing home levels of care: reimbursement of resident specific costs.

    PubMed

    Willemain, T R

    1980-01-01

    The companion paper on nursing home levels of care (Bishop, Plough and Willemain, 1980) recommended a "split-rate" approach to nursing home reimbursement that would distinguish between fixed and variable costs. This paper examines three alternative treatments of the variable cost component of the rate: a two-level system similar to the distinction between skilled and intermediate care facilities, an individualized ("patient-centered") system, and a system that assigns a single facility-specific rate that depends on the facility's case-mix ("case-mix reimbursement"). The aim is to better understand the theoretical strengths and weaknesses of these three approaches. The comparison of reimbursement alternatives is framed in terms of minimizing reimbursement error, meaning overpayment and underpayment. We develop a conceptual model of reimbursement error that stresses that the features of the reimbursement scheme are only some of the factors contributing to over- and underpayment. The conceptual model is translated into a computer program for quantitative comparison of the alternatives.
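
    The notion of reimbursement error can be made concrete with a small simulation that scores each alternative by its mean absolute over/underpayment against residents' true variable costs. All cost distributions and rate rules below are invented for illustration and are far simpler than the paper's computer program.

```python
# Rough sketch of the reimbursement-error idea: compare a two-level rate, a
# single facility rate, and a patient-specific rate against each resident's
# "true" variable cost. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)
true_cost = rng.gamma(shape=4, scale=25, size=1000)      # per-resident daily cost

# Two-level system: residents split at the median into two flat rates
median = np.median(true_cost)
two_level = np.where(true_cost > median,
                     true_cost[true_cost > median].mean(),
                     true_cost[true_cost <= median].mean())

# Facility-level system: one rate equal to the overall mean (a crude case-mix proxy)
facility_rate = np.full_like(true_cost, true_cost.mean())

# Patient-centered system: individualized rate with some assessment error
patient_centered = true_cost + rng.normal(0, 5, true_cost.size)

for name, rate in [("two-level", two_level),
                   ("facility-level", facility_rate),
                   ("patient-centered", patient_centered)]:
    print(name, "mean absolute over/underpayment:",
          round(np.mean(np.abs(rate - true_cost)), 2))
```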

  17. Competition for light and nutrients in layered communities of aquatic plants.

    PubMed

    van Gerven, Luuk P A; de Klein, Jeroen J M; Gerla, Daan J; Kooi, Bob W; Kuiper, Jan J; Mooij, Wolf M

    2015-07-01

    Dominance of free-floating plants poses a threat to biodiversity in many freshwater ecosystems. Here we propose a theoretical framework to understand this dominance, by modeling the competition for light and nutrients in a layered community of floating and submerged plants. The model shows that at high supply of light and nutrients, floating plants always dominate due to their primacy for light, even when submerged plants have lower minimal resource requirements. The model also shows that floating-plant dominance cannot be an alternative stable state in light-limited environments but only in nutrient-limited environments, depending on the plants' resource consumption traits. Compared to unlayered communities, the asymmetry in competition for light, coincident with symmetry in competition for nutrients, leads to fundamentally different results: competition outcomes can no longer be predicted from species traits such as minimal resource requirements (the R* rule) and resource consumption. Also, the same two species can, depending on the environment, coexist or be alternative stable states. When applied to two common plant species in temperate regions, both the model and field data suggest that floating-plant dominance is unlikely to be an alternative stable state.
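
    The asymmetry the authors describe, floating plants shading submerged plants while both draw on a shared nutrient pool, can be caricatured with a toy ODE system. The functional forms and parameter values below are illustrative only and are not the published model.

```python
# Toy dynamical sketch of asymmetric competition for light: floating plants (F)
# shade submerged plants (S) but not vice versa, while both consume the same
# nutrient pool N. Parameters and functional forms are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    F, S, N = y                                   # floating, submerged, nutrient
    light_F = 1.0                                 # floating layer receives full light
    light_S = np.exp(-0.5 * F)                    # submerged layer is shaded by F
    growth_F = 0.5 * light_F * N / (N + 0.5) * F
    growth_S = 0.6 * light_S * N / (N + 0.3) * S
    dF = growth_F - 0.1 * F
    dS = growth_S - 0.1 * S
    dN = 2.0 - 0.2 * N - 0.5 * (growth_F + growth_S)   # supply, loss, consumption
    return [dF, dS, dN]

sol = solve_ivp(rhs, (0, 400), [0.1, 0.1, 2.0])
F_end, S_end, N_end = sol.y[:, -1]
print(f"floating: {F_end:.2f}, submerged: {S_end:.2f}, nutrient: {N_end:.2f}")
```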

  18. Gauge interactions theory and experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zichichi, A.

    This volume brings together physicists from around the world to report and discuss the exciting advances made recently in theoretical and experimental aspects of gauge interactions. Following a presentation of the theoretical foundations of and recent developments in gauge fields, the contributors focus on supersymmetry, the derivation of Higgs particles from gauge fields, and heavy leptons. Other chapters discuss the use of quantum chromodynamics in describing basic interactions among quarks and gluons, in predicting the existence of glueballs, and in application to heavy flavor production in strong interactions. The editor, Antonino Zichichi, provides a study of the multiparticle hadronic systems produced in high-energy soft (pp) interactions. Other interesting chapters deal with photon scattering at very high energies and theoretical alternatives to the electroweak model, and the volume concludes with proposals for future experimental facilities for European physics.

  19. Insights into the Hydrogen-Atom Transfer of the Blue Aroxyl.

    PubMed

    Bächle, Josua; Marković, Marijana; Kelterer, Anne-Marie; Grampp, Günter

    2017-10-19

    An experimental and theoretical study on hydrogen-atom transfer dynamics in the hydrogen-bonded substituted phenol/phenoxyl complex of the blue aroxyl (2,4,6-tri-tert-butylphenoxyl) is presented. The experimental exchange dynamics is determined in different organic solvents from the temperature-dependent alternating line-width effect in the continuous-wave ESR spectrum. From bent Arrhenius plots, effective tunnelling contributions with parallel heavy-atom motion are concluded. To clarify the transfer mechanism, reaction paths for different conformers of the substituted phenol/phenoxyl complex are modelled theoretically. Various DFT and post-Hartree-Fock methods including multireference methods are applied. From the comparison of experimental and theoretical data it is concluded that the system favours concerted hydrogen-atom transfer along a parabolic reaction path caused by heavy-atom motion. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. The assumption of equilibrium in models of migration.

    PubMed

    Schachter, J; Althaus, P G

    1993-02-01

    In recent articles Evans (1990) and Harrigan and McGregor (1993) (hereafter HM) scrutinized the equilibrium model of migration presented in a 1989 paper by Schachter and Althaus. This model used standard microeconomics to analyze gross interregional migration flows based on the assumption that gross flows are in approximate equilibrium. HM criticized the model as theoretically untenable, while Evans summoned empirical as well as theoretical objections. HM claimed that equilibrium of gross migration flows could be ruled out on theoretical grounds. They argued that the absence of net migration requires that either all regions have equal populations or that unsustainable regional migration propensities must obtain. In fact, some moves are interregional and others are intraregional. It does not follow, however, that the number of interregional migrants will be larger for the more populous region. Alternatively, a country could be divided into a large number of small regions that have equal populations. With uniform propensities to move, each of these analytical regions would experience in equilibrium zero net migration. Hence, the condition that net migration equal zero is entirely consistent with unequal distributions of population across regions. The criticisms of Evans were based both on flawed reasoning and on misinterpretation of the results of a number of econometric studies. His reasoning assumed that the existence of demand shifts as found by Goldfarb and Yezer (1987) and Topel (1986) invalidated the equilibrium model. The equilibrium never really obtains exactly, but economic modeling of migration properly begins with a simple equilibrium model of the system. A careful reading of the papers Evans cited in support of his position showed that in fact they affirmed rather than denied the appropriateness of equilibrium modeling. Zero net migration together with nonzero gross migration are not theoretically incompatible with regional heterogeneity of population, wages, or amenities.

  1. Thinking Clearly About Schizotypy: Hewing to the Schizophrenia Liability Core, Considering Interesting Tangents, and Avoiding Conceptual Quicksand

    PubMed Central

    Lenzenweger, Mark F.

    2015-01-01

    The concept of schizotypy represents a rich and complex psychopathology construct. Furthermore, the construct implies a theoretical model that has considerable utility as an organizing framework for the study of schizophrenia, schizophrenia-related psychopathology (eg, delusional disorder, psychosis-NOS (not otherwise specified), schizotypal, and paranoid personality disorder), and putative schizophrenia endophenotypes as suggested by Rado, Meehl, Gottesman, Lenzenweger, and others. The understanding (and misunderstanding) of the schizophrenia-related schizotypy model, particularly as regards clinical illness, as well as an alternative approach to the construct require vigilance in order to ensure the methodological approach continues to yield the fruit that it can in illuminating the pathogenesis of schizophrenia-related psychopathology. The articles in the Special Section in this issue of Schizophrenia Bulletin highlight methodological and theoretical issues that should be examined carefully. PMID:25810061

  2. Use of an expert system data analysis manager for space shuttle main engine test evaluation

    NASA Technical Reports Server (NTRS)

    Abernethy, Ken

    1988-01-01

    The ability to articulate, collect, and automate the application of the expertise needed for the analysis of space shuttle main engine (SSME) test data would be of great benefit to NASA liquid rocket engine experts. This paper describes a project whose goal is to build a rule-based expert system which incorporates such expertise. Experiential expertise, collected directly from the experts currently involved in SSME data analysis, is used to build a rule base to identify engine anomalies similar to those analyzed previously. Additionally, an alternate method of expertise capture is being explored. This method would generate rules inductively based on calculations made using a theoretical model of the SSME's operation. The latter rules would be capable of diagnosing anomalies which may not have appeared before, but whose effects can be predicted by the theoretical model.

  3. How to derive biological information from the value of the normalization constant in allometric equations.

    PubMed

    Kaitaniemi, Pekka

    2008-04-09

    Allometric equations are widely used in many branches of biological science. The potential information content of the normalization constant b in allometric equations of the form Y = bX^a has, however, remained largely neglected. To demonstrate the potential for utilizing this information, I generated a large number of artificial datasets that resembled those that are frequently encountered in biological studies, i.e., relatively small samples including measurement error or uncontrolled variation. The value of X was allowed to vary randomly within the limits describing different data ranges, and a was set to a fixed theoretical value. The constant b was set to a range of values describing the effect of a continuous environmental variable. In addition, a normally distributed random error was added to the values of both X and Y. Two different approaches were then used to model the data. The traditional approach estimated both a and b using a regression model, whereas an alternative approach set the exponent a at its theoretical value and only estimated the value of b. Both approaches produced virtually the same model fit with less than 0.3% difference in the coefficient of determination. Only the alternative approach was able to precisely reproduce the effect of the environmental variable, which was largely lost among noise variation when using the traditional approach. The results show how the value of b can be used as a source of valuable biological information if an appropriate regression model is selected.
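
    The two fitting strategies compared in the study translate directly into a log-log regression: estimate both a and b, or fix a at its theoretical value and estimate only b. The sketch below uses simulated data with an assumed exponent of 0.75.

```python
# Sketch of the two fitting strategies for Y = b * X**a on the log-log scale:
# estimate both a and b by regression, or fix a at its theoretical value and
# estimate only b. Data are simulated with noise in both variables.
import numpy as np

rng = np.random.default_rng(5)
a_theory = 0.75
X = rng.uniform(1, 100, 40) * np.exp(rng.normal(0, 0.05, 40))   # measurement error in X
b_true = 2.0
Y = b_true * X ** a_theory * np.exp(rng.normal(0, 0.1, 40))     # error in Y

# Traditional approach: estimate both parameters
slope, intercept = np.polyfit(np.log(X), np.log(Y), 1)
print("free fit:  a =", round(slope, 3), " b =", round(np.exp(intercept), 3))

# Alternative approach: fix a = a_theory and estimate only b
log_b = np.mean(np.log(Y) - a_theory * np.log(X))
print("fixed a:   b =", round(np.exp(log_b), 3))
```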

  4. Limitations of Western Medicine and Models of Integration Between Medical Systems.

    PubMed

    Attena, Francesco

    2016-05-01

    This article analyzes two major limitations of Western medicine: maturity and incompleteness. From this viewpoint, Western medicine is considered an incomplete system for the explanation of living matter. Therefore, through appropriate integration with other medical systems, in particular nonconventional approaches, its knowledge base and interpretations may be widened. This article presents possible models of integration of Western medicine with homeopathy, the latter being viewed as representative of all complementary and alternative medicine. To compare the two, a medical system was classified into three levels through which it is possible to distinguish between different medical systems: epistemological (first level), theoretical (second level), and operational (third level). These levels are based on the characterization of any medical system according to, respectively, a reference paradigm, a theory on the functioning of living matter, and clinical practice. The three levels are consistent and closely consequential in the sense that from epistemology derives theory, and from theory derives clinical practice. Within operational integration, four models were identified: contemporary, alternative, sequential, and opportunistic. Theoretical integration involves an explanation of living systems covering simultaneously the molecular and physical mechanisms of functioning living matter. Epistemological integration provides a more thorough and comprehensive explanation of the epistemic concepts of indeterminism, holism, and vitalism to complement the reductionist approach of Western medicine; concepts much discussed by Western medicine while lacking the epistemologic basis for their emplacement. Epistemologic integration could be reached with or without a true paradigm shift and, in the latter, through a model of fusion or subsumption.

  5. An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian

    For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
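
    In the simplest setting, a low-fidelity model that is linear in one parameter with Gaussian prior and observation noise, the expected information gain of a candidate design condition has a closed form, which makes the sequential selection idea easy to illustrate. The sketch below is a toy stand-in for that idea, not the paper's framework or the Hydra-TH coupling.

```python
# Hedged sketch of the design idea: pick the next high-fidelity run where it is
# expected to most reduce uncertainty in a low-fidelity parameter theta. Here
# the low-fidelity response is theta * x with Gaussian prior and noise, so the
# expected information gain is available in closed form. Toy numbers only.
import numpy as np

sigma_prior = 1.0          # prior std of theta
sigma_noise = 0.3          # std of the high-fidelity "measurement" noise
candidates = np.linspace(0.1, 2.0, 20)     # candidate design conditions x

def info_gain(x, sigma_prior, sigma_noise):
    # mutual information between theta and a noisy observation of theta * x
    return 0.5 * np.log(1 + (sigma_prior * x) ** 2 / sigma_noise ** 2)

gains = info_gain(candidates, sigma_prior, sigma_noise)
best = candidates[np.argmax(gains)]
print("next design condition:", best, " expected gain (nats):", gains.max())
# In a sequential loop, the posterior of theta would be updated after each
# high-fidelity run and the gains recomputed before choosing the next point.
```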

  6. Combinatorial compatibility as habit-controlling factor in lysozyme crystallization I. Monomeric and tetrameric F faces derived graph-theoretically

    NASA Astrophysics Data System (ADS)

    Strom, C. S.; Bennema, P.

    1997-03-01

    A series of two articles discusses possible morphological evidence for oligomerization of growth units in the crystallization of tetragonal lysozyme, based on a rigorous graph-theoretic derivation of the F faces. In the first study (Part I), the growth layers are derived as valid networks satisfying the conditions of F slices in the context of the PBC theory using the graph-theoretic method implemented in program FFACE [C.S. Strom, Z. Krist. 172 (1985) 11]. The analysis is performed in monomeric and alternative tetrameric and octameric formulations of the unit cell, assuming tetramer formation according to the strongest bonds. F (flat) slices with thickness R d_hkl (1/2 < R ≤ 1) are predicted theoretically in the forms {1 1 0}, {0 1 1}, {1 1 1}. The relevant energies are established in the broken bond model. The relation between possible oligomeric specifications of the unit cell and combinatorially feasible F slice compositions in these orientations is explored.

  7. Anatomy of the Higgs fits: A first guide to statistical treatments of the theoretical uncertainties

    NASA Astrophysics Data System (ADS)

    Fichet, Sylvain; Moreau, Grégory

    2016-04-01

    The studies of the Higgs boson couplings based on the recent and upcoming LHC data open up a new window on physics beyond the Standard Model. In this paper, we propose a statistical guide to the consistent treatment of the theoretical uncertainties entering the Higgs rate fits. Both the Bayesian and frequentist approaches are systematically analysed in a unified formalism. We present analytical expressions for the marginal likelihoods, useful to implement simultaneously the experimental and theoretical uncertainties. We review the various origins of the theoretical errors (QCD, EFT, PDF, production mode contamination…). All these individual uncertainties are thoroughly combined with the help of moment-based considerations. The theoretical correlations among Higgs detection channels appear to affect the location and size of the best-fit regions in the space of Higgs couplings. We discuss the recurrent question of the shape of the prior distributions for the individual theoretical errors and find that a nearly Gaussian prior arises from the error combinations. We also develop the bias approach, which is an alternative to marginalisation providing more conservative results. The statistical framework to apply the bias principle is introduced and two realisations of the bias are proposed. Finally, depending on the statistical treatment, the Standard Model prediction for the Higgs signal strengths is found to lie within either the 68% or 95% confidence level region obtained from the latest analyses of the 7 and 8 TeV LHC datasets.

  8. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine.

    PubMed

    Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A

    2017-02-11

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.

  9. Theoretical and empirical comparison of big data image processing with Apache Hadoop and Sun Grid Engine

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2017-03-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and nonrelevant for medical imaging.
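
    A back-of-the-envelope version of the two wall-clock models helps convey when transfer over a shared NFS link, rather than computation, dominates. The bandwidth, job-length and overhead numbers below are invented, and the formulas are crude bounds rather than the validated models from the paper.

```python
# Traditional cluster: all jobs pull data over a shared network link from NFS.
def traditional_time(n_jobs, cores, data_gb, net_gbps, compute_s):
    transfer_s = n_jobs * data_gb * 8 / net_gbps       # serialized on the shared link
    compute_total_s = n_jobs * compute_s / cores
    return max(transfer_s, compute_total_s)             # crude bound: slower resource wins

# Hadoop-style cluster: data is co-located, only per-job scheduling overhead remains.
def hadoop_time(n_jobs, cores, compute_s, overhead_s=15):
    return n_jobs * (compute_s + overhead_s) / cores

for compute_s in (30, 300, 3000):                       # "short" to "long" jobs
    t_trad = traditional_time(1000, 100, 2.0, 1.0, compute_s)
    t_hdfs = hadoop_time(1000, 100, compute_s)
    print(f"job={compute_s:5d}s  traditional={t_trad:8.0f}s  hadoop={t_hdfs:8.0f}s")
```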

  10. The black hole at the Galactic Center: Observations and models

    NASA Astrophysics Data System (ADS)

    Zakharov, Alexander F.

    One of the most interesting astronomical objects is the Galactic Center. It has been the subject of intensive astronomical observations in different spectral bands in recent years. We concentrate our discussion on a theoretical analysis of observational data of bright stars in the IR band obtained with large telescopes. We also discuss the importance of VLBI observations of bright structures which could characterize the shadow at the Galactic Center. If we adopt general relativity (GR), there are a number of theoretical models for the Galactic Center, such as a cluster of neutron stars, boson stars, neutrino balls, etc. Some of these models were rejected, or the ranges of their parameters were significantly constrained, by subsequent observations and theoretical analysis. In recent years, a number of alternative theories of gravity have been proposed because there are dark matter (DM) and dark energy (DE) problems. An alternative theory of gravity may be considered as one possible solution for such problems. Some of these theories have black hole solutions, while other theories have no such solutions. There are attempts to describe the Galactic Center with alternative theories of gravity, and in this case one can constrain the parameters of such theories with observational data for the Galactic Center. In particular, theories of massive gravity are intensively developing and theorists have overcome pathologies present in the initial versions of these theories. In theories of massive gravity, a graviton is massive, in contrast with GR where a graviton is massless. Now these theories are considered as an alternative to GR. For example, the LIGO-Virgo collaboration obtained a graviton mass constraint of about 1.2 × 10^-22 eV in their first publication about the discovery of the first gravitational wave detection event, which resulted from the merger of two massive black holes. Surprisingly, one can obtain a consistent and comparable constraint on the graviton mass, at a level around m_g < 2.9 × 10^-21 eV, from the analysis of observational data on the trajectory of the star S2 near the Galactic Center. Therefore, observations of bright stars with existing and forthcoming telescopes such as the European Extremely Large Telescope (E-ELT) and the Thirty Meter Telescope (TMT) are extremely useful for investigating the structure of the Galactic Center in the framework of GR, but these observations also give a tool to confirm, rule out or constrain alternative theories of gravity. As we noted earlier, VLBI observations with current and forthcoming global networks (like the Event Horizon Telescope) are used to check the hypothesis about the presence of a supermassive black hole at the Galactic Center.

  11. Evaluating the Psychometric Properties of the Maslach Burnout Inventory-Human Services Survey (MBI-HSS) among Italian Nurses: How Many Factors Must a Researcher Consider?

    PubMed Central

    Loera, Barbara; Converso, Daniela; Viotti, Sara

    2014-01-01

    Background: The Maslach Burnout Inventory (MBI) is the mainstream measure for burnout. However, its psychometric properties have been questioned, and alternative measurement models of the inventory have been suggested. Aims: Different models for the number of items and factors of the MBI-HSS, the version of the Inventory for the Human Service sector, were tested in order to identify the most appropriate model for measuring burnout in Italy. Methods: The study dataset consisted of a sample of 925 nurses. Ten alternative models of burnout were compared using confirmatory factor analysis. The psychometric properties of items and reliability of the MBI-HSS subscales were evaluated. Results: Item malfunctioning may confound the MBI-HSS factor structure. The analysis confirmed the factorial structure of the MBI-HSS with a three-dimensional, 20-item assessment. Conclusions: The factorial structure underlying the MBI-HSS follows Maslach's definition when items are reduced from the original 22 to a 20-item set. Alternative models, either with fewer items or with an increased number of latent dimensions in the burnout structure, do not yield better results to justify redefining the item set or theoretically revising the syndrome construct. PMID:25501716

  12. Using Item Response Theory to Develop Measures of Acquisitive and Protective Self-Monitoring From the Original Self-Monitoring Scale.

    PubMed

    Wilmot, Michael P; Kostal, Jack W; Stillwell, David; Kosinski, Michal

    2017-07-01

    For the past 40 years, the conventional univariate model of self-monitoring has reigned as the dominant interpretative paradigm in the literature. However, recent findings associated with an alternative bivariate model challenge the conventional paradigm. In this study, item response theory is used to develop measures of the bivariate model of acquisitive and protective self-monitoring using original Self-Monitoring Scale (SMS) items, and data from two large, nonstudent samples (Ns = 13,563 and 709). Results indicate that the new acquisitive (six-item) and protective (seven-item) self-monitoring scales are reliable, unbiased in terms of gender and age, and demonstrate theoretically consistent relations to measures of personality traits and cognitive ability. Additionally, by virtue of using original SMS items, previously collected responses can be reanalyzed in accordance with the alternative bivariate model. Recommendations for the reanalysis of archival SMS data, as well as directions for future research, are provided.

  13. Maximum mutual information estimation of a simplified hidden MRF for offline handwritten Chinese character recognition

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Reichenbach, Stephen E.

    1999-01-01

    Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false. So Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov Random Field. MMIE provides improved performance over MLE in this application.
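
    The contrast between the two criteria can be seen in a toy two-class example: MLE maximizes the log likelihood of each observation under its own class model, whereas MMIE maximizes the log posterior probability of the correct class, which penalizes confusable models. The Gaussian class models below are illustrative stand-ins, not a character-recognition system.

```python
# Small numerical illustration of the MLE versus MMIE objectives for two
# Gaussian class models; toy data, not a character-recognition system.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
x0 = rng.normal(0.0, 1.0, 200)        # samples of class 0
x1 = rng.normal(1.5, 1.0, 200)        # samples of class 1

def mle_objective(mu0, mu1):
    # sum of log class-conditional likelihoods of the correct class
    return norm.logpdf(x0, mu0).sum() + norm.logpdf(x1, mu1).sum()

def mmie_objective(mu0, mu1, prior=0.5):
    # sum of log posterior probabilities of the correct class
    def log_post(x, mu_true, mu_other):
        num = norm.logpdf(x, mu_true) + np.log(prior)
        den = np.logaddexp(norm.logpdf(x, mu_true) + np.log(prior),
                           norm.logpdf(x, mu_other) + np.log(1 - prior))
        return num - den
    return log_post(x0, mu0, mu1).sum() + log_post(x1, mu1, mu0).sum()

print("MLE objective :", mle_objective(x0.mean(), x1.mean()))
print("MMIE objective:", mmie_objective(x0.mean(), x1.mean()))
```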

  14. Teaching topography-based and selection-based verbal behavior to developmentally disabled individuals: Some considerations

    PubMed Central

    Shafer, Esther

    1993-01-01

    Augmentative and alternative communication systems are widely recommended for nonvocal developmentally disabled individuals, with selection-based systems becoming increasingly popular. However, theoretical and experimental evidence suggests that topography-based communication systems are easier to learn. This paper discusses research relevant to the ease of acquisition of topography-based and selection-based systems. Additionally, current practices for choosing and designing communication systems are reviewed in order to investigate the extent to which links have been made with available theoretical and experimental knowledge. A stimulus equivalence model is proposed as a clearer direction for practitioners to follow when planning a communication training program. Suggestions for future research are also offered. PMID:22477085

  15. Principles and performance of tapered fiber lasers: from uniform to flared geometry.

    PubMed

    Kerttula, Juho; Filippov, Valery; Chamorovskii, Yuri; Ustimchik, Vasily; Golant, Konstantin; Okhotnikov, Oleg G

    2012-10-10

    We have studied the recently demonstrated concept of fiber lasers based on active tapered double-clad fiber (T-DCF) in copropagating and counterpropagating configurations, both theoretically and experimentally, and compared the performance to fiber lasers based on conventional cylindrical fibers in end-pumped configurations. Specific properties of T-DCFs were considered theoretically using a rate-equation model developed for tapered fibers, and a detailed comparative study was carried out experimentally. Furthermore, we have studied mode coupling effects in long adiabatic tapers due to coiling and local bending. The results allow us to conclude that, with proper fiber design, the T-DCF technology offers a high-potential alternative for bright, cost-effective fiber devices.

  16. Theoretical microbial ecology without species

    NASA Astrophysics Data System (ADS)

    Tikhonov, Mikhail

    2017-09-01

    Ecosystems are commonly conceptualized as networks of interacting species. However, partitioning natural diversity of organisms into discrete units is notoriously problematic and mounting experimental evidence raises the intriguing question whether this perspective is appropriate for the microbial world. Here an alternative formalism is proposed that does not require postulating the existence of species as fundamental ecological variables and provides a naturally hierarchical description of community dynamics. This formalism allows approaching the species problem from the opposite direction. While the classical models treat a world of imperfectly clustered organism types as a perturbation around well-clustered species, the presented approach allows gradually adding structure to a fully disordered background. The relevance of this theoretical construct for describing highly diverse natural ecosystems is discussed.

  17. Prediction of Forming Limit Diagram for Seamed Tube Hydroforming Based on Thickness Gradient Criterion

    NASA Astrophysics Data System (ADS)

    Chen, Xianfeng; Lin, Zhongqin; Yu, Zhongqi; Chen, Xinping; Li, Shuhui

    2011-08-01

    This study establishes the forming limit diagram (FLD) for QSTE340 seamed tube hydroforming by finite element method (FEM) simulation. FLDs are commonly obtained from experiment, theoretical calculation and FEM simulation, but for tube hydroforming the experimental and theoretical routes are limited in application by equipment costs and the lack of established theoretical knowledge. In this paper, a novel approach of predicting the forming limit using a thickness gradient criterion (TGC) is presented for seamed tube hydroforming. Firstly, tube bulge tests and uniaxial tensile tests are performed to obtain the stress-strain curves for the three parts of the tube. Then two FE models are constructed: one for classical tube free hydroforming and another for a novel experimental apparatus that applies a lateral compression force together with internal pressure. After that, the forming limit strain is calculated based on the TGC in the FEM simulation. Good agreement between the simulation and experimental results is found. By combining the TGC and FEM, an alternative way of predicting the forming limit with sufficient accuracy and convenience is provided.

  18. Analysis and design of a second-order digital phase-locked loop

    NASA Technical Reports Server (NTRS)

    Blasche, P. R.

    1979-01-01

    A specific second-order digital phase-locked loop (DPLL) was modeled as a first-order Markov chain with alternatives. From the matrix of transition probabilities of the Markov chain, the steady-state phase error of the DPLL was determined. In a similar manner the loop's response was calculated for a fading input. Additionally, a hardware DPLL was constructed and tested to provide a comparison to the results obtained from the Markov chain model. In all cases tested, good agreement was found between the theoretical predictions and the experimental data.
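
    The Markov-chain step, obtaining the steady-state phase error from the matrix of transition probabilities, can be reproduced generically: find the stationary distribution as the unit-eigenvalue left eigenvector and average the phase-error magnitude over it. The 5-state transition matrix below is invented for illustration.

```python
# Generic sketch of the Markov-chain treatment: given a transition matrix over
# quantized phase-error states, compute the stationary distribution and from it
# a steady-state mean absolute phase error. The matrix values are invented.
import numpy as np

states = np.array([-2, -1, 0, 1, 2])             # quantized phase error (in steps)
P = np.array([[0.6, 0.4, 0.0, 0.0, 0.0],
              [0.2, 0.5, 0.3, 0.0, 0.0],
              [0.0, 0.2, 0.6, 0.2, 0.0],
              [0.0, 0.0, 0.3, 0.5, 0.2],
              [0.0, 0.0, 0.0, 0.4, 0.6]])        # row-stochastic transition matrix

# Stationary distribution: left eigenvector of P with eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi = pi / pi.sum()

print("stationary distribution:", np.round(pi, 3))
print("steady-state mean |phase error|:", np.dot(pi, np.abs(states)))
```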

  19. Random walks with random velocities.

    PubMed

    Zaburdaev, Vasily; Schmiedeberg, Michael; Stark, Holger

    2008-07-01

    We consider a random walk model that takes into account the velocity distribution of random walkers. Random motion with alternating velocities is inherent to various physical and biological systems. Moreover, the velocity distribution is often the first characteristic that is experimentally accessible. Here, we derive transport equations describing the dispersal process in the model and solve them analytically. The asymptotic properties of solutions are presented in the form of a phase diagram that shows all possible scaling regimes, including superdiffusive, ballistic, and superballistic motion. The theoretical results of this work are in excellent agreement with accompanying numerical simulations.
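
    A quick Monte Carlo check of the diffusive regime can be made by redrawing a Gaussian velocity for each flight and fitting the scaling exponent of the mean squared displacement; heavier-tailed velocity or flight-time distributions would shift the walk toward the superdiffusive or ballistic regimes discussed above. The distributions below are illustrative choices, not the paper's specific cases.

```python
# Monte Carlo sketch of a random walk with random velocities: each step draws a
# fresh velocity for a fixed-duration flight, and the mean squared displacement
# is inspected for its scaling regime. Illustrative distributions only.
import numpy as np

rng = np.random.default_rng(7)
walkers, steps, tau = 2000, 200, 1.0
v = rng.normal(0, 1, size=(walkers, steps))      # velocity redrawn each flight
x = np.cumsum(v * tau, axis=1)                   # positions after each flight
t = tau * np.arange(1, steps + 1)

msd = (x ** 2).mean(axis=0)
# effective scaling exponent from a log-log fit over the late-time window
alpha = np.polyfit(np.log(t[50:]), np.log(msd[50:]), 1)[0]
print("MSD scaling exponent ~", round(alpha, 2))  # ~1 (diffusive) for Gaussian velocities
```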

  20. Quadratic integrand double-hybrid made spin-component-scaled

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brémond, Éric, E-mail: eric.bremond@iit.it; Savarese, Marika; Sancho-García, Juan C.

    2016-03-28

    We propose two analytical expressions aiming to rationalize the spin-component-scaled (SCS) and spin-opposite-scaled (SOS) schemes for double-hybrid exchange-correlation density-functionals. Their performances are extensively tested within the framework of the nonempirical quadratic integrand double-hybrid (QIDH) model on energetic properties included into the very large GMTKN30 benchmark database, and on structural properties of semirigid medium-sized organic compounds. The SOS variant is revealed as a less computationally demanding alternative to reach the accuracy of the original QIDH model without losing any theoretical background.

  1. Convoys of care: Theorizing intersections of formal and informal care

    PubMed Central

    Kemp, Candace L.; Ball, Mary M.; Perkins, Molly M.

    2013-01-01

    Although most care to frail elders is provided informally, much of this care is paired with formal care services. Yet, common approaches to conceptualizing the formal–informal intersection often are static, do not consider self-care, and typically do not account for multi-level influences. In response, we introduce the “convoy of care” model as an alternative way to conceptualize the intersection and to theorize connections between care convoy properties and caregiver and recipient outcomes. The model draws on Kahn and Antonucci's (1980) convoy model of social relations, expanding it to include both formal and informal care providers and also incorporates theoretical and conceptual threads from life course, feminist gerontology, social ecology, and symbolic interactionist perspectives. This article synthesizes theoretical and empirical knowledge and demonstrates the convoy of care model in an increasingly popular long-term care setting, assisted living. We conceptualize care convoys as dynamic, evolving, person- and family-specific, and influenced by a host of multi-level factors. Care convoys have implications for older adults’ quality of care and ability to age in place, for job satisfaction and retention among formal caregivers, and for informal caregiver burden. The model moves beyond existing conceptual work to provide a comprehensive, multi-level, multi-factor framework that can be used to inform future research, including research in other care settings, and to spark further theoretical development. PMID:23273553

  2. Modelling verbal aggression, physical aggression and inappropriate sexual behaviour after acquired brain injury

    PubMed Central

    James, Andrew I. W.; Böhnke, Jan R.; Young, Andrew W.; Lewis, Gary J.

    2015-01-01

    Understanding the underpinnings of behavioural disturbances following brain injury is of considerable importance, but little at present is known about the relationships between different types of behavioural disturbances. Here, we take a novel approach to this issue by using confirmatory factor analysis to elucidate the architecture of verbal aggression, physical aggression and inappropriate sexual behaviour using systematic records made across an eight-week observation period for a large sample (n = 301) of individuals with a range of brain injuries. This approach offers a powerful test of the architecture of these behavioural disturbances by testing the fit between observed behaviours and different theoretical models. We chose models that reflected alternative theoretical perspectives based on generalized disinhibition (Model 1), a difference between aggression and inappropriate sexual behaviour (Model 2), or on the idea that verbal aggression, physical aggression and inappropriate sexual behaviour reflect broadly distinct but correlated clinical phenomena (Model 3). Model 3 provided the best fit to the data indicating that these behaviours can be viewed as distinct, but with substantial overlap. These data are important both for developing models concerning the architecture of behaviour as well as for clinical management in individuals with brain injury. PMID:26136449

  3. The physician as a patient educator. From theory to practice.

    PubMed Central

    McCann, D. P.; Blossom, H. J.

    1990-01-01

    Patient nonadherence to therapeutic regimens is a serious issue in the practice of medicine. Empiric studies done by professionals from diverse backgrounds have shown that physicians who use educational strategies can be effective in gaining the cooperation of patients to follow their recommendations. The educational model that currently is most familiar to physicians and the one they use most frequently when educating patients is pedagogy, the theoretic basis for teaching children. Andragogy, a theoretic basis for teaching adults, is now being suggested by medical educators as an alternative model. To illustrate the clinical relevance and application of the andragogic approach, studies focusing on physician behaviors associated with behavioral measures of adherence were reviewed, analyzed, and categorized according to a framework called the "ADULT" model. Physicians in a postgraduate training program who have had exposure to this framework and have incorporated it into their practices report less difficulty functioning as patient educators. The systematic use of this approach can have a positive effect on patient adherence. PMID:2202158

  4. Theoretical Investigations on the Influence of Artificially Altered Rock Mass Properties on Mechanical Excavation

    NASA Astrophysics Data System (ADS)

    Hartlieb, Philipp; Bock, Stefan

    2018-03-01

    This study presents a theoretical analysis of the influence of the rock mass rating on the cutting performance of roadheaders. Existing performance prediction models are assessed for their suitability for forecasting the influence of pre-damaging the rock mass with alternative methods like lasers or microwaves, prior to the mechanical excavation process. Finally, the RMCR model was chosen because it is the only reported model incorporating a range of rock mass properties into its calculations. The results show that even very tough rocks could be mechanically excavated if the occurrence, orientation and condition of joints are favourable for the cutting process. The calculated improvements in the cutting rate (m3/h) are up to 350% for the most favourable cases. In case of microwave irradiation of hard rocks with an UCS of 200 MPa, a reasonable improvement in the performance by 120% can be achieved with as little as an extra 0.7 kWh/m3 (= 1% more energy) compared to cutting only.

  5. Emergent Writing in Preschoolers: Preliminary Evidence for a Theoretical Framework

    PubMed Central

    Puranik, Cynthia S.; Lonigan, Christopher J.

    2014-01-01

    Researchers and educators use the term emergent literacy to refer to a broad set of skills and attitudes that serve as foundational skills for acquiring success in later reading and writing; however, models of emergent literacy have generally focused on reading and reading-related behaviors. Hence, the primary aim of this study was to articulate and evaluate a theoretical model of the components of emergent writing. Alternative models of the structure of individual and developmental differences of emergent writing and writing-related skills were examined in 372 preschool children who ranged in age from 3- to 5-years using confirmatory factor analysis. Results from a confirmatory factor analysis provide evidence that these emergent writing skills are best described by three correlated but distinct factors, (a) Conceptual Knowledge, (b) Procedural Knowledge, and (c) Generative Knowledge. Evidence that these three emergent writing factors show different patterns of relations to emergent literacy constructs is presented. Implications for understanding the development of writing and assessment of early writing skills are discussed. PMID:25316955

  6. Behavioral and Neural Signatures of Reduced Updating of Alternative Options in Alcohol-Dependent Patients during Flexible Decision-Making.

    PubMed

    Reiter, Andrea M F; Deserno, Lorenz; Kallert, Thomas; Heinze, Hans-Jochen; Heinz, Andreas; Schlagenhauf, Florian

    2016-10-26

    Addicted individuals continue substance use despite the knowledge of harmful consequences and often report having no choice but to consume. Computational psychiatry accounts have linked this clinical observation to difficulties in making flexible and goal-directed decisions in dynamic environments via consideration of potential alternative choices. To probe this in alcohol-dependent patients (n = 43) versus healthy volunteers (n = 35), human participants performed an anticorrelated decision-making task during functional neuroimaging. Via computational modeling, we investigated behavioral and neural signatures of inference regarding the alternative option. While healthy control subjects exploited the anticorrelated structure of the task to guide decision-making, alcohol-dependent patients were relatively better explained by a model-free strategy due to reduced inference on the alternative option after punishment. Whereas model-free prediction error signals were preserved, alcohol-dependent patients exhibited blunted medial prefrontal signatures of inference on the alternative option. This reduction was associated with patients' behavioral deficit in updating the alternative choice option and their obsessive-compulsive drinking habits. All results remained significant when adjusting for potential confounders (e.g., neuropsychological measures and gray matter density). A disturbed integration of alternative choice options implemented by the medial prefrontal cortex appears to be one important explanation for the puzzling question of why addicted individuals continue drug consumption despite negative consequences. In addiction, patients maintain substance use despite devastating consequences and often report having no choice but to consume. These clinical observations have been theoretically linked to disturbed mechanisms of inference, for example, to difficulties when learning statistical regularities of the environmental structure to guide decisions. Using computational modeling, we demonstrate disturbed inference on alternative choice options in alcohol addiction. Patients' neglect of "what might have happened" was accompanied by blunted coding of inference regarding alternative choice options in the medial prefrontal cortex. An impaired integration of alternative choice options implemented by the medial prefrontal cortex might contribute to ongoing drug consumption in the face of evident negative consequences. Copyright © 2016 the authors 0270-6474/16/3610935-14$15.00/0.
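
    The behavioral idea, using the task's anticorrelated structure to update the option that was not chosen, can be conveyed with a simple reinforcement-learning caricature in which an inference weight kappa scales the update of the alternative option. This is a conceptual sketch, not the authors' computational model, and all parameters are invented.

```python
# Conceptual sketch (not the authors' model) of "inference on the alternative
# option": after each outcome, the chosen option is updated with the prediction
# error and the unchosen option with the sign-flipped error, weighted by kappa.
# kappa near 1 mimics full use of the anticorrelated structure; near 0, its neglect.
import numpy as np

def run_agent(kappa, alpha=0.3, beta=3.0, trials=500, seed=8):
    rng = np.random.default_rng(seed)
    q = np.zeros(2)
    p_reward = np.array([0.8, 0.2])            # anticorrelated reward probabilities
    correct = 0
    for t in range(trials):
        if t == trials // 2:
            p_reward = p_reward[::-1]          # mid-session reversal
        probs = np.exp(beta * q) / np.exp(beta * q).sum()
        choice = rng.choice(2, p=probs)
        reward = float(rng.random() < p_reward[choice])
        delta = reward - q[choice]             # prediction error
        q[choice] += alpha * delta
        q[1 - choice] += kappa * alpha * (-delta)   # update the alternative option
        correct += int(choice == np.argmax(p_reward))
    return correct / trials

print("kappa=1.0 (full inference):   ", run_agent(1.0))
print("kappa=0.2 (reduced inference):", run_agent(0.2))
```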

  7. Identification of damping in a bridge using a moving instrumented vehicle

    NASA Astrophysics Data System (ADS)

    González, A.; OBrien, E. J.; McGetrick, P. J.

    2012-08-01

    In recent years, there has been a significant increase in the number of bridges which are being instrumented and monitored on an ongoing basis. This is in part due to the introduction of bridge management systems designed to provide a high level of protection to the public and early warning if the bridge becomes unsafe. This paper investigates a novel alternative: a low-cost method that uses a vehicle fitted with accelerometers on its axles to monitor the dynamic behaviour of bridges. A simplified half-car vehicle-bridge interaction model is used in theoretical simulations to test the effectiveness of the approach in identifying the damping ratio of the bridge. The method is tested for a range of bridge spans and vehicle velocities using theoretical simulations and the influences of road roughness, initial vibratory condition of the vehicle, signal noise, modelling errors and frequency matching on the accuracy of the results are investigated.

  8. Joint statistics of strongly correlated neurons via dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Deniz, Taşkın; Rotter, Stefan

    2017-06-01

    The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.
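
    A minimal simulation of the setup described here (two leaky integrate-and-fire neurons driven by partially shared input, with the spike-train cross-correlogram estimated from the simulated spike trains) might look like the following; parameter values are illustrative and not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      dt, T = 1e-4, 20.0                   # time step and total duration (s)
      n = int(T / dt)
      tau, v_th, v_reset = 0.02, 1.0, 0.0  # membrane time constant (s), threshold, reset
      mu, sigma, c = 1.1, 0.3, 0.6         # mean drive, noise amplitude, shared-input fraction

      # Common and independent white-noise inputs (strong shared input for c near 1)
      xi_c = rng.standard_normal(n)
      xi_1 = np.sqrt(c) * xi_c + np.sqrt(1 - c) * rng.standard_normal(n)
      xi_2 = np.sqrt(c) * xi_c + np.sqrt(1 - c) * rng.standard_normal(n)

      def lif(xi):
          """Euler integration of a leaky integrate-and-fire neuron; returns a boolean spike train."""
          v, spikes = 0.0, np.zeros(n, dtype=bool)
          for i in range(n):
              v += (dt / tau) * (mu - v) + sigma * np.sqrt(dt / tau) * xi[i]
              if v >= v_th:
                  spikes[i], v = True, v_reset
          return spikes

      s1, s2 = lif(xi_1), lif(xi_2)

      # Raw spike-train cross-correlogram within +/- 50 ms
      lag_bins = int(0.05 / dt)
      t1, t2 = np.flatnonzero(s1), np.flatnonzero(s2)
      ccg = np.zeros(2 * lag_bins + 1)
      for t in t1:
          near = t2[(t2 >= t - lag_bins) & (t2 <= t + lag_bins)]
          ccg[near - t + lag_bins] += 1
      print("spikes:", t1.size, t2.size, "  CCG peak at lag (ms):",
            (ccg.argmax() - lag_bins) * dt * 1e3)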

  9. Courses of action for effects based operations using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Haider, Sajjad; Levis, Alexander H.

    2006-05-01

    This paper presents an Evolutionary Algorithms (EAs)-based approach to identify effective courses of action (COAs) in Effects Based Operations. The approach uses Timed Influence Nets (TINs) as the underlying mathematical model to capture a dynamic uncertain situation. TINs provide a concise graph-theoretic probabilistic approach to specify the cause and effect relationships that exist among the variables of interest (actions, desired effects, and other uncertain events) in a problem domain. The purpose of building these TIN models is to identify and analyze several alternative courses of action. The current practice is to use trial-and-error-based techniques, which are not only labor intensive but also produce sub-optimal results and cannot model constraints among actionable events. The EA-based approach presented in this paper aims to overcome these limitations. The approach generates multiple COAs that are nearly equivalent in achieving the desired effect. The purpose of generating multiple COAs is to give several alternatives to a decision maker. Moreover, the alternate COAs could be generalized based on the relationships that exist among the actions and their execution timings. The approach also allows a system analyst to capture certain types of constraints among actionable events.

  10. Multimodality: a basis for augmentative and alternative communication--psycholinguistic, cognitive, and clinical/educational aspects.

    PubMed

    Loncke, Filip T; Campbell, Jamie; England, Amanda M; Haley, Tanya

    2006-02-15

    Message generation is a complex process involving a number of subprocesses, including the selection of which modes to use. When expressing a message, human communicators typically use a combination of modes. This phenomenon is often termed multimodality. This article explores the use of models that explain multimodality as an explanatory framework for augmentative and alternative communication (AAC). Multimodality is analysed from a communication, psycholinguistic, and cognitive perspective. Theoretical and applied topics within AAC can be explained or described within the multimodality framework, considering iconicity, simultaneous communication, lexical organization, and compatibility of communication modes. Consideration of multimodality is critical to understanding underlying processes in individuals who use AAC and individuals who interact with them.

  11. Heating of cardiovascular stents in intense radiofrequency magnetic fields.

    PubMed

    Foster, K R; Goldberg, R; Bonsignore, C

    1999-01-01

    We consider the heating of a metal stent in an alternating magnetic field from an induction heating furnace. An approximate theoretical analysis is conducted to estimate the magnetic field strength needed to produce substantial temperature increases. Experiments on stent heating in industrial furnaces are reported, which confirm the model. The results show that magnetic fields inside induction furnaces are capable of significantly heating stents. However, the fields fall off very quickly with distance and in most locations outside the heating coil, field levels are far too small to produce significant heating. The ANSI/IEEE C95.1-1992 limits for human exposure to alternating magnetic fields provide adequate protection against potential excessive heating of the stents.

  12. Alternative Models for Individualized Armor Training. Part I. Interim Report: Review and Analysis of the Literature

    DTIC Science & Technology

    1980-01-01

    for an individualized instructional context is provided by Giordono (1975), in his discussion of the design of a "non-lockstep" educational system ... state of ATI research, summarized the methodological and theoretical problems that may have inhibited the application of ATI findings to the design ... years. In contrast, systematic modifications based on results obtained through the application of appropriate experimental designs are desired and

  13. Rotational versus alternating hysteresis losses in nonoriented soft magnetic laminations

    NASA Astrophysics Data System (ADS)

    Fiorillo, F.; Rietto, A. M.

    1993-05-01

    Rotational and alternating hysteresis losses have been investigated in theory and experiment in nonoriented soft magnetic laminations. Attention has been focused on the dependence of energy loss on peak magnetization Ip. The experiments, performed in a wide induction range (~2×10⁻⁴ T ≤ Ip ≤ ~1.6 T), show that the ratio between rotational and alternating energy losses Whr/Wha is a monotonically decreasing function of Ip. A quantitative theoretical investigation is carried out through modeling of the magnetization process under rotating field and its relation to processes under alternating field. Three basic mechanisms of magnetization rotation are considered: linear combination of unidirectional hysteresis loops at low inductions (Rayleigh region), cyclic rearrangement of magnetic domains between different easy directions at intermediate inductions, and coherent spin rotation toward the approach to magnetic saturation. The ensuing predicted behavior of Whr/Wha is found to be in good agreement with the experiments performed in nonoriented low carbon steel and 3% FeSi laminations.

  14. The impact of electrostatic correlations on Dielectrophoresis of Non-conducting Particles

    NASA Astrophysics Data System (ADS)

    Alidoosti, Elaheh; Zhao, Hui

    2017-11-01

    The dipole moment of a charged, dielectric, spherical particle under the influence of a uniform alternating electric field is computed theoretically and numerically by solving the modified continuum Poisson-Nernst-Planck (PNP) equations accounting for ion-ion electrostatic correlations, which are important in concentrated electrolytes (Phys. Rev. Lett. 106, 2011). The dependence on the frequency, zeta potential, electrostatic correlation lengths, and double layer thickness is thoroughly investigated. In the limit of thin double layers, we carry out asymptotic analysis to develop simple models which are in good agreement with the modified PNP model. Our results suggest that the electrostatic correlations have a complicated impact on the dipole moment. As the electrostatic correlation length increases, the dipole moment initially decreases, reaches a minimum, and then increases, since the surface conduction first decreases and then increases due to the ion-ion correlations. The modified PNP model can improve the theoretical predictions particularly at low frequencies, where the simple model cannot qualitatively predict the dipole moment. This work was supported, in part, by NIH R15GM116039.

  15. The interaction of moderately strong shock waves with thick perforated walls of low porosity

    NASA Technical Reports Server (NTRS)

    Grant, D. J.

    1972-01-01

    A theoretical prediction is given of the flow through thick perforated walls of low porosity resulting from the impingement of a moderately strong traveling shock wave. The model was a flat plate positioned normal to the direction of the flow. Holes bored in the plate parallel to the direction of the flow provided nominal hole length-to-diameter ratios of 10:1 and an axial porosity of 25 percent of the flow channel cross section. The flow field behind the reflected shock wave was assumed to behave as a reservoir producing a quasi-steady duct flow through the model. Rayleigh and Fanno duct flow theoretical computations for each of three possible auxiliary wave patterns that can be associated with the transmitted shock (to satisfy contact surface compatibility) were used to provide bounding solutions as an alternative to the more complex influence coefficients method. Qualitative and quantitative behavior was verified in a 1.5- by 2.0-in. helium shock tube. High speed Schlieren photography, piezoelectric pressure-time histories, and electronic-counter wave speed measurements were used to assess the extent of correlation with the theoretical flow models. Reduced data indicated the adequacy of the bounding theory approach to predict wave phenomena and quantitative response.

  16. A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2009-01-01

    A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
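
    The core computation in such a scheme, evaluating each candidate sensor suite by the steady-state Kalman filter estimation error it yields and keeping the suite with the smallest error, can be sketched as follows. The system matrices below are random placeholders rather than the engine model or tuning-parameter selection used in the paper, and SciPy is assumed to be available.

      import numpy as np
      from scipy.linalg import solve_discrete_are
      from itertools import combinations

      def estimation_cost(A, C, Q, R):
          """Trace of the steady-state a-priori error covariance of a Kalman filter
          using output matrix C (rows = selected sensors)."""
          P = solve_discrete_are(A.T, C.T, Q, R)   # dual of the control Riccati equation
          return np.trace(P)

      # Placeholder system: 4 states (e.g., health parameters), 6 candidate sensors
      rng = np.random.default_rng(0)
      A = 0.95 * np.eye(4) + 0.01 * rng.standard_normal((4, 4))
      C_all = rng.standard_normal((6, 4))
      Q = 0.01 * np.eye(4)

      best = None
      for idx in combinations(range(6), 3):        # choose the best 3-sensor suite
          C = C_all[list(idx)]
          R = 0.1 * np.eye(len(idx))               # measurement noise of the selected sensors
          cost = estimation_cost(A, C, Q, R)
          if best is None or cost < best[0]:
              best = (cost, idx)

      print("best 3-sensor suite:", best[1], " trace(P) =", round(best[0], 4))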

  17. Force-induced bone growth and adaptation: A system theoretical approach to understanding bone mechanotransduction

    NASA Astrophysics Data System (ADS)

    Maldonado, Solvey; Findeisen, Rolf

    2010-06-01

    The modeling, analysis, and design of treatment therapies for bone disorders based on the paradigm of force-induced bone growth and adaptation is a challenging task. Mathematical models provide, in comparison to clinical, medical, and biological approaches, a structured alternative framework for understanding the concurrent effects of the multiple factors involved in bone remodeling. To date, there are few mathematical models describing the complex interactions that arise. However, the resulting models are complex and difficult to analyze, due to the strong nonlinearities appearing in the equations, the wide range of variability of the states, and the uncertainties in parameters. In this work, we focus on analyzing the effects of changes in model structure and variations in parameters/inputs on the overall steady-state behavior using systems-theoretical methods. Based on a briefly reviewed existing model that describes force-induced bone adaptation, the main objective of this work is to analyze the stationary behavior and to identify plausible treatment targets for remodeling-related bone disorders. Identifying plausible targets can help in the development of optimal treatments combining both physical activity and drug medication. Such treatments help to improve/maintain/restore bone strength, which deteriorates under bone disorder conditions, such as estrogen deficiency.

  18. A theoretical-electron-density databank using a model of real and virtual spherical atoms.

    PubMed

    Nassour, Ayoub; Domagala, Slawomir; Guillot, Benoit; Leduc, Theo; Lecomte, Claude; Jelsch, Christian

    2017-08-01

    A database describing the electron density of common chemical groups using combinations of real and virtual spherical atoms is proposed, as an alternative to the multipolar atom modelling of the molecular charge density. Theoretical structure factors were computed from periodic density functional theory calculations on 38 crystal structures of small molecules and the charge density was subsequently refined using a density model based on real spherical atoms and additional dummy charges on the covalent bonds and on electron lone-pair sites. The electron-density parameters of real and dummy atoms present in a similar chemical environment were averaged on all the molecules studied to build a database of transferable spherical atoms. Compared with the now-popular databases of transferable multipolar parameters, the spherical charge modelling needs fewer parameters to describe the molecular electron density and can be more easily incorporated in molecular modelling software for the computation of electrostatic properties. The construction method of the database is described. In order to analyse to what extent this modelling method can be used to derive meaningful molecular properties, it has been applied to the urea molecule and to biotin/streptavidin, a protein/ligand complex.

  19. Freight Calculation Model: A Case Study of Coal Distribution

    NASA Astrophysics Data System (ADS)

    Yunianto, I. T.; Lazuardi, S. D.; Hadi, F.

    2018-03-01

    Coal is one of the energy alternatives used as an energy source for several power plants in Indonesia. Transporting it from coal sites to power plant locations requires eligible shipping services that are able to provide the best freight rate. Therefore, this study aims to obtain standardized formulations for determining the ocean freight rate for coal distribution, based on theoretical concepts. The freight calculation model considers three alternative transport modes commonly used in coal distribution: tug-barge, vessel and self-propelled barge. The results show that two cost components dominate the freight rate, with a combined proportion reaching 90% or more: time charter hire and fuel cost. Moreover, three main factors have significant impacts on the freight calculation: waiting time at ports, time charter rate and fuel oil price.
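
    To illustrate the cost structure the abstract describes (freight per tonne dominated by time charter hire and fuel cost, and sensitive to waiting time at ports), a minimal per-voyage calculation could look like the following; all figures and the function name are hypothetical, not the study's formulation.

      def freight_per_tonne(cargo_t, distance_nm, speed_kn,
                            charter_rate_usd_day, fuel_t_day, fuel_price_usd_t,
                            port_days, port_cost_usd):
          """Very simplified voyage cost model: ocean freight (USD per tonne)
          = (time charter hire + fuel cost + port cost) / cargo carried."""
          sea_days = distance_nm / speed_kn / 24.0       # sailing time
          voyage_days = sea_days + port_days             # waiting time at ports adds directly
          charter_cost = charter_rate_usd_day * voyage_days
          fuel_cost = fuel_price_usd_t * fuel_t_day * sea_days
          total = charter_cost + fuel_cost + port_cost_usd
          return total / cargo_t, charter_cost / total, fuel_cost / total

      # Hypothetical tug-barge shipment of 8,000 t over 500 nautical miles
      rate, charter_share, fuel_share = freight_per_tonne(
          cargo_t=8000, distance_nm=500, speed_kn=5,
          charter_rate_usd_day=6000, fuel_t_day=4, fuel_price_usd_t=600,
          port_days=6, port_cost_usd=10000)
      print(f"freight: {rate:.2f} USD/t, charter share: {charter_share:.0%}, fuel share: {fuel_share:.0%}")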

  20. With or without rafts? Alternative views on cell membranes.

    PubMed

    Sevcsik, Eva; Schütz, Gerhard J

    2016-02-01

    The fundamental mechanisms of protein and lipid organization at the plasma membrane have continued to engage researchers for decades. Among proposed models, one idea has been particularly successful which assumes that sterol-dependent nanoscopic phases of different lipid chain order compartmentalize proteins, thereby modulating protein functionality. This model of membrane rafts has sustainably sparked the fields of membrane biophysics and biology, and shifted membrane lipids into the spotlight of research; by now, rafts have become an integral part of our terminology to describe a variety of cell biological processes. But is the evidence clear enough to continue supporting a theoretical concept which has resisted direct proof by observation for nearly twenty years? In this essay, we revisit findings that gave rise to and substantiated the raft hypothesis, discuss its impact on recent studies, and present alternative mechanisms to account for plasma membrane heterogeneity. © 2015 WILEY Periodicals, Inc.

  1. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine

    PubMed Central

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2016-01-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., “short” processing times and/or “large” datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply “large scale” processing transitions into “big data” and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging. PMID:28736473
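
    The flavor of such first-order wall-clock models can be sketched generically: the shared-NFS approach pays a serialized data-transfer cost plus parallel compute, while a data-local framework avoids the bulk transfer but adds per-job overhead. The functions and parameter values below are illustrative placeholders, not the fitted models or hardware figures reported in the paper.

      def wall_clock_nfs(n_jobs, data_gb_per_job, net_gbps, compute_s, cores):
          """Cluster + shared NFS: transfer shares one network pipe, compute runs in parallel waves."""
          transfer_s = n_jobs * data_gb_per_job * 8 / net_gbps   # serialized on the NFS link
          compute_wall = compute_s * -(-n_jobs // cores)         # ceil(n_jobs / cores) waves
          return transfer_s + compute_wall

      def wall_clock_datalocal(n_jobs, compute_s, cores, overhead_s):
          """Hadoop-style data-local processing: no bulk transfer, but per-job scheduling overhead."""
          return (compute_s + overhead_s) * -(-n_jobs // cores)

      for compute_s in (30, 300, 3000):   # "short" versus "long" jobs
          nfs = wall_clock_nfs(n_jobs=1000, data_gb_per_job=0.5, net_gbps=10,
                               compute_s=compute_s, cores=100)
          dl = wall_clock_datalocal(n_jobs=1000, compute_s=compute_s, cores=100, overhead_s=15)
          print(f"job {compute_s:5d}s  NFS: {nfs:8.0f}s  data-local: {dl:8.0f}s")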

  2. Wave-induced hydraulic forces on submerged aquatic plants in shallow lakes.

    PubMed

    Schutten, J; Dainty, J; Davy, A J

    2004-03-01

    Hydraulic pulling forces arising from wave action are likely to limit the presence of freshwater macrophytes in shallow lakes, particularly those with soft sediments. The aim of this study was to develop and test experimentally simple models, based on linear wave theory for deep water, to predict such forces on individual shoots. Models were derived theoretically from the action of the vertical component of the orbital velocity of the waves on shoot size. Alternative shoot-size descriptors (plan-form area or dry mass) and alternative distributions of the shoot material along its length (cylinder or inverted cone) were examined. Models were tested experimentally in a flume that generated sinusoidal waves which lasted 1 s and were up to 0.2 m high. Hydraulic pulling forces were measured on plastic replicas of Elodea sp. and on six species of real plants with varying morphology (Ceratophyllum demersum, Chara intermedia, Elodea canadensis, Myriophyllum spicatum, Potamogeton natans and Potamogeton obtusifolius). Measurements on the plastic replicas confirmed predicted relationships between force and wave phase, wave height and plant submergence depth. Predicted and measured forces were linearly related over all combinations of wave height and submergence depth. Measured forces on real plants were linearly related to theoretically derived predictors of the hydraulic forces (integrals of the products of the vertical orbital velocity raised to the power 1.5 and shoot size). The general applicability of the simplified wave equations used was confirmed. Overall, dry mass and plan-form area performed similarly well as shoot-size descriptors, as did the conical or cylindrical models of shoot distribution. The utility of the modelling approach in predicting hydraulic pulling forces from relatively simple plant and environmental measurements was validated over a wide range of forces, plant sizes and species.

  3. High-amplitude fluctuations and alternative dynamical states of midges in Lake Myvatn.

    PubMed

    Ives, Anthony R; Einarsson, Arni; Jansen, Vincent A A; Gardarsson, Arnthor

    2008-03-06

    Complex dynamics are often shown by simple ecological models and have been clearly demonstrated in laboratory and natural systems. Yet many classes of theoretically possible dynamics are still poorly documented in nature. Here we study long-term time-series data of a midge, Tanytarsus gracilentus (Diptera: Chironomidae), in Lake Myvatn, Iceland. The midge undergoes density fluctuations of almost six orders of magnitude. Rather than regular cycles, however, these fluctuations have irregular periods of 4-7 years, indicating complex dynamics. We fit three consumer-resource models capable of qualitatively distinct dynamics to the data. Of these, the best-fitting model shows alternative dynamical states in the absence of environmental variability; depending on the initial midge densities, the model shows either fluctuations around a fixed point or high-amplitude cycles. This explains the observed complex population dynamics: high-amplitude but irregular fluctuations occur because stochastic variability causes the dynamics to switch between domains of attraction to the alternative states. In the model, the amplitude of fluctuations depends strongly on minute resource subsidies into the midge habitat. These resource subsidies may be sensitive to human-caused changes in the hydrology of the lake, with human impacts such as dredging leading to higher-amplitude fluctuations. Tanytarsus gracilentus is a key component of the Myvatn ecosystem, representing two-thirds of the secondary productivity of the lake and providing vital food resources to fish and to breeding bird populations. Therefore the high-amplitude, irregular fluctuations in midge densities generated by alternative dynamical states dominate much of the ecology of the lake.

  4. Choosing where to work at work - towards a theoretical model of benefits and risks of activity-based flexible offices.

    PubMed

    Wohlers, Christina; Hertel, Guido

    2017-04-01

    Although there is a trend in today's organisations to implement activity-based flexible offices (A-FOs), only a few studies examine consequences of this new office type. Moreover, the underlying mechanisms why A-FOs might lead to different consequences as compared to cellular and open-plan offices are still unclear. This paper introduces a theoretical framework explaining benefits and risks of A-FOs based on theories from work and organisational psychology. After deriving working conditions specific for A-FOs (territoriality, autonomy, privacy, proximity and visibility), differences in working conditions between A-FOs and alternative office types are proposed. Further, we suggest how these differences in working conditions might affect work-related consequences such as well-being, satisfaction, motivation and performance on the individual, the team and the organisational level. Finally, we consider task-related (e.g. task variety), person-related (e.g. personality) and organisational (e.g. leadership) moderators. Based on this model, future research directions as well as practical implications are discussed. Practitioner Summary: Activity-based flexible offices (A-FOs) are popular in today's organisations. This article presents a theoretical model explaining why and when working in an A-FO evokes benefits and risks for individuals, teams and organisations. According to the model, A-FOs are beneficial when management encourages employees to use the environment appropriately and supports teams.

  5. How 'alternative' is CAM? Rethinking conventional dichotomies between biomedicine and complementary/alternative medicine.

    PubMed

    Ning, Ana M

    2013-03-01

    The aim of this article is to interrogate the pervasive dichotomization of 'conventional' and 'alternative' therapies in popular, academic and medical literature. Specifically, I rethink the concepts such as holism, vitalism, spirituality, natural healing and individual responsibility for health care as taken-for-granted alternative ideologies. I explore how these ideologies are not necessarily 'alternative', but integral to the practice of clinical medicine as well as socially and culturally dominant values, norms and practices related to health and health care in Canada and elsewhere. These reflections address both theoretical and applied concerns central to the study of integration of different medical practices in western industrialized nations such as Canada. Overall, in examining homologies present in both biomedicine and complementary/alternative medicine (CAM), this article rethinks major social practices against binary oppositions by illustrating through literature review that the biomedical and CAM models may be homologous in their original inceptions and in recent cross-fertilizations towards a rigorous approach in medicine. By highlighting biomedicine and CAM as homologous symbolic systems, this article also sheds light on the potential for enhancing dialogue between diverse perspectives to facilitate an integrative health care system that meets multiple consumer needs.

  6. Chromatic control in coextruded layered polymer microlenses

    NASA Astrophysics Data System (ADS)

    Crescimanno, Michael; Oder, Tom N.; Andrews, James H.; Zhou, Chuanhong; Petrus, Joshua B.; Merlo, Cory; Bagheri, Cameron; Hetzel, Connor; Tancabel, James; Singer, Kenneth D.; Baer, Eric

    2014-12-01

    We describe the formation, characterization and theoretical understanding of microlenses composed of alternating polystyrene and polymethylmethacrylate layers produced by multilayer coextrusion. These lenses are fabricated by photolithography, using a grayscale mask followed by plasma etching, so that the refractive index alternation of the bilayer stack appears across the radius of the microlens. The alternating quarter-wave thick layers form a one-dimensional photonic crystal whose dispersion augments the material dispersion, allowing one to sculpt the chromatic dispersion of the lens by adjusting the layered structure. Using Huygens' principle, we model our experimental measurements of the focal length of these lenses across the reflection band of the multilayer polymer film from which the microlens is fashioned. For a 56-micron-diameter multilayered lens of focal length 300 microns, we measured a nearly 25 percent variation in the focal length across a shallow, 50 nm-wide reflection band.

  7. Beyond the rhetoric: what do we mean by a 'model of care'?

    PubMed

    Davidson, Patricia; Halcomb, Elizabeth; Hickman, L; Phillips, J; Graham, B

    2006-01-01

    Contemporary health care systems are constantly challenged to revise traditional methods of health care delivery. These challenges are multifaceted and stem from: (1) novel pharmacological and non-pharmacological treatments; (2) changes in consumer demands and expectations; (3) fiscal and resource constraints; (4) changes in societal demographics in particular the ageing of society; (5) an increasing burden of chronic disease; (6) documentation of limitations in traditional health care delivery; (7) increased emphasis on transparency, accountability, evidence-based practice (EBP) and clinical governance structures; and (8) the increasing cultural diversity of the community. These challenges provoke discussion of potential alternative models of care, with scant reference to defining what constitutes a model of care. This paper aims to define what is meant by the term 'model of care' and document the pragmatic systems and processes necessary to develop, plan, implement and evaluate novel models of care delivery. Searches of electronic databases, the reference lists of published materials, policy documents and the Internet were conducted using key words including 'model*', 'framework*', 'models, theoretical' and 'nursing models, theoretical'. The collated material was then analysed and synthesised into this review. This review determined that in addition to key conceptual and theoretical perspectives, quality improvement theory (eg. collaborative methodology), project management methods and change management theory inform both pragmatic and conceptual elements of a model of care. Crucial elements in changing health care delivery through the development of innovative models of care include the planning, development, implementation, evaluation and assessment of the sustainability of the new model. Regardless of whether change in health care delivery is attempted on a micro basis (eg. ward level) or macro basis (eg. national or state system) in order to achieve sustainable, effective and efficient changes a well-planned, systematic process is essential.

  8. Models of consumer value cocreation in health care.

    PubMed

    Nambisan, Priya; Nambisan, Satish

    2009-01-01

    In recent years, consumer participation in health care has gained critical importance as health care organizations (HCOs) seek varied avenues to enhance the quality and the value of their offerings. Many large HCOs have established online health communities where health care consumers (patients) can interact with one another to share knowledge and offer emotional support in disease management and care. Importantly, the focus of consumer participation in health care has moved beyond such personal health care management as the potential for consumers to participate in innovation and value creation in varied areas of the health care industry becomes increasingly evident. Realizing such potential, however, will require HCOs to develop a better understanding of the varied types of consumer value cocreation that are enabled by new information and communication technologies such as online health communities and Web 2.0 (social media) technologies. This article seeks to contribute toward such an understanding by offering a concise and coherent theoretical framework to analyze consumer value cocreation in health care. We identify four alternate models of consumer value cocreation (the partnership model, the open-source model, the support-group model, and the diffusion model) and discuss their implications for HCOs. We develop our theoretical framework by drawing on theories and concepts in knowledge creation, innovation management, and online communities. A set of propositions are developed by combining theoretical insights from these areas with real-world examples of consumer value cocreation in health care. The theoretical framework offered here informs on the potential impact of the different models of consumer value cocreation on important organizational variables such as innovation cost and time, service quality, and consumer perceptions of HCO. An understanding of the four models of consumer value cocreation can help HCOs adopt appropriate strategies and practices to embrace consumers as partners in the development and delivery of innovative health care products and services.

  9. Selected Perspectives on Legal Problems of the Alternative Press.

    ERIC Educational Resources Information Center

    De Mott, John

    The legal problems faced by publishers of alternative newspapers are often compounded by the limited availability of the funds they have either for legal defense or for initiating lawsuits. Although both the courts and journalism's professional associations theoretically support the position that the alternative press possesses rights identical to…

  10. Residential demand for energy. Volume 1: Residential energy demand in the US

    NASA Astrophysics Data System (ADS)

    Taylor, L. D.; Blattenberger, G. R.; Rennhack, R. K.

    1982-04-01

    Updated and improved versions of the residential energy demand models that are currently used in EPRI's Demand 80/81 Model are presented. The primary objective of the study is the development and estimation of econometric demand models that take into account in a theoretically appropriate way the problems caused by decreasing-block pricing in the sale of electricity and natural gas. An ancillary objective is to take into account the impact on electricity, natural gas, and fuel oil demands of differences and changes in the availability of natural gas. Econometric models of residential demand are estimated for all three fuel types using time series data by state. Price and income elasticities for a number of alternative models are presented.

  11. Theoretical framework and methodological development of common subjective health outcome measures in osteoarthritis: a critical review

    PubMed Central

    Pollard, Beth; Johnston, Marie; Dixon, Diane

    2007-01-01

    Subjective measures involving clinician ratings or patient self-assessments have become recognised as an important tool for the assessment of health outcome. The value of a health outcome measure is usually assessed by a psychometric evaluation of its reliability, validity and responsiveness. However, psychometric testing involves an accumulation of evidence and has recognised limitations. It has been suggested that an evaluation of how well a measure has been developed would be a useful additional criterion in assessing the value of a measure. This paper explored the theoretical background and methodological development of subjective health status measures commonly used in osteoarthritis research. Fourteen subjective health outcome measures commonly used in osteoarthritis research were examined. Each measure was explored on the basis of its i) theoretical framework (was there a definition of what was being assessed and was it part of a theoretical model?) and ii) methodological development (what was the scaling strategy, how were the items generated and reduced, what was the response format and what was the scoring method?). Only the AIMS, SF-36 and WHOQOL defined what they were assessing (i.e. the construct of interest) and no measure assessed was part of a theoretical model. None of the clinician report measures appeared to have implemented a scaling procedure or described the rationale for the items selected or the scoring system. Of the patient self-report measures, the AIMS, MPQ, OXFORD, SF-36, WHOQOL and WOMAC appeared to follow a standard psychometric scaling method. The DRP and EuroQol used alternative scaling methods. The review highlighted the general lack of a theoretical framework for both clinician report and patient self-report measures. This review also drew attention to the wide variation in the methodological development of commonly used measures in OA. While, in general, the patient self-report measures had good methodological development, the clinician report measures appeared less well developed. It would be of value if new measures defined the construct of interest and if that construct were part of a theoretical model. By ensuring measures are both theoretically and empirically valid, improvements in subjective health outcome measures should be possible. PMID:17343739

  12. Controlled modification of resonant tunneling in metal-insulator-insulator-metal structures

    NASA Astrophysics Data System (ADS)

    Mitrovic, I. Z.; Weerakkody, A. D.; Sedghi, N.; Ralph, J. F.; Hall, S.; Dhanak, V. R.; Luo, Z.; Beeby, S.

    2018-01-01

    We present comprehensive experimental and theoretical work on tunnel-barrier rectifiers comprising bilayer (Nb2O5/Al2O3) insulator configurations with similar (Nb/Nb) and dissimilar (Nb/Ag) metal electrodes. The electron affinity, valence band offset, and metal work function were ascertained by X-ray photoelectron spectroscopy, variable angle spectroscopic ellipsometry, and electrical measurements on fabricated reference structures. The experimental band line-up parameters were fed into a theoretical model to predict available bound states in the Nb2O5/Al2O3 quantum well and generate tunneling probability and transmittance curves under applied bias. The onset of strong resonance in the sub-V regime was found to be controlled by the work function difference of the Nb/Ag electrodes, in agreement with the experimental band alignment and theoretical model. A superior low-bias asymmetry of 35 at 0.1 V and a responsivity of 5 A/W at 0.25 V were observed for the Nb/4 nm Nb2O5/1 nm Al2O3/Ag structure, sufficient to achieve rectification of over 90% of the input alternating-current terahertz signal in a rectenna device.

  13. Alternating steady state free precession for estimation of current-induced magnetic flux density: A feasibility study.

    PubMed

    Lee, Hyunyeol; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Park, Jaeseok

    2016-05-01

    To develop a novel, current-controlled alternating steady-state free precession (SSFP)-based conductivity imaging method and corresponding MR signal models to estimate current-induced magnetic flux density (Bz ) and conductivity distribution. In the proposed method, an SSFP pulse sequence, which is in sync with alternating current pulses, produces dual oscillating steady states while yielding nonlinear relation between signal phase and Bz . A ratiometric signal model between the states was analytically derived using the Bloch equation, wherein Bz was estimated by solving a nonlinear inverse problem for conductivity estimation. A theoretical analysis on the signal-to-noise ratio of Bz was given. Numerical and experimental studies were performed using SSFP-FID and SSFP-ECHO with current pulses positioned either before or after signal encoding to investigate the feasibility of the proposed method in conductivity estimation. Given all SSFP variants herein, SSFP-FID with alternating current pulses applied before signal encoding exhibits the highest Bz signal-to-noise ratio and conductivity contrast. Additionally, compared with conventional conductivity imaging, the proposed method benefits from rapid SSFP acquisition without apparent loss of conductivity contrast. We successfully demonstrated the feasibility of the proposed method in estimating current-induced Bz and conductivity distribution. It can be a promising, rapid imaging strategy for quantitative conductivity imaging. © 2015 Wiley Periodicals, Inc.

  14. Theorizing a model information pathway to mitigate the menstrual taboo.

    PubMed

    Yagnik, Arpan

    2017-12-13

    The impact of menstruation on society is directly seen in the educational opportunities, quality of life and professional endeavors of females. However, a lack of menstrual hygiene management has indirect implications for the balance and development of society and the nation. This study is set in the Indian context. The researcher identifies actors with the potential to mitigate the menstrual taboo and then theorizes an optimal information pathway to do so. Diffusion of innovation, framing and agenda setting theories contribute as frameworks in the creation of an optimal pathway to dissolve the menstrual taboo. The actors identified in this model are scholars, health activists, students, NGOs, media, government, corporations and villages or communities. The determinants for the direction and the order of the pathway to diffuse knowledge and confidence among these actors are the ultimate goal and sustainability of the model, strengths and weaknesses of actors, and actors' extent of influence. Considering the absence of an existing alternative, this model pathway provides a solid framework purely from a theoretical perspective. Theoretically, this model pathway is possible, practical and optimal. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. Testing adaptive toolbox models: a Bayesian hierarchical approach.

    PubMed

    Scheibehenne, Benjamin; Rieskamp, Jörg; Wagenmakers, Eric-Jan

    2013-01-01

    Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.

  16. On explicit algebraic stress models for complex turbulent flows

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.; Speziale, C. G.

    1992-01-01

    Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope, who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models, as well as anisotropic eddy viscosity models, is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.

  17. Fast, Statistical Model of Surface Roughness for Ion-Solid Interaction Simulations and Efficient Code Coupling

    NASA Astrophysics Data System (ADS)

    Drobny, Jon; Curreli, Davide; Ruzic, David; Lasa, Ane; Green, David; Canik, John; Younkin, Tim; Blondel, Sophie; Wirth, Brian

    2017-10-01

    Surface roughness greatly impacts material erosion, and thus plays an important role in Plasma-Surface Interactions. Developing strategies for efficiently introducing rough surfaces into ion-solid interaction codes will be an important step towards whole-device modeling of plasma devices and future fusion reactors such as ITER. Fractal TRIDYN (F-TRIDYN) is an upgraded version of the Monte Carlo, BCA program TRIDYN developed for this purpose that includes an explicit fractal model of surface roughness and extended input and output options for file-based code coupling. Code coupling with both plasma and material codes has been achieved and allows for multi-scale, whole-device modeling of plasma experiments. These code coupling results will be presented. F-TRIDYN has been further upgraded with an alternative, statistical model of surface roughness. The statistical model is significantly faster than and compares favorably to the fractal model. Additionally, the statistical model compares well to alternative computational surface roughness models and experiments. Theoretical links between the fractal and statistical models are made, and further connections to experimental measurements of surface roughness are explored. This work was supported by the PSI-SciDAC Project funded by the U.S. Department of Energy through contract DOE-DE-SC0008658.

  18. Mass transport in micellar surfactant solutions: 2. Theoretical modeling of adsorption at a quiescent interface.

    PubMed

    Danov, K D; Kralchevsky, P A; Denkov, N D; Ananthapadmanabhan, K P; Lips, A

    2006-01-31

    Here, we apply the detailed theoretical model of micellar kinetics from part 1 of this study to the case of surfactant adsorption at a quiescent interface, i.e., to the relaxation of surface tension and adsorption after a small initial perturbation. Our goal is to understand why for some surfactant solutions the surface tension relaxes as the inverse square root of time, 1/t^{1/2}, yet two different expressions for the characteristic relaxation time are applicable to different cases. In addition, our aim is to clarify why for other surfactant solutions the surface tension relaxes exponentially. For this goal, we carried out computer modeling of the adsorption process, based on the general system of equations derived in part 1. This analysis reveals the existence of four different consecutive relaxation regimes (stages) for a given micellar solution: two exponential regimes and two inverse-square-root regimes, following one after another in alternating order. Experimentally, depending on the specific surfactant and method, one usually registers only one of these regimes. Therefore, to interpret properly the data, one has to identify which of these four kinetic regimes is observed in the given experiment. Our numerical results for the relaxation of the surface tension, micelle concentration and aggregation number are presented in the form of kinetic diagrams, which reveal the stages of the relaxation process. At low micelle concentrations, "rudimentary" kinetic diagrams could be observed, which are characterized by merging of some stages. Thus, the theoretical modeling reveals a general and physically rich picture of the adsorption process. To facilitate the interpretation of experimental data, we have derived convenient theoretical expressions for the time dependence of surface tension and adsorption in each of the four regimes.

  19. On the estimation of cooperativity in ion channel kinetics: activation free energy and kinetic mechanism of Shaker K+ channel.

    PubMed

    Banerjee, Kinshuk; Das, Biswajit; Gangopadhyay, Gautam

    2013-04-28

    In this paper, we have explored generic criteria of cooperative behavior in ion channel kinetics, treating it on the same footing as multistate receptor-ligand binding within a compact theoretical framework. We have shown that the characterization of cooperativity of ion channels in terms of the Hill coefficient violates the standard Hill criteria defined for allosteric cooperativity of ligand binding. To resolve the issue, an alternative measure of cooperativity is proposed here in terms of the cooperativity index that sets a unified criterion for both systems. More importantly, for ion channels this index can be very useful for describing the cooperative kinetics, as it can be readily determined from the experimentally measured ionic current combined with theoretical modelling. We have analyzed the correlation between the voltage value and slope of the voltage-activation curve at the half-activation point and consequently determined the standard free energy of activation of the ion channel using two well-established mechanisms of cooperativity, namely, the Koshland-Nemethy-Filmer (KNF) and Monod-Wyman-Changeux (MWC) models. Comparison of the theoretical results for both models with appropriate experimental data on mutational perturbation of the Shaker K(+) channel supports the experimental finding that the KNF model is more suitable to describe the cooperative behavior of this class of ion channels, whereas the performance of the MWC model is unsatisfactory. We have also estimated the mechanistic performance through the standard free energy of channel activation for both models and proposed a possible functional disadvantage in the MWC scheme.
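
    For reference, the allosteric measure whose use the authors question is the Hill coefficient, conventionally obtained from the dose-activation (or dose-binding) curve. In standard textbook form (this is not the paper's alternative cooperativity index):

      \theta(L) = \frac{L^{n}}{K^{n} + L^{n}},
      \qquad
      n_{\mathrm{H}} = \left. \frac{\mathrm{d}\,\ln\!\left[\theta/(1-\theta)\right]}{\mathrm{d}\,\ln L} \right|_{\theta = 1/2},

    where θ is the fractional activation (or occupancy), L the ligand concentration (or an analogous activation variable for voltage-gated channels) and K the half-activation constant; n_H = 1 indicates no cooperativity and n_H > 1 positive cooperativity.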

  20. Understanding Alternative Education: A Mixed Methods Examination of Student Experiences

    ERIC Educational Resources Information Center

    Farrelly, Susan Glassett; Daniels, Erika

    2014-01-01

    Alternative education plays a critical role in the opportunity gap that persists in the US public education system. However, there has been little research on alternative schools. Scaffolded by a theoretical framework constructed from critical theory, self-determination theory (SDT) and student voice, this research examined how well students in…

  1. Investigation of numerical simulation on all-optical flip-flop stability maps of 1550nm vertical-cavity surface-emitting laser

    NASA Astrophysics Data System (ADS)

    Li, Jun; Xia, Qing; Wang, Xiaofa

    2017-10-01

    Based on the extended spin-flip model, the all-optical flip-flop stability maps of the 1550 nm vertical-cavity surface-emitting laser have been studied. Theoretical results show excellent agreement with the reported experimental results for the polarization switching current, which equals 1.95 times threshold. Furthermore, the polarization bistable region is wide, extending from 1.05 to 1.95 times threshold. A new method is presented that uses the power difference between the two linear polarization modes as the criterion for the degree of triggering, and stability maps of all-optical flip-flop operation under different injection parameters are obtained. By alternately injecting set and reset pulses with appropriate parameters, mutual conversion switching between the two polarization modes is realized, and the feasibility of all-optical flip-flop operation is checked theoretically. The results provide useful guidance for experimental studies of all-optical buffer technology.

  2. Formulating appropriate statistical hypotheses for treatment comparison in clinical trial design and analysis.

    PubMed

    Huang, Peng; Ou, Ai-hua; Piantadosi, Steven; Tan, Ming

    2014-11-01

    We discuss the problem of properly defining treatment superiority through the specification of hypotheses in clinical trials. The need to precisely define the notion of superiority in a one-sided hypothesis test problem has been well recognized by many authors. Ideally designed null and alternative hypotheses should correspond to a partition of all possible scenarios of underlying true probability models P={P(ω):ω∈Ω} such that the alternative hypothesis Ha={P(ω):ω∈Ωa} can be inferred upon the rejection of the null hypothesis Ho={P(ω):ω∈Ωo}. However, in many cases, tests are carried out and recommendations are made without a precise definition of superiority or a specification of the alternative hypothesis. Moreover, in some applications, the union of probability models specified by the chosen null and alternative hypothesis does not constitute the complete model collection P (i.e., Ho∪Ha is smaller than P). This not only imposes a strong non-validated assumption on the underlying true models, but also leads to different superiority claims depending on which test is used instead of scientific plausibility. Different ways to partition P for testing treatment superiority often have different implications for sample size, power, and significance in both efficacy and comparative-effectiveness trial design. Such differences are often overlooked. We provide a theoretical framework for evaluating the statistical properties of different specifications of superiority in typical hypothesis testing. This can help investigators to select proper hypotheses for treatment comparison in clinical trial design. Copyright © 2014 Elsevier Inc. All rights reserved.
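
    Restating the condition described in the abstract in its own notation (a compact summary only, not an addition to it): the null and alternative hypotheses should partition the full model class,

      H_o = \{P(\omega) : \omega \in \Omega_o\}, \qquad
      H_a = \{P(\omega) : \omega \in \Omega_a\}, \qquad
      \Omega_o \cup \Omega_a = \Omega, \quad \Omega_o \cap \Omega_a = \varnothing,

    so that Ho ∪ Ha = P and rejecting Ho licenses the inference of Ha; when Ho ∪ Ha is a proper subset of P, rejecting Ho does not by itself justify a superiority claim.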

  3. Constraining free riding in public goods games: designated solitary punishers can sustain human cooperation

    PubMed Central

    O'Gorman, Rick; Henrich, Joseph; Van Vugt, Mark

    2008-01-01

    Much of human cooperation remains an evolutionary riddle. Unlike other animals, people frequently cooperate with non-relatives in large groups. Evolutionary models of large-scale cooperation require not just incentives for cooperation, but also a credible disincentive for free riding. Various theoretical solutions have been proposed and experimentally explored, including reputation monitoring and diffuse punishment. Here, we empirically examine an alternative theoretical proposal: responsibility for punishment can be borne by one specific individual. This experiment shows that allowing a single individual to punish increases cooperation to the same level as allowing each group member to punish and results in greater group profits. These results suggest a potential key function of leadership in human groups and provides further evidence supporting that humans will readily and knowingly behave altruistically. PMID:18812292

  4. Virtual-pulse time integral methodology: A new explicit approach for computational dynamics - Theoretical developments for general nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong

    1993-01-01

    The present paper describes a new explicit virtual-pulse time integral methodology for nonlinear structural dynamics problems. The purpose of the paper is to provide the theoretical basis of the methodology and to demonstrate the applicability of the proposed formulations to nonlinear dynamic structures. Unlike existing numerical methods such as direct time integration or mode superposition techniques, the proposed methodology offers a new perspective on methodology development and possesses several unique and attractive computational characteristics. The methodology is tested and compared with the implicit Newmark method (trapezoidal rule) using nonlinear softening and hardening spring dynamic models. The numerical results indicate that the proposed explicit virtual-pulse time integral methodology is an excellent alternative for solving general nonlinear dynamic problems.

  5. Statistical characterization of spatial patterns of rainfall cells in extratropical cyclones

    NASA Astrophysics Data System (ADS)

    Bacchi, Baldassare; Ranzi, Roberto; Borga, Marco

    1996-11-01

    The assumption of a particular type of distribution of rainfall cells in space is needed for the formulation of several space-time rainfall models. In this study, weather radar-derived rain rate maps are employed to evaluate different types of spatial organization of rainfall cells in storms through the use of distance functions and second-moment measures. In particular the spatial point patterns of the local maxima of rainfall intensity are compared to a completely spatially random (CSR) point process by applying an objective distance measure. For all the analyzed radar maps the CSR assumption is rejected, indicating that at the resolution of the observation considered, rainfall cells are clustered. Therefore a theoretical framework for evaluating and fitting alternative models to the CSR is needed. This paper shows how the "reduced second-moment measure" of the point pattern can be employed to estimate the parameters of a Neyman-Scott model and to evaluate the degree of adequacy to the experimental data. Some limitations of this theoretical framework, and also its effectiveness, in comparison to the use of scaling functions, are discussed.
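
    A minimal version of the comparison described here is to estimate Ripley's K function (the reduced second-moment measure) from the point pattern of rainfall-cell maxima and set it against the K function of a CSR process and of a Neyman-Scott (Thomas) cluster process. The sketch below simulates a Thomas process with made-up parameters and omits edge correction, so it is illustrative only and not the authors' estimation procedure.

      import numpy as np

      def ripley_k(points, area, radii):
          """Naive Ripley's K estimate (no edge correction) for a 2-D point pattern."""
          n = len(points)
          lam = n / area
          d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
          np.fill_diagonal(d, np.inf)
          return np.array([(d < r).sum() / (n * lam) for r in radii])

      def k_csr(radii):
          return np.pi * radii**2                      # complete spatial randomness

      def k_thomas(radii, kappa, sigma):
          """Closed-form K for a Thomas (Neyman-Scott) cluster process:
          parent intensity kappa, Gaussian offspring dispersion sigma."""
          return np.pi * radii**2 + (1 - np.exp(-radii**2 / (4 * sigma**2))) / kappa

      # Simulate a Thomas process on the unit square (parameters are illustrative)
      rng = np.random.default_rng(2)
      kappa, mu, sigma = 20, 10, 0.02                  # parents per unit area, offspring per parent, spread
      n_parents = max(rng.poisson(kappa), 1)
      parents = rng.uniform(0, 1, size=(n_parents, 2))
      pts = np.vstack([p + sigma * rng.standard_normal((rng.poisson(mu), 2)) for p in parents])
      pts = pts[(pts >= 0).all(1) & (pts <= 1).all(1)]

      radii = np.linspace(0.01, 0.1, 10)
      for r, k_hat, k0, kt in zip(radii, ripley_k(pts, 1.0, radii), k_csr(radii),
                                  k_thomas(radii, kappa, sigma)):
          print(f"r={r:.2f}  K_hat={k_hat:.4f}  K_CSR={k0:.4f}  K_Thomas={kt:.4f}")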

  6. Nonlocal transport in the presence of transport barriers

    NASA Astrophysics Data System (ADS)

    Del-Castillo-Negrete, D.

    2013-10-01

    There is experimental, numerical, and theoretical evidence that transport in plasmas can, under certain circumstances, depart from the standard local, diffusive description. Examples include fast pulse propagation phenomena in perturbative experiments, non-diffusive scaling in L-mode plasmas, and non-Gaussian statistics of fluctuations. From the theoretical perspective, non-diffusive transport descriptions follow from the relaxation of the restrictive assumptions (locality, scale separation, and Gaussian/Markovian statistics) at the foundation of diffusive models. We discuss an alternative class of models able to capture some of the observed non-diffusive transport phenomenology. The models are based on a class of nonlocal, integro-differential operators that provide a unifying framework to describe non-Fickian scale-free transport and non-Markovian (memory) effects. We study the interplay between nonlocality and internal transport barriers (ITBs) in perturbative transport, including cold edge pulses and power modulation. Of particular interest is the nonlocal ``tunnelling'' of perturbations through ITBs. Also, flux-gradient diagrams are discussed as diagnostics to detect nonlocal transport processes in numerical simulations and experiments. Work supported by the US Department of Energy.

  7. The collaborative model of fieldwork education: a blueprint for group supervision of students.

    PubMed

    Hanson, Debra J; DeIuliis, Elizabeth D

    2015-04-01

    Historically, occupational therapists have used a traditional one-to-one approach to supervision on fieldwork. Due to the impact of managed care on health-care delivery systems, a dramatic increase in the number of students needing fieldwork placement, and the advantages of group learning, the collaborative supervision model has evolved as a strong alternative to an apprenticeship supervision approach. This article builds on the available research to address barriers to model use, applying theoretical foundations of collaborative supervision to practical considerations for academic fieldwork coordinators and fieldwork educators as they prepare for participation in group supervision of occupational therapy and occupational therapy assistant students on level II fieldwork.

  8. Origin of the moon - The collision hypothesis

    NASA Technical Reports Server (NTRS)

    Stevenson, D. J.

    1987-01-01

    Theoretical models of lunar origin involving one or more collisions between the earth and other large sun-orbiting bodies are examined in a critical review. Ten basic propositions of the collision hypothesis (CH) are listed; observational data on mass and angular momentum, bulk chemistry, volatile depletion, trace elements, primordial high temperatures, and orbital evolution are summarized; and the basic tenets of alternative models (fission, capture, and coformation) are reviewed. Consideration is given to the thermodynamics of large impacts, rheological and dynamical problems, numerical simulations based on the CH, disk evolution models, and the chemical implications of the CH. It is concluded that the sound arguments and evidence supporting the CH are not (yet) sufficient to rule out other hypotheses.

  9. Modeling BAS Dysregulation in Bipolar Disorder.

    PubMed

    Hamaker, Ellen L; Grasman, Raoul P P P; Kamphuis, Jan Henk

    2016-08-01

    Time series analysis is a technique that can be used to analyze the data from a single subject and has great potential to investigate clinically relevant processes like affect regulation. This article uses time series models to investigate the assumed dysregulation of affect that is associated with bipolar disorder. By formulating a number of alternative models that capture different kinds of theoretically predicted dysregulation, and by comparing these in both bipolar patients and controls, we aim to illustrate the heuristic potential this method of analysis has for clinical psychology. We argue that not only can time series analysis elucidate specific maladaptive dynamics associated with psychopathology, but it may also be clinically applied in symptom monitoring and the evaluation of therapeutic interventions.

  10. A channel dynamics model for real-time flood forecasting

    USGS Publications Warehouse

    Hoos, Anne B.; Koussis, Antonis D.; Beale, Guy O.

    1989-01-01

    A new channel dynamics scheme (alternative system predictor in real time (ASPIRE)), designed specifically for real-time river flow forecasting, is introduced to reduce uncertainty in the forecast. ASPIRE is a storage routing model that limits the influence of catchment model forecast errors to the downstream station closest to the catchment. Comparisons with the Muskingum routing scheme in field tests suggest that the ASPIRE scheme can provide more accurate forecasts, probably because discharge observations are used to a maximum advantage and routing reaches (and model errors in each reach) are uncoupled. Using ASPIRE in conjunction with the Kalman filter did not improve forecast accuracy relative to a deterministic updating procedure. Theoretical analysis suggests that this is due to a large process noise to measurement noise ratio.

  11. Self-generated visual imagery alters the mere exposure effect.

    PubMed

    Craver-Lemley, Catherine; Bornstein, Robert F

    2006-12-01

    To determine whether self-generated visual imagery alters liking ratings of merely exposed stimuli, 79 college students were repeatedly exposed to the ambiguous duck-rabbit figure. Half the participants were told to picture the image as a duck and half to picture it as a rabbit. When participants made liking ratings of both disambiguated versions of the figure, they rated the version consistent with earlier encoding more positively than the alternate version. Implications of these findings for theoretical models of the exposure effect are discussed.

  12. Bargaining Agents in Wireless Contexts: An Alternating-Offers Protocol for Multi-issue Bilateral Negotiation in Mobile Marketplaces

    NASA Astrophysics Data System (ADS)

    Ragone, Azzurra; Ruta, Michele; di Sciascio, Eugenio; Donini, Francesco M.

    We present an approach to multi-issue bilateral negotiation for mobile commerce scenarios. The negotiation mechanism has been integrated in a semantic-based application layer enhancing both RFID and Bluetooth wireless standards. OWL DL has been used to model advertisements and relationships among issues within a shared common ontology. Finally, non-standard inference services integrated with utility theory help in finding suitable agreements. We illustrate and motivate the provided theoretical framework in a wireless commerce case study.

  13. Electropyroelectric technique: A methodology free of fitting procedures for thermal effusivity determination in liquids.

    PubMed

    Ivanov, R; Marin, E; Villa, J; Gonzalez, E; Rodríguez, C I; Olvera, J E

    2015-06-01

    This paper describes an alternative methodology to determine the thermal effusivity of a liquid sample using the recently proposed electropyroelectric technique, without fitting the experimental data with a theoretical model and without having to know the pyroelectric-sensor-related parameters, as in most previously reported approaches. The method is not absolute, because a reference liquid with known thermal properties is needed. Experiments have been performed that demonstrate the high reliability and accuracy of the method, with measurement uncertainties smaller than 3%.

  14. On the specification of structural equation models for ecological systems

    USGS Publications Warehouse

    Grace, J.B.; Anderson, T.M.; Olff, H.; Scheiner, S.M.

    2010-01-01

    The use of structural equation modeling (SEM) is often motivated by its utility for investigating complex networks of relationships, but also by its promise as a means of representing theoretical concepts using latent variables. In this paper, we discuss characteristics of ecological theory and some of the challenges for proper specification of theoretical ideas in structural equation models (SE models). In our presentation, we describe some of the requirements for classical latent variable models in which observed variables (indicators) are interpreted as the effects of underlying causes. We also describe alternative model specifications in which indicators are interpreted as having causal influences on the theoretical concepts. We suggest that this latter nonclassical specification (which involves another variable type, the composite) will often be appropriate for ecological studies because of the multifaceted nature of our theoretical concepts. In this paper, we employ meta-models to aid the translation of theory into SE models and also to facilitate our ability to relate results back to our theories. We demonstrate our approach by showing how a synthetic theory of grassland biodiversity can be evaluated using SEM and data from a coastal grassland. In this example, the theory focuses on the responses of species richness to abiotic stress and disturbance, both directly and through intervening effects on community biomass. Models examined include both those based on classical forms (where each concept is represented using a single latent variable) and also ones in which the concepts are recognized to be multifaceted and modeled as such. To address the challenge of matching SE models with the conceptual level of our theory, two approaches are illustrated, compositing and aggregation. Both approaches are shown to have merits, with the former being preferable for cases where the multiple facets of a concept have widely differing effects in the system and the latter being preferable where facets act together consistently when influencing other parts of the system. Because ecological theory characteristically deals with concepts that are multifaceted, we expect the methods presented in this paper will be useful for ecologists wishing to use SEM. © 2010 by the Ecological Society of America.

  15. Distribution of Base Pair Alternations in a Periodic DNA Chain: Application of Pólya Counting to a Physical System

    NASA Astrophysics Data System (ADS)

    Hillebrand, Malcolm; Paterson-Jones, Guy; Kalosakas, George; Skokos, Charalampos

    2018-03-01

    In modeling DNA chains, the number of alternations between Adenine-Thymine (AT) and Guanine-Cytosine (GC) base pairs can be considered as a measure of the heterogeneity of the chain, which in turn could affect its dynamics. A probability distribution function of the number of these alternations is derived for circular or periodic DNA. Since there are several symmetries to account for in the periodic chain, necklace counting methods are used. In particular, Polya's Enumeration Theorem is extended for the case of a group action that preserves partitioned necklaces. This, along with the treatment of generating functions as formal power series, allows for the direct calculation of the number of possible necklaces with a given number of AT base pairs, GC base pairs and alternations. The theoretically obtained probability distribution functions of the number of alternations are accurately reproduced by Monte Carlo simulations and fitted by Gaussians. The effect of the number of base pairs on the characteristics of these distributions is also discussed, as well as the effect of the ratios of the numbers of AT and GC base pairs.
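
    As a quick numerical check of the kind the authors describe, here is a minimal Monte Carlo sketch (illustrative only; function and parameter names are not from the paper) that estimates the distribution of alternations for a circular chain of fixed AT/GC composition:

    ```python
    import random
    from collections import Counter

    def count_alternations(chain):
        """Count positions where adjacent base pairs differ, on a circular chain."""
        n = len(chain)
        return sum(chain[i] != chain[(i + 1) % n] for i in range(n))

    def alternation_distribution(n_at, n_gc, trials=100_000, seed=0):
        """Monte Carlo estimate of the probability distribution of AT/GC
        alternations in a random circular chain with fixed composition."""
        rng = random.Random(seed)
        base = ['AT'] * n_at + ['GC'] * n_gc
        counts = Counter()
        for _ in range(trials):
            rng.shuffle(base)
            counts[count_alternations(base)] += 1
        return {k: v / trials for k, v in sorted(counts.items())}

    print(alternation_distribution(n_at=10, n_gc=10, trials=20_000))
    ```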

  16. T-matrix modeling of linear depolarization by morphologically complex soot and soot-containing aerosols

    NASA Astrophysics Data System (ADS)

    Mishchenko, Michael I.; Liu, Li; Mackowski, Daniel W.

    2013-07-01

    We use state-of-the-art public-domain Fortran codes based on the T-matrix method to calculate orientation and ensemble averaged scattering matrix elements for a variety of morphologically complex black carbon (BC) and BC-containing aerosol particles, with a special emphasis on the linear depolarization ratio (LDR). We explain theoretically the quasi-Rayleigh LDR peak at side-scattering angles typical of low-density soot fractals and conclude that the measurement of this feature enables one to evaluate the compactness state of BC clusters and trace the evolution of low-density fluffy fractals into densely packed aggregates. We show that small backscattering LDRs measured with ground-based, airborne, and spaceborne lidars for fresh smoke generally agree with the values predicted theoretically for fluffy BC fractals and densely packed near-spheroidal BC aggregates. To reproduce higher lidar LDRs observed for aged smoke, one needs alternative particle models such as shape mixtures of BC spheroids or cylinders.

  17. T-Matrix Modeling of Linear Depolarization by Morphologically Complex Soot and Soot-Containing Aerosols

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Liu, Li; Mackowski, Daniel W.

    2013-01-01

    We use state-of-the-art public-domain Fortran codes based on the T-matrix method to calculate orientation and ensemble averaged scattering matrix elements for a variety of morphologically complex black carbon (BC) and BC-containing aerosol particles, with a special emphasis on the linear depolarization ratio (LDR). We explain theoretically the quasi-Rayleigh LDR peak at side-scattering angles typical of low-density soot fractals and conclude that the measurement of this feature enables one to evaluate the compactness state of BC clusters and trace the evolution of low-density fluffy fractals into densely packed aggregates. We show that small backscattering LDRs measured with ground-based, airborne, and spaceborne lidars for fresh smoke generally agree with the values predicted theoretically for fluffy BC fractals and densely packed near-spheroidal BC aggregates. To reproduce higher lidar LDRs observed for aged smoke, one needs alternative particle models such as shape mixtures of BC spheroids or cylinders.

  18. [A multi-measure analysis of the similarity, attraction, and compromise effects in multi-attribute decision making].

    PubMed

    Tsuzuki, Takashi; Matsui, Hiroshi; Kikuchi, Manabu

    2012-12-01

    In multi-attribute decision making, the similarity, attraction, and compromise effects warrant specific investigation as they cause violations of principles in rational choice. In order to investigate these three effects simultaneously, we assigned 145 undergraduates to three context effect conditions. We requested them to solve the same 20 hypothetical purchase problems, each of which had three alternatives described along two attributes. We measured their choices, confidence ratings, and response times. We found that manipulating the third alternative had significant context effects for choice proportions and confidence ratings in all three conditions. Furthermore, the attraction effect was the most prominent with regard to choice proportions. In the compromise effect condition, although the choice proportion of the third alternative was high, the confidence rating was low and the response time was long. These results indicate that the relationship between choice proportions and confidence ratings requires further theoretical investigation. They also suggest that a combination of experimental and modeling studies is imperative to reveal the mechanisms underlying the context effects in multi-attribute, multi-alternative decision making.

  19. Evaluating scaling models in biology using hierarchical Bayesian approaches

    PubMed Central

    Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S

    2009-01-01

    Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621
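
    For orientation, here is a minimal sketch (not the authors' hierarchical Bayesian code) of the basic quantity under study: an allometric exponent estimated from a single log-log regression. The hierarchical models in the paper essentially fit many such relationships simultaneously while sharing information across species.

    ```python
    import numpy as np

    def scaling_exponent(mass, trait):
        """Estimate a and b in trait = a * mass**b by least squares on the log-log scale."""
        slope, intercept = np.polyfit(np.log(mass), np.log(trait), 1)
        return slope, np.exp(intercept)

    # Hypothetical data generated with exponent 0.75 (e.g. a 3/4-power prediction)
    rng = np.random.default_rng(0)
    mass = rng.uniform(1.0, 100.0, 200)
    trait = 2.0 * mass**0.75 * rng.lognormal(0.0, 0.1, 200)
    b, a = scaling_exponent(mass, trait)
    print(f"estimated exponent b = {b:.3f}, prefactor a = {a:.3f}")
    ```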

  20. The effect of energy reserves on social foraging: hungry sparrows scrounge more.

    PubMed

    Lendvai, Adám Z; Barta, Zoltán; Liker, András; Bókony, Veronika

    2004-12-07

    Animals often use alternative strategies when they compete for resources, but it is unclear in most cases what factors determine the actual tactic followed by individuals. Although recent models suggest that the internal state of animals may be particularly important in tactic choice, the effects of state variables on the use of alternative behavioural forms have rarely been demonstrated. In this study, using experimental wind exposure to increase overnight energy expenditure, we show that flock-feeding house sparrows (Passer domesticus) with lowered energy reserves increase their use of scrounging (exploiting others' food findings) during their first feed of the day. This result is in accordance with the prediction of a state-dependent model of use of social foraging tactics. We also show that scrounging provides less variable feeding rates and patch finding times than the alternative tactic. These latter results support the theoretical assumption that scrounging is a risk-averse tactic, i.e. it reduces the risk of immediate starvation. As the level of energy reserves predicts the use of social foraging tactics, we propose that selection should favour individuals that monitor the internal state of flock mates and use this information to adjust their own tactic choice.

  1. Alternatives to Goodman and Kruskal's Lambda.

    ERIC Educational Resources Information Center

    Stavig, Gordon R.

    1979-01-01

    Lambda and kappa coefficients of nominal scale association are developed for research hypotheses that involve predictions of modality, agreement, or some theoretically specified configuration. The proposed new coefficient is offered as an alternative to Goodman and Kruskal's lambda. (Author/CTM)
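
    For concreteness, here is a minimal sketch (illustrative, not from the article) of the asymmetric Goodman and Kruskal lambda to which the proposed coefficients are offered as an alternative:

    ```python
    import numpy as np

    def goodman_kruskal_lambda(table):
        """Asymmetric Goodman-Kruskal lambda: proportional reduction in error
        when predicting the column variable from the row variable.
        `table` is a 2-D array of contingency-table counts."""
        table = np.asarray(table, dtype=float)
        n = table.sum()
        col_totals = table.sum(axis=0)
        errors_without = n - col_totals.max()        # always guess the modal column
        errors_with = n - table.max(axis=1).sum()    # guess the modal column within each row
        return (errors_without - errors_with) / errors_without

    # Hypothetical 2x3 contingency table of counts
    print(goodman_kruskal_lambda([[30, 10, 5],
                                  [5, 20, 30]]))    # ~0.385
    ```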

  2. Information-Theoretic Uncertainty of SCFG-Modeled Folding Space of The Non-coding RNA

    PubMed Central

    Manzourolajdad, Amirhossein; Wang, Yingfeng; Shaw, Timothy I.; Malmberg, Russell L.

    2012-01-01

    RNA secondary structure ensembles define probability distributions for alternative equilibrium secondary structures of an RNA sequence. Shannon’s Entropy is a measure for the amount of diversity present in any ensemble. In this work, Shannon’s entropy of the SCFG ensemble on an RNA sequence is derived and implemented in polynomial time for both structurally ambiguous and unambiguous grammars. Micro RNA sequences generally have low folding entropy, as previously discovered. Surprisingly, signs of significantly high folding entropy were observed in certain ncRNA families. More effective models coupled with targeted randomization tests can lead to a better insight into folding features of these families. PMID:23160142
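
    The entropy in question is the standard Shannon measure applied to the structure ensemble; the paper's contribution is computing it in polynomial time directly from the SCFG rather than from an explicit enumeration. A minimal sketch of the definition on a small, explicitly enumerated ensemble (illustrative only):

    ```python
    import math

    def ensemble_entropy(probabilities):
        """Shannon entropy (in bits) of a probability distribution over
        alternative secondary structures of one RNA sequence."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0.0)

    # Hypothetical folding ensemble: one dominant structure -> low entropy
    print(ensemble_entropy([0.85, 0.10, 0.05]))    # ~0.75 bits
    # More uniform ensemble -> higher entropy
    print(ensemble_entropy([0.4, 0.3, 0.2, 0.1]))  # ~1.85 bits
    ```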

  3. Bounded energy states in homogeneous turbulent shear flow - An alternative view

    NASA Technical Reports Server (NTRS)

    Bernard, P. S.; Speziale, C. G.

    1992-01-01

    The equilibrium structure of homogeneous turbulent shear flow is investigated from a theoretical standpoint. Existing turbulence models, in apparent agreement with physical and numerical experiments, predict an unbounded exponential time growth of the turbulent kinetic energy and dissipation rate; only the anisotropy tensor and turbulent time scale reach a structural equilibrium. It is shown that if a residual vortex stretching term is maintained in the dissipation rate transport equation, then there can exist equilibrium solutions, with bounded energy states, where the turbulence production is balanced by its dissipation. Illustrative calculations are presented for a k-epsilon model modified to account for net vortex stretching.

  4. It’s The Information!

    PubMed Central

    Ward, Ryan D.; Gallistel, C.R.; Balsam, Peter D

    2013-01-01

    Learning in conditioning protocols has long been thought to depend on temporal contiguity between the conditioned stimulus and the unconditioned stimulus. This conceptualization has led to a preponderance of associative models of conditioning. We suggest that trial-based associative models that posit contiguity as the primary principle underlying learning are flawed, and provide a brief review of an alternative, information theoretic approach to conditioning. The information that a CS conveys about the timing of the next US can be derived from the temporal parameters of a conditioning protocol. According to this view, a CS will support conditioned responding if, and only if, it reduces uncertainty about the timing of the next US. PMID:23384660

  5. A Theoretical Framework for Studying Adolescent Contraceptive Use.

    ERIC Educational Resources Information Center

    Urberg, Kathryn A.

    1982-01-01

    Presents a theoretical framework for viewing adolescent contraceptive usage. The problem-solving process is used for developmentally examining the competencies that must be present for effective contraceptive use, including: problem recognition, motivation, generation of alternatives, decision making and implementation. Each aspect is discussed…

  6. Using Epidemiological Principles to Explain Fungicide Resistance Management Tactics: Why do Mixtures Outperform Alternations?

    PubMed

    Elderfield, James A D; Lopez-Ruiz, Francisco J; van den Bosch, Frank; Cunniffe, Nik J

    2018-07-01

    Whether fungicide resistance management is optimized by spraying chemicals with different modes of action as a mixture (i.e., simultaneously) or in alternation (i.e., sequentially) has been studied by experimenters and modelers for decades. However, results have been inconclusive. We use previously parameterized and validated mathematical models of wheat Septoria leaf blotch and grapevine powdery mildew to test which tactic provides better resistance management, using the total yield before resistance causes disease control to become economically ineffective ("lifetime yield") to measure effectiveness. We focus on tactics involving the combination of a low-risk and a high-risk fungicide, and the case in which resistance to the high-risk chemical is complete (i.e., in which there is no partial resistance). Lifetime yield is then optimized by spraying as much low-risk fungicide as is permitted, combined with slightly more high-risk fungicide than needed for acceptable initial disease control, applying these fungicides as a mixture. That mixture rather than alternation gives better performance is invariant to model parameterization and structure, as well as the pathosystem in question. However, if comparison focuses on other metrics, e.g., lifetime yield at full label dose, either mixture or alternation can be optimal. Our work shows how epidemiological principles can explain the evolution of fungicide resistance, and also highlights a theoretical framework to address the question of whether mixture or alternation provides better resistance management. It also demonstrates that precisely how spray tactics are compared must be given careful consideration. [Formula: see text] Copyright © 2018 The Author(s). This is an open access article distributed under the CC BY 4.0 International license .

  7. Designing websites for persons with cognitive deficits: Design and usability of a psychoeducational intervention for persons with severe mental illness.

    PubMed Central

    Rotondi, Armando J.; Sinkule, Jennifer; Haas, Gretchen L.; Spring, Michael B.; Litschge, Christine M.; Newhill, Christina E.; Ganguli, Rohan; Anderson, Carol M.

    2013-01-01

    The purpose of this study was to develop an understanding of the design elements that influence the ability of persons with severe mental illness (SMI) and cognitive deficits to use a website, and to use this knowledge to design a web-based telehealth application to deliver a psychoeducation program to persons with schizophrenia and their families. Usability testing was conducted with 98 persons with SMI. First, individual website design elements were tested. Based on these results, theoretical website design models were used to create several alternative websites. These designs were tested for their ability to facilitate use by persons with SMI. The final website design is presented. The results indicate that commonly prescribed design models and guidelines produce websites that are poorly suited and confusing to persons with SMI. Our findings suggest an alternative model that should be considered when designing websites and other telehealth interventions for this population. Implications for future studies addressing the characteristics of accessible designs for persons with SMI and cognitive deficits are discussed. PMID:26321884

  8. Modeling Microscale Electro-thermally Induced Vortex Flows

    NASA Astrophysics Data System (ADS)

    Paul, Rajorshi; Tang, Tian; Kumar, Aloke

    2017-11-01

    In the presence of a high-frequency alternating electric field and a laser-induced heat source, vortex flows are generated inside micro-channels. Such electro-thermally influenced micro-vortices can be used for manipulating nano-particles, programming colloidal assemblies, and trapping biological cells, as well as for fabricating designed bacterial biofilms. In this study, a theoretical model is developed for microscale electro-thermally induced vortex flows with multiple heat sources. Semi-analytical solutions are obtained, using Hankel transformation and linear superposition, for the temperature, pressure and velocity fields. The effects of material properties such as electrical and thermal conductivities, as well as experimental parameters such as the frequency and strength of the alternating electric field and the intensity and heating profile of the laser source, are systematically investigated. Resolution for a pair of laser sources is determined by analyzing the strength of the micro-vortices under the influence of two heating sources. Results from this work will provide useful insights into the design of efficient optical tweezers and Rapid Electrokinetic Patterning techniques.

  9. Biophysical comparison of ATP synthesis mechanisms shows a kinetic advantage for the rotary process.

    PubMed

    Anandakrishnan, Ramu; Zhang, Zining; Donovan-Maiye, Rory; Zuckerman, Daniel M

    2016-10-04

    The ATP synthase (F-ATPase) is a highly complex rotary machine that synthesizes ATP, powered by a proton electrochemical gradient. Why did evolution select such an elaborate mechanism over arguably simpler alternating-access processes that can be reversed to perform ATP synthesis? We studied a systematic enumeration of alternative mechanisms, using numerical and theoretical means. When the alternative models are optimized subject to fundamental thermodynamic constraints, they fail to match the kinetic ability of the rotary mechanism over a wide range of conditions, particularly under low-energy conditions. We used a physically interpretable, closed-form solution for the steady-state rate for an arbitrary chemical cycle, which clarifies kinetic effects of complex free-energy landscapes. Our analysis also yields insights into the debated "kinetic equivalence" of ATP synthesis driven by transmembrane pH and potential difference. Overall, our study suggests that the complexity of the F-ATPase may have resulted from positive selection for its kinetic advantage.

  10. Dynamic Modeling and Simulation of a Rotational Inverted Pendulum

    NASA Astrophysics Data System (ADS)

    Duart, J. L.; Montero, B.; Ospina, P. A.; González, E.

    2017-01-01

    This paper presents an alternative approach to the dynamic modeling of a rotational inverted pendulum, in which the Euler-Lagrange formalism of classical mechanics is used to derive the equations of motion that describe the model. A basic model of the system was also designed in SolidWorks software, which, given the material and dimensions of the model, provides the physical parameters needed for modeling. In order to verify the theoretical results, the solutions obtained by SimMechanics-Matlab simulation were contrasted with the Euler-Lagrange equation system, solved with the ODE23tb method included in the Matlab libraries for equation systems of the type and order obtained. The article also includes an analysis of the pendulum trajectory using a phase-space diagram, which allows the identification of stable and unstable regions of the system.
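
    As a rough illustration of the workflow (not the paper's rotational-pendulum model or code), here is a single-pendulum Euler-Lagrange equation integrated as a first-order system, in the same way the full equation set would be handed to a stiff solver such as ode23tb:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Minimal illustration: theta'' = -(g/l) sin(theta) - c*theta',
    # rewritten as a first-order system and integrated numerically.
    g, l, c = 9.81, 0.5, 0.05

    def rhs(t, y):
        theta, omega = y
        return [omega, -(g / l) * np.sin(theta) - c * omega]

    sol = solve_ivp(rhs, (0.0, 10.0), [np.pi - 0.1, 0.0], max_step=0.01)
    theta, omega = sol.y  # phase-space trajectory (theta vs. omega)
    print(theta[-1], omega[-1])
    ```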

  11. Individual-based modelling and control of bovine brucellosis

    NASA Astrophysics Data System (ADS)

    Nepomuceno, Erivelton G.; Barbosa, Alípio M.; Silva, Marcos X.; Perc, Matjaž

    2018-05-01

    We present a theoretical approach to control bovine brucellosis. We have used individual-based modelling, which is a network-type alternative to compartmental models. Our model thus considers heterogeneous populations, and spatial aspects such as migration among herds and control actions described as pulse interventions are also easily implemented. We show that individual-based modelling reproduces the mean field behaviour of an equivalent compartmental model. Details of this process, as well as flowcharts, are provided to facilitate the reproduction of the presented results. We further investigate three numerical examples using real parameters of herds in the São Paulo state of Brazil, in scenarios which explore eradication, continuous and pulsed vaccination and meta-population effects. The obtained results are in good agreement with the expected behaviour of this disease, which ultimately showcases the effectiveness of our theory.
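
    A minimal sketch of the individual-based approach follows (illustrative only; the states, rates, and pulse-vaccination rule are hypothetical and are not the parameters of the Brazilian herds studied):

    ```python
    import random

    # Each animal carries an explicit state; infection spreads within a herd
    # and a pulse vaccination is applied at fixed intervals.
    S, I, R = "S", "I", "R"

    def step(herd, beta=0.05, gamma=0.02, rng=random):
        infected = sum(a == I for a in herd)
        for k, state in enumerate(herd):
            if state == S and rng.random() < beta * infected / len(herd):
                herd[k] = I
            elif state == I and rng.random() < gamma:
                herd[k] = R

    def simulate(n=200, steps=365, vaccinate_every=90, coverage=0.3, seed=1):
        rng = random.Random(seed)
        herd = [I if k < 5 else S for k in range(n)]
        history = []
        for t in range(steps):
            if vaccinate_every and t % vaccinate_every == 0:
                for k, state in enumerate(herd):
                    if state == S and rng.random() < coverage:
                        herd[k] = R   # pulse intervention: move part of S to R
            step(herd, rng=rng)
            history.append(sum(a == I for a in herd))
        return history

    print(max(simulate()))  # peak number of infected animals
    ```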

  12. A computational model of self-efficacy's various effects on performance: Moving the debate forward.

    PubMed

    Vancouver, Jeffrey B; Purl, Justin D

    2017-04-01

    Self-efficacy, which is one's belief in one's capacity, has been found to both positively and negatively influence effort and performance. The reasons for these different effects have been a major topic of debate among social-cognitive and perceptual control theorists. In particular, the finding of various self-efficacy effects has been motivated by a perceptual control theory view of self-regulation that social-cognitive theorists question. To provide more clarity to the theoretical arguments, a computational model of the multiple processes presumed to create the positive, negative, and null effects for self-efficacy is presented. Building on an existing computational model of goal choice that produces a positive effect for self-efficacy, the current article adds a symbolic processing structure used during goal striving that explains the negative self-efficacy effect observed in recent studies. Moreover, the multiple processes, operating together, allow the model to recreate the various effects found in a published study of feedback ambiguity's moderating role on the self-efficacy to performance relationship (Schmidt & DeShon, 2010). Discussion focuses on the implications of the model for the self-efficacy debate, alternative computational models, the overlap between control theory and social-cognitive theory explanations, the value of using computational models for resolving theoretical disputes, and future research and directions the model inspires. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Quantitative evaluation of thermodynamic parameters of Li-Sn alloys related to their use in fusion reactor

    NASA Astrophysics Data System (ADS)

    Krasin, V. P.; Soyustova, S. I.

    2018-07-01

    Along with other liquid metals, liquid lithium-tin alloys can be considered an alternative to the use of solid plasma-facing components in a future fusion reactor. Therefore, parameters characterizing both the ability to retain hydrogen isotopes and those that determine the extraction of tritium from a liquid metal can be of particular importance. Theoretical correlations based on the coordination cluster model have been used to obtain Sieverts' constants for solutions of hydrogen in liquid Li-Sn alloys. The results of theoretical computations are compared with the previously published experimental values for two alloys of the Li-Sn system. The Butler equation, in combination with the equations describing the thermodynamic potentials of a binary solution, is used to calculate the surface composition and surface tension of liquid Li-Sn alloys.
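
    For reference, the Sieverts'-law relation that the reported constants quantify, in its standard form (not copied from the paper):

    ```latex
    % Sieverts' law for atomic dissolution of a diatomic gas (e.g. hydrogen isotopes)
    % in a liquid metal: the equilibrium concentration scales with the square root of
    % the gas partial pressure, with K_S the Sieverts' constant of the alloy.
    c_{\mathrm{H}} = K_S \, \sqrt{p_{\mathrm{H}_2}}
    ```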

  14. Numerical modeling of two-photon focal modulation microscopy with a sinusoidal phase filter.

    PubMed

    Chen, Rui; Shen, Shuhao; Chen, Nanguang

    2018-05-01

    A spatiotemporal phase modulator (STPM) is theoretically investigated using the vectorial diffraction theory. The STPM is equivalent to a time-dependent phase-only pupil filter that alternates between a homogeneous filter and a stripe-shaped filter with a sinusoidal phase distribution. It is found that two-photon focal modulation microscopy (TPFMM) using this STPM can significantly suppress the background contribution from out-of-focus ballistic excitation and achieve almost the same resolution as two-photon microscopy. The modulation depth is also evaluated and a compromise exists between the signal-to-background ratio and signal-to-noise ratio. The theoretical investigations provide important insights into future implementations of TPFMM and its potential to further extend the penetration depth of nonlinear microscopy in imaging multiple-scattering biological tissues. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  15. How Do Internal and External CSR Affect Employees' Organizational Identification? A Perspective from the Group Engagement Model

    PubMed Central

    Hameed, Imran; Riaz, Zahid; Arain, Ghulam A.; Farooq, Omer

    2016-01-01

    The literature examines the impact of firms' corporate social responsibility (CSR) activities on employees' organizational identification without considering that such activities tend to have different targets. This study explores how perceived external CSR (efforts directed toward external stakeholders) and perceived internal CSR (efforts directed toward employees) activities influence employees' organizational identification. In so doing, it examines the alternative underlying mechanisms through which perceived external and internal CSR activities build employees' identification. Applying the taxonomy prescribed by the group engagement model, the study argues that the effects of perceived external and internal CSR flow through two competing mechanisms: perceived external prestige and perceived internal respect, respectively. Further, it is suggested that calling orientation (how employees see their work contributions) moderates the effects induced by these alternative forms of CSR. The model draws on survey data collected from a sample of 414 employees across five large multinationals in Pakistan. The results obtained using structural equation modeling support these hypotheses, reinforcing the notion that internal and external CSR operate through different mediating mechanisms and more interestingly employees' calling orientation moderates these relationships to a significant degree. Theoretical contributions and practical implications of results are discussed in detail. PMID:27303345

  16. How Do Internal and External CSR Affect Employees' Organizational Identification? A Perspective from the Group Engagement Model.

    PubMed

    Hameed, Imran; Riaz, Zahid; Arain, Ghulam A; Farooq, Omer

    2016-01-01

    The literature examines the impact of firms' corporate social responsibility (CSR) activities on employees' organizational identification without considering that such activities tend to have different targets. This study explores how perceived external CSR (efforts directed toward external stakeholders) and perceived internal CSR (efforts directed toward employees) activities influence employees' organizational identification. In so doing, it examines the alternative underlying mechanisms through which perceived external and internal CSR activities build employees' identification. Applying the taxonomy prescribed by the group engagement model, the study argues that the effects of perceived external and internal CSR flow through two competing mechanisms: perceived external prestige and perceived internal respect, respectively. Further, it is suggested that calling orientation (how employees see their work contributions) moderates the effects induced by these alternative forms of CSR. The model draws on survey data collected from a sample of 414 employees across five large multinationals in Pakistan. The results obtained using structural equation modeling support these hypotheses, reinforcing the notion that internal and external CSR operate through different mediating mechanisms and more interestingly employees' calling orientation moderates these relationships to a significant degree. Theoretical contributions and practical implications of results are discussed in detail.

  17. Bayesian Factor Analysis as a Variable Selection Problem: Alternative Priors and Consequences

    PubMed Central

    Lu, Zhao-Hua; Chow, Sy-Miin; Loken, Eric

    2016-01-01

    Factor analysis is a popular statistical technique for multivariate data analysis. Developments in the structural equation modeling framework have enabled the use of hybrid confirmatory/exploratory approaches in which factor loading structures can be explored relatively flexibly within a confirmatory factor analysis (CFA) framework. Recently, a Bayesian structural equation modeling (BSEM) approach (Muthén & Asparouhov, 2012) has been proposed as a way to explore the presence of cross-loadings in CFA models. We show that the issue of determining factor loading patterns may be formulated as a Bayesian variable selection problem in which Muthén and Asparouhov’s approach can be regarded as a BSEM approach with ridge regression prior (BSEM-RP). We propose another Bayesian approach, denoted herein as the Bayesian structural equation modeling with spike and slab prior (BSEM-SSP), which serves as a one-stage alternative to the BSEM-RP. We review the theoretical advantages and disadvantages of both approaches and compare their empirical performance relative to two modification indices-based approaches and exploratory factor analysis with target rotation. A teacher stress scale data set (Byrne, 2012; Pettegrew & Wolf, 1982) is used to demonstrate our approach. PMID:27314566
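
    A generic spike-and-slab specification of the kind referred to is sketched below (standard continuous-spike form; the paper's exact prior settings may differ):

    ```latex
    % Generic spike-and-slab prior on a candidate cross-loading \lambda_{jk}:
    % an indicator \gamma_{jk} switches between a "spike" concentrated at zero
    % (loading effectively excluded) and a diffuse "slab" (loading included).
    \lambda_{jk} \mid \gamma_{jk} \sim (1-\gamma_{jk})\,\mathcal{N}(0,\sigma_0^2)
      + \gamma_{jk}\,\mathcal{N}(0,\sigma_1^2),
    \qquad \gamma_{jk} \sim \mathrm{Bernoulli}(\pi),
    \qquad \sigma_0^2 \ll \sigma_1^2 .
    ```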

  18. Effect of level difference between left and right vocal folds on phonation: Physical experiment and theoretical study.

    PubMed

    Tokuda, Isao T; Shimamura, Ryo

    2017-08-01

    As an alternative factor to produce asymmetry between left and right vocal folds, the present study focuses on level difference, which is defined as the distance between the upper surfaces of the bilateral vocal folds in the inferior-superior direction. Physical models of the vocal folds were utilized to study the effect of the level difference on the phonation threshold pressure. A vocal tract model was also attached to the vocal fold model. For two types of different models, experiments revealed that the phonation threshold pressure tended to increase as the level difference was extended. Based upon a small amplitude approximation of the vocal fold oscillations, a theoretical formula was derived for the phonation threshold pressure. This theory agrees with the experiments, especially when the phase difference between the left and right vocal folds is not extensive. Furthermore, an asymmetric two-mass model was simulated with a level difference to validate the experiments as well as the theory. The primary conclusion is that the level difference has a potential effect on voice production especially for patients with an extended level of vertical difference in the vocal folds, which might be taken into account for the diagnosis of voice disorders.

  19. Future orientation in the self-system: possible selves, self-regulation, and behavior.

    PubMed

    Hoyle, Rick H; Sherrill, Michelle R

    2006-12-01

    Possible selves are representations of the self in the future. Early theoretical accounts of the construct suggested that possible selves directly influence motivation and behavior. We propose an alternative view of possible selves as a component in self-regulatory processes through which motivation and behavior are influenced. We demonstrate the advantages of this conceptualization in two studies that test predictions generated from theoretical models of self-regulation in which the possible selves construct could be embedded. In one study, we show how viewing possible selves as a source of behavioral standards in a control-process model of self-regulation yields support for a set of predictions about the influence of possible selves on current behavior. In the other study, we examine possible selves in the context of an interpersonal model of self-regulation, showing strong evidence of concern for relational value in freely generated hoped-for and feared selves. These findings suggest that the role of possible selves in motivation and behavior can be profitably studied in models that fully specify the process of self-regulation and that those models can be enriched by a consideration of future-oriented self-representations. We offer additional recommendations for strengthening research on possible selves and self-regulation.

  20. Generalized Information Theory Meets Human Cognition: Introducing a Unified Framework to Model Uncertainty and Information Search.

    PubMed

    Crupi, Vincenzo; Nelson, Jonathan D; Meder, Björn; Cevolani, Gustavo; Tentori, Katya

    2018-06-17

    Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people's goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics (Hartley, Quadratic, Tsallis, Rényi, and more) are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma-Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information-theoretic formalism. Copyright © 2018 Cognitive Science Society, Inc.
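
    A minimal sketch of the Sharma-Mittal family in its standard two-parameter form is given below (illustrative; the parameter naming follows common usage and is not necessarily the article's notation):

    ```python
    import numpy as np

    def sharma_mittal_entropy(p, order, degree):
        """Sharma-Mittal entropy (in nats), standard two-parameter form.
        Shannon (order, degree -> 1), Renyi (degree -> 1) and Tsallis
        (degree = order) entropies arise as limiting/special cases."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        if np.isclose(order, 1.0):
            shannon = -np.sum(p * np.log(p))
            if np.isclose(degree, 1.0):
                return float(shannon)
            return float(np.expm1((1.0 - degree) * shannon) / (1.0 - degree))
        s = np.sum(p ** order)
        if np.isclose(degree, 1.0):
            return float(np.log(s) / (1.0 - order))
        return float((s ** ((1.0 - degree) / (1.0 - order)) - 1.0) / (1.0 - degree))

    p = [0.7, 0.2, 0.1]
    print(sharma_mittal_entropy(p, 1.0, 1.0))  # Shannon
    print(sharma_mittal_entropy(p, 2.0, 1.0))  # Renyi of order 2
    print(sharma_mittal_entropy(p, 2.0, 2.0))  # Tsallis of order 2
    ```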

  1. [The trajectory towards alternative medicines: an analysis of health professionals' social representations].

    PubMed

    Queiroz, M S

    2000-01-01

    This article focuses on social representations of alternative medicines by a group of professors from the School of Medicine and health professionals from the public health system in the city of Campinas, São Paulo, basically physicians and nurses. The article also emphasizes personal trajectories by which these health professionals opted for a dissident theoretical and practical perspective vis-à-vis the hegemonic positivist scientific medical paradigm. The research methods were mainly ethnographic, from a phenomenological perspective. The article concludes by sustaining (in theoretical terms) the importance of these dissident perspectives for scientific development.

  2. A graph-theoretic method to quantify the airline route authority

    NASA Technical Reports Server (NTRS)

    Chan, Y.

    1979-01-01

    The paper introduces a graph-theoretic method to quantify the legal statements in route certificate which specifies the airline routing restrictions. All the authorized nonstop and multistop routes, including the shortest time routes, can be obtained, and the method suggests profitable route structure alternatives to airline analysts. This method to quantify the C.A.B. route authority was programmed in a software package, Route Improvement Synthesis and Evaluation, and demonstrated in a case study with a commercial airline. The study showed the utility of this technique in suggesting route alternatives and the possibility of improvements in the U.S. route system.

  3. Exhaust pressure pulsation observation from turbocharger instantaneous speed measurement

    NASA Astrophysics Data System (ADS)

    Macián, V.; Luján, J. M.; Bermúdez, V.; Guardiola, C.

    2004-06-01

    In internal combustion engines, instantaneous exhaust pressure measurements are difficult to perform in a production environment. The high temperature of the exhaust manifold and its pulsating character make its application to exhaust gas recirculation control algorithms impossible. In this paper an alternative method for estimating the exhaust pressure pulsation is presented. A numerical model is built which enables the exhaust pressure pulses to be predicted from instantaneous turbocharger speed measurements. Although the model is data based, a theoretical description of the process is also provided. This combined approach makes it possible to export the model for different engine operating points. Also, compressor contribution in the turbocharger speed pulsation is discussed extensively. The compressor contribution is initially neglected, and effects of this simplified approach are analysed.

  4. Financial Data Analysis by means of Coupled Continuous-Time Random Walk in Rachev-Rüschendorf Model

    NASA Astrophysics Data System (ADS)

    Jurlewicz, A.; Wyłomańska, A.; Żebrowski, P.

    2008-09-01

    We adapt the continuous-time random walk formalism to describe asset price evolution. We expand the idea proposed by Rachev and Rüschendorf, who analyzed the binomial pricing model in discrete time with randomization of the number of price changes. As a result, in the framework of the proposed model we obtain a mixture of the Gaussian and a generalized arcsine law as the limiting distribution of log-returns. Moreover, we derive a European-call-option price that is an extension of the Black-Scholes formula. We apply the obtained theoretical results to model actual financial data and try to show that the continuous-time random walk offers alternative tools to deal with several complex issues of financial markets.
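
    A minimal sketch of an uncoupled continuous-time random walk for log-returns follows (illustrative only; the Rachev-Rüschendorf construction randomizes the number of binomial price changes rather than drawing Gaussian jumps):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def ctrw_log_return(horizon, mean_wait=0.01, jump_sigma=0.002, n_paths=10_000):
        """Simulate log-returns over `horizon` for a simple uncoupled CTRW:
        exponential waiting times between price changes, Gaussian jump sizes."""
        returns = np.empty(n_paths)
        for i in range(n_paths):
            t, log_return = 0.0, 0.0
            while True:
                t += rng.exponential(mean_wait)
                if t > horizon:
                    break
                log_return += rng.normal(0.0, jump_sigma)
            returns[i] = log_return
        return returns

    r = ctrw_log_return(horizon=1.0)
    print(r.mean(), r.std())
    ```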

  5. Long-Term Evaluation of Ocean Tidal Variation Models of Polar Motion and UT1

    NASA Astrophysics Data System (ADS)

    Karbon, Maria; Balidakis, Kyriakos; Belda, Santiago; Nilsson, Tobias; Hagedoorn, Jan; Schuh, Harald

    2018-04-01

    Recent improvements in the development of VLBI (very long baseline interferometry) and other space geodetic techniques such as the global navigation satellite systems (GNSS) require very precise a-priori information of short-period (daily and sub-daily) Earth rotation variations. One significant contribution to Earth rotation is caused by the diurnal and semi-diurnal ocean tides. Within this work, we developed a new model for the short-period ocean tidal variations in Earth rotation, where the ocean tidal angular momentum model and the Earth rotation variation have been setup jointly. Besides the model of the short-period variation of the Earth's rotation parameters (ERP), based on the empirical ocean tide model EOT11a, we developed also ERP models, that are based on the hydrodynamic ocean tide models FES2012 and HAMTIDE. Furthermore, we have assessed the effect of uncertainties in the elastic Earth model on the resulting ERP models. Our proposed alternative ERP model to the IERS 2010 conventional model considers the elastic model PREM and 260 partial tides. The choice of the ocean tide model and the determination of the tidal velocities have been identified as the main uncertainties. However, in the VLBI analysis all models perform on the same level of accuracy. From these findings, we conclude that the models presented here, which are based on a re-examined theoretical description and long-term satellite altimetry observation only, are an alternative for the IERS conventional model but do not improve the geodetic results.

  6. Wave‐induced Hydraulic Forces on Submerged Aquatic Plants in Shallow Lakes

    PubMed Central

    SCHUTTEN, J.; DAINTY, J.; DAVY, A. J.

    2004-01-01

    • Background and Aims Hydraulic pulling forces arising from wave action are likely to limit the presence of freshwater macrophytes in shallow lakes, particularly those with soft sediments. The aim of this study was to develop and test experimentally simple models, based on linear wave theory for deep water, to predict such forces on individual shoots. • Methods Models were derived theoretically from the action of the vertical component of the orbital velocity of the waves on shoot size. Alternative shoot‐size descriptors (plan‐form area or dry mass) and alternative distributions of the shoot material along its length (cylinder or inverted cone) were examined. Models were tested experimentally in a flume that generated sinusoidal waves which lasted 1 s and were up to 0·2 m high. Hydraulic pulling forces were measured on plastic replicas of Elodea sp. and on six species of real plants with varying morphology (Ceratophyllum demersum, Chara intermedia, Elodea canadensis, Myriophyllum spicatum, Potamogeton natans and Potamogeton obtusifolius). • Key Results Measurements on the plastic replicas confirmed predicted relationships between force and wave phase, wave height and plant submergence depth. Predicted and measured forces were linearly related over all combinations of wave height and submergence depth. Measured forces on real plants were linearly related to theoretically derived predictors of the hydraulic forces (integrals of the products of the vertical orbital velocity raised to the power 1·5 and shoot size). • Conclusions The general applicability of the simplified wave equations used was confirmed. Overall, dry mass and plan‐form area performed similarly well as shoot‐size descriptors, as did the conical or cylindrical models of shoot distribution. The utility of the modelling approach in predicting hydraulic pulling forces from relatively simple plant and environmental measurements was validated over a wide range of forces, plant sizes and species. PMID:14988098

  7. Cosmological tests of modified gravity.

    PubMed

    Koyama, Kazuya

    2016-04-01

    We review recent progress in the construction of modified gravity models as alternatives to dark energy as well as the development of cosmological tests of gravity. Einstein's theory of general relativity (GR) has been tested accurately within the local universe i.e. the Solar System, but this leaves the possibility open that it is not a good description of gravity at the largest scales in the Universe. This being said, the standard model of cosmology assumes GR on all scales. In 1998, astronomers made the surprising discovery that the expansion of the Universe is accelerating, not slowing down. This late-time acceleration of the Universe has become the most challenging problem in theoretical physics. Within the framework of GR, the acceleration would originate from an unknown dark energy. Alternatively, it could be that there is no dark energy and GR itself is in error on cosmological scales. In this review, we first give an overview of recent developments in modified gravity theories including f(R) gravity, braneworld gravity, Horndeski theory and massive/bigravity theory. We then focus on common properties these models share, such as screening mechanisms they use to evade the stringent Solar System tests. Once armed with a theoretical knowledge of modified gravity models, we move on to discuss how we can test modifications of gravity on cosmological scales. We present tests of gravity using linear cosmological perturbations and review the latest constraints on deviations from the standard ΛCDM model. Since screening mechanisms leave distinct signatures in the non-linear structure formation, we also review novel astrophysical tests of gravity using clusters, dwarf galaxies and stars. The last decade has seen a number of new constraints placed on gravity from astrophysical to cosmological scales. Thanks to on-going and future surveys, cosmological tests of gravity will enjoy another, possibly even more, exciting ten years.

  8. Probabilistic choice models in health-state valuation research: background, theories, assumptions and applications.

    PubMed

    Arons, Alexander M M; Krabbe, Paul F M

    2013-02-01

    Interest is rising in measuring subjective health outcomes, such as treatment outcomes that are not directly quantifiable (functional disability, symptoms, complaints, side effects and health-related quality of life). Health economists in particular have applied probabilistic choice models in the area of health evaluation. They increasingly use discrete choice models based on random utility theory to derive values for healthcare goods or services. Recent attempts have been made to use discrete choice models as an alternative method to derive values for health states. In this article, various probabilistic choice models are described according to their underlying theory. A historical overview traces their development and applications in diverse fields. The discussion highlights some theoretical and technical aspects of the choice models and their similarity and dissimilarity. The objective of the article is to elucidate the position of each model and their applications for health-state valuation.
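
    For reference, the canonical random-utility choice probability (conditional logit form) that underlies many of the discrete choice models discussed is sketched below (standard textbook form, not reproduced from the article):

    ```latex
    % Random utility: a respondent assigns utility U_i = V_i + \varepsilon_i to health state i.
    % With i.i.d. extreme-value (Gumbel) errors this yields the conditional logit
    % choice probability, the workhorse of discrete choice models.
    P(\text{choose } i \mid \{1,\dots,J\}) =
      \frac{\exp(V_i)}{\sum_{j=1}^{J} \exp(V_j)}
    ```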

  9. Lexicons, contexts, events, and images: commentary on Elman (2009) from the perspective of dual coding theory.

    PubMed

    Paivio, Allan; Sadoski, Mark

    2011-01-01

    Elman (2009) proposed that the traditional role of the mental lexicon in language processing can largely be replaced by a theoretical model of schematic event knowledge founded on dynamic context-dependent variables. We evaluate Elman's approach and propose an alternative view, based on dual coding theory and evidence that modality-specific cognitive representations contribute strongly to word meaning and language performance across diverse contexts which also have effects predictable from dual coding theory. Copyright © 2010 Cognitive Science Society, Inc.

  10. Progress in understanding heavy-ion stopping

    NASA Astrophysics Data System (ADS)

    Sigmund, P.; Schinner, A.

    2016-09-01

    We report some highlights of our work with heavy-ion stopping in the energy range where Bethe stopping theory breaks down. Main tools are our binary stopping theory (PASS code), the reciprocity principle, and Paul's data base. Comparisons are made between PASS and three alternative theoretical schemes (CasP, HISTOP and SLPA). In addition to equilibrium stopping we discuss frozen-charge stopping, deviations from linear velocity dependence below the Bragg peak, application of the reciprocity principle in low-velocity stopping, modeling of equilibrium charges, and the significance of the so-called effective charge.

  11. Plumes in the mantle. [free air and isostatic gravity anomalies for geophysical interpretation

    NASA Technical Reports Server (NTRS)

    Khan, M. A.

    1973-01-01

    Free air and isostatic gravity anomalies for the purposes of geophysical interpretation are presented. Evidence for the existence of hotspots in the mantle is reviewed. The proposed locations of these hotspots are not always associated with positive gravity anomalies. Theoretical analysis based on simplified flow models for the plumes indicates that unless the frictional viscosities are several orders of magnitude smaller than the present estimates of mantle viscosity or, alternatively, the vertical flows are reduced by about two orders of magnitude, the plume flow will generate implausibly high temperatures.

  12. Anomalous vibrational modes in acetanilide: a F.D.S. incoherent inelastic neutron scattering study

    NASA Astrophysics Data System (ADS)

    Barthes, Mariette; Eckert, Juergen; Johnson, Susanna W.; Moret, Jacques; Swanson, Basil I.; Unkefer, Clifford J.

    The origin of the anomalous infra-red and Raman modes in acetanilide (C6H5NHCOCH3, or ACN) (1) remains a subject of considerable controversy. One family of theoretical models involves Davydov-like solitons (2), nonlinear vibrational coupling (3), or "polaronic" localized modes (4)(5). An alternative interpretation of the extra bands in terms of a Fermi resonance was proposed (6), and recently the existence of slightly non-degenerate hydrogen atom configurations (7) in the H-bond was suggested as an explanation for the anomalies.

  13. OCD: obsessive-compulsive … disgust? The role of disgust in obsessive-compulsive disorder.

    PubMed

    Bhikram, Tracy; Abi-Jaoude, Elia; Sandor, Paul

    2017-09-01

    Recent research has identified the important role of disgust in the symptomatology of obsessive-compulsive disorder (OCD). Exaggerated and inappropriate disgust reactions may drive some of the symptoms of OCD, and in some cases, may even eclipse feelings of anxiety. This paper reviews behavioural and neuroimaging research that recognizes the prominent role of disgust in contributing to OCD symptoms, especially contamination-based symptoms. We discuss how elevated behavioural and biological markers of disgust reported in OCD populations support the need for alternative clinical treatment strategies and theoretical models of OCD.

  14. Tracing Cosmic Dawn

    NASA Astrophysics Data System (ADS)

    Fialkov, Anastasia

    2018-05-01

    Observational efforts are under way to probe the 21-cm signal of neutral hydrogen from the epochs of Reionization and Cosmic Dawn. Our current poor knowledge of high-redshift astrophysics results in a large uncertainty in the theoretically predicted 21-cm signal. A recent parameter study that is highlighted here explores the variety of 21-cm signals resulting from viable astrophysical scenarios. Model-independent relations between the shape of the signal and the underlying astrophysics are discussed. Finally, I briefly comment on possible alternative probes of the high-redshift Universe, specifically Fast Radio Bursts.

  15. Education as a Factor of Intercultural Communication

    ERIC Educational Resources Information Center

    Gojkov, Grozdanka

    2011-01-01

    The paper considers alternative constructivism as a possible theoretical starting point for regarding education as a factor of intercultural communication. The introductory part of the paper deals with Kelly's personal construct theory permeating the arguments in favour of the theoretical research thesis referring to the issue of the extent the…

  16. Study of modeling aspects of long period fiber grating using three-layer fiber geometry

    NASA Astrophysics Data System (ADS)

    Singh, Amit

    2015-03-01

    The author studied and demonstrated the various modeling aspects of long period fiber grating (LPFG), such as the core effective index, cladding effective index, coupling coefficient, coupled mode theory, and transmission spectrum of the LPFG, using three-layer fiber geometry. There are two different techniques used for theoretical modeling of the long period fiber grating. The first technique was used by Vengsarkar et al, who described the phenomenon of long-period fiber gratings, and the second technique was reported by Erdogan, who revealed the inaccuracies and shortcomings of the original method, thereby providing an accurate and updated alternative. The main difference between these two approaches lies in their fiber geometry. Vengsarkar et al used two-layer fiber geometry, which is simple but employs the weakly guided approximation, whereas Erdogan used three-layer fiber geometry, which is complex but also the most accurate technique for theoretical study of the LPFG. The author further discussed the behavior of the transmission spectrum obtained by altering different grating parameters such as the grating length, ultraviolet (UV)-induced index change, and grating period to achieve the desired flexibility. The author simulated the various results with the help of MATLAB.

  17. Statistical modeling, detection, and segmentation of stains in digitized fabric images

    NASA Astrophysics Data System (ADS)

    Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric F.

    2007-02-01

    This paper will describe a novel and automated system based on a computer vision approach for objective evaluation of stain release on cotton fabrics. Digitized color images of the stained fabrics are obtained, and the pixel values in the color and intensity planes of these images are probabilistically modeled as a Gaussian Mixture Model (GMM). Stain detection is posed as a decision theoretic problem, where the null hypothesis corresponds to the absence of a stain. The null hypothesis and the alternate hypothesis mathematically translate into a first order GMM and a second order GMM, respectively. The parameters of the GMM are estimated using a modified Expectation-Maximization (EM) algorithm. Minimum Description Length (MDL) is then used as the test statistic to decide the verity of the null hypothesis. The stain is then segmented by a decision rule based on the probability map generated by the EM algorithm. The proposed approach was tested on a dataset of 48 fabric images soiled with stains of ketchup, corn oil, mustard, Ragu sauce, Revlon makeup and grape juice. The decision theoretic part of the algorithm produced a correct detection rate (true positive) of 93% and a false alarm rate of 5% on this set of images.
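    The detection pipeline described above (EM-fitted one- vs two-component GMMs, an MDL test statistic, and a posterior probability map for segmentation) can be approximated for illustration with off-the-shelf tools; the sketch below uses scikit-learn's EM-based GaussianMixture and BIC as a stand-in for the paper's modified EM and MDL, on synthetic 'pixel intensities', so all data and thresholds are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "pixel intensities": background plus a brighter stained region
background = rng.normal(loc=0.30, scale=0.05, size=(4000, 1))
stain = rng.normal(loc=0.55, scale=0.05, size=(1000, 1))
pixels = np.vstack([background, stain])

# Null hypothesis: one-component GMM (no stain); alternative: two components
gmm1 = GaussianMixture(n_components=1, random_state=0).fit(pixels)
gmm2 = GaussianMixture(n_components=2, random_state=0).fit(pixels)

# BIC plays the role of a description-length criterion here (lower is better)
if gmm2.bic(pixels) < gmm1.bic(pixels):
    posteriors = gmm2.predict_proba(pixels)              # probability map
    stain_component = int(np.argmax(gmm2.means_.ravel()))
    stain_mask = posteriors[:, stain_component] > 0.5    # segmentation rule
    print(f"Stain detected; {stain_mask.mean():.1%} of pixels flagged")
else:
    print("No stain detected")
```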

  18. The decision to conduct a head-to-head comparative trial: a game-theoretic analysis.

    PubMed

    Mansley, Edward C; Elbasha, Elamin H; Teutsch, Steven M; Berger, Marc L

    2007-01-01

    Recent Medicare legislation calls on the Agency for Healthcare Research and Quality to conduct research related to the comparative effectiveness of health care items and services, including prescription drugs. This reinforces earlier calls for head-to-head comparative trials of clinically relevant treatment alternatives. Using a game-theoretic model, the authors explore the decision of pharmaceutical companies to conduct such trials. The model suggests that an important factor affecting this decision is the potential loss in market share and profits following a result of inferiority or comparability. This hidden cost is higher for the market leader than the market follower, making it less likely that the leader will choose to conduct a trial. The model also suggests that in a full-information environment, it will never be the case that both firms choose to conduct such a trial. Furthermore, if market shares and the probability of proving superiority are similar for both firms, it is quite possible that neither firm will choose to conduct a trial. Finally, the results indicate that incentives that offset the direct cost of a trial can prevent a no-trial equilibrium, even when both firms face the possibility of an inferior outcome.
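    One way to see the 'no-trial equilibrium' logic is as a two-firm normal-form game; the payoff numbers below are hypothetical placeholders (not values from the authors' model), chosen so that the potential loss of market share after an unfavourable result makes a trial unattractive to both firms.

```python
from itertools import product

# Hypothetical expected profits (leader, follower), in arbitrary units.
# Each firm chooses whether to run a head-to-head trial; the numbers are
# illustrative only.
payoffs = {
    ("trial",    "trial"):    (40, 25),
    ("trial",    "no_trial"): (45, 30),
    ("no_trial", "trial"):    (55, 22),
    ("no_trial", "no_trial"): (60, 30),
}
actions = ["trial", "no_trial"]

def pure_nash_equilibria(payoffs, actions):
    """Return all pure-strategy Nash equilibria of the 2x2 game."""
    eqs = []
    for a_leader, a_follower in product(actions, repeat=2):
        u_l, u_f = payoffs[(a_leader, a_follower)]
        leader_ok = all(payoffs[(alt, a_follower)][0] <= u_l for alt in actions)
        follower_ok = all(payoffs[(a_leader, alt)][1] <= u_f for alt in actions)
        if leader_ok and follower_ok:
            eqs.append((a_leader, a_follower))
    return eqs

print(pure_nash_equilibria(payoffs, actions))  # here: [('no_trial', 'no_trial')]
```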

  19. Estimating habitat carrying capacity for migrating and wintering waterfowl: Considerations, pitfalls and improvements

    USGS Publications Warehouse

    Williams, Christopher; Dugger, Bruce D.; Brasher, Michael G.; Coluccy, John M.; Cramer, Dane M.; Eadie, John M.; Gray, Matthew J.; Hagy, Heath M.; Livolsi, Mark; McWilliams, Scott R.; Petrie, Matthew; Soulliere, Gregory J.; Tirpak, John M.; Webb, Elisabeth B.

    2014-01-01

    Population-based habitat conservation planning for migrating and wintering waterfowl in North America is carried out by habitat Joint Venture (JV) initiatives and is based on the premise that food can limit demography (i.e. food limitation hypothesis). Consequently, planners use bioenergetic models to estimate food (energy) availability and population-level energy demands at appropriate spatial and temporal scales, and translate these values into regional habitat objectives. While simple in principle, there are both empirical and theoretical challenges associated with calculating energy supply and demand including: 1) estimating food availability, 2) estimating the energy content of specific foods, 3) extrapolating site-specific estimates of food availability to landscapes for focal species, 4) applicability of estimates from a single species to other species, 5) estimating resting metabolic rate, 6) estimating cost of daily behaviours, and 7) estimating costs of thermoregulation or tissue synthesis. Most models being used are daily ration models (DRMs) whose set of simplifying assumptions are well established and whose use is widely accepted and feasible given the empirical data available to populate such models. However, DRMs do not link habitat objectives to metrics of ultimate ecological importance such as individual body condition or survival, and largely only consider food-producing habitats. Agent-based models (ABMs) provide a possible alternative for creating more biologically realistic models under some conditions; however, ABMs require different types of empirical inputs, many of which have yet to be estimated for key North American waterfowl. Decisions about how JVs can best proceed with habitat conservation would benefit from the use of sensitivity analyses that could identify the empirical and theoretical uncertainties that have the greatest influence on efforts to estimate habitat carrying capacity. Development of ABMs at restricted, yet biologically relevant spatial scales, followed by comparisons of their outputs to those generated from more simplistic, deterministic models can provide a means of assessing degrees of dissimilarity in how alternative models describe desired landscape conditions for migrating and wintering waterfowl.
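    For readers unfamiliar with daily ration models, the core bookkeeping is an energy supply divided by a daily energy demand; the sketch below uses entirely hypothetical habitat and bioenergetic values (not JV planning numbers) to illustrate the arithmetic behind duck-use-days.

```python
# Minimal daily ration model (DRM) style calculation; all numbers are
# hypothetical placeholders, not planning values from any Joint Venture.
area_ha           = 500.0    # food-producing habitat area
food_kg_per_ha    = 120.0    # available food biomass (dry mass)
energy_kj_per_g   = 14.0     # metabolizable energy content of the food
demand_kj_per_day = 1200.0   # daily energy expenditure of one bird

supply_kj = area_ha * food_kg_per_ha * 1000.0 * energy_kj_per_g
duck_use_days = supply_kj / demand_kj_per_day

print(f"Energy supply: {supply_kj:.3e} kJ")
print(f"Carrying capacity: {duck_use_days:,.0f} duck-use-days")
```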

  20. Optimal behaviour can violate the principle of regularity.

    PubMed

    Trimmer, Pete C

    2013-07-22

    Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled as irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory, which is based on axioms including transitivity, regularity and the independence of irrelevant alternatives, is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision.

  1. Rocket Scientist for a Day: Investigating Alternatives for Chemical Propulsion

    ERIC Educational Resources Information Center

    Angelin, Marcus; Rahm, Martin; Gabrielsson, Erik; Gumaelius, Lena

    2012-01-01

    This laboratory experiment introduces rocket science from a chemistry perspective. The focus is set on chemical propulsion, including its environmental impact and future development. By combining lecture-based teaching with practical, theoretical, and computational exercises, the students get to evaluate different propellant alternatives. To…

  2. ACOUSTIC LINERS FOR TURBOFAN ENGINES

    NASA Technical Reports Server (NTRS)

    Minner, G. L.

    1994-01-01

    This program was developed to design acoustic liners for turbofan engines. The program combines results from theoretical models of wave attenuation in acoustically treated passages with experimental data from full-scale fan noise suppressors. By including experimentally obtained information, the program accounts for real effects such as wall boundary layers, duct terminations, and sound modal structure. The program has its greatest use in generating a number of design specifications to be used for evaluation of trade-offs. The program combines theoretical and empirical data in designing annular acoustic liners. First, an estimate of the noise output of the fan is made based on basic fan aerodynamic design variables. Then, using a target noise spectrum after attenuation and the estimated fan noise spectrum, a design spectrum is calculated as their difference. Next, the design spectrum is combined with knowledge of acoustic liner performance and the liner design variables to specify the acoustic design. Details of the liner design are calculated by combining the required acoustic impedance with a mathematical model relating acoustic impedance to the physical structure of the liner. Input to the noise prediction part of the program consists of basic fan operating parameters, the distance at which the target spectrum is to be measured, and the target spectrum. The liner design portion of the program requires the required attenuation spectrum, desired values of length-to-height ratio, and several option selection parameters. Output from the noise prediction portion is a noise spectrum consisting of discrete tones and broadband noise. This may be used as input to the liner design portion of the program. The liner design portion of the program produces backing depths, open area ratios, and face plate thicknesses. This program is written in FORTRAN V and has been implemented in batch mode on a UNIVAC 1100 series computer with a central memory requirement of 12K (decimal) of 36 bit words.
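    The central design step, subtracting the target spectrum from the estimated fan noise spectrum to obtain the required attenuation (design) spectrum, is easy to illustrate; the band levels below are hypothetical and the actual program is FORTRAN, so this fragment is only a sketch of the idea.

```python
import numpy as np

# Hypothetical 1/3-octave band levels in dB; not output of the NASA program.
bands_hz     = np.array([1000, 1250, 1600, 2000, 2500, 3150])
fan_noise_db = np.array([ 102,  105,  108,  107,  104,  100])  # estimated fan spectrum
target_db    = np.array([  95,   95,   95,   95,   95,   95])  # target after attenuation

# Required attenuation ("design spectrum") is the excess over the target,
# clipped at zero where the fan is already below the target.
design_spectrum_db = np.clip(fan_noise_db - target_db, 0.0, None)
for f, att in zip(bands_hz, design_spectrum_db):
    print(f"{f:5d} Hz: {att:4.1f} dB attenuation required")
```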

  3. Magnetically multiplexed heating of single domain nanoparticles

    NASA Astrophysics Data System (ADS)

    Christiansen, M. G.; Senko, A. W.; Chen, R.; Romero, G.; Anikeeva, P.

    2014-05-01

    Selective hysteretic heating of multiple collocated types of single domain magnetic nanoparticles (SDMNPs) by alternating magnetic fields (AMFs) may offer a useful tool for biomedical applications. The possibility of "magnetothermal multiplexing" has not yet been realized, in part due to prevalent use of linear response theory to model SDMNP heating in AMFs. Dynamic hysteresis modeling suggests that specific driving conditions play an underappreciated role in determining optimal material selection strategies for high heat dissipation. Motivated by this observation, magnetothermal multiplexing is theoretically predicted and empirically demonstrated by selecting SDMNPs with properties that suggest optimal hysteretic heat dissipation at dissimilar AMF driving conditions. This form of multiplexing could effectively offer multiple channels for minimally invasive biological signaling applications.

  4. Ranking Theory and Conditional Reasoning.

    PubMed

    Skovgaard-Olsen, Niels

    2016-05-01

    Ranking theory is a formal epistemology that has been developed in over 600 pages in Spohn's recent book The Laws of Belief, which aims to provide a normative account of the dynamics of beliefs that presents an alternative to current probabilistic approaches. It has been well received in the AI community, but it has not yet found application in experimental psychology. The purpose of this paper is to derive clear, quantitative predictions by exploiting a parallel between ranking theory and a statistical model called logistic regression. This approach is illustrated by the development of a model for the conditional inference task using Spohn's (2013) ranking theoretic approach to conditionals. Copyright © 2015 Cognitive Science Society, Inc.
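    As a purely structural illustration of the parallel the abstract draws (and not a reproduction of the ranking-theoretic model itself), the sketch below fits an ordinary logistic regression to simulated binary endorsement data; the predictors and data are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical predictors for a conditional inference task; names, coefficients
# and data are illustrative only.
X = rng.normal(size=(200, 2))
logits = 1.5 * X[:, 0] - 0.8 * X[:, 1]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))   # endorsed / not endorsed

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("predicted endorsement probability:", model.predict_proba(X[:1])[0, 1])
```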

  5. It's the information!

    PubMed

    Ward, Ryan D; Gallistel, C R; Balsam, Peter D

    2013-05-01

    Learning in conditioning protocols has long been thought to depend on temporal contiguity between the conditioned stimulus and the unconditioned stimulus. This conceptualization has led to a preponderance of associative models of conditioning. We suggest that trial-based associative models that posit contiguity as the primary principle underlying learning are flawed, and provide a brief review of an alternative, information theoretic approach to conditioning. The information that a CS conveys about the timing of the next US can be derived from the temporal parameters of a conditioning protocol. According to this view, a CS will support conditioned responding if, and only if, it reduces uncertainty about the timing of the next US. Copyright © 2013 Elsevier B.V. All rights reserved.
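    In published statements of this account, the information a CS conveys about the time of the next US is often written as log2(C/T), where C is the background US-US interval and T the CS-US interval; assuming that form, the calculation is a one-liner (the intervals below are made up).

```python
import math

def cs_informativeness_bits(us_us_interval, cs_us_interval):
    """Information (in bits) that CS onset conveys about the time of the next US,
    taken here as log2 of the ratio of the background US-US interval (C) to the
    CS-US interval (T)."""
    return math.log2(us_us_interval / cs_us_interval)

# Illustrative numbers: USs every 120 s on average, CS precedes each US by 10 s.
print(cs_informativeness_bits(120.0, 10.0))  # about 3.58 bits of uncertainty reduction
```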

  6. Using Latent Class Analysis to Model Temperament Types.

    PubMed

    Loken, Eric

    2004-10-01

    Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks was used for model selection. Results show at least three types of infant temperament, with patterns consistent with those identified by previous researchers who classified the infants using a theoretically based system. Multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation. Latent class membership at four months of age predicted longitudinal outcomes at four years of age. The example illustrates issues relevant to all mixture models, including estimation, multi-modality, model selection, and comparisons based on the latent group indicators.
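    The multiple-imputation idea, drawing class memberships from each subject's posterior class probabilities instead of taking the modal class, can be sketched with any mixture model; the example below substitutes a Gaussian mixture on synthetic continuous data for the study's latent class analysis of categorical observational codes, so it illustrates the mechanics only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic stand-in data with three latent groups (continuous data are used
# here only to keep the sketch short).
X = np.vstack([rng.normal(m, 0.7, size=(100, 2)) for m in (-2.0, 0.0, 2.5)])

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
posterior = gmm.predict_proba(X)              # class responsibilities per subject

# Modal assignment vs. multiple imputation of class membership:
modal = posterior.argmax(axis=1)
imputations = np.array([
    [rng.choice(3, p=p) for p in posterior]   # one membership draw per subject
    for _ in range(5)                         # five imputed membership vectors
])
print("modal counts:", np.bincount(modal))
print("imputation 0 counts:", np.bincount(imputations[0]))
```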

  7. Equivalent dynamic model of DEMES rotary joint

    NASA Astrophysics Data System (ADS)

    Zhao, Jianwen; Wang, Shu; Xing, Zhiguang; McCoul, David; Niu, Junyang; Huang, Bo; Liu, Liwu; Leng, Jinsong

    2016-07-01

    The dielectric elastomer minimum energy structure (DEMES) can realize large angular deformations by a small voltage-induced strain of the dielectric elastomer (DE), so it is a suitable candidate to make a rotary joint for a soft robot. Dynamic analysis is necessary for some applications, but the dynamic response of DEMESs is difficult to model because of the complicated morphology and viscoelasticity of the DE film. In this paper, a method composed of theoretical analysis and experimental measurement is presented to model the dynamic response of a DEMES rotary joint under an alternating voltage. Based on measurements of the equivalent driving force and damping of the DEMES, the model can be derived. Some experiments were carried out to validate the equivalent dynamic model. The maximum angle error between model and experiment is greater than ten degrees, but the model is acceptable for predicting the angular velocity of the DEMES; therefore, it can be applied in feedforward-feedback compound control.

  8. Testing take-the-best in new and changing environments.

    PubMed

    Lee, Michael D; Blanco, Gabrielle; Bo, Nikole

    2017-08-01

    Take-the-best is a decision-making strategy that chooses between alternatives by searching the cues representing the alternatives in order of cue validity, and choosing the alternative with the first discriminating cue. Theoretical support for take-the-best comes from the "fast and frugal" approach to modeling cognition, which assumes decision-making strategies need to be fast to cope with a competitive world, and simple to be robust to uncertainty and environmental change. We contribute to the empirical evaluation of take-the-best in two ways. First, we generate four new environments (involving bridge lengths, hamburger prices, theme park attendances, and US university rankings), supplementing the relatively limited number of naturally cue-based environments previously considered. We find that take-the-best is as accurate as rival decision strategies that use all of the available cues. Secondly, we develop 19 new data sets characterizing the change in cities and their populations in four countries. We find that take-the-best maintains its accuracy and limited search as the environments change, even if cue validities learned in one environment are used to make decisions in another. Once again, we find that take-the-best is as accurate as rival strategies that use all of the cues. We conclude that these new evaluations support the theoretical claims of accuracy, frugality, and robustness for take-the-best, and that the new data sets provide a valuable resource for the more general study of the relationship between effective decision-making strategies and the environments in which they operate.
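    Take-the-best itself is only a few lines; the sketch below implements the lexicographic search over validity-ordered cues for a pair of alternatives, with hypothetical cue profiles and validities (not the data sets developed in the paper).

```python
def take_the_best(cues_a, cues_b, validities):
    """Choose between two alternatives by searching cues in order of validity
    and stopping at the first cue that discriminates (a minimal sketch)."""
    order = sorted(range(len(validities)), key=lambda i: validities[i], reverse=True)
    for i in order:
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] > cues_b[i] else "B"
    return "guess"  # no cue discriminates

# Hypothetical cue profiles (1 = cue present, 0 = absent) and cue validities
city_a = [1, 0, 1]
city_b = [1, 1, 0]
validities = [0.9, 0.8, 0.7]
print(take_the_best(city_a, city_b, validities))  # first discriminating cue favours B
```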

  9. Application of information-theoretic measures to quantitative analysis of immunofluorescent microscope imaging.

    PubMed

    Shutin, Dmitriy; Zlobinskaya, Olga

    2010-02-01

    The goal of this contribution is to apply model-based information-theoretic measures to the quantification of relative differences between immunofluorescent signals. Several models for approximating the empirical fluorescence intensity distributions are considered, namely Gaussian, Gamma, Beta, and kernel densities. As a distance measure the Hellinger distance and the Kullback-Leibler divergence are considered. For the Gaussian, Gamma, and Beta models the closed-form expressions for evaluating the distance as a function of the model parameters are obtained. The advantages of the proposed quantification framework as compared to simple mean-based approaches are analyzed with numerical simulations. Two biological experiments are also considered. The first is the functional analysis of the p8 subunit of the TFIIH complex responsible for a rare hereditary multi-system disorder--trichothiodystrophy group A (TTD-A). In the second experiment the proposed methods are applied to assess the UV-induced DNA lesion repair rate. A good agreement between our in vivo results and those obtained with an alternative in vitro measurement is established. We believe that the computational simplicity and the effectiveness of the proposed quantification procedure will make it very attractive for different analysis tasks in functional proteomics, as well as in high-content screening. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
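    For the Gaussian model the closed-form distances mentioned above are standard; the sketch below evaluates the Kullback-Leibler divergence and the Hellinger distance between two univariate Gaussians, with purely illustrative parameter values rather than fits to the actual fluorescence data.

```python
import math

def kl_gaussian(mu1, sigma1, mu2, sigma2):
    """Closed-form Kullback-Leibler divergence KL(N1 || N2) for univariate Gaussians."""
    return (math.log(sigma2 / sigma1)
            + (sigma1**2 + (mu1 - mu2)**2) / (2.0 * sigma2**2)
            - 0.5)

def hellinger_gaussian(mu1, sigma1, mu2, sigma2):
    """Closed-form Hellinger distance between two univariate Gaussians."""
    h2 = 1.0 - math.sqrt(2.0 * sigma1 * sigma2 / (sigma1**2 + sigma2**2)) \
             * math.exp(-((mu1 - mu2)**2) / (4.0 * (sigma1**2 + sigma2**2)))
    return math.sqrt(h2)

# Hypothetical fluorescence-intensity fits for two conditions (illustrative values)
print(kl_gaussian(100.0, 15.0, 130.0, 18.0))
print(hellinger_gaussian(100.0, 15.0, 130.0, 18.0))
```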

  10. Theoretical models for the combustion of alloyable materials

    NASA Astrophysics Data System (ADS)

    Armstrong, Robert

    1992-09-01

    The purpose of this work is to extend a theoretical model of layered (laminar) media for SHS combustion presented in an earlier article [1] to explore possible mechanisms for after-burning in SHS (i.e., gasless) combustion. As before, our particular interest is how the microscopic geometry of the solid reactants is reflected in the combustion wave and in the reaction product. The model is constructed from alternating laminae of two pure reactants that interdiffuse exothermically to form a product. Here, the laminar model is extended to contain layers of differing thicknesses. Using asymptotic theory, it was found that under certain conditions the combustion wave can become “detached,” and an initial thin flame propagates through the medium, leaving a slower, thicker flame following behind (i.e., afterburning). Thin laminae are consumed in the initial flame and thick ones in the secondary. The thin flame has a width determined by the inverse of the activation energy of diffusion, as found previously. The width of the afterburning zone, however, is determined by the absolute time of diffusion for the thicker laminae. Naturally, when the laminae are all the same thickness, there is only one thin flame. The condition for the appearance of afterburning is found to be contingent on the square of the ratio of smallest-to-largest thicknesses being considerably less than unity.

  11. Exploring alternative conceptions of teachers and informal educators about selected astronomy concepts

    NASA Astrophysics Data System (ADS)

    Rutherford, Lori B.

    The purpose of this study was to (1) identify alternative conceptions concerning astronomy in groups of formal and informal educators, (2) discover the origins of some of these conceptions and (3) explore how practicing teachers planned to address the need for conceptual change in their students. In response to the first question, a number of alternative conceptions were identified in formal educators, with more for teachers of prekindergarten through third grade than fourth through twelfth grade teachers, and very few alternative conceptions in the informal educators group. In regards to the second research question, a number of origins were indicated: logic, books, elementary school, high school, astronomy classes, self-study and observation. In response to the third question, various practicing teachers used computer programs and modeling in order to address some of the alternative conceptions they noticed in their students. These findings were supported by the literature and theoretical frameworks on which the study was based. The study addressed gaps in the literature concerning alternative conceptions and how they related to Ohio's Academic Content Standards along with nineteen other states. This study also addressed the need for a closer examination of informal educators and how they compare to formal educators in terms of having alternative conceptions. And finally, implications and recommendations were made for practicing educators, materials for practicing educators, teacher education, informal and formal education partnerships, standards modification, research methodology and areas of future research.

  12. ASYMPTOTICS FOR CHANGE-POINT MODELS UNDER VARYING DEGREES OF MIS-SPECIFICATION

    PubMed Central

    SONG, RUI; BANERJEE, MOULINATH; KOSOROK, MICHAEL R.

    2015-01-01

    Change-point models are widely used by statisticians to model drastic changes in the pattern of observed data. Least squares/maximum likelihood based estimation of change-points leads to curious asymptotic phenomena. When the change-point model is correctly specified, such estimates generally converge at a fast rate (n) and are asymptotically described by minimizers of a jump process. Under complete mis-specification by a smooth curve, i.e. when a change-point model is fitted to data described by a smooth curve, the rate of convergence slows down to n^{1/3} and the limit distribution changes to that of the minimizer of a continuous Gaussian process. In this paper we provide a bridge between these two extreme scenarios by studying the limit behavior of change-point estimates under varying degrees of model mis-specification by smooth curves, which can be viewed as local alternatives. We find that the limiting regime depends on how quickly the alternatives approach a change-point model. We unravel a family of 'intermediate' limits that can transition, at least qualitatively, to the limits in the two extreme scenarios. The theoretical results are illustrated via a set of carefully designed simulations. We also demonstrate how inference for the change-point parameter can be performed in absence of knowledge of the underlying scenario by resorting to subsampling techniques that involve estimation of the convergence rate. PMID:26681814
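    For orientation, the least squares change-point estimate whose asymptotics are studied here can be computed by brute force in the single change-point, piecewise-constant-mean case; the sketch below does that on simulated data with a genuine jump (the correctly specified case), not the mis-specified settings analysed in the paper.

```python
import numpy as np

def least_squares_changepoint(y):
    """Brute-force least squares estimate of a single change-point in the mean:
    minimise the within-segment sum of squares over all admissible split points."""
    y = np.asarray(y, dtype=float)
    best_k, best_rss = None, np.inf
    for k in range(1, len(y)):                       # split after index k-1
        left, right = y[:k], y[k:]
        rss = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k

rng = np.random.default_rng(3)
# Correctly specified case: a genuine jump in the mean at index 60
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 40)])
print(least_squares_changepoint(y))   # close to 60
```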

  13. Constraints on single-field inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pirtskhalava, David; Santoni, Luca; Trincherini, Enrico

    2016-06-28

    Many alternatives to canonical slow-roll inflation have been proposed over the years, one of the main motivations being to have a model capable of generating observable values of non-Gaussianity. In this work, we (re-)explore the physical implications of a great majority of such models within a single, effective field theory framework (including novel models with large non-Gaussianity discussed for the first time below). The constraints we apply — both theoretical and experimental — are found to be rather robust, determined to a great extent by just three parameters: the coefficients of the quadratic EFT operators (δN)² and δNδE, and the slow-roll parameter ε. This allows us to significantly limit the majority of single-field alternatives to canonical slow-roll inflation. While the existing data still leaves some room for most of the considered models, the situation would change dramatically if the current upper limit on the tensor-to-scalar ratio decreased down to r < 10⁻². Apart from inflationary models driven by plateau-like potentials, the single-field model that would have a chance of surviving this bound is the recently proposed slow-roll inflation with weakly-broken galileon symmetry. In contrast to canonical slow-roll inflation, the latter model can support r < 10⁻² even if driven by a convex potential, as well as generate observable values for the amplitude of non-Gaussianity.

  14. Suggestions for presenting the results of data analyses

    USGS Publications Warehouse

    Anderson, David R.; Link, William A.; Johnson, Douglas H.; Burnham, Kenneth P.

    2001-01-01

    We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management.

  15. Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerators boards

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2014-10-01

    The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards Exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as the Intel Many Integrated Core Architecture (MIC), offer peak theoretical performances of >1 TFlop/s for general purpose calculations in a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We will focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.

  16. Computational studies of Ras and PI3K

    NASA Technical Reports Server (NTRS)

    Ren, Lei; Cucinotta, Francis A.

    2004-01-01

    Until recently, experimental techniques in molecular cell biology have been the primary means to investigate biological risk from space radiation. However, computational modeling provides an alternative theoretical approach, which utilizes various computational tools to simulate proteins, nucleotides, and their interactions. In this study, we are focused on using molecular mechanics (MM) and molecular dynamics (MD) to study the mechanism of protein-protein binding and to estimate the binding free energy between proteins. Ras is a key element in a variety of cell processes, and its activation of phosphoinositide 3-kinase (PI3K) is important for survival of transformed cells. Different computational approaches for this particular study are presented to calculate the solvation energies and binding free energies of H-Ras and PI3K. The goal of this study is to establish computational methods to investigate the roles different proteins play in the cellular responses to space radiation, including modification of protein function through gene mutation, and to support studies in molecular cell biology and theoretical kinetics models for our risk assessment project.

  17. Dynamic Effects of Performance-Avoidance Goal Orientation on Student Achievement in Language and Mathematics.

    PubMed

    Stamovlasis, Dimitrios; Gonida, Sofia-Eleftheria N

    2018-07-01

    The present study used achievement goal theory (AGT) as a theoretical framework and examined the role of mastery and performance goals, both performance-approach and performance-avoidance, on school achievement within the nonlinear dynamical systems (NDS) perspective. A series of cusp catastrophe models were applied to students' achievement in a number of school subjects, such as mathematics and language for elementary school and algebra, geometry, ancient and modern Greek language for high school, using achievement goal orientations as control variables. The participants (N=224) were students attending fifth and eighth grade (aged 11 and 14, respectively) in public schools located in northern Greece. Cusp analysis based on the probability density function was carried out by two procedures, maximum likelihood and least squares. The results showed that performance-approach goals had no linear effect on achievement, while the cusp models implementing mastery goals as the asymmetry factor and performance-avoidance as the bifurcation factor proved superior to their linear alternatives. The results of the study based on NDS support the multiple goal perspective within AGT. Theoretical issues, educational implications and future directions are discussed.

  18. Modeling Remineralization of Desalinated Water by Micronized Calcite Dissolution.

    PubMed

    Hasson, David; Fine, Larissa; Sagiv, Abraham; Semiat, Raphael; Shemer, Hilla

    2017-11-07

    A widely used process for remineralization of desalinated water consists of dissolution of calcite particles by flow of acidified desalinated water through a bed packed with millimeter-size calcite particles. An alternative process consists of calcite dissolution by slurry flow of micron-size calcite particles with acidified desalinated water. The objective of this investigation is to provide theoretical models enabling design of remineralization by calcite slurry dissolution with carbonic and sulfuric acids. Extensive experimental results are presented displaying the effects of acid concentration, slurry feed concentration, and dissolution contact time. The experimental data are shown to be in agreement within less than 10% with theoretical predictions based on the simplifying assumption that the slurry consists of uniform particles represented by the surface mean diameter of the powder. Agreement between theory and experiment is improved by 1-8% by taking into account the powder size distribution. Apart from the practical value of this work in providing a hitherto lacking design tool for a novel technology, the paper has the merit of being among the very few publications providing experimental confirmation of the theory describing reaction kinetics in a segregated flow system.
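    A generic way to see how contact time, particle size and undersaturation interact in slurry dissolution is a shrinking-particle mass-transfer balance integrated in time; the sketch below is such a toy model with entirely hypothetical parameter values, not the calibrated carbonic/sulfuric acid models of the paper.

```python
import math

# Shrinking-particle sketch of calcite slurry dissolution (explicit Euler).
# All parameter values are hypothetical placeholders.
rho  = 2.71e6    # calcite density, g/m^3 (2.71 g/cm^3)
d0   = 5.0e-6    # surface mean particle diameter, m
m0   = 120.0     # calcite dose, g per m^3 of water
c_eq = 100.0     # hypothetical saturation concentration, g/m^3
k    = 1.0e-5    # surface mass-transfer coefficient, m/s
dt   = 0.5       # time step, s

n_particles = m0 / (rho * math.pi / 6.0 * d0**3)   # particles per m^3 of water
d, c = d0, 0.0
for step in range(int(600 / dt)):                  # 10 minutes of contact time
    if d <= 0.0 or c >= c_eq:
        break
    drive = c_eq - c                               # undersaturation driving force
    area = n_particles * math.pi * d**2            # total particle surface, m^2 per m^3
    c += k * area * drive * dt                     # mass transferred to solution
    d -= 2.0 * k * drive / rho * dt                # shrinking-sphere diameter update

print(f"dissolved CaCO3 after {(step + 1) * dt:.0f} s: {c:.1f} g/m^3 (= mg/L)")
```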

  19. Economic Analysis in the Pacific Northwest Land Resources Project: Theoretical Considerations and Preliminary Results

    NASA Technical Reports Server (NTRS)

    Morse, D. R. A.; Sahlberg, J. T.

    1977-01-01

    The Pacific Northwest Land Resources Inventory Demonstration Project is an attempt to combine a whole spectrum of heterogeneous geographic, institutional and applications elements in a synergistic approach to the evaluation of remote sensing techniques. This diversity is the prime motivating factor behind a theoretical investigation of alternative economic analysis procedures. For a multitude of reasons--simplicity, ease of understanding, financial constraints and credibility, among others--cost-effectiveness emerges as the most practical tool for conducting such evaluation determinations in the Pacific Northwest. Preliminary findings in two water resource application areas suggest, in conformity with most published studies, that Landsat-aided data collection methods enjoy substantial cost advantages over alternative techniques. The potential for sensitivity analysis based on cost/accuracy tradeoffs is considered on a theoretical plane in the absence of current accuracy figures concerning the Landsat-aided approach.

  20. Students Teach Students: Alternative Teaching in Greek Secondary Education

    ERIC Educational Resources Information Center

    Theodoropoulos, Anastasios; Antoniou, Angeliki; Lepouras, George

    2016-01-01

    The students of a Greek junior high school collaborated to prepare the teaching material of a theoretical Computer Science (CS) course and then shared their understanding with other students. This study investigates two alternative teaching methods (collaborative learning and peer tutoring) and compares the learning results to the traditional…

  1. Ratio Variables in Aggregate Data Analysis: Their Uses, Problems, and Alternatives.

    ERIC Educational Resources Information Center

    Bollen, Kenneth A.; Ward, Sally

    1979-01-01

    Three different uses of ratio variables in aggregate data analysis are discussed: (1) as measures of theoretical concepts, (2) as a means to control an extraneous factor, and (3) as a correction for heteroscedasticity. Alternatives to ratios for each of these cases are discussed and evaluated. (Author/JKS)

  2. The Importance of Proving the Null

    ERIC Educational Resources Information Center

    Gallistel, C. R.

    2009-01-01

    Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is…

  3. Corporal Punishment and Alternatives in the Schools: An Overview of Theoretical and Practical Issues.

    ERIC Educational Resources Information Center

    Hyman, Irwin A.; And Others

    1978-01-01

    Many alternatives to the use of corporal punishment in the schools exist. Because the mass of survey data collected contraindicates the use of corporal punishment, the burden of proof of its effectiveness should be assumed by those who favor its use. (Author/WI)

  4. Integrating School and Workplace Learning in Canada: Principles and Practices of Alternation Education and Training.

    ERIC Educational Resources Information Center

    Schuetze, Hans G., Ed.; Sweet, Robert, Ed.

    This volume discusses "alternation," various combinations of classroom (organized, theoretical) knowledge and workplace (practical) learning in Canada intended to adequately prepare secondary and postsecondary graduates for work in the new economy. Following an introduction, "Integrating School and Workplace Learning in Canada: An…

  5. Elastic anisotropy of layered rocks: Ultrasonic measurements of plagioclase-biotite-muscovite (sillimanite) gneiss versus texture-based theoretical predictions (effective media modeling)

    NASA Astrophysics Data System (ADS)

    Ivankina, T. I.; Zel, I. Yu.; Lokajicek, T.; Kern, H.; Lobanov, K. V.; Zharikov, A. V.

    2017-08-01

    In this paper we present experimental and theoretical studies on a highly anisotropic layered rock sample characterized by alternating layers of biotite and muscovite (retrogressed from sillimanite) and plagioclase and quartz, respectively. We applied two different experimental methods to determine seismic anisotropy at pressures up to 400 MPa: (1) measurement of P- and S-wave phase velocities on a cube in three foliation-related orthogonal directions and (2) measurement of P-wave group velocities on a sphere in 132 directions. The combination of the spatial distribution of P-wave velocities on the sphere (converted to phase velocities) with S-wave velocities of three orthogonal structural directions on the cube made it possible to calculate the bulk elastic moduli of the anisotropic rock sample. On the basis of the crystallographic preferred orientations (CPOs) of major minerals obtained by time-of-flight neutron diffraction, effective media modeling was performed using different inclusion methods and averaging procedures. A nonlinear approximation of the P-wave velocity-pressure relation was applied to estimate the mineral matrix properties and the orientation distribution of microcracks. Comparison of theoretical calculations of elastic properties of the mineral matrix with those derived from the nonlinear approximation showed discrepancies in elastic moduli and P-wave velocities of about 10%. The observed discrepancies between the effective media modeling and ultrasonic velocity data are a consequence of the inhomogeneous structure of the sample and the inability to perform the long-wave approximation. Furthermore, small differences between elastic moduli predicted by the different theoretical models, including specific fabric characteristics such as crystallographic texture, grain shape and layering, were observed. It is shown that the bulk elastic anisotropy of the sample is basically controlled by the CPO of biotite and muscovite and their volume proportions in the layers dominated by phyllosilicate minerals.

  6. Slingram EMI prospection: Are vertical orientated devices a suitable solution in archaeological and pedological prospection?

    NASA Astrophysics Data System (ADS)

    Thiesson, Julien; Rousselle, Gabrielle; Simon, François Xavier; Tabbagh, Alain

    2011-12-01

    Electromagnetic induction (EMI) is one of the geophysical techniques widely used in soil studies, the slingram devices being held horizontally over the soil surface, i.e. with the coils located at the same height above the ground surface. Our study aims at assessing the abilities of slingram devices when held vertically. 1D and 3D modelling have been carried out in order to compare the theoretical responses of vertical devices to horizontal ones. Some comparative surveys were also undertaken in archaeological contexts to confirm the reliability of the theoretical conclusions. Both approaches show that vertical slingram devices are suitable for survey and can constitute an alternative to the usual horizontal orientation. A table in Appendix A contains the calibration coefficients for transforming the values given by some commercially available devices, which would be advantageous to use in vertical orientation.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riande, E.; Guzman, J.; Roman, J.S.

    The dipole moments of poly(thiodiethylene glycol terephthalate) chains were determined as a function of temperature by means of dielectric constant measurements in dioxane. The experimental results were found to be in fair agreement with theoretical results based on a rotational isomeric state model in which the required conformational energies were obtained from previous configurational analysis of poly(ethylene terephthalate), poly(diethylene glycol terephthalate) and poly(thiodiethylene glycol). Since poly(thiodiethylene glycol terephthalate) can be considered an alternating copolymer of ethylene terephthalate and thioethylene units, its configuration-dependent properties were compared with those of poly(ethylene terephthalate) and poly(ethylene sulfide). It was found that the flexibility of the copolymer, as expressed by the partition function, is intermediate to that of its parent homopolymers. The theoretical results also indicate that the dimensions of poly(thiodiethylene glycol) are similar to those of poly(ethylene terephthalate) while its dipole moment ratio resembles that of poly(ethylene sulfide).

  8. From the big bang to the brain.

    PubMed

    Boliek, C A; Lohmeier, H

    1999-01-01

    Current research on the capacities of the infant has led to a better understanding of developmental processes underlying cognition and motor skill acquisition. ASHA's Eighth Annual Research Symposium on Infant-Toddler Development, in November 1998, included a presentation on developmental cognitive science by Dr. Andrew Meltzoff and a presentation on motor skill acquisition by Dr. Esther Thelen. The theoretical constructs and data presented served to broaden our current perspectives on infant abilities. The data reported by Meltzoff and Thelen challenged several long-standing theories of infant cognition and motor development. Alternative theoretical models were used to describe skill acquisition during the first several years of life. Our response includes a brief summary of each investigator's presentation, discusses their findings with respect to research in the area of infant speech physiology and production, and provides possible future directions and challenges for individuals conducting developmental research.

  9. Cosmological moduli and the post-inflationary universe: A critical review

    NASA Astrophysics Data System (ADS)

    Kane, Gordon; Sinha, Kuver; Watson, Scott

    2015-06-01

    We critically review the role of cosmological moduli in determining the post-inflationary history of the universe. Moduli are ubiquitous in string and M-theory constructions of beyond the Standard Model physics, where they parametrize the geometry of the compactification manifold. For those with masses determined by supersymmetry (SUSY) breaking this leads to their eventual decay slightly before Big Bang nucleosynthesis (BBN) (without spoiling its predictions). This results in a matter dominated phase shortly after inflation ends, which can influence baryon and dark matter genesis, as well as observations of the cosmic microwave background (CMB) and the growth of large-scale structure. Given progress within fundamental theory, and guidance from dark matter and collider experiments, nonthermal histories have emerged as a robust and theoretically well-motivated alternative to a strictly thermal one. We review this approach to the early universe and discuss both the theoretical challenges and the observational implications.

  10. Positron-Electron Annihilation Process in (2,2)-Difluoropropane Molecule

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Ma, Xiao-Guang; Zhu, Ying-Hao

    2016-04-01

    The positron-electron annihilation process in the (2,2)-difluoropropane molecule and the corresponding gamma-ray spectra are studied by a quantum chemistry method. The positrophilic electrons in the (2,2)-difluoropropane molecule are found for the first time. The theoretical predictions show that the outermost 2s electrons of the fluorine atoms play an important role in the positron-electron annihilation process of (2,2)-difluoropropane. In the present scheme, the correlation coefficient between the theoretical gamma-ray spectra and the experiments can be 99%. The present study gives an alternative annihilation model for the positron-electron pair in larger molecules. Supported by the National Natural Science Foundation of China under Grant No. 11347011, the Natural Science Foundation Project of Shandong Province under Grant No. ZR2011AM010, and the 2014 Technology Innovation Fund of Ludong University under Grant Nos. 1d151007 and ld15l016.

  11. Integrated Media: Toward a Theoretical Framework for Utilizing Their Potential.

    ERIC Educational Resources Information Center

    Journal of Special Education Technology, 1993

    1993-01-01

    This article discusses how current theories of learning and memory can guide the application of integrated media (IM) to embellish a standard curriculum; considers theoretical reasons for "breaking the mold"; and offers examples of IM-based alternatives to curricula in the areas of adult literacy, language arts, social studies, language skills,…

  12. Optimization Techniques for Analysis of Biological and Social Networks

    DTIC Science & Technology

    2012-03-28

    analyzing a new metaheuristic technique, variable objective search. 3. Experimentation and application: Implement the proposed algorithms, test and fine...alternative mathematical programming formulations, their theoretical analysis, the development of exact algorithms, and heuristics. Originally, clusters...systematic fashion under a unifying theoretical and algorithmic framework. Optimization, Complex Networks, Social Network Analysis, Computational

  13. Content Based Image Retrieval and Information Theory: A General Approach.

    ERIC Educational Resources Information Center

    Zachary, John; Iyengar, S. S.; Barhen, Jacob

    2001-01-01

    Proposes an alternative real valued representation of color based on the information theoretic concept of entropy. A theoretical presentation of image entropy is accompanied by a practical description of the merits and limitations of image entropy compared to color histograms. Results suggest that image entropy is a promising approach to image…
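    The underlying quantity is simply the Shannon entropy of an intensity histogram; the sketch below computes it for one hypothetical 8-bit channel (for colour images the same calculation would be repeated or extended per channel).

```python
import numpy as np

def image_entropy(channel, bins=256):
    """Shannon entropy (bits) of a single image channel, estimated from its
    normalised intensity histogram; a minimal sketch of the idea in the abstract."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                     # ignore empty bins (0*log0 taken as 0)
    return float(-(p * np.log2(p)).sum())

# Hypothetical 8-bit single-channel image
rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(64, 64))
print(image_entropy(img))            # close to 8 bits for a near-uniform histogram
```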

  14. Memory in Aristotle and Some Neo-Aristotelians.

    ERIC Educational Resources Information Center

    Dulin, John T.

    The purpose of this paper is to present a theoretical tradition which may broaden the scope and perhaps suggest alternate avenues of investigation of the function which we call "memory." As psychology developed during the past century, the area of memory has been strongly influenced on the theoretical level by the thinking of the British…

  15. A critical review on liquid-gas mass transfer models for estimating gaseous emissions from passive liquid surfaces in wastewater treatment plants.

    PubMed

    Prata, Ademir A; Santos, Jane M; Timchenko, Victoria; Stuetz, Richard M

    2018-03-01

    Emission models are useful tools for the study and management of atmospheric emissions from passive liquid surfaces in wastewater treatment plants (WWTPs), which are potential sources of odour nuisance and other environmental impacts. In this work, different theoretical and empirical models for the gas-side (k_G) and liquid-side (k_L) mass transfer coefficients in passive surfaces in WWTPs were critically reviewed and evaluated against experimental data. Wind forcing and the development of the wind-wave field, especially the occurrence of microscale wave breaking, were identified as the most important physical factors affecting mass transfer in these situations. Two approaches performed well in describing the available data for k_G for water vapour. One is an empirical correlation whilst the other consists of theoretical models based on the description of the inner part of the turbulent boundary layer over a smooth flat plate. We also fit to the experimental data set a new, alternate equation for k_G, whose performance was comparable to existing ones. However, these three approaches do not agree with each other in the whole range of Schmidt numbers typical for compounds found in emissions from WWTPs. As to k_L, no model was able to satisfactorily explain the behaviour and the scatter observed in the whole experimental data set. Excluding two suspected biased sources, the WATER9 (US EPA, 1994. Air Emission Models for Waste and Wastewater. North Carolina, USA. EPA-453/R-94-080A) approach produced the best results among the most commonly used k_L models, although still with considerably high relative errors. For this same sub-set, we propose a new, alternate approach for estimating k_L, which resulted in improved performance, particularly for longer fetches. Two main gaps were found in the literature: the understanding of the evolution of the mass transfer boundary layer over liquid surfaces, and the behaviour of k_L for larger fetches, especially in the range from 40 to 60 m. Copyright © 2017 Elsevier Ltd. All rights reserved.
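    Although the review discusses the individual coefficients, in emission estimates they are typically combined through the standard two-film resistance-in-series expression; the sketch below evaluates that combination with illustrative values of k_L, k_G and the dimensionless Henry's law constant, none of which are taken from the data set discussed.

```python
def overall_liquid_mtc(k_l, k_g, henry_dimensionless):
    """Overall liquid-phase mass transfer coefficient K_L from the two-film
    resistance-in-series combination: 1/K_L = 1/k_L + 1/(H * k_G),
    with H the dimensionless (gas/liquid) Henry's law constant."""
    return 1.0 / (1.0 / k_l + 1.0 / (henry_dimensionless * k_g))

# Illustrative values only (m/s and dimensionless); not from the reviewed data set.
k_l, k_g, henry = 5.0e-6, 8.0e-3, 2.0e-3      # a fairly soluble odorant
K_L = overall_liquid_mtc(k_l, k_g, henry)
c_liquid = 0.5                                 # bulk liquid concentration, g/m^3
flux = K_L * c_liquid                          # g m^-2 s^-1, neglecting the gas-phase background
print(K_L, flux)
```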

  16. Mean-field and linear regime approach to magnetic hyperthermia of core-shell nanoparticles: can tiny nanostructures fight cancer?

    NASA Astrophysics Data System (ADS)

    Carrião, Marcus S.; Bakuzis, Andris F.

    2016-04-01

    The phenomenon of heat dissipation by magnetic materials interacting with an alternating magnetic field, known as magnetic hyperthermia, is an emergent and promising therapy for many diseases, mainly cancer. Here, a magnetic hyperthermia model for core-shell nanoparticles is developed. The theoretical calculation, different from previous models, highlights the importance of heterogeneity by identifying the role of surface and core spins on nanoparticle heat generation. We found that the most efficient nanoparticles should be obtained by selecting materials to reduce the surface to core damping factor ratio, increasing the interface exchange parameter and tuning the surface to core anisotropy ratio for each material combination. From our results we propose a novel heat-based hyperthermia strategy with the focus on improving the heating efficiency of small sized nanoparticles instead of larger ones. This approach might have important implications for cancer treatment and could help improving clinical efficacy.

  17. Linear theory of plasma Čerenkov masers

    NASA Astrophysics Data System (ADS)

    Birau, M.

    1996-11-01

    A different theoretical model of Čerenkov instability in the linear amplification regime of plasma Čerenkov masers is developed. The model assumes a cold relativistic annular electron beam propagating through a column of cold dense plasma, the two bodies being immersed in an infinite magnetic guiding field inside a perfect cylindrical waveguide. In order to simplify the calculations, a radial rectangular distribution of plasma and beam density is assumed and only azimuthally symmetric modes are under investigation. The main difference of this model is that it takes into account the whole plasma and beam electromagnetic structures in the interpretation of the Čerenkov instability. This model leads to alternative results, such as the possibility of emission at several frequencies. In addition, the electric field is calculated taking into account its radial phase dependence, so that a map of the field in the interaction region can be presented.

  18. [Reason and emotion: integration of cognitive-behavioural and experiential interventions in the treatment of long-standing eating disorders].

    PubMed

    Vilariño Besteiro, M P; Pérez Franco, C; Gallego Morales, L; Calvo Sagardoy, R; García de Lorenzo, A

    2009-01-01

    This paper aims to show how therapeutic strategies can be combined in the treatment of long-standing eating disorders. This approach, entitled "Modelo Santa Cristina", is based on several theoretical paradigms: the Enabling Model, the Action Control Model, the Transtheoretical Model of change processes and the Cognitive-Behavioural Model (cognitive restructuring and learning theories), complemented by techniques from the Gestalt, systemic and psychodrama orientations. The purpose of the treatment is both the normalization of eating patterns and an increase in patients' self-knowledge, self-acceptance and self-efficacy. The main areas of intervention include the exploration of ambivalence towards change, the discovery of the functions of symptoms and the search for alternative behaviours, the normalization of eating patterns, body image, cognitive restructuring, decision making, communication skills and the elaboration of traumatic experiences.

  19. Fast solver for large scale eddy current non-destructive evaluation problems

    NASA Astrophysics Data System (ADS)

    Lei, Naiguang

    Eddy current testing plays a very important role in non-destructive evaluations of conducting test samples. Based on Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents generate induced magnetic fields that oppose the direction of the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material property or defects in the test specimen, the induced eddy current paths are perturbed and the associated magnetic fields can be detected by coils or magnetic field sensors, such as Hall elements or magneto-resistance sensors. Due to the complexity of the test specimen and the inspection environments, the availability of theoretical simulation models is extremely valuable for studying the basic field/flaw interactions in order to obtain a fuller understanding of non-destructive testing phenomena. Theoretical models of the forward problem are also useful for training and validation of automated defect detection systems, since they can generate defect signatures that are expensive to replicate experimentally. In general, modelling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed-form solutions, such solutions are generally not obtainable, largely due to the complex sample and defect geometries, especially in three-dimensional space. Numerical modelling has become popular with advances in computer technology and computational methods. However, due to the huge time consumption in the case of large scale problems, accelerations/fast solvers are needed to enhance numerical models. This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. Validation of the accuracy of this model is demonstrated via comparison with experimental measurements of steam generator tube wall defects. These simulations, which generate two-dimensional raster-scan data, typically take one to two days on a dedicated eight-core PC. A novel direct integral solver for eddy current problems and a GPU-based implementation are also investigated in this research to reduce the computational time.
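
    For readers outside the field, a quantity worth estimating before building such finite element models is the electromagnetic skin depth, which sets how finely the conductor must be meshed near the surface. The short sketch below computes the textbook skin depth; it is an illustrative aside, not part of the dissertation's solver, and the material and frequency values are assumed placeholders.

        # Illustrative aside (not from the dissertation): textbook skin depth
        # delta = sqrt(2 / (omega * mu * sigma)) for a conducting test piece.
        import math

        MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

        def skin_depth(freq_hz, conductivity_s_per_m, relative_permeability=1.0):
            """Electromagnetic skin depth in metres."""
            omega = 2.0 * math.pi * freq_hz
            mu = MU0 * relative_permeability
            return math.sqrt(2.0 / (omega * mu * conductivity_s_per_m))

        # Assumed example: a non-magnetic alloy tube at a typical test frequency.
        print(f"skin depth ~ {skin_depth(100e3, 1.0e6) * 1e3:.2f} mm")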

  20. Life-course pathways to psychological distress: a cohort study

    PubMed Central

    von Stumm, Sophie; Deary, Ian J; Hagger-Johnson, Gareth

    2013-01-01

    Objectives Early life factors, like intelligence and socioeconomic status (SES), are associated with health outcomes in adulthood. Fitting comprehensive life-course models, we tested (1) the effect of childhood intelligence and SES, education and adulthood SES on psychological distress at midlife, and (2) compared alternative measurement specifications (reflective and formative) of SES. Design Prospective cohort study (the Aberdeen Children of the 1950s). Setting Aberdeen, Scotland. Participants 12 500 live-births (6282 boys) between 1950 and 1956, who were followed up in the years 2001–2003 at age 46–51 with a postal questionnaire achieving a response rate of 64% (7183). Outcome measures Psychological distress at age 46–51 (questionnaire). Results Childhood intelligence and SES and education had indirect effects on psychological distress at midlife, mediated by adult SES. Adult SES was the only variable to have a significant direct effect on psychological distress at midlife; the effect was stronger in men than in women. Alternative measurement specifications of SES (reflective and formative) resulted in greatly different model parameters and fits. Conclusions Even though formative operationalisations of SES are theoretically appropriate, SES is better specified as reflective than as a formative latent variable in the context of life-course modelling. PMID:23667162

  1. Life-course pathways to psychological distress: a cohort study.

    PubMed

    von Stumm, Sophie; Deary, Ian J; Hagger-Johnson, Gareth

    2013-05-09

    Early life factors, like intelligence and socioeconomic status (SES), are associated with health outcomes in adulthood. Fitting comprehensive life-course models, we tested (1) the effect of childhood intelligence and SES, education and adulthood SES on psychological distress at midlife, and (2) compared alternative measurement specifications (reflective and formative) of SES. Prospective cohort study (the Aberdeen Children of the 1950s). Aberdeen, Scotland. 12 500 live-births (6282 boys) between 1950 and 1956, who were followed up in the years 2001-2003 at age 46-51 with a postal questionnaire achieving a response rate of 64% (7183). Psychological distress at age 46-51 (questionnaire). Childhood intelligence and SES and education had indirect effects on psychological distress at midlife, mediated by adult SES. Adult SES was the only variable to have a significant direct effect on psychological distress at midlife; the effect was stronger in men than in women. Alternative measurement specifications of SES (reflective and formative) resulted in greatly different model parameters and fits. Even though formative operationalisations of SES are theoretically appropriate, SES is better specified as reflective than as a formative latent variable in the context of life-course modelling.

  2. Series Bosch System Development

    NASA Technical Reports Server (NTRS)

    Abney, Morgan B.; Evans, Christopher; Mansell, Matt; Swickrath, Michael

    2012-01-01

    State-of-the-art (SOA) carbon dioxide (CO2) reduction technology for the International Space Station produces methane as a byproduct. This methane is subsequently vented overboard. The associated loss of hydrogen ultimately reduces the mass of oxygen that can be recovered from CO2 in a closed-loop life support system. As an alternative to SOA CO2 reduction technology, NASA is exploring a Series-Bosch system capable of reducing CO2 with hydrogen to form water and solid carbon. This results in 100% theoretical recovery of oxygen from metabolic CO2. In the past, Bosch-based technology did not trade favorably against SOA technology due to a high power demand, low reaction efficiencies, concerns with carbon containment, and large resupply requirements necessary to replace expended catalyst cartridges. An alternative approach to Bosch technology, labeled "Series-Bosch," employs a new system design with optimized multi-stage reactors and a membrane-based separation and recycle capability. Multi-physics modeling of the first stage reactor, along with chemical process modeling of the integrated system, has resulted in a design with potential to trade significantly better than previous Bosch technology. The modeling process and resulting system architecture selection are discussed.
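
    For context, the overall Bosch reaction is CO2 + 2H2 -> C + 2H2O, and electrolysing the product water returns the hydrogen and releases oxygen, which is the basis of the 100% theoretical oxygen recovery mentioned above. The sketch below works out the ideal mass balance per kilogram of CO2; it is a back-of-the-envelope illustration assuming complete conversion, not a model of the Series-Bosch hardware.

        # Ideal Bosch mass balance: CO2 + 2 H2 -> C + 2 H2O, with the water then
        # electrolysed (2 H2O -> 2 H2 + O2) so hydrogen recycles and O2 is recovered.
        M_CO2, M_O2, M_H2O, M_C = 44.01, 32.00, 18.02, 12.01  # g/mol

        def bosch_ideal(kg_co2):
            mol_co2 = kg_co2 * 1000.0 / M_CO2
            water_kg = 2.0 * mol_co2 * M_H2O / 1000.0   # 2 mol H2O per mol CO2
            carbon_kg = mol_co2 * M_C / 1000.0          # solid carbon product
            oxygen_kg = mol_co2 * M_O2 / 1000.0         # both O atoms recovered as O2
            return water_kg, carbon_kg, oxygen_kg

        water, carbon, oxygen = bosch_ideal(1.0)
        print(f"per kg CO2: {water:.2f} kg H2O, {carbon:.2f} kg C, {oxygen:.2f} kg O2")
        # roughly 0.82 kg H2O, 0.27 kg C and 0.73 kg O2 per kg CO2 at full conversion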

  3. Docking Simulation of the Binding Interactions of Saxitoxin Analogs Produced by the Marine Dinoflagellate Gymnodinium catenatum to the Voltage-Gated Sodium Channel Nav1.4.

    PubMed

    Durán-Riveroll, Lorena M; Cembella, Allan D; Band-Schmidt, Christine J; Bustillos-Guzmán, José J; Correa-Basurto, José

    2016-05-06

    Saxitoxin (STX) and its analogs are paralytic alkaloid neurotoxins that block the voltage-gated sodium channel pore (Nav), impeding passage of Na⁺ ions into the intracellular space, and thereby preventing the action potential in the peripheral nervous system and skeletal muscle. The marine dinoflagellate Gymnodinium catenatum produces an array of such toxins, including the recently discovered benzoyl analogs, for which the mammalian toxicities are essentially unknown. We subjected STX and its analogs to a theoretical docking simulation based upon two alternative tri-dimensional models of the Nav1.4 to find a relationship between the binding properties and the known mammalian toxicity of selected STX analogs. We inferred hypothetical toxicities for the benzoyl analogs from the modeled values. We demonstrate that these toxins exhibit different binding modes with similar free binding energies and that these alternative binding modes are equally probable. We propose that the principal binding that governs ligand recognition is mediated by electrostatic interactions. Our simulation constitutes the first in silico modeling study on benzoyl-type paralytic toxins and provides an approach towards a better understanding of the mode of action of STX and its analogs.

  4. Docking Simulation of the Binding Interactions of Saxitoxin Analogs Produced by the Marine Dinoflagellate Gymnodinium catenatum to the Voltage-Gated Sodium Channel Nav1.4

    PubMed Central

    Durán-Riveroll, Lorena M.; Cembella, Allan D.; Band-Schmidt, Christine J.; Bustillos-Guzmán, José J.; Correa-Basurto, José

    2016-01-01

    Saxitoxin (STX) and its analogs are paralytic alkaloid neurotoxins that block the voltage-gated sodium channel pore (Nav), impeding passage of Na+ ions into the intracellular space, and thereby preventing the action potential in the peripheral nervous system and skeletal muscle. The marine dinoflagellate Gymnodinium catenatum produces an array of such toxins, including the recently discovered benzoyl analogs, for which the mammalian toxicities are essentially unknown. We subjected STX and its analogs to a theoretical docking simulation based upon two alternative tri-dimensional models of the Nav1.4 to find a relationship between the binding properties and the known mammalian toxicity of selected STX analogs. We inferred hypothetical toxicities for the benzoyl analogs from the modeled values. We demonstrate that these toxins exhibit different binding modes with similar free binding energies and that these alternative binding modes are equally probable. We propose that the principal binding that governs ligand recognition is mediated by electrostatic interactions. Our simulation constitutes the first in silico modeling study on benzoyl-type paralytic toxins and provides an approach towards a better understanding of the mode of action of STX and its analogs. PMID:27164145

  5. Psychometric properties of the Spanish Burnout Inventory among staff nurses.

    PubMed

    Gil-Monte, P R; Manzano-García, G

    2015-12-01

    The burnout syndrome contributes to the deterioration in the quality of personal life as well as lower quality practice in healthcare personnel. Researchers have been concerned about the psychometric limitations of some previous questionnaires designed to evaluate burnout. The Spanish Burnout Inventory was developed to address the problems associated with other instruments, but it has not yet been validated in staff nurses. This study provides evidence that the Spanish Burnout Inventory has adequate psychometric properties to estimate burnout in staff nurses. The Spanish Burnout Inventory offers a theoretical proposal to explain the different components of burnout. The Spanish Burnout Inventory provides researchers and practitioners with an expanded conceptualization of the burnout syndrome, which can facilitate the diagnosis and treatment of nursing professionals. Researchers have been concerned about the psychometric limitations of some previous questionnaires designed to evaluate burnout. To address these problems associated with previous instruments, the Spanish Burnout Inventory (SBI) was developed. The instrument has not yet been validated in staff nurses. The purpose of this paper was to evaluate the psychometric properties of the SBI. The sample consisted of 720 staff nurses from two Spanish general hospitals. The instrument is composed of 20 items distributed in four dimensions: Enthusiasm towards the job (five items), Psychological exhaustion (four items), Indolence (six items) and Guilt (five items). Data were subjected to confirmatory factor analysis. To assess the factorial validity of the SBI, four alternative models were tested. Results show that the four-factor model of the SBI has adequate psychometric properties for the study of burnout in staff nurses. This model fitted the data better than the alternative models. The study provides evidence of the adequate psychometric properties of a measure to evaluate burnout in nursing professionals. The SBI proposes a theoretical explanation for the different types of burnout, facilitating the diagnosis and treatment of staff nurses. © 2015 John Wiley & Sons Ltd.

  6. An equilibrium-point model for fast, single-joint movement: I. Emergence of strategy-dependent EMG patterns.

    PubMed

    Latash, M L; Gottlieb, G L

    1991-09-01

    We describe a model for the regulation of fast, single-joint movements, based on the equilibrium-point hypothesis. Limb movement follows constant rate shifts of independently regulated neuromuscular variables. The independently regulated variables are tentatively identified as thresholds of a length sensitive reflex for each of the participating muscles. We use the model to predict EMG patterns associated with changes in the conditions of movement execution, specifically, changes in movement times, velocities, amplitudes, and moments of limb inertia. The approach provides a theoretical neural framework for the dual-strategy hypothesis, which considers certain movements to be results of one of two basic, speed-sensitive or speed-insensitive strategies. This model is advanced as an alternative to pattern-imposing models based on explicit regulation of timing and amplitudes of signals that are explicitly manifest in the EMG patterns.

  7. Indicators of ecosystem function identify alternate states in the sagebrush steppe.

    PubMed

    Kachergis, Emily; Rocca, Monique E; Fernandez-Gimenez, Maria E

    2011-10-01

    Models of ecosystem change that incorporate nonlinear dynamics and thresholds, such as state-and-transition models (STMs), are increasingly popular tools for land management decision-making. However, few models are based on systematic collection and documentation of ecological data, and of these, most rely solely on structural indicators (species composition) to identify states and transitions. As STMs are adopted as an assessment framework throughout the United States, finding effective and efficient ways to create data-driven models that integrate ecosystem function and structure is vital. This study aims to (1) evaluate the utility of functional indicators (indicators of rangeland health, IRH) as proxies for more difficult ecosystem function measurements and (2) create a data-driven STM for the sagebrush steppe of Colorado, USA, that incorporates both ecosystem structure and function. We sampled soils, plant communities, and IRH at 41 plots with similar clayey soils but different site histories to identify potential states and infer the effects of management practices and disturbances on transitions. We found that many IRH were correlated with quantitative measures of functional indicators, suggesting that the IRH can be used to approximate ecosystem function. In addition to a reference state that functions as expected for this soil type, we identified four biotically and functionally distinct potential states, consistent with the theoretical concept of alternate states. Three potential states were related to management practices (chemical and mechanical shrub treatments and seeding history) while one was related only to ecosystem processes (erosion). IRH and potential states were also related to environmental variation (slope, soil texture), suggesting that there are environmental factors within areas with similar soils that affect ecosystem dynamics and should be noted within STMs. Our approach generated an objective, data-driven model of ecosystem dynamics for rangeland management. Our findings suggest that the IRH approximate ecosystem processes and can distinguish between alternate states and communities and identify transitions when building data-driven STMs. Functional indicators are a simple, efficient way to create data-driven models that are consistent with alternate state theory. Managers can use them to improve current model-building methods and thus apply state-and-transition models more broadly for land management decision-making.

  8. Turbulent particle transport in streams: can exponential settling be reconciled with fluid mechanics?

    PubMed

    McNair, James N; Newbold, J Denis

    2012-05-07

    Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.
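
    The contrast drawn above can be illustrated numerically. The sketch below treats suspended particles as a vertical random walk with a constant settling velocity and an absorbing bed, a crude stand-in for the advection-diffusion picture behind the LEM, and fits a single exponential to the late-time (far-field) decay of the suspended fraction. All parameter values and the discretization are assumptions for illustration, not the authors' model.

        # Crude illustration: settling random walk with an absorbing bed, plus a
        # single-exponential (ESM-like) fit restricted to the far field.
        import numpy as np

        rng = np.random.default_rng(0)
        n, depth = 20000, 1.0            # particles, flow depth (arbitrary units)
        dt, v_settle, diff = 0.01, 0.05, 0.02
        z = np.full(n, 0.8 * depth)      # release elevation above the bed
        alive = np.ones(n, dtype=bool)
        times, frac = [], []

        for step in range(4000):
            kick = np.sqrt(2.0 * diff * dt) * rng.standard_normal(alive.sum())
            z[alive] += -v_settle * dt + kick
            z[alive] = np.minimum(z[alive], depth)   # clamp at the water surface
            alive &= z > 0.0                         # absorb (settle) at the bed
            times.append((step + 1) * dt)
            frac.append(alive.mean())

        times, frac = np.array(times), np.array(frac)
        # Fit an exponential only at late times; near-field departures from this
        # straight line (in log space) are the point of the ESM/LEM comparison.
        window = (times > 5.0) & (frac > 0.02)
        k = -np.polyfit(times[window], np.log(frac[window]), 1)[0]
        print(f"far-field settling rate ~ {k:.3f} per unit time")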

  9. Towards a Universal Model of Reading

    PubMed Central

    Frost, Ram

    2013-01-01

    In the last decade, reading research has seen a paradigmatic shift. A new wave of computational models of orthographic processing that offer various forms of noisy position or context-sensitive coding have revolutionized the field of visual word recognition. The influx of such models stems mainly from consistent findings, coming mostly from European languages, regarding an apparent insensitivity of skilled readers to letter-order. Underlying the current revolution is the theoretical assumption that the insensitivity of readers to letter order reflects the special way in which the human brain encodes the position of letters in printed words. The present paper discusses the theoretical shortcomings and misconceptions of this approach to visual word recognition. A systematic review of data obtained from a variety of languages demonstrates that letter-order insensitivity is neither a general property of the cognitive system nor a property of the brain in encoding letters. Rather, it is a variant and idiosyncratic characteristic of some languages, mostly European, reflecting a strategy of optimizing encoding resources, given the specific structure of words. Since the main goal of reading research is to develop theories that describe the fundamental and invariant phenomena of reading across orthographies, an alternative approach to model visual word recognition is offered. The dimensions of a possible universal model of reading, which outlines the common cognitive operations involved in orthographic processing in all writing systems, are discussed. PMID:22929057

  10. Towards a universal model of reading.

    PubMed

    Frost, Ram

    2012-10-01

    In the last decade, reading research has seen a paradigmatic shift. A new wave of computational models of orthographic processing that offer various forms of noisy position or context-sensitive coding have revolutionized the field of visual word recognition. The influx of such models stems mainly from consistent findings, coming mostly from European languages, regarding an apparent insensitivity of skilled readers to letter order. Underlying the current revolution is the theoretical assumption that the insensitivity of readers to letter order reflects the special way in which the human brain encodes the position of letters in printed words. The present article discusses the theoretical shortcomings and misconceptions of this approach to visual word recognition. A systematic review of data obtained from a variety of languages demonstrates that letter-order insensitivity is neither a general property of the cognitive system nor a property of the brain in encoding letters. Rather, it is a variant and idiosyncratic characteristic of some languages, mostly European, reflecting a strategy of optimizing encoding resources, given the specific structure of words. Since the main goal of reading research is to develop theories that describe the fundamental and invariant phenomena of reading across orthographies, an alternative approach to model visual word recognition is offered. The dimensions of a possible universal model of reading, which outlines the common cognitive operations involved in orthographic processing in all writing systems, are discussed.

  11. What lies behind crop decisions? Coming to terms with revealing farmers' preferences

    NASA Astrophysics Data System (ADS)

    Gomez, C.; Gutierrez, C.; Pulido-Velazquez, M.; López Nicolás, A.

    2016-12-01

    The paper offers a fully fledged applied revealed-preference methodology to screen and represent farmers' choices as the solution of an optimal program involving trade-offs among the alternative welfare outcomes of crop decisions, such as profits, income security and ease of management. The recursive two-stage method is proposed as an alternative to cope with the methodological problems inherent in common-practice positive mathematical programming (PMP) methodologies. Unlike in PMP, in the model proposed in this paper the non-linear costs required for both calibration and smooth adjustment are not at odds with the assumptions of linear Leontief technologies and fixed crop prices and input costs. The method frees the model from ad hoc assumptions about costs, recovering the potential of economic analysis as a means to understand the rationale behind observed and forecasted farmers' decisions and to support policy making in relevant domains such as agricultural policy, water management, risk management and climate change adaptation. After the introduction, where the methodological drawbacks and challenges are set out, section two presents the theoretical model, section three develops the empirical application and its implementation in a Spanish irrigation district, and section four concludes and makes suggestions for further research.

  12. Critical Race Theory and Research on Asian Americans and Pacific Islanders in Higher Education

    ERIC Educational Resources Information Center

    Teranishi, Robert T.; Behringer, Laurie B.; Grey, Emily A.; Parker, Tara L.

    2009-01-01

    In this article, the authors offer critical race theory (CRT) as an alternative theoretical perspective that permits the examination and transcendence of conceptual blockages, while simultaneously offering alternative perspectives on higher education policy and practice and the Asian American and Pacific Islander (AAPI) student population. The…

  13. A Critique of Schema Theory in Reading and a Dual Coding Alternative (Commentary).

    ERIC Educational Resources Information Center

    Sadoski, Mark; And Others

    1991-01-01

    Evaluates schema theory and presents dual coding theory as a theoretical alternative. Argues that schema theory is encumbered by lack of a consistent definition, its roots in idealist epistemology, and mixed empirical support. Argues that results of many empirical studies used to demonstrate the existence of schemata are more consistently…

  14. The Problems of "Competence" and Alternatives from the Scandinavian Perspective of "Bildung"

    ERIC Educational Resources Information Center

    Willbergh, Ilmi

    2015-01-01

    The paper aims to show how competence as an educational concept for the 21st century is struggling with theoretical problems for which the concept of "Bildung" in the European tradition can offer alternatives, and to discuss the possibility of developing a sustainable educational concept from the perspectives of competence and…

  15. Making sense, making good, or making meaning? Cognitive distortions as targets of change in offender treatment.

    PubMed

    Friestad, Christine

    2012-05-01

    Most structured sex-offender programs are based on a cognitive-behavioural model of behaviour change. Within this overarching theoretical paradigm, extensive use of cognitive distortions is seen as a central core symptom among sex offenders. However, the literature on cognitive distortions lacks a clear and consistent definition of the term. It is unclear whether cognitive distortions are consciously employed excuses or unconscious processes serving to protect the offender from feelings of guilt or shame. In this article, the dominant cognitive-behavioural interpretation of cognitive distortions is contrasted with two alternative interpretations. One is based on an attributional perspective and the notion of attributional biases. The other explanation is based on a narrative approach focusing on the action elements of cognitive distortions, that is, as something people do rather than something they have. Clinical implications of these alternative conceptualizations are discussed and illustrated throughout by a case example.

  16. Alternative route to charge density wave formation in multiband systems

    PubMed Central

    Eiter, Hans-Martin; Lavagnini, Michela; Hackl, Rudi; Nowadnick, Elizabeth A.; Kemper, Alexander F.; Devereaux, Thomas P.; Chu, Jiun-Haw; Analytis, James G.; Fisher, Ian R.; Degiorgi, Leonardo

    2013-01-01

    Charge and spin density waves, periodic modulations of the electron and magnetization densities, respectively, are among the most abundant and nontrivial low-temperature ordered phases in condensed matter. The ordering direction is widely believed to result from the Fermi surface topology. However, several recent studies indicate that this common view needs to be supplemented. Here, we show how an enhanced electron–lattice interaction can contribute to or even determine the selection of the ordering vector in the model charge density wave system ErTe3. Our joint experimental and theoretical study allows us to establish a relation between the selection rules of the electronic light scattering spectra and the enhanced electron–phonon coupling in the vicinity of band degeneracy points. This alternative proposal for charge density wave formation may be of general relevance for driving phase transitions into other broken-symmetry ground states, particularly in multiband systems, such as the iron-based superconductors. PMID:23248317

  17. Alternative route to charge density wave formation in multiband systems.

    PubMed

    Eiter, Hans-Martin; Lavagnini, Michela; Hackl, Rudi; Nowadnick, Elizabeth A; Kemper, Alexander F; Devereaux, Thomas P; Chu, Jiun-Haw; Analytis, James G; Fisher, Ian R; Degiorgi, Leonardo

    2013-01-02

    Charge and spin density waves, periodic modulations of the electron and magnetization densities, respectively, are among the most abundant and nontrivial low-temperature ordered phases in condensed matter. The ordering direction is widely believed to result from the Fermi surface topology. However, several recent studies indicate that this common view needs to be supplemented. Here, we show how an enhanced electron-lattice interaction can contribute to or even determine the selection of the ordering vector in the model charge density wave system ErTe(3). Our joint experimental and theoretical study allows us to establish a relation between the selection rules of the electronic light scattering spectra and the enhanced electron-phonon coupling in the vicinity of band degeneracy points. This alternative proposal for charge density wave formation may be of general relevance for driving phase transitions into other broken-symmetry ground states, particularly in multiband systems, such as the iron-based superconductors.

  18. An Exploration of E-Learning Benefits for Saudi Arabia: Toward Policy Reform

    ERIC Educational Resources Information Center

    Alrashidi, Abdulaziz

    2013-01-01

    Purpose: The purpose of this study was to examine policies and solutions addressing (a) improving education for citizens of the Kingdom of Saudi Arabia and (b) providing alternative instructional delivery methods, including e-learning for those living in remote areas. Theoretical Framework: The theoretical framework of this study was based on the…

  19. Experimental and theoretical investigations concerning a frequency filter behavior of the human retina regarding electric pulse currents. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Meier-Koll, A.

    1979-01-01

    Investigations involving patients with injuries to the visual nervous system are discussed. These led to the identification of the epithelial ganglion of the retina as a frequency filter. Threshold curves of the injured visual organs were compared with threshold curves obtained from a control group as a basis for identification. A model which considers the epithelial ganglion as a homogeneous cell layer in which adjacent neurons interact is discussed. It is shown that the behavior of the cells in response to alternating exciting currents can be explained by this model.

  20. Internal friction in enzyme reactions.

    PubMed

    Rauscher, Anna; Derényi, Imre; Gráf, László; Málnási-Csizmadia, András

    2013-01-01

    The empirical concept of internal friction was introduced 20 years ago. This review summarizes the results of experimental and theoretical studies that help to uncover the nature of internal friction. After the history of the concept, we describe the experimental challenges in measuring and interpreting internal friction based on the viscosity dependence of enzyme reactions. We also present speculations about the structural background of this viscosity dependence. Finally, some models about the relationship between the energy landscape and internal friction are outlined. Alternative concepts regarding the viscosity dependence of enzyme reactions are also discussed. Copyright © 2012 International Union of Biochemistry and Molecular Biology, Inc.
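
    A common way to quantify internal friction from such data is to fit observed rates to a Kramers-like relation in which an internal friction term adds to the solvent viscosity, k(eta) = C / (eta + sigma). The sketch below fits that form with scipy; the functional form is the empirical one commonly used in this literature, and the data points are made up for illustration.

        # Fit of the empirical internal-friction relation k(eta) = C / (eta + sigma).
        # The rate data below are synthetic placeholders.
        import numpy as np
        from scipy.optimize import curve_fit

        def rate(eta, C, sigma):
            return C / (eta + sigma)

        eta = np.array([1.0, 2.0, 4.0, 8.0, 16.0])        # relative solvent viscosity
        k_obs = np.array([95.0, 62.0, 37.0, 20.0, 10.5])  # observed rates, synthetic

        (C, sigma), _ = curve_fit(rate, eta, k_obs, p0=(100.0, 1.0))
        print(f"internal friction sigma ~ {sigma:.2f} (in solvent viscosity units)")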

  1. Product line management in oncology: a Canadian experience.

    PubMed

    Wodinsky, H B; Egan, D; Markel, F

    1988-01-01

    More competition for finite resources and increasing regulation have led many hospitals to consider a strategic reorganization. Recently, one common reorganization strategy has been "product line management." Product line management can be broadly defined in terms of centralized program management, planning, and marketing strategies. In Canada, while strategic driving forces may be different, a product line management alternative has arisen in one of the most potentially complex product lines, cancer services. This article compares and contrasts the theoretical model for product line management development, with special reference to cancer services, with the experience of one Canadian medical center and cancer center.

  2. A Probabilistic, Dynamic, and Attribute-wise Model of Intertemporal Choice

    PubMed Central

    Dai, Junyi; Busemeyer, Jerome R.

    2014-01-01

    Most theoretical and empirical research on intertemporal choice assumes a deterministic and static perspective, leading to the widely adopted delay discounting models. As a form of preferential choice, however, intertemporal choice may be generated by a stochastic process that requires some deliberation time to reach a decision. We conducted three experiments to investigate how choice and decision time varied as a function of manipulations designed to examine the delay duration effect, the common difference effect, and the magnitude effect in intertemporal choice. The results, especially those associated with the delay duration effect, challenged the traditional deterministic and static view and called for alternative approaches. Consequently, various static or dynamic stochastic choice models were explored and fit to the choice data, including alternative-wise models derived from the traditional exponential or hyperbolic discount function and attribute-wise models built upon comparisons of direct or relative differences in money and delay. Furthermore, for the first time, dynamic diffusion models, such as those based on decision field theory, were also fit to the choice and response time data simultaneously. The results revealed that the attribute-wise diffusion model with direct differences, power transformations of objective value and time, and varied diffusion parameter performed the best and could account for all three intertemporal effects. In addition, the empirical relationship between choice proportions and response times was consistent with the prediction of diffusion models and thus favored a stochastic choice process for intertemporal choice that requires some deliberation time to make a decision. PMID:24635188
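
    For readers unfamiliar with the model families compared here, the sketch below contrasts the two classic alternative-wise forms, exponential and hyperbolic discounting, and wraps them in a logistic choice rule so that choices are probabilistic rather than deterministic. It is a minimal illustration of the model classes named above, not the authors' fitted diffusion model, and all parameter values are arbitrary.

        # Minimal sketch: alternative-wise discounting with a logistic choice rule.
        import math

        def exponential_value(amount, delay, k):
            return amount * math.exp(-k * delay)

        def hyperbolic_value(amount, delay, k):
            return amount / (1.0 + k * delay)

        def p_choose_later(ss, ll, value_fn, k, temperature=1.0):
            """Probability of choosing the larger-later over the smaller-sooner option."""
            v_ss = value_fn(*ss, k)
            v_ll = value_fn(*ll, k)
            return 1.0 / (1.0 + math.exp(-(v_ll - v_ss) / temperature))

        ss = (50.0, 0.0)     # $50 now
        ll = (100.0, 30.0)   # $100 in 30 days
        for name, fn in [("exponential", exponential_value), ("hyperbolic", hyperbolic_value)]:
            print(name, round(p_choose_later(ss, ll, fn, k=0.02, temperature=5.0), 3))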

  3. Diagnostic uncertainty, guilt, mood, and disability in back pain.

    PubMed

    Serbic, Danijela; Pincus, Tamar; Fife-Schaw, Chris; Dawson, Helen

    2016-01-01

    In the majority of patients a definitive cause for low back pain (LBP) cannot be established, and many patients report feeling uncertain about their diagnosis, accompanied by guilt. The relationship between diagnostic uncertainty, guilt, mood, and disability is currently unknown. This study tested 3 theoretical models to explore possible pathways between these factors. In Model 1, diagnostic uncertainty was hypothesized to correlate with pain-related guilt, which in turn would positively correlate with depression, anxiety and disability. Two alternative models were tested: (a) a path from depression and anxiety to guilt, from guilt to diagnostic uncertainty, and finally to disability; (b) a model in which depression and anxiety, and independently, diagnostic uncertainty, were associated with guilt, which in turn was associated with disability. Structural equation modeling was employed on data from 413 participants with chronic LBP. All 3 models showed a reasonable-to-good fit with the data, with the 2 alternative models providing marginally better fit indices. Guilt, and especially social guilt, was associated with disability in all 3 models. Diagnostic uncertainty was associated with guilt, but only moderately. Low mood was also associated with guilt. Two newly defined factors, pain related guilt and diagnostic uncertainty, appear to be linked to disability and mood in people with LBP. The causal path of these links cannot be established in this cross sectional study. However, pain-related guilt especially appears to be important, and future research should examine whether interventions directly targeting guilt improve outcomes. (c) 2015 APA, all rights reserved).

  4. [A Methodological Quality Assessment of South Korean Nursing Research using Structural Equation Modeling in South Korea].

    PubMed

    Kim, Jung-Hee; Shin, Sujin; Park, Jin-Hwa

    2015-04-01

    The purpose of this study was to evaluate the methodological quality of nursing studies using structural equation modeling in Korea. The KISS, DBPIA, and National Assembly Library databases were searched up to March 2014 using the MeSH terms 'nursing', 'structure' and 'model'. A total of 152 studies were screened. After removal of duplicates and non-relevant titles, 61 papers were read in full. Of the sixty-one articles retrieved, 14 studies were published between 1992 and 2000, 27 between 2001 and 2010, and 20 between 2011 and March 2014. The methodological quality of the reviewed studies varied considerably. The findings of this study suggest that more rigorous research is necessary to address theoretical identification, the two-indicator rule, sample distribution, treatment of missing values, mediator effects, discriminant validity, convergent validity, post hoc model modification, equivalent models, and alternative models. Further research with robust, consistent methodological study designs, from model identification to model respecification, is needed to improve the validity of the research.

  5. A model-adaptivity method for the solution of Lennard-Jones based adhesive contact problems

    NASA Astrophysics Data System (ADS)

    Ben Dhia, Hachmi; Du, Shuimiao

    2018-05-01

    The surface micro-interaction model of Lennard-Jones (LJ) is used for adhesive contact problems (ACP). To address theoretical and numerical pitfalls of this model, a sequence of partitions of contact models is adaptively constructed to both extend and approximate the LJ model. It is formed by a combination of the LJ model with a sequence of shifted-Signorini (or, alternatively, -Linearized-LJ) models, indexed by a shift parameter field. For each model of this sequence, a weak formulation of the associated local ACP is developed. To track critical localized adhesive areas, a two-step strategy is developed: firstly, a macroscopic frictionless (as first approach) linear-elastic contact problem is solved once to detect contact separation zones. Secondly, at each shift-adaptive iteration, a micro-macro ACP is re-formulated and solved within the multiscale Arlequin framework, with significant reduction of computational costs. Comparison of our results with available analytical and numerical solutions shows the effectiveness of our global strategy.

  6. Experimental characterization of wingtip vortices in the near field using smoke flow visualizations

    NASA Astrophysics Data System (ADS)

    Serrano-Aguilera, J. J.; García-Ortiz, J. Hermenegildo; Gallardo-Claros, A.; Parras, L.; del Pino, C.

    2016-08-01

    In order to predict the axial development of the wingtip vortex strength, an accurate theoretical model is required. Several experimental techniques have been used to that end, e.g. PIV or hot-wire anemometry, but they involve significant cost and effort. For this reason, we have performed experiments using the smoke-wire technique to visualize smoke streaks in six planes perpendicular to the main stream flow direction. Using this visualization technique, we obtained quantitative information regarding the vortex velocity field by means of Batchelor's model for two chord-based Reynolds numbers, Re_c = 3.33 × 10^4 and 10^5. Therefore, this theoretical vortex model has been introduced in the integration of ordinary differential equations which describe the temporal evolution of streak lines as a function of two parameters: the swirl number, S, and the virtual axial origin, overline{z_0}. We have applied two different procedures to minimize the distance between experimental and theoretical flow patterns: individual curve fits at the six control planes in the streamwise direction, and a global curve fit over all the control planes simultaneously. Both sets of results have been compared with those provided by del Pino et al. (Phys Fluids 23(013):602, 2011b. doi: 10.1063/1.3537791), finding good agreement. Finally, we have observed a weak influence of the Reynolds number on the values of S and overline{z_0} at low-to-moderate Re_c. This experimental technique is proposed as a low-cost alternative to characterize wingtip vortices based on flow visualizations.

  7. Minimal models of electric potential oscillations in non-excitable membranes.

    PubMed

    Perdomo, Guillermo; Hernández, Julio A

    2010-01-01

    Sustained oscillations in the membrane potential have been observed in a variety of cellular and subcellular systems, including several types of non-excitable cells and mitochondria. For the plasma membrane, these electrical oscillations have frequently been related to oscillations in intracellular calcium. For the inner mitochondrial membrane, in several cases the electrical oscillations have been attributed to modifications in calcium dynamics. As an alternative, some authors have suggested that the sustained oscillations in the mitochondrial membrane potential induced by some metabolic intermediates depend on the direct effect of internal protons on proton conductance. Most theoretical models developed to interpret oscillations in the membrane potential integrate several transport and biochemical processes. Here we evaluate whether three simple dynamic models may constitute plausible representations of electric oscillations in non-excitable membranes. The basic mechanism considered in the derivation of the models is based upon evidence obtained by Hattori et al. for mitochondria and assumes that an ionic species (i.e., the proton) is transported via passive and active transport systems between an external and an internal compartment and that the ion affects the kinetic properties of transport by feedback regulation. The membrane potential is incorporated via its effects on kinetic properties. The dynamic properties of two of the models enable us to conclude that they may represent plausible alternatives for describing the generation of electrical oscillations in membranes that depend on the transport of a single ionic species.

  8. A mathematical model captures the structure of subjective affect

    PubMed Central

    Mattek, Alison M.; Wolford, George; Whalen, Paul J.

    2016-01-01

    While it is possible to observe when another person is having an emotional moment, we also derive information about the affective states of others from what they tell us they are feeling. In an effort to distill the complexity of affective experience, psychologists routinely focus on a simplified subset of subjective rating scales (i.e., dimensions) that capture considerable variability in reported affect: reported valence (i.e., how good or bad?) and reported arousal (e.g., how strong is the emotion you are feeling?). Still, existing theoretical approaches address the basic organization and measurement of these affective dimensions differently. Some approaches organize affect around the dimensions of bipolar valence and arousal (e.g., the circumplex model; Russell, 1980), whereas alternative approaches organize affect around the dimensions of unipolar positivity and unipolar negativity (e.g., the bivariate evaluative model; Cacioppo & Berntson, 1994). In this report, we (1) replicate the data structure observed when collected according to the two approaches described above, and re-interpret these data to suggest that the relationship between each pair of affective dimensions is conditional on valence ambiguity; then (2) formalize this structure with a mathematical model depicting a valence ambiguity dimension that decreases in range as arousal decreases (a triangle). This model captures variability in affective ratings better than alternative approaches, increasing variance explained from ~60% to over 90% without adding parameters. PMID:28544868

  9. A dynamic bioenergetic model for coral-Symbiodinium symbioses and coral bleaching as an alternate stable state.

    PubMed

    Cunning, Ross; Muller, Erik B; Gates, Ruth D; Nisbet, Roger M

    2017-10-27

    Coral reef ecosystems owe their ecological success - and vulnerability to climate change - to the symbiotic metabolism of corals and Symbiodinium spp. The urgency to understand and predict the stability and breakdown of these symbioses (i.e., coral 'bleaching') demands the development and application of theoretical tools. Here, we develop a dynamic bioenergetic model of coral-Symbiodinium symbioses that demonstrates realistic steady-state patterns in coral growth and symbiont abundance across gradients of light, nutrients, and feeding. Furthermore, by including a mechanistic treatment of photo-oxidative stress, the model displays dynamics of bleaching and recovery that can be explained as transitions between alternate stable states. These dynamics reveal that "healthy" and "bleached" states correspond broadly to nitrogen- and carbon-limitation in the system, with transitions between them occurring as integrated responses to multiple environmental factors. Indeed, a suite of complex emergent behaviors reproduced by the model (e.g., bleaching is exacerbated by nutrients and attenuated by feeding) suggests it captures many important attributes of the system; meanwhile, its modular framework and open source R code are designed to facilitate further problem-specific development. We see significant potential for this modeling framework to generate testable hypotheses and predict integrated, mechanistic responses of corals to environmental change, with important implications for understanding the performance and maintenance of symbiotic systems. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  10. A diffusion modelling approach to understanding contextual cueing effects in children with ADHD

    PubMed Central

    Weigard, Alexander; Huang-Pollock, Cynthia

    2014-01-01

    Background Strong theoretical models suggest implicit learning deficits may exist among children with Attention Deficit Hyperactivity Disorder (ADHD). Method We examine implicit contextual cueing (CC) effects among children with ADHD (n=72) and non-ADHD Controls (n=36). Results Using Ratcliff’s drift diffusion model, we found that among Controls, the CC effect is due to improvements in attentional guidance and to reductions in response threshold. Children with ADHD did not show a CC effect; although they were able to use implicitly acquired information to deploy attentional focus, they had more difficulty adjusting their response thresholds. Conclusions Improvements in attentional guidance and reductions in response threshold together underlie the CC effect. Results are consistent with neurocognitive models of ADHD that posit sub-cortical dysfunction but intact spatial attention, and encourage the use of alternative data analytic methods when dealing with reaction time data. PMID:24798140
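
    For context, Ratcliff's drift diffusion model treats a two-choice decision as noisy evidence accumulation toward one of two boundaries, with the drift rate indexing the quality of attentional guidance and the boundary separation indexing the response threshold. The Euler-Maruyama sketch below simulates choices and response times for two threshold settings; it is a generic illustration of the model class, not the fitted model from this study, and every parameter value is an assumption.

        # Generic drift-diffusion simulation (Euler-Maruyama); parameters are
        # illustrative and are not estimates from the ADHD study.
        import numpy as np

        def simulate_ddm(drift, threshold, n_trials=2000, non_decision=0.3,
                         dt=0.001, noise_sd=1.0, seed=0):
            rng = np.random.default_rng(seed)
            rts, choices = [], []
            for _ in range(n_trials):
                x, t = 0.0, 0.0
                while abs(x) < threshold / 2.0:      # symmetric boundaries, unbiased start
                    x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                rts.append(t + non_decision)
                choices.append(x > 0)                # upper boundary counted as correct
            return np.array(rts), np.array(choices)

        for a in (1.0, 1.4):                         # two response-threshold settings
            rts, correct = simulate_ddm(drift=1.5, threshold=a)
            print(f"threshold {a}: accuracy {correct.mean():.2f}, mean RT {rts.mean():.2f} s")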

  11. Gravitational redshift of galaxies in clusters as predicted by general relativity.

    PubMed

    Wojtak, Radosław; Hansen, Steen H; Hjorth, Jens

    2011-09-28

    The theoretical framework of cosmology is mainly defined by gravity, of which general relativity is the current model. Recent tests of general relativity within the Lambda Cold Dark Matter (ΛCDM) model have found a concordance between predictions and the observations of the growth rate and clustering of the cosmic web. General relativity has not hitherto been tested on cosmological scales independently of the assumptions of the ΛCDM model. Here we report an observation of the gravitational redshift of light coming from galaxies in clusters at the 99 per cent confidence level, based on archival data. Our measurement agrees with the predictions of general relativity and its modification created to explain cosmic acceleration without the need for dark energy (the f(R) theory), but is inconsistent with alternative models designed to avoid the presence of dark matter. © 2011 Macmillan Publishers Limited. All rights reserved
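
    The effect in question is tiny: in the weak-field limit the gravitational redshift corresponds to a velocity shift of order delta_Phi / c, which for cluster-scale potential wells is of order 10 km/s. The sketch below makes that order-of-magnitude estimate with an assumed round-number cluster mass and radius and a point-mass potential; it is illustrative and is not the paper's measurement.

        # Order-of-magnitude estimate of cluster gravitational redshift expressed as a
        # velocity shift, delta_v ~ delta_Phi / c. Mass and radius are assumed values.
        G = 6.674e-11          # m^3 kg^-1 s^-2
        c = 2.998e8            # m/s
        M_sun = 1.989e30       # kg
        Mpc = 3.086e22         # m

        M_cluster = 1e15 * M_sun          # massive galaxy cluster (assumed)
        r = 1.0 * Mpc                     # radius at which galaxies are observed (assumed)

        delta_phi = G * M_cluster / r     # potential-well depth, point-mass proxy
        delta_v = delta_phi / c           # corresponding velocity shift
        print(f"expected gravitational redshift ~ {delta_v / 1000:.0f} km/s")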

  12. Rational heterodoxy: cholesterol reformation of the amyloid doctrine.

    PubMed

    Castello, Michael A; Soriano, Salvador

    2013-01-01

    According to the amyloid cascade hypothesis, accumulation of the amyloid peptide Aβ, derived by proteolytic processing from the amyloid precursor protein (APP), is the key pathogenic trigger in Alzheimer's disease (AD). This view has led researchers for more than two decades and continues to be the most influential model of neurodegeneration. Nevertheless, close scrutiny of the current evidence does not support a central pathogenic role for Aβ in late-onset AD. Furthermore, the amyloid cascade hypothesis lacks a theoretical foundation from which the physiological generation of Aβ can be understood, and therapeutic approaches based on its premises have failed. We present an alternative model of neurodegeneration, in which sustained cholesterol-associated neuronal distress is the most likely pathogenic trigger in late-onset AD, directly causing oxidative stress, inflammation and tau hyperphosphorylation. In this scenario, Aβ generation is part of an APP-driven adaptive response to the initial cholesterol distress, and its accumulation is neither central to, nor a requirement for, the initiation of the disease. Our model provides a theoretical framework that places APP as a regulator of cholesterol homeostasis, accounts for the generation of Aβ in both healthy and demented brains, and provides suitable targets for therapeutic intervention. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Relaxed selection is a precursor to the evolution of phenotypic plasticity.

    PubMed

    Hunt, Brendan G; Ometto, Lino; Wurm, Yannick; Shoemaker, DeWayne; Yi, Soojin V; Keller, Laurent; Goodisman, Michael A D

    2011-09-20

    Phenotypic plasticity allows organisms to produce alternative phenotypes under different conditions and represents one of the most important ways by which organisms adaptively respond to the environment. However, the relationship between phenotypic plasticity and molecular evolution remains poorly understood. We addressed this issue by investigating the evolution of genes associated with phenotypically plastic castes, sexes, and developmental stages of the fire ant Solenopsis invicta. We first determined if genes associated with phenotypic plasticity in S. invicta evolved at a rapid rate, as predicted under theoretical models. We found that genes differentially expressed between S. invicta castes, sexes, and developmental stages all exhibited elevated rates of evolution compared with ubiquitously expressed genes. We next investigated the evolutionary history of genes associated with the production of castes. Surprisingly, we found that orthologs of caste-biased genes in S. invicta and the social bee Apis mellifera evolved rapidly in lineages without castes. Thus, in contrast to some theoretical predictions, our results suggest that rapid rates of molecular evolution may not arise primarily as a consequence of phenotypic plasticity. Instead, genes evolving under relaxed purifying selection may more readily adopt new forms of biased expression during the evolution of alternate phenotypes. These results suggest that relaxed selective constraint on protein-coding genes is an important and underappreciated element in the evolutionary origin of phenotypic plasticity.

  14. The dynamics of magnetic nanoparticles exposed to non-heating alternating magnetic field in biochemical applications: theoretical study

    NASA Astrophysics Data System (ADS)

    Golovin, Yuri I.; Gribanovsky, Sergey L.; Golovin, Dmitry Y.; Zhigachev, Alexander O.; Klyachko, Natalia L.; Majouga, Alexander G.; Sokolsky, Marina; Kabanov, Alexander V.

    2017-02-01

    In the past decade, the magneto-nanomechanical approach to the stimulation of biochemical systems has been studied intensively. This method involves local deformation of macromolecular structures via mechanical actuation of functionalized magnetic nanoparticles (f-MNPs) by a non-heating low-frequency (LF) alternating magnetic field (AMF). Specificity at the cellular or molecular level and spatial locality on the nanometer scale are its key advantages compared to magnetic fluid hyperthermia. However, current experimental studies have a weak theoretical basis. Several models of magneto-nanomechanical actuation of macromolecules and cells in a non-heating uniform LF AMF are presented in the article. Single core-shell spherical, rod-like, and Janus MNPs, as well as dimers consisting of two f-MNPs with macromolecules immobilized on their surfaces, are considered. AMF-induced rotational oscillations of MNPs can affect the properties and functioning of macromolecules or cellular membranes attached to them via periodic deformations on the nanometer scale. This could be widely used in therapy, in particular for targeted drug delivery, controlled drug release, and cancer cell killing. An aggregate composed of MNPs can exert forces of up to several hundred piconewtons on associated macromolecules in the case of MNPs tens of nanometers in diameter and an LF AMF below 1 T. AMF parameters and MNP design requirements for effective in vitro and in vivo magneto-nanomechanical treatment are presented.
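
    The force scale quoted above can be reproduced with a simple estimate: the applied field exerts a torque of order m*B on the particle's magnetic moment, and a molecule tethered at the particle surface feels a force of order torque divided by the particle radius. The sketch below evaluates this for assumed magnetite-like parameters; it is an order-of-magnitude illustration, not the authors' model.

        # Order-of-magnitude estimate of the force an oscillating MNP can exert on a
        # surface-attached macromolecule: F ~ tau / r with tau = m * B (sin(angle) = 1).
        # Material and field values below are assumed, magnetite-like placeholders.
        import math

        Ms = 4.8e5            # saturation magnetisation, A/m
        B = 0.1               # field amplitude, T (non-heating LF AMF regime)
        d = 50e-9             # particle diameter, m
        r = d / 2.0

        volume = (4.0 / 3.0) * math.pi * r**3
        moment = Ms * volume            # magnetic moment, A*m^2
        torque = moment * B             # maximum torque, N*m
        force = torque / r              # lever arm ~ particle radius
        print(f"torque ~ {torque:.2e} N*m, force ~ {force * 1e12:.0f} pN")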

  15. Alienation and Engagement: Development of an Alternative Theoretical Framework for Understanding Student Learning

    ERIC Educational Resources Information Center

    Case, Jennifer M.

    2008-01-01

    In this paper it is suggested that the themes of alienation and engagement offer a productive alternative perspective for characterising the student experience of learning in higher education, compared to current dominant perspectives such as that offered by approaches to learning and related concepts. A conceptual and historical background of the…

  16. Science, Systems, and Theoretical Alternatives in Educational Administration: The Road Less Travelled

    ERIC Educational Resources Information Center

    Evers, Colin W.; Lakomski, Gabriele

    2012-01-01

    Purpose: The purpose of this paper is to offer a critical reflection on ideas that have been published in the "Journal of Educational Administration" over the last 50 years that present perspectives on the nature of educational administration and its various aspects, that are alternatives to the mainstream systems-scientific view of…

  17. Cronbach's [Alpha], Revelle's [Beta], and McDonald's [Omega][sub H]: Their Relations with Each Other and Two Alternative Conceptualizations of Reliability

    ERIC Educational Resources Information Center

    Zinbarg, Richard E.; Revelle, William; Yovel, Iftah; Li, Wen

    2005-01-01

    We make theoretical comparisons among five coefficients--Cronbach's [alpha], Revelle's [beta], McDonald's [omega][sub h], and two alternative conceptualizations of reliability. Though many end users and psychometricians alike may not distinguish among these five coefficients, we demonstrate formally their nonequivalence. Specifically, whereas…
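
    Of the coefficients compared, Cronbach's alpha is the simplest to compute directly from an item-score matrix: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below implements that formula on synthetic one-factor data; Revelle's beta and McDonald's omega-hierarchical require a cluster or factor model and are not shown. The data and loadings are made up for illustration.

        # Cronbach's alpha from an (n_persons x k_items) score matrix.
        import numpy as np

        def cronbach_alpha(items):
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

        # Synthetic example: one common factor plus noise, 6 items, 200 respondents.
        rng = np.random.default_rng(1)
        factor = rng.standard_normal((200, 1))
        scores = 0.7 * factor + 0.5 * rng.standard_normal((200, 6))
        print(f"alpha ~ {cronbach_alpha(scores):.2f}")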

  18. On the importance of avoiding shortcuts in applying cognitive models to hierarchical data.

    PubMed

    Boehm, Udo; Marsman, Maarten; Matzke, Dora; Wagenmakers, Eric-Jan

    2018-06-12

    Psychological experiments often yield data that are hierarchically structured. A number of popular shortcut strategies in cognitive modeling do not properly accommodate this structure and can result in biased conclusions. To gauge the severity of these biases, we conducted a simulation study for a two-group experiment. We first considered a modeling strategy that ignores the hierarchical data structure. In line with theoretical results, our simulations showed that Bayesian and frequentist methods that rely on this strategy are biased towards the null hypothesis. Secondly, we considered a modeling strategy that takes a two-step approach by first obtaining participant-level estimates from a hierarchical cognitive model and subsequently using these estimates in a follow-up statistical test. Methods that rely on this strategy are biased towards the alternative hypothesis. Only hierarchical models of the multilevel data lead to correct conclusions. Our results are particularly relevant for the use of hierarchical Bayesian parameter estimates in cognitive modeling.

  19. Theoretical and Numerical Investigation of the Cavity Evolution in Gypsum Rock

    NASA Astrophysics Data System (ADS)

    Li, Wei; Einstein, Herbert H.

    2017-11-01

    When water flows through a preexisting cylindrical tube in gypsum rock, the nonuniform dissolution alters the tube into an enlarged tapered tube. A 2-D analytical model is developed to study the transport-controlled dissolution in an enlarged tapered tube, with explicit consideration of the tapered geometry and induced radial flow. The analytical model shows that the Graetz solution can be extended to model dissolution in the tapered tube. An alternative form of the governing equations is proposed to take advantage of the invariant quantities in the Graetz solution to facilitate modeling cavity evolution in gypsum rock. A 2-D finite volume model was developed to validate the extended Graetz solution. The time evolution of the transport-controlled and the reaction-controlled dissolution models for a single tube with time-invariant flow rate are compared. This comparison shows that for time-invariant flow rate, the reaction-controlled dissolution model produces a positive feedback between the tube enlargement and dissolution, while the transport-controlled dissolution does not.

  20. Non-convex Statistical Optimization for Sparse Tensor Graphical Model

    PubMed Central

    Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang

    2016-01-01

    We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which is unobserved in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459
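
    A minimal sketch of the alternating-minimization idea, written for the two-mode (matrix-variate) special case rather than the general tensor setting: holding one mode's precision matrix fixed, the other mode's sample covariance is formed and passed to a graphical lasso, and the roles are then swapped. The scaling conventions, penalty level, and stopping rule are illustrative assumptions, not the authors' estimator.

```python
# Alternating sparse precision estimation for matrix-variate data
# X_i ~ N(0, Sigma_col (x) Sigma_row), written as an illustrative sketch.
import numpy as np
from sklearn.covariance import graphical_lasso

def alternating_glasso(X, alpha=0.1, n_iter=10):
    """X: array of shape (n, p, q) holding n matrix-valued observations."""
    n, p, q = X.shape
    omega_row = np.eye(p)   # row precision (p x p)
    omega_col = np.eye(q)   # column precision (q x q)
    for _ in range(n_iter):
        # update the row precision given the current column precision
        s_row = sum(x @ omega_col @ x.T for x in X) / (n * q)
        _, omega_row = graphical_lasso(s_row, alpha=alpha)
        # update the column precision given the current row precision
        s_col = sum(x.T @ omega_row @ x for x in X) / (n * p)
        _, omega_col = graphical_lasso(s_col, alpha=alpha)
    return omega_row, omega_col

# tiny synthetic example
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 6, 4))
omega_row, omega_col = alternating_glasso(X)
print(np.round(omega_row, 2))
```

    A full implementation would also fix the scale indeterminacy of the Kronecker factorization (for example by normalizing one precision matrix each sweep), which is omitted here for brevity.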

  1. Higgs Boson Searches at Hadron Colliders (1/4)

    ScienceCinema

    Jakobs, Karl

    2018-05-21

    In these Academic Training lectures, the phenomenology of Higgs bosons and search strategies at hadron colliders are discussed. After a brief introduction on Higgs bosons in the Standard Model and a discussion of present direct and indirect constraints on its mass, the status of the theoretical cross section calculations for Higgs boson production at hadron colliders is reviewed. In the following lectures, important experimental issues relevant for Higgs boson searches (trigger, measurements of leptons, jets and missing transverse energy) are presented. This is followed by a detailed discussion of the discovery potential for the Standard Model Higgs boson for both the Tevatron and the LHC experiments. In addition, various scenarios beyond the Standard Model, primarily the MSSM, are considered. Finally, the potential and strategies to measure Higgs boson parameters and the investigation of alternative symmetry breaking scenarios are addressed.

  2. Bianisotropic-critical-state model to study flux cutting in type-II superconductors at parallel geometry

    NASA Astrophysics Data System (ADS)

    Romero-Salazar, C.

    2016-04-01

    A critical-state model is postulated that incorporates, for the first time, the structural anisotropy and flux-line cutting effect in a type-II superconductor. The model is constructed starting from the theoretical scheme of Romero-Salazar and Pérez-Rodríguez to study the anisotropy induced by flux cutting. Here, numerical calculations of the magnetic induction and static magnetization are presented for samples under an alternating magnetic field, orthogonal to a static dc-bias one. The interplay of the two anisotropies is analysed by comparing the numerical results with available experimental data for an yttrium barium copper oxide (YBCO) plate, and a vanadium-titanium (VTi) strip, subjected to a slowly oscillating field Hy (Hz) in the presence of a static field Hz (Hy).

  3. Flapping wing applied to wind generators

    NASA Astrophysics Data System (ADS)

    Colidiuc, Alexandra; Galetuse, Stelian; Suatean, Bogdan

    2012-11-01

    The new conditions at the international level for energy source distribution and the continuous increase in energy consumption call for new alternative resources that keep the environment clean. This paper offers a new approach to a wind generator based on a theoretical aerodynamic model. The new model was used to test the influence of replacing a conventional wind generator airfoil with a bird airfoil, the aim being to calculate the efficiency of the new wind generator design. A representative direction for using renewable energy is the transformation of wind energy into electrical energy with the help of wind turbines; the development of such systems leads to new solutions based on high efficiency, reduced costs, and suitability to the implementation conditions.

  4. Quantitative study of electrophoretic and electroosmotic enhancement during alternating current iontophoresis across synthetic membranes.

    PubMed

    Yan, Guang; Li, S Kevin; Peck, Kendall D; Zhu, Honggang; Higuchi, William I

    2004-12-01

    One of the primary safety and tolerability limitations of direct current iontophoresis is the potential for electrochemical burns associated with the necessary current densities and/or application times required for effective treatment. Alternating current (AC) transdermal iontophoresis has the potential to eliminate electrochemical burns that are frequently observed during direct current transdermal iontophoresis. Although it has been demonstrated that the intrinsic permeability of skin can be increased by applying low-to-moderate AC voltages, transdermal transport phenomena and enhancement under AC conditions have not been systematically studied and are not well understood. The aim of the present work was to study the fundamental transport mechanisms of square-wave AC iontophoresis using a synthetic membrane system. The model synthetic membrane used was a composite Nuclepore membrane. AC frequencies ranging from 20 to 1000 Hz and AC fields ranging from 0.25 to 0.5 V/membrane were investigated. A charged permeant, tetraethyl ammonium, and a neutral permeant, arabinose, were used. The transport studies showed that flux was enhanced by increasing the AC voltage and decreasing AC frequency. Two theoretical transport models were developed: one is a homogeneous membrane model; the other is a heterogeneous membrane model. Experimental transport data were compared with computer simulations based on these models. Excellent agreement between model predictions and experimental data was observed when the data were compared with the simulations from the heterogeneous membrane model. (c) 2004 Wiley-Liss, Inc. and the American Pharmacists Association

  5. Absolutely relative or relatively absolute: violations of value invariance in human decision making.

    PubMed

    Teodorescu, Andrei R; Moran, Rani; Usher, Marius

    2016-02-01

    Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences and to canonical neural processing via accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of brightness stimuli pairs while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task irrelevant absolute values indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modelling architectures reveal two alternative accounts for this phenomenon, which combine absolute and relative processing. One account involves accumulation of differences with activation dependent processing noise and the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed.
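
    The sketch below (parameters and architectures are illustrative assumptions, not the fitted models from the study) contrasts the two families of accounts mentioned in the abstract: a pure difference accumulator, which is invariant when the same intensity is added to both alternatives, and a leaky competing accumulator driven by absolute inputs with lateral inhibition, whose decision times shrink as absolute intensities rise.

```python
# Illustrative simulation contrasting difference-based and absolute-value
# accumulation. Parameters are assumptions chosen for demonstration only.
import numpy as np

rng = np.random.default_rng(2)

def rt_difference(i1, i2, threshold=1.0, noise=0.1, dt=0.01, max_t=20.0):
    """Accumulate the evidence difference until the threshold is reached."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += (i1 - i2) * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t

def rt_lca(i1, i2, threshold=1.0, leak=1.0, inhib=1.5, noise=0.1, dt=0.01, max_t=20.0):
    """Leaky competing accumulator: absolute inputs with mutual inhibition."""
    a = np.zeros(2)
    t = 0.0
    inputs = np.array([i1, i2])
    while a.max() < threshold and t < max_t:
        drift = inputs - leak * a - inhib * a[::-1]
        a = np.maximum(a + drift * dt + noise * np.sqrt(dt) * rng.normal(size=2), 0.0)
        t += dt
    return t

for boost in (0.0, 1.0):   # add the same intensity to both alternatives
    i1, i2 = 1.2 + boost, 1.0 + boost
    mean_diff = np.mean([rt_difference(i1, i2) for _ in range(200)])
    mean_lca = np.mean([rt_lca(i1, i2) for _ in range(200)])
    print(f"boost={boost}: difference-model RT={mean_diff:.2f}s, LCA RT={mean_lca:.2f}s")
```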

  6. Big Bang, inflation, standard Physics… and the potentialities of new Physics and alternative cosmologies. Present status of observational and experimental Cosmology. Open questions and potentialities of alternative cosmologies

    NASA Astrophysics Data System (ADS)

    Gonzalez-Mestres, Luis

    2016-11-01

    A year ago, we wrote [1] that the field of Cosmology was undergoing a positive and constructive crisis. The possible development of more direct links between the Mathematical Physics aspects of cosmological patterns and the interpretation of experimental and observational results was particularly emphasized. Controversies on inflation are not really new, but in any case inflation is not required in pre-Big Bang models and the validity of the standard Big Bang + inflation + ΛCDM pattern has not by now been demonstrated by data. Planck has even explicitly reported the existence of "anomalies". Remembering the far-reaching work of Yoichiro Nambu published in 1959-61, it seems legitimate to underline the need for a cross-disciplinary approach in the presence of deep, unsolved theoretical problems concerning new domains of matter properties and of the physical world. The physics of a possible preonic vacuum and the associated cosmology constitute one of these domains. If the vacuum is made of superluminal preons (superbradyons), and if standard particles are vacuum excitations, how to build a suitable theory to describe the internal structure of such a vacuum at both local and cosmic level? Experimental programs (South Pole, Atacama, AUGER, Telescope Array…) and observational ones (Planck, JEM-EUSO…) devoted to the study of cosmic microwave background radiation (CMB) and of ultra-high energy cosmic rays (UHECR) are crucial to elucidate such theoretical interrogations and guide new phenomenological developments. Together with a brief review of the observational and experimental situation, we also examine the main present theoretical and phenomenological problems and point out the role new physics and alternative cosmologies can potentially play. The need for data analyses less focused a priori on the standard models of Particle Physics and Cosmology is emphasized in this discussion. An example of a new approach to both fields is provided by the pre-Big Bang pattern based on a physical vacuum made of superbradyons with the spinorial space-time (SST) geometry we introduced in 1996-97. In particular, the SST automatically generates a local privileged space direction (PSD) for each comoving observer and such a signature may have been confirmed by Planck data. Both superluminal preons and the existence of the PSD would have strong cosmological implications. Planck 2016 results will be particularly relevant as a step in the study of present open questions. This paper is dedicated to the memory of Yoichiro Nambu.

  7. Excess capacity: markets, regulation, and values.

    PubMed Central

    Madden, C W

    1999-01-01

    OBJECTIVE: To examine the conceptual bases for the conflicting views of excess capacity in healthcare markets and their application in the context of today's turbulent environment. STUDY SETTING: The policy and research literature of the past three decades. STUDY DESIGN: The theoretical perspectives of alternative economic schools of thought are used to support different policy positions with regard to excess capacity. Changes in these policy positions over time are linked to changes in the economic and political environment of the period. The social values implied by this history are articulated. DATA COLLECTION: Standard library search procedures are used to identify relevant literature. PRINCIPAL FINDINGS: Alternative policy views of excess capacity in healthcare markets rely on differing theoretical foundations. Changes in the context in which policy decisions are made over time affect the dominant theoretical framework and, therefore, the dominant policy view of excess capacity. CONCLUSIONS: In the 1990s, multiple perspectives of optimal capacity still exist. However, our evolving history suggests a set of persistent values that should guide future policy in this area. PMID:10029502

  8. A theory of leadership in human cooperative groups.

    PubMed

    Hooper, Paul L; Kaplan, Hillard S; Boone, James L

    2010-08-21

    Two types of models aim to account for the origins of rank differentiation and social hierarchy in human societies. Conflict models suggest that the formation of social hierarchies is synonymous with the establishment of relationships of coercive social dominance and exploitation. Voluntary or 'integrative' models, on the other hand, suggest that rank differentiation--the differentiation of leader from follower, ruler from ruled, or state from subject--may sometimes be preferred over more egalitarian social arrangements as a solution to the challenges of life in social groups, such as conflict over resources, coordination failures, and free-riding in cooperative relationships. Little formal theoretical work, however, has established whether and under what conditions individuals would indeed prefer the establishment of more hierarchical relationships over more egalitarian alternatives. This paper provides an evolutionary game theoretical model for the acceptance of leadership in cooperative groups. We propose that the effort of a leader can reduce the likelihood that cooperation fails due to free-riding or coordination errors, and that under some circumstances, individuals would prefer to cooperate in a group under the supervision of a leader who receives a share of the group's productivity rather than to work in an unsupervised group. We suggest, in particular, that this becomes an optimal solution for individual decision makers when the number of group members required for collective action exceeds the maximum group size at which leaderless cooperation is viable.

  9. On comparison of net survival curves.

    PubMed

    Pavlič, Klemen; Perme, Maja Pohar

    2017-05-02

    Relative survival analysis is a subfield of survival analysis where competing risks data are observed, but the causes of death are unknown. A first step in the analysis of such data is usually the estimation of a net survival curve, possibly followed by regression modelling. Recently, a log-rank type test for comparison of net survival curves has been introduced and the goal of this paper is to explore its properties and put this methodological advance into the context of the field. We build on the association between the log-rank test and the univariate or stratified Cox model and show the analogy in the relative survival setting. We study the properties of the methods using both theoretical arguments as well as simulations. We provide an R function to enable practical usage of the log-rank type test. Both the log-rank type test and its model alternatives perform satisfactorily under the null, even if the correlation between their p-values is rather low, implying that both approaches cannot be used simultaneously. The stratified version has a higher power in case of non-homogeneous hazards, but also carries a different interpretation. The log-rank type test and its stratified version can be interpreted in the same way as the results of an analogous semi-parametric additive regression model despite the fact that no direct theoretical link can be established between the test statistics.
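
    For orientation, the snippet below implements the classical (unstratified) log-rank statistic from scratch; the net-survival test discussed in the abstract generalises this construction by weighting with expected population hazards, which is not reproduced here. The simulated data are illustrative.

```python
# Classical two-sample log-rank statistic computed directly with numpy.
import numpy as np

def logrank(time_a, event_a, time_b, event_b):
    times = np.unique(np.concatenate([time_a[event_a == 1], time_b[event_b == 1]]))
    obs_minus_exp, var = 0.0, 0.0
    for t in times:
        n_a = np.sum(time_a >= t)                       # at risk in group A
        n_b = np.sum(time_b >= t)                       # at risk in group B
        d_a = np.sum((time_a == t) & (event_a == 1))    # events in A at t
        d_b = np.sum((time_b == t) & (event_b == 1))    # events in B at t
        n, d = n_a + n_b, d_a + d_b
        if n < 2:
            continue
        exp_a = d * n_a / n                             # expected events in A under H0
        obs_minus_exp += d_a - exp_a
        var += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    return obs_minus_exp**2 / var                       # ~ chi-squared(1) under H0

rng = np.random.default_rng(3)
t_a = rng.exponential(10, 100); t_b = rng.exponential(12, 100)
e_a = np.ones(100, dtype=int);  e_b = np.ones(100, dtype=int)
print(f"log-rank chi2 = {logrank(t_a, e_a, t_b, e_b):.2f}")
```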

  10. Consequences, norms, and generalized inaction in moral dilemmas: The CNI model of moral decision-making.

    PubMed

    Gawronski, Bertram; Armstrong, Joel; Conway, Paul; Friesdorf, Rebecca; Hütter, Mandy

    2017-09-01

    Research on moral dilemma judgments has been fundamentally shaped by the distinction between utilitarianism and deontology. According to the principle of utilitarianism, the moral status of behavioral options depends on their consequences; the principle of deontology states that the moral status of behavioral options depends on their consistency with moral norms. To identify the processes underlying utilitarian and deontological judgments, researchers have investigated responses to moral dilemmas that pit one principle against the other (e.g., trolley problem). However, the conceptual meaning of responses in this paradigm is ambiguous, because the central aspects of utilitarianism and deontology-consequences and norms-are not manipulated. We illustrate how this shortcoming undermines theoretical interpretations of empirical findings and describe an alternative approach that resolves the ambiguities of the traditional paradigm. Expanding on this approach, we present a multinomial model that allows researchers to quantify sensitivity to consequences (C), sensitivity to moral norms (N), and general preference for inaction versus action irrespective of consequences and norms (I) in responses to moral dilemmas. We present 8 studies that used this model to investigate the effects of gender, cognitive load, question framing, and psychopathy on moral dilemma judgments. The findings obtained with the proposed CNI model offer more nuanced insights into the determinants of moral dilemma judgments, calling for a reassessment of dominant theoretical assumptions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
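
    The following is a schematic reconstruction of a CNI-style multinomial processing tree, written as a sketch rather than the authors' published parameterisation: with probability C a response follows consequences, with probability (1−C)·N it follows the moral norm, and otherwise a general inaction tendency I decides. The dilemma coding, example counts, and fitting routine are all illustrative assumptions.

```python
# Schematic CNI-style processing tree with a toy maximum-likelihood fit.
# The tree ordering and the data below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def p_action(c, n, i, benefits_of_action, prescriptive_norm):
    p = c * (1.0 if benefits_of_action else 0.0)             # consequence-driven
    p += (1 - c) * n * (1.0 if prescriptive_norm else 0.0)   # norm-driven
    p += (1 - c) * (1 - n) * (1 - i)                         # general action bias
    return p

# the four dilemma types: (do consequences favour action?, is the norm prescriptive?)
DILEMMAS = [(True, True), (True, False), (False, True), (False, False)]

def neg_log_lik(params, action_counts, totals):
    c, n, i = params
    ll = 0.0
    for (b, pn), k, t in zip(DILEMMAS, action_counts, totals):
        p = np.clip(p_action(c, n, i, b, pn), 1e-6, 1 - 1e-6)
        ll += k * np.log(p) + (t - k) * np.log(1 - p)
    return -ll

# hypothetical aggregated counts of "action" responses per dilemma type
action_counts = np.array([70, 40, 35, 10])
totals = np.array([100, 100, 100, 100])
res = minimize(neg_log_lik, x0=[0.5, 0.5, 0.5], args=(action_counts, totals),
               bounds=[(0.01, 0.99)] * 3)
print("estimated C, N, I:", np.round(res.x, 2))
```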

  11. Characteristics of locomotion efficiency of an expanding-extending robotic endoscope in the intestinal environment.

    PubMed

    He, Shu; Yan, Guozheng; Wang, Zhiwu; Gao, Jinyang; Yang, Kai

    2015-07-01

    Robotic endoscopes with locomotion ability are among the most promising alternatives to traditional endoscopes; the locomotion ability is an important factor when evaluating the performance of the robot. This article describes research on the characteristics of an expanding-extending robotic endoscope's locomotion efficiency in a real intestine and explores an approach to improve the locomotion ability in this environment. In the article, the robot's locomotion efficiency was first calculated according to its gait in the gut, and the reasons for step losses were analyzed. Next, dynamical models of the robot and the intestine were built to calculate the step losses caused by failed anchoring and intestinal compression/extension. Based on the models and the calculation results, methods for reducing step losses were proposed. Finally, a series of ex vivo experiments were carried out, and the actual locomotion efficiency of the robot was analyzed on the basis of the theoretical models. In the experiment, on a level platform, the locomotion efficiency of the robot varied between 34.2% and 63.7%; the speed of the robot varied between 0.62 and 1.29 mm/s. The robot's efficiency when climbing a sloping intestine was also tested and analyzed. The proposed theoretical models and experimental results provide a good reference for improving the design of robotic endoscopes. © IMechE 2015.

  12. Defining the formative discharge for alternate bars in alluvial rivers

    NASA Astrophysics Data System (ADS)

    Redolfi, M.; Carlin, M.; Tubino, M.; Adami, L.; Zolezzi, G.

    2017-12-01

    We investigate the properties of alternate bars in long straight reaches of channelized streams subject to an unsteady, irregular flow regime. To this aim we propose a novel integration of a statistical approach with the analytical perturbation model of Tubino (1991), which predicts the evolution of bar properties (namely amplitude and wavelength) as a consequence of a flood. The outcomes of our integrated modelling approach are probability distributions of the bar properties, which depend essentially on two ingredients: (i) the statistical properties of the flow regime (duration, frequency and magnitude of the flood events), and (ii) the reach-averaged hydro-geomorphic characteristics of the channel (bed material, channel gradient and width). This allows us to define a "bar-forming" discharge as the flow value that would reproduce the most likely bar properties in a river reach under unsteady flow. Alternate bars often migrate downstream and grow or decline during flood events. The timescale of bar growth and migration is often comparable with the duration of the floods: consequently, bar properties such as height and wavelength do not respond instantaneously to discharge variations (i.e. quasi-equilibrium response) but may depend on previous flood events. Theoretical results are compared with observations in three Alpine, channelized gravel-bed rivers with encouraging outcomes.

  13. Plasma-enhanced atomic layer deposition for plasmonic TiN

    NASA Astrophysics Data System (ADS)

    Otto, Lauren M.; Hammack, Aaron T.; Aloni, Shaul; Ogletree, D. Frank; Olynick, Deirdre L.; Dhuey, Scott; Stadler, Bethanie J. H.; Schwartzberg, Adam M.

    2016-09-01

    This work presents the low-temperature plasma-enhanced atomic layer deposition (PE-ALD) of TiN, a promising plasmonic synthetic metal. The plasmonics community has immediate needs for alternatives to traditional plasmonic materials (e.g. Ag and Au), which lack chemical, thermal, and mechanical stability. Plasmonic alloys and synthetic metals have significantly improved stability, but their growth can require high temperatures (>400 °C), and it is difficult to control the thickness and directionality of the resulting film, especially on technologically important substrates. Such issues prevent the application of alternative plasmonic materials for both fundamental studies and large-scale industrial applications. Alternatively, PE-ALD allows for conformal deposition on a variety of substrates with consistent material properties. This conformal coating will allow the creation of exotic three-dimensional structures, and low-temperature deposition techniques will provide unrestricted usage across a variety of platforms. The characterization of this new plasmonic material was performed with in-situ spectroscopic ellipsometry as well as Auger electron spectroscopy for analysis of TiN film sensitivity to oxide cross-contamination. Plasmonic TiN films were fabricated, and a chlorine plasma etch was used to pattern two-dimensional gratings as a test structure. Optical measurements of 900 nm period gratings showed reasonable agreement with theoretical modeling of the fabricated structures, indicating that ellipsometry models of the TiN were indeed accurate.

  14. Using intervention mapping to deconstruct cognitive work hardening: a return-to-work intervention for people with depression.

    PubMed

    Wisenthal, Adeena; Krupa, Terry

    2014-12-12

    Mental health related work disability leaves are increasing at alarming rates with depression emerging as the most common mental disorder in the workforce. Treatments are available to alleviate depressive symptoms and associated functional impacts; however, they are not specifically aimed at preparing people to return to work. Cognitive work hardening (CWH) is a novel intervention that addresses this gap in the health care system. This paper presents a theoretical analysis of the components and underlying mechanisms of CWH using Intervention Mapping (IM) as a tool to deconstruct its elements. The cognitive sequelae of depression and their relevance to return-to-work (RTW) are examined together with interpersonal skills and other work-related competencies that affect work ability. IM, a tool typically used to create programs, is used to deconstruct an existing program, namely CWH, into its component parts and link them to theories and models in the literature. CWH has been deconstructed into intervention elements which are linked to program performance objectives through underlying theoretical models. In this way, linkages are made between tools and materials of the intervention and the overall program objective of 'successful RTW for people with depression'. An empirical study of the efficacy of CWH is currently underway which should provide added insight and understanding into this intervention. The application of IM to CWH illustrates the theoretical underpinnings of the treatment intervention and assists with better understanding the linkage between intervention elements and intervention objective. Applying IM to deconstruct an existing program (rather than create a program) presents an alternate application of the IM tool which can have implications for other programs in terms of enhancing understanding, grounding in theoretical foundations, communicating program design, and establishing a basis for program evaluation and improvement.

  15. Nature of the optical band shapes in polymethine dyes and H-aggregates: dozy chaos and excitons. Comparison with dimers, H*- and J-aggregates.

    PubMed

    Egorov, Vladimir V

    2017-05-01

    Results on the theoretical explanation of the shape of optical bands in polymethine dyes, their dimers and aggregates are summarized. The theoretical dependence of the shape of optical bands for the dye monomers in the vinylogous series in line with a change in the solvent polarity is considered. A simple physical (analytical) model of the shape of optical absorption bands in H-aggregates of polymethine dyes is developed based on taking the dozy-chaos dynamics of the transient state and the Frenkel exciton effect in the theory of molecular quantum transitions into account. As an example, the details of the experimental shape of one of the known H-bands are well reproduced by this analytical model under the assumption that the main optical chromophore of H-aggregates is a tetramer resulting from the two most probable processes of inelastic binary collisions in sequence: first, monomers between themselves, and then, between the resulting dimers. The obtained results indicate that in contrast with the compact structure of J-aggregates (brickwork structure), the structure of H-aggregates is not the compact pack-of-cards structure, as stated in the literature, but a loose alternate structure. Based on this theoretical model, a simple general (analytical) method for treating the more complex shapes of optical bands in polymethine dyes in comparison with the H-band under consideration is proposed. This method mirrors the physical process of molecular aggregates forming in liquid solutions: aggregates are generated in the most probable processes of inelastic multiple binary collisions between polymethine species generally differing in complexity. The results obtained are given against a background of the theoretical results on the shape of optical bands in polymethine dyes and their aggregates (dimers, H*- and J-aggregates) previously obtained by V.V.E.

  16. Nature of the optical band shapes in polymethine dyes and H-aggregates: dozy chaos and excitons. Comparison with dimers, H*- and J-aggregates

    PubMed Central

    2017-01-01

    Results on the theoretical explanation of the shape of optical bands in polymethine dyes, their dimers and aggregates are summarized. The theoretical dependence of the shape of optical bands for the dye monomers in the vinylogous series in line with a change in the solvent polarity is considered. A simple physical (analytical) model of the shape of optical absorption bands in H-aggregates of polymethine dyes is developed based on taking the dozy-chaos dynamics of the transient state and the Frenkel exciton effect in the theory of molecular quantum transitions into account. As an example, the details of the experimental shape of one of the known H-bands are well reproduced by this analytical model under the assumption that the main optical chromophore of H-aggregates is a tetramer resulting from the two most probable processes of inelastic binary collisions in sequence: first, monomers between themselves, and then, between the resulting dimers. The obtained results indicate that in contrast with the compact structure of J-aggregates (brickwork structure), the structure of H-aggregates is not the compact pack-of-cards structure, as stated in the literature, but a loose alternate structure. Based on this theoretical model, a simple general (analytical) method for treating the more complex shapes of optical bands in polymethine dyes in comparison with the H-band under consideration is proposed. This method mirrors the physical process of molecular aggregates forming in liquid solutions: aggregates are generated in the most probable processes of inelastic multiple binary collisions between polymethine species generally differing in complexity. The results obtained are given against a background of the theoretical results on the shape of optical bands in polymethine dyes and their aggregates (dimers, H*- and J-aggregates) previously obtained by V.V.E. PMID:28572984

  17. Nature of the optical band shapes in polymethine dyes and H-aggregates: dozy chaos and excitons. Comparison with dimers, H*- and J-aggregates

    NASA Astrophysics Data System (ADS)

    Egorov, Vladimir V.

    2017-05-01

    Results on the theoretical explanation of the shape of optical bands in polymethine dyes, their dimers and aggregates are summarized. The theoretical dependence of the shape of optical bands for the dye monomers in the vinylogous series in line with a change in the solvent polarity is considered. A simple physical (analytical) model of the shape of optical absorption bands in H-aggregates of polymethine dyes is developed based on taking the dozy-chaos dynamics of the transient state and the Frenkel exciton effect in the theory of molecular quantum transitions into account. As an example, the details of the experimental shape of one of the known H-bands are well reproduced by this analytical model under the assumption that the main optical chromophore of H-aggregates is a tetramer resulting from the two most probable processes of inelastic binary collisions in sequence: first, monomers between themselves, and then, between the resulting dimers. The obtained results indicate that in contrast with the compact structure of J-aggregates (brickwork structure), the structure of H-aggregates is not the compact pack-of-cards structure, as stated in the literature, but a loose alternate structure. Based on this theoretical model, a simple general (analytical) method for treating the more complex shapes of optical bands in polymethine dyes in comparison with the H-band under consideration is proposed. This method mirrors the physical process of molecular aggregates forming in liquid solutions: aggregates are generated in the most probable processes of inelastic multiple binary collisions between polymethine species generally differing in complexity. The results obtained are given against a background of the theoretical results on the shape of optical bands in polymethine dyes and their aggregates (dimers, H*- and J-aggregates) previously obtained by V.V.E.

  18. The energy cane alternative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, A.G.

    This book reviews the conceptual and theoretical background of Saccharum botany, which underlies the growing of cane as a total growth commodity. Management details are provided for energy cane planting, cultivation, harvest, and postharvest operations. Chapters on energy cane utilization stress new developments in lignocellulose conversion plus alternative options for fermentable solids usage. Chapters are also included for the management of alternative grasses to supplement energy cane, and the breeding of new hybrid canes with high biomass attributes at the intergeneric and interspecific levels.

  19. Equilibrium electrodeformation of a spheroidal vesicle in an ac electric field

    NASA Astrophysics Data System (ADS)

    Nganguia, H.; Young, Y.-N.

    2013-11-01

    In this work, we develop a theoretical model to explain the equilibrium spheroidal deformation of a giant unilamellar vesicle (GUV) under an alternating (ac) electric field. Suspended in a leaky dielectric fluid, the vesicle membrane is modeled as a thin capacitive spheroidal shell. The equilibrium vesicle shape results from the balance between mechanical forces from the viscous fluid, the restoring elastic membrane forces, and the externally imposed electric forces. Our spheroidal model predicts a deformation-dependent transmembrane potential, and is able to capture large deformation of a vesicle under an electric field. A detailed comparison against both experiments and small-deformation (quasispherical) theory showed that the spheroidal model gives better agreement with experiments in terms of the dependence on fluid conductivity ratio, permittivity ratio, vesicle size, electric field strength, and frequency. The spheroidal model also allows for an asymptotic analysis on the crossover frequency where the equilibrium vesicle shape crosses over between prolate and oblate shapes. Comparisons show that the spheroidal model gives better agreement with experimental observations.

  20. Asymmetries of Influence: Differential Effects of Body Postures on Perceptions of Emotional Facial Expressions

    PubMed Central

    Mondloch, Catherine J.; Nelson, Nicole L.; Horner, Matthew

    2013-01-01

    The accuracy and speed with which emotional facial expressions are identified is influenced by body postures. Two influential models predict that these congruency effects will be largest when the emotion displayed in the face is similar to that displayed in the body: the emotional seed model and the dimensional model. These models differ in whether similarity is based on physical characteristics or underlying dimensions of valence and arousal. Using a 3-alternative forced-choice task in which stimuli were presented briefly (Exp 1a) or for an unlimited time (Exp 1b) we provide evidence that congruency effects are more complex than either model predicts; the effects are asymmetrical and cannot be accounted for by similarity alone. Fearful postures are especially influential when paired with facial expressions, but not when presented in a flanker task (Exp 2). We suggest refinements to each model that may account for our results and suggest that additional studies be conducted prior to drawing strong theoretical conclusions. PMID:24039996

  1. Stereotypes Can “Get Under the Skin”: Testing a Self-Stereotyping and Psychological Resource Model of Overweight and Obesity

    PubMed Central

    Rivera, Luis M.; Paredez, Stefanie M.

    2014-01-01

    The authors draw upon social, personality, and health psychology to propose and test a self-stereotyping and psychological resource model of overweight and obesity. The model contends that self-stereotyping depletes psychological resources, namely self-esteem, that help to prevent overweight and obesity. In support of the model, mediation analysis demonstrates that adult Hispanics who highly self-stereotype had lower levels of self-esteem than those who self-stereotype less, which in turn predicted higher levels of body mass index (overweight and obesity levels). Furthermore, the model did not hold for the referent sample, White participants, and an alternative mediation model was not supported. These data are the first to theoretically and empirically link self-stereotyping and self-esteem (a psychological resource) with a strong physiological risk factor for morbidity and short life expectancy in stigmatized individuals. Thus, this research contributes to understanding ethnic-racial health disparities in the United States and beyond. PMID:25221353
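
    A compact sketch of the product-of-coefficients mediation analysis the abstract refers to, using simulated data and hypothetical variable names (self-stereotyping → self-esteem → body mass index) with a bootstrap confidence interval for the indirect effect; it is meant only to make the analytic logic concrete, not to reproduce the study.

```python
# Mediation sketch: indirect effect a*b with a percentile bootstrap CI.
# Data and effect sizes are simulated; variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
n = 300
self_stereotyping = rng.normal(size=n)
self_esteem = -0.4 * self_stereotyping + rng.normal(size=n)                    # path a
bmi = 26 - 0.5 * self_esteem + 0.1 * self_stereotyping + rng.normal(size=n)    # paths b, c'

def ols_slope(y, x_main, x_cov=None):
    """Coefficient on x_main from an OLS fit with an intercept (and optional covariate)."""
    cols = [np.ones_like(x_main), x_main]
    if x_cov is not None:
        cols.append(x_cov)
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def indirect_effect(idx):
    a = ols_slope(self_esteem[idx], self_stereotyping[idx])
    b = ols_slope(bmi[idx], self_esteem[idx], self_stereotyping[idx])
    return a * b

boot = np.array([indirect_effect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
point = indirect_effect(np.arange(n))
print(f"indirect effect a*b: {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```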

  2. A penalized framework for distributed lag non-linear models.

    PubMed

    Gasparrini, Antonio; Scheipl, Fabian; Armstrong, Ben; Kenward, Michael G

    2017-09-01

    Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
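
    A bare-bones numpy illustration of the cross-basis idea that underlies DLMs and DLNMs: lagged copies of an exposure are projected onto a small basis over the lag dimension and the basis coefficients are estimated by least squares. The penalized-spline machinery and model selection procedures described in the paper are not reproduced; the polynomial lag basis and simulated data below are assumptions made for brevity.

```python
# Minimal distributed lag model: lagged exposures projected onto a lag basis.
import numpy as np

rng = np.random.default_rng(5)
n, max_lag = 500, 10
exposure = rng.normal(size=n)
true_lag_curve = np.exp(-np.arange(max_lag + 1) / 3.0)      # effect decays with lag

# outcome generated from the lagged exposure plus noise
lagged = np.column_stack([np.roll(exposure, l) for l in range(max_lag + 1)])
lagged[:max_lag] = 0.0                                        # drop wrapped-around rows
y = lagged @ true_lag_curve + rng.normal(scale=0.5, size=n)

# polynomial basis over the lag dimension (degree 3) constrains the lag curve
lags = np.arange(max_lag + 1)
basis = np.vander(lags / max_lag, 4, increasing=True)         # (max_lag+1, 4)
design = lagged @ basis                                       # cross-basis, shape (n, 4)
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print("recovered lag curve:", np.round(basis @ coef, 2))
```

    The penalized framework in the paper replaces the rigid polynomial with penalized splines, so smoothness is selected from the data rather than imposed by a fixed basis dimension.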

  3. Perspectives in metabolic engineering: understanding cellular regulation towards the control of metabolic routes.

    PubMed

    Zadran, Sohila; Levine, Raphael D

    2013-01-01

    Metabolic engineering seeks to redirect metabolic pathways through the modification of specific biochemical reactions or the introduction of new ones with the use of recombinant technology. Many chemicals are synthesized via the introduction of product-specific enzymes or the reconstruction of entire metabolic pathways into engineered hosts that can sustain production and synthesize high yields of the desired product. Because yields of natural product-derived compounds are frequently low, and chemical processes can be both energy and material expensive, current endeavors have focused on using biologically derived processes as alternatives to chemical synthesis. Such economically favorable manufacturing processes pursue goals related to sustainable development and "green chemistry". Metabolic engineering is a multidisciplinary approach, involving chemical engineering, molecular biology, biochemistry, and analytical chemistry. Recent advances in molecular biology, genome-scale models, theoretical understanding, and kinetic modeling have increased interest in using metabolic engineering to redirect metabolic fluxes for industrial and therapeutic purposes. The use of metabolic engineering has increased the productivity of industrially pertinent small molecules, alcohol-based biofuels, and biodiesel. Here, we highlight developments in the practical and theoretical strategies and technologies available for the metabolic engineering of simple systems and address current limitations.

  4. Bounded energy states in homogeneous turbulent shear flow: An alternative view

    NASA Technical Reports Server (NTRS)

    Bernard, Peter S.; Speziale, Charles G.

    1990-01-01

    The equilibrium structure of homogeneous turbulent shear flow is investigated from a theoretical standpoint. Existing turbulence models, in apparent agreement with physical and numerical experiments, predict an unbounded exponential time growth of the turbulent kinetic energy and dissipation rate; only the anisotropy tensor and turbulent time scale reach a structural equilibrium. It is shown that if vortex stretching is accounted for in the dissipation rate transport equation, then there can exist equilibrium solutions, with bounded energy states, where the turbulence production is balanced by its dissipation. Illustrative calculations are presented for a k-epsilon model modified to account for vortex stretching. The calculations indicate an initial exponential time growth of the turbulent kinetic energy and dissipation rate for elapsed times that are as large as those considered in any of the previously conducted physical or numerical experiments on homogeneous shear flow. However, vortex stretching eventually takes over and forces a production-equals-dissipation equilibrium with bounded energy states. The validity of this result is further supported by an independent theoretical argument. It is concluded that the generally accepted structural equilibrium for homogeneous shear flow with unbounded component energies is in need of re-examination.

  5. Vorticity dipoles and a theoretical model of a finite force at the moving contact line singularity

    NASA Astrophysics Data System (ADS)

    Zhang, Peter; Devoria, Adam; Mohseni, Kamran

    2017-11-01

    In the well known works of Moffatt (1964) and Huh & Scriven (1971), an infinite force was reported at the moving contact line (MCL) and attributed to a non-integrable stress along the fluid-solid boundary. In our recent investigation of the boundary driven wedge, a model of the MCL, we find that the classical solution theoretically predicts a finite force at the contact line if the forces applied by the two boundaries that make up the corner are taken into consideration. Mathematically, this force can be obtained by the complex contour integral of the holomorphic vorticity-pressure function given by G = μω + ip . Alternatively, this force can also be found using a carefully defined real integral that incorporates the two boundaries. Motivated by this discovery, we have found that the rate of change in circulation, viscous energy dissipation, and viscous energy flux is also finite per unit contact line length. The analysis presented demonstrates that despite a singular stress and a relatively simple geometry, the no-slip semi-infinite wedge is capable of capturing some physical quantities of interest. Furthermore, this result provides a foundation for other challenging topics such as dynamic contact angle.

  6. Towards a neuro-computational account of prism adaptation.

    PubMed

    Petitet, Pierre; O'Reilly, Jill X; O'Shea, Jacinta

    2017-12-14

    Prism adaptation has a long history as an experimental paradigm used to investigate the functional and neural processes that underlie sensorimotor control. In the neuropsychology literature, prism adaptation behaviour is typically explained by reference to a traditional cognitive psychology framework that distinguishes putative functions, such as 'strategic control' versus 'spatial realignment'. This theoretical framework lacks conceptual clarity, quantitative precision and explanatory power. Here, we advocate for an alternative computational framework that offers several advantages: 1) an algorithmic explanatory account of the computations and operations that drive behaviour; 2) expressed in quantitative mathematical terms; 3) embedded within a principled theoretical framework (Bayesian decision theory, state-space modelling); 4) that offers a means to generate and test quantitative behavioural predictions. This computational framework offers a route towards mechanistic neurocognitive explanations of prism adaptation behaviour. Thus it constitutes a conceptual advance compared to the traditional theoretical framework. In this paper, we illustrate how Bayesian decision theory and state-space models offer principled explanations for a range of behavioural phenomena in the field of prism adaptation (e.g. visual capture, magnitude of visual versus proprioceptive realignment, spontaneous recovery and dynamics of adaptation memory). We argue that this explanatory framework can advance understanding of the functional and neural mechanisms that implement prism adaptation behaviour, by enabling quantitative tests of hypotheses that go beyond merely descriptive mapping claims that 'brain area X is (somehow) involved in psychological process Y'. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
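
    As one concrete example of the state-space style of model the abstract advocates, the sketch below runs a standard two-rate error-correction model (a fast process that learns and forgets quickly plus a slow process that retains well) through a simulated prism-exposure block. The specific learning and retention rates, trial counts, and perturbation size are assumptions; this illustrates the modelling idiom, not the authors' model.

```python
# Two-rate state-space sketch of adaptation to a prism displacement.
# All parameter values are illustrative assumptions.
A_fast, B_fast = 0.60, 0.30      # fast process: forgets quickly, learns quickly
A_slow, B_slow = 0.995, 0.03     # slow process: retains well, learns slowly
n_trials, perturbation = 150, 10.0   # assumed prism displacement, in degrees

x_fast = x_slow = 0.0
output = []
for trial in range(n_trials):
    p = perturbation if trial < 100 else 0.0     # prisms removed after trial 100
    error = p - (x_fast + x_slow)                # error experienced on this trial
    x_fast = A_fast * x_fast + B_fast * error
    x_slow = A_slow * x_slow + B_slow * error
    output.append(x_fast + x_slow)

print("adaptation at end of exposure:", round(output[99], 2))
print("after-effect just after removal:", round(output[100], 2))
```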

  7. Riemannian theory of Hamiltonian chaos and Lyapunov exponents

    NASA Astrophysics Data System (ADS)

    Casetti, Lapo; Clementi, Cecilia; Pettini, Marco

    1996-12-01

    A nonvanishing Lyapunov exponent λ1 provides the very definition of deterministic chaos in the solutions of a dynamical system; however, no theoretical means of predicting its value exists. This paper copes with the problem of analytically computing the largest Lyapunov exponent λ1 for many degrees of freedom Hamiltonian systems as a function of ɛ=E/N, the energy per degree of freedom. The functional dependence λ1(ɛ) is of great interest because, among other reasons, it detects the existence of weakly and strongly chaotic regimes. This aim, the analytic computation of λ1(ɛ), is successfully reached within a theoretical framework that makes use of a geometrization of Newtonian dynamics in the language of Riemannian differential geometry. An alternative point of view about the origin of chaos in these systems is obtained independently of the standard explanation based on homoclinic intersections. Dynamical instability (chaos) is here related to curvature fluctuations of the manifolds whose geodesics are natural motions and is described by means of the Jacobi-Levi-Civita equation (JLCE) for geodesic spread. In this paper it is shown how to derive from the JLCE an effective stability equation. Under general conditions, this effective equation formally describes a stochastic oscillator; an analytic formula for the instability growth rate of its solutions is worked out and applied to the Fermi-Pasta-Ulam β model and to a chain of coupled rotators. Excellent agreement is found between the theoretical prediction and numerical values of λ1(ɛ) for both models.
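
    Although the paper's contribution is an analytic prediction of λ1(ɛ), the numerical values it is compared against are typically obtained with the standard two-trajectory (Benettin) procedure. The sketch below applies that procedure to an FPU-β chain; the integrator, chain length, energy, and renormalisation interval are illustrative choices rather than the settings used in the paper.

```python
# Benettin-style estimate of the largest Lyapunov exponent for an FPU-beta chain.
# Parameters are illustrative assumptions, not those of the paper.
import numpy as np

def fpu_accel(q, beta=0.1):
    """Accelerations for the FPU-beta chain with fixed ends."""
    qp = np.concatenate(([0.0], q, [0.0]))
    dl = qp[1:-1] - qp[:-2]          # left bond extension
    dr = qp[2:] - qp[1:-1]           # right bond extension
    return (dr - dl) + beta * (dr**3 - dl**3)

def step(q, p, dt, beta):
    """One velocity-Verlet step for unit masses."""
    a = fpu_accel(q, beta)
    p_half = p + 0.5 * dt * a
    q_new = q + dt * p_half
    p_new = p_half + 0.5 * dt * fpu_accel(q_new, beta)
    return q_new, p_new

def lyapunov(n=32, beta=0.1, dt=0.05, n_renorm=1000, steps_per_renorm=50, d0=1e-8):
    rng = np.random.default_rng(6)
    q = np.zeros(n); p = rng.normal(scale=0.5, size=n)          # random initial momenta
    q2 = q.copy(); p2 = p.copy(); p2[0] += d0                   # nearby perturbed copy
    log_growth = 0.0
    for _ in range(n_renorm):
        for _ in range(steps_per_renorm):
            q, p = step(q, p, dt, beta)
            q2, p2 = step(q2, p2, dt, beta)
        delta = np.concatenate([q2 - q, p2 - p])
        d = np.linalg.norm(delta)
        log_growth += np.log(d / d0)
        # renormalise the perturbed trajectory back to distance d0
        q2 = q + (q2 - q) * d0 / d
        p2 = p + (p2 - p) * d0 / d
    return log_growth / (n_renorm * steps_per_renorm * dt)

print(f"lambda_1 ~ {lyapunov():.4f}")
```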

  8. Wave particle duality, the observer and retrocausality

    NASA Astrophysics Data System (ADS)

    Narasimhan, Ashok; Kafatos, Menas C.

    2017-05-01

    We approach wave particle duality, the role of the observer and implications on Retrocausality, by starting with the results of a well verified quantum experiment. We analyze how some current theoretical approaches interpret these results. We then provide an alternative theoretical framework that is consistent with the observations and in many ways simpler than usual attempts to account for retrocausality, involving a non-local conscious Observer.

  9. Evaluating quantitative and conceptual models of speech production: how does SLAM fare?

    PubMed

    Walker, Grant M; Hickok, Gregory

    2016-04-01

    In a previous publication, we presented a new computational model called SLAM (Walker & Hickok, Psychonomic Bulletin & Review doi: 10.3758/s13423-015-0903 ), based on the hierarchical state feedback control (HSFC) theory (Hickok Nature Reviews Neuroscience, 13(2), 135-145, 2012). In his commentary, Goldrick (Psychonomic Bulletin & Review doi: 10.3758/s13423-015-0946-9 ) claims that SLAM does not represent a theoretical advancement, because it cannot be distinguished from an alternative lexical + postlexical (LPL) theory proposed by Goldrick and Rapp (Cognition, 102(2), 219-260, 2007). First, we point out that SLAM implements a portion of a conceptual model (HSFC) that encompasses LPL. Second, we show that SLAM accounts for a lexical bias present in sound-related errors that LPL does not explain. Third, we show that SLAM's explanatory advantage is not a result of approximating the architectural or computational assumptions of LPL, since an implemented version of LPL fails to provide the same fit improvements as SLAM. Finally, we show that incorporating a mechanism that violates some core theoretical assumptions of LPL-making it more like SLAM in terms of interactivity-allows the model to capture some of the same effects as SLAM. SLAM therefore provides new modeling constraints regarding interactions among processing levels, while also elaborating on the structure of the phonological level. We view this as evidence that an integration of psycholinguistic, neuroscience, and motor control approaches to speech production is feasible and may lead to substantial new insights.

  10. On the stability of von Kármán rotating-disk boundary layers with radial anisotropic surface roughness

    NASA Astrophysics Data System (ADS)

    Garrett, S. J.; Cooper, A. J.; Harris, J. H.; Özkan, M.; Segalini, A.; Thomas, P. J.

    2016-01-01

    We summarise results of a theoretical study investigating the distinct convective instability properties of steady boundary-layer flow over rough rotating disks. A generic roughness pattern of concentric circles with sinusoidal surface undulations in the radial direction is considered. The goal is to compare predictions obtained by means of two alternative, and fundamentally different, modelling approaches for surface roughness for the first time. The motivating rationale is to identify commonalities and isolate results that might potentially represent artefacts associated with the particular methodologies underlying one of the two modelling approaches. The most significant result of practical relevance obtained is that both approaches predict overall stabilising effects on the type I instability mode of rotating-disk flow. This mode leads to transition of the rotating-disk boundary layer and, more generally, the transition of boundary layers with a cross-flow profile. Stabilisation of the type I mode means that it may be possible to exploit surface roughness for laminar-flow control in boundary layers with a cross-flow component. However, we also find differences between the two sets of model predictions, some subtle and some substantial. These will represent criteria for establishing which of the two alternative approaches is more suitable to correctly describe experimental data when these become available.

  11. Diversity begets diversity: A global perspective on gender equality in scientific society leadership

    PubMed Central

    Burdfield-Steel, Emily; Potvin, Jacqueline M.; Heap, Stephen M.

    2018-01-01

    Research shows that gender inequality is still a major issue in academic science, yet academic societies may serve as underappreciated and effective avenues for promoting female leadership. That is, society membership is often self-selective, and board positions are elected (with a high turnover compared to institutions)—these characteristics, among others, may thus create an environment conducive to gender equality. We therefore investigate this potential using an information-theoretic approach to quantify gender equality (male:female ratios) in zoology society boards around the world. We compare alternative models to analyze how society characteristics might predict or correlate with the proportion of female leaders, and find that a cultural model, including society age, size of board and whether or not a society had an outward commitment or statement of equality, was the most informative predictor for the gender ratio of society boards and leadership positions. This model was more informative than alternatives that considered, for instance, geographic location, discipline of study or taxonomic focus. While women were more highly represented in society leadership than in institutional academic leadership, this representation was still far short of equal (~30%): we thus also provide a checklist and recommendations for societies to contribute to global gender equality in science. PMID:29847591

  12. Diversity begets diversity: A global perspective on gender equality in scientific society leadership.

    PubMed

    Potvin, Dominique A; Burdfield-Steel, Emily; Potvin, Jacqueline M; Heap, Stephen M

    2018-01-01

    Research shows that gender inequality is still a major issue in academic science, yet academic societies may serve as underappreciated and effective avenues for promoting female leadership. That is, society membership is often self-selective, and board positions are elected (with a high turnover compared to institutions)-these characteristics, among others, may thus create an environment conducive to gender equality. We therefore investigate this potential using an information-theoretic approach to quantify gender equality (male:female ratios) in zoology society boards around the world. We compare alternative models to analyze how society characteristics might predict or correlate with the proportion of female leaders, and find that a cultural model, including society age, size of board and whether or not a society had an outward commitment or statement of equality, was the most informative predictor for the gender ratio of society boards and leadership positions. This model was more informative than alternatives that considered, for instance, geographic location, discipline of study or taxonomic focus. While women were more highly represented in society leadership than in institutional academic leadership, this representation was still far short of equal (~30%): we thus also provide a checklist and recommendations for societies to contribute to global gender equality in science.
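
    The model-comparison logic described in these two records can be sketched as follows: candidate binomial GLMs for the number of women on each board are fitted and ranked by an information criterion (AIC). The data, predictor names, and candidate model set below are hypothetical stand-ins, not the study's data set.

```python
# Information-theoretic comparison of candidate binomial GLMs via AIC.
# Data and predictor names are hypothetical; only the workflow is illustrated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 120
df = pd.DataFrame({
    "board_size": rng.integers(5, 25, n),
    "society_age": rng.integers(5, 150, n),
    "equality_statement": rng.integers(0, 2, n),
})
logit_p = -1.0 + 0.4 * df["equality_statement"] - 0.004 * df["society_age"]
p = 1 / (1 + np.exp(-logit_p))
df["women"] = rng.binomial(df["board_size"].to_numpy(), p.to_numpy())

# successes / failures for the binomial GLM
y = np.column_stack([df["women"], df["board_size"] - df["women"]])
candidates = {
    "cultural": ["society_age", "board_size", "equality_statement"],
    "size_only": ["board_size"],
    "null": [],
}
for name, cols in candidates.items():
    X = sm.add_constant(df[cols]) if cols else np.ones((n, 1))
    fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
    print(f"{name:10s} AIC = {fit.aic:.1f}")
```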

  13. Stimulus-Dependent State Transition between Synchronized Oscillation and Randomly Repetitive Burst in a Model Cerebellar Granular Layer

    PubMed Central

    Tanaka, Shigeru; Nagao, Soichi; Nishino, Tetsuro

    2011-01-01

    Information processing of the cerebellar granular layer composed of granule and Golgi cells is regarded as an important first step toward the cerebellar computation. Our previous theoretical studies have shown that granule cells can exhibit random alternation between burst and silent modes, which provides a basis of population representation of the passage-of-time (POT) from the onset of external input stimuli. On the other hand, another computational study has reported that granule cells can exhibit synchronized oscillation of activity, as consistent with observed oscillation in local field potential recorded from the granular layer while animals keep still. Here we have a question of whether an identical network model can explain these distinct dynamics. In the present study, we carried out computer simulations based on a spiking network model of the granular layer varying two parameters: the strength of a current injected to granule cells and the concentration of Mg2+ which controls the conductance of NMDA channels assumed on the Golgi cell dendrites. The simulations showed that cells in the granular layer can switch activity states between synchronized oscillation and random burst-silent alternation depending on the two parameters. For higher Mg2+ concentration and a weaker injected current, granule and Golgi cells elicited spikes synchronously (synchronized oscillation state). In contrast, for lower Mg2+ concentration and a stronger injected current, those cells showed the random burst-silent alternation (POT-representing state). It is suggested that NMDA channels on the Golgi cell dendrites play an important role for determining how the granular layer works in response to external input. PMID:21779155

  14. Bump hunting in LHC tt̄ events

    NASA Astrophysics Data System (ADS)

    Czakon, Michal; Heymes, David; Mitov, Alexander

    2016-12-01

    We demonstrate that a purposefully normalized next-to-next-to-leading-order mtt̄ differential spectrum can have very small theoretical uncertainty and, in particular, a small sensitivity to the top quark mass. Such an observable can thus be a very effective bump-hunting tool for resonances decaying to tt̄ events during LHC run II and beyond. To illustrate how the approach works, we concentrate on one specific example of current interest, namely, the possible 750 GeV digamma excess resonance Φ. Considering only theoretical uncertainties, we demonstrate that it is possible to distinguish pp → Φ → tt̄ signals studied in the recent literature [Hespel, Maltoni, and Vryonidou, J. High Energy Phys. 10 (2016) 016, 10.1007/JHEP10(2016)016] from the pure Standard Model background with very high significance. Alternatively, in the case of nonobservation, a strong upper limit on the decay rate Φ → tt̄ can be placed.

  15. Bribe and Punishment: An Evolutionary Game-Theoretic Analysis of Bribery.

    PubMed

    Verma, Prateek; Sengupta, Supratim

    2015-01-01

    Harassment bribes, paid by citizens to corrupt officers for services the former are legally entitled to, constitute one of the most widespread forms of corruption in many countries. Nation states have adopted different policies to address this form of corruption. While some countries make both the bribe giver and the bribe taker equally liable for the crime, others impose a larger penalty on corrupt officers. We examine the consequences of asymmetric and symmetric penalties by developing deterministic and stochastic evolutionary game-theoretic models of bribery. We find that the asymmetric penalty scheme can lead to a reduction in incidents of bribery. However, the extent of reduction depends on how the players update their strategies over time. If the interacting members change their strategies with a probability proportional to the payoff of the alternative strategy option, the reduction in incidents of bribery is less pronounced. Our results indicate that changing from a symmetric to an asymmetric penalty scheme may not suffice in achieving significant reductions in incidents of harassment bribery.

  16. Bribe and Punishment: An Evolutionary Game-Theoretic Analysis of Bribery

    PubMed Central

    Verma, Prateek; Sengupta, Supratim

    2015-01-01

    Harassment bribes, paid by citizens to corrupt officers for services the former are legally entitled to, constitute one of the most widespread forms of corruption in many countries. Nation states have adopted different policies to address this form of corruption. While some countries make both the bribe giver and the bribe taker equally liable for the crime, others impose a larger penalty on corrupt officers. We examine the consequences of asymmetric and symmetric penalties by developing deterministic and stochastic evolutionary game-theoretic models of bribery. We find that the asymmetric penalty scheme can lead to a reduction in incidents of bribery. However, the extent of reduction depends on how the players update their strategies over time. If the interacting members change their strategies with a probability proportional to the payoff of the alternative strategy option, the reduction in incidents of bribery is less pronounced. Our results indicate that changing from a symmetric to an asymmetric penalty scheme may not suffice in achieving significant reductions in incidents of harassment bribery. PMID:26204110

  17. Early Warning Signals of Ecological Transitions: Methods for Spatial Patterns

    PubMed Central

    Brock, William A.; Carpenter, Stephen R.; Ellison, Aaron M.; Livina, Valerie N.; Seekell, David A.; Scheffer, Marten; van Nes, Egbert H.; Dakos, Vasilis

    2014-01-01

    A number of ecosystems can exhibit abrupt shifts between alternative stable states. Because of their important ecological and economic consequences, recent research has focused on devising early warning signals for anticipating such abrupt ecological transitions. In particular, theoretical studies show that changes in spatial characteristics of the system could provide early warnings of approaching transitions. However, the empirical validation of these indicators lags behind their theoretical development. Here, we summarize a range of currently available spatial early warning signals, suggest potential null models to interpret their trends, and apply them to three simulated spatial data sets of systems undergoing an abrupt transition. In addition to providing a step-by-step methodology for applying these signals to spatial data sets, we propose a statistical toolbox that may be used to help detect approaching transitions in a wide range of spatial data. We hope that our methodology together with the computer codes will stimulate the application and testing of spatial early warning signals on real spatial data. PMID:24658137
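
    As an aside, the two spatial indicators most often cited in this literature, spatial variance and spatial autocorrelation (Moran's I), are straightforward to compute on a lattice snapshot. The minimal Python sketch below is illustrative only; the grid data, rook-neighbour weighting and parameter values are assumptions, not the toolbox released with the paper.

```python
import numpy as np

def spatial_variance(grid):
    """Spatial variance of a snapshot; a rise can precede an abrupt transition."""
    return float(np.var(grid))

def morans_i(grid):
    """Global Moran's I with rook (4-neighbour) weights on a 2D lattice."""
    z = grid - grid.mean()
    # products over horizontally and vertically adjacent cells, counted both ways
    cross = 2.0 * (np.sum(z[:, :-1] * z[:, 1:]) + np.sum(z[:-1, :] * z[1:, :]))
    w_total = 2.0 * (z[:, :-1].size + z[:-1, :].size)   # total sum of weights
    return (grid.size / w_total) * (cross / np.sum(z ** 2))

# Toy sequence of snapshots with slowly growing variance, as might be tracked
# while a system approaches a transition.
rng = np.random.default_rng(0)
for t in range(5):
    snap = rng.normal(0.0, 0.1 + 0.05 * t, size=(50, 50))
    print(t, round(spatial_variance(snap), 4), round(morans_i(snap), 4))
```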

  18. Modelling of flame propagation in the gasoline fuelled Wankel rotary engine with hydrogen additives

    NASA Astrophysics Data System (ADS)

    Fedyanov, E. A.; Zakharov, E. A.; Prikhodkov, K. V.; Levin, Y. V.

    2017-02-01

    Recently, hydrogen has been considered as an alternative fuel for a vehicle's power unit. The Wankel engine is the most suitable to be adapted to hydrogen feeding. A hydrogen additive helps to decrease the incompleteness of combustion in the volumes near the apex of the rotor. Results of theoretical research on the influence of hydrogen additives on flame propagation in the combustion chamber of the Wankel rotary engine are presented. The theoretical research shows that a blend of 70% gasoline with 30% hydrogen could accomplish combustion near the T-apex in both stoichiometric and lean mixtures. Maps of the flame front location versus the angle of rotor rotation and hydrogen fraction are obtained. Relations of the minimum required amount of hydrogen addition versus engine speed are shown for engine modes close to the average city driving cycle. The amount of hydrogen addition that could be injected by nozzles with different flow sections is calculated in order to analyze the capacity of the feed system.

  19. Modulating resonance behaviors by noise recycling in bistable systems with time delay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Zhongkui, E-mail: sunzk2008@gmail.com; Xu, Wei; Yang, Xiaoli

    In this paper, the impact of noise recycling on resonance behaviors is studied theoretically and numerically in a prototypical bistable system with delayed feedback. Based on the interior cooperating and interacting activity of noise recycling, a theory has been proposed that reduces the non-Markovian problem to a two-state model, wherein both the master equation and the transition rates depend not only on the current state but also on the earlier two states, due to the recycling lag and the feedback delay. By virtue of this theory, the formulae for the power spectral density and the linear response function have been found analytically, and the theoretical results are well verified by numerical simulations. It has been demonstrated that both the recycling lag and the feedback delay play a crucial role in the resonance behaviors. In addition, the results also suggest an alternative scheme to modulate or control coherence or stochastic resonance in bistable systems with time delay.

  20. Quantitative genetic models of sexual conflict based on interacting phenotypes.

    PubMed

    Moore, Allen J; Pizzari, Tommaso

    2005-05-01

    Evolutionary conflict arises between reproductive partners when alternative reproductive opportunities are available. Sexual conflict can generate sexually antagonistic selection, which mediates sexual selection and intersexual coevolution. However, despite intense interest, the evolutionary implications of sexual conflict remain unresolved. We propose a novel theoretical approach to study the evolution of sexually antagonistic phenotypes based on quantitative genetics and the measure of social selection arising from male-female interactions. We consider the phenotype of one sex as both a genetically influenced evolving trait as well as the (evolving) social environment in which the phenotype of the opposite sex evolves. Several important points emerge from our analysis, including the relationship between direct selection on one sex and indirect effects through selection on the opposite sex. We suggest that the proposed approach may be a valuable tool to complement other theoretical approaches currently used to study sexual conflict. Most importantly, our approach highlights areas where additional empirical data can help clarify the role of sexual conflict in the evolutionary process.

  1. Sliding mode observers for automotive alternator

    NASA Astrophysics Data System (ADS)

    Chen, De-Shiou

    Estimator development for synchronous rectification of the automotive alternator is a desirable approach for estimating the alternator's back electromotive forces (EMFs) without a direct mechanical sensor of rotor position. Recent theoretical studies show that the back EMF may be estimated from the system's phase current model by sensing electrical variables (AC phase currents and DC bus voltage) of the synchronous rectifier. Observer designs for back EMF estimation have previously been developed for constant engine speed. In this work, we are interested in nonlinear observer design of back EMF estimation for the realistic case of variable engine speed. An initial back EMF estimate can be obtained from a first-order sliding mode observer (SMO) based on the phase current model. A fourth-order nonlinear asymptotic observer (NAO), complemented by the dynamics of the back EMF with time-varying frequency and amplitude, is then incorporated into the observer design for chattering reduction. Since the cost of the required phase current sensors may be prohibitive, the approach most applicable to real implementation, measuring the DC current of the synchronous rectifier, is pursued in the dissertation. It is shown that the DC link current consists of sequential "windows" carrying partial information about the phase currents; hence the cascaded NAO is responsible not only for chattering reduction but also for completing the estimation process. Stability analyses of the proposed estimators are considered for most linear and time-varying cases. The stability of the NAO without speed information is substantiated by both numerical and experimental results. Prospective estimation algorithms for the case of battery current measurements are investigated. Theoretical study indicates that convergence of the proposed LAO may be ensured by high-gain inputs. Since the order of the LAO/NAO for the battery current case is one higher than that for the link current measurements, it is hard to find moderate values of the input gains for real-time sampled-data systems. Technical difficulties in implementing such high-order discrete-time nonlinear estimators are discussed, and directions for further investigation are provided.
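
    The core idea of the first-order SMO, estimating an unknown back-EMF-like term from a measured current via a switching injection whose filtered "equivalent" value tracks the disturbance, can be sketched on a single-phase toy model. The following Python sketch is a simplification under assumed parameters (R, L, gain k, EMF waveform); it is not the dissertation's multi-phase, cascaded SMO/NAO design.

```python
import numpy as np

# Plant:    L di/dt  = u - R*i  - e(t), with e(t) an unknown back-EMF term.
# Observer: L dih/dt = u - R*ih - v,    v = k*sign(ih - i).
# Once sliding (ih ~ i) is reached, the low-pass filtered switching signal v
# approximates e(t) (the "equivalent control").
R, L = 0.5, 1e-3          # assumed resistance [ohm] and inductance [H]
k = 30.0                  # switching gain, chosen larger than max |e(t)|
dt, T = 1e-6, 0.05
t = np.arange(0.0, T, dt)
e_true = 12.0 * np.sin(2 * np.pi * 100 * t)   # hypothetical back EMF
u = 14.0 * np.ones_like(t)                    # applied voltage

i = ih = 0.0
tau_f = 2e-4              # filter time constant for the equivalent control
e_hat = 0.0
e_est = np.zeros_like(t)
for n in range(len(t)):
    v = k * np.sign(ih - i)
    di = (u[n] - R * i - e_true[n]) / L
    dih = (u[n] - R * ih - v) / L
    i += dt * di
    ih += dt * dih
    e_hat += dt * (v - e_hat) / tau_f          # filtered switching signal ~ e(t)
    e_est[n] = e_hat

print("RMS estimation error after the transient:",
      np.sqrt(np.mean((e_est[len(t)//2:] - e_true[len(t)//2:]) ** 2)))
```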

  2. Genetic Programming for Automatic Hydrological Modelling

    NASA Astrophysics Data System (ADS)

    Chadalawada, Jayashree; Babovic, Vladan

    2017-04-01

    One of the recent challenges for the hydrologic research community is the need to develop coupled systems that integrate hydrologic, atmospheric and socio-economic relationships. This poses a requirement for novel modelling frameworks that can accurately represent complex systems given the limited understanding of underlying processes, increasing volumes of data and high levels of uncertainty. Existing hydrological models vary in their conceptualization and process representation, and each is best suited to capture the environmental dynamics of a particular hydrological system. Data-driven approaches can be used to integrate alternative process hypotheses in order to achieve a unified theory at catchment scale. The key steps in implementing an integrated modelling framework informed by both prior understanding and data include the choice of technique for inducing knowledge from data, identification of alternative structural hypotheses, definition of rules and constraints for meaningful, intelligent combination of model component hypotheses, and definition of evaluation metrics. This study aims at defining a Genetic Programming based modelling framework that tests different conceptual model constructs against a wide range of objective functions and evolves accurate and parsimonious models that capture dominant hydrological processes at catchment scale. In this paper, GP initializes the evolutionary process using modelling decisions inspired by the Superflex framework [Fenicia et al., 2011] and automatically combines them into model structures that are scrutinized against observed data using statistical, hydrological and flow-duration-curve based performance metrics. The collaboration between data-driven and physical, conceptual modelling paradigms improves the ability to model and manage hydrologic systems. Fenicia, F., D. Kavetski, and H. H. Savenije (2011), Elements of a flexible approach for conceptual hydrological modeling: 1. Motivation and theoretical development, Water Resources Research, 47(11).

  3. Development and construct validity of the Classroom Strategies Scale-Observer Form.

    PubMed

    Reddy, Linda A; Fabiano, Gregory; Dudek, Christopher M; Hsu, Louis

    2013-12-01

    Research on progress monitoring has almost exclusively focused on student behavior and not on teacher practices. This article presents the development and validation of a new teacher observational assessment (Classroom Strategies Scale) of classroom instructional and behavioral management practices. The theoretical underpinnings and empirical basis for the instructional and behavioral management scales are presented. The Classroom Strategies Scale (CSS) evidenced overall good reliability estimates including internal consistency, interrater reliability, test-retest reliability, and freedom from item bias on important teacher demographics (age, educational degree, years of teaching experience). Confirmatory factor analyses (CFAs) of CSS data from 317 classrooms were carried out to assess the level of empirical support for (a) a 4 first-order factor theory concerning teachers' instructional practices, and (b) a 4 first-order factor theory concerning teachers' behavior management practice. Several fit indices indicated acceptable fit of the (a) and (b) CFA models to the data, as well as acceptable fit of less parsimonious alternative CFA models that included 1 or 2 second-order factors. Information-theory-based indices generally suggested that the (a) and (b) CFA models fit better than some more parsimonious alternative CFA models that included constraints on relations of first-order factors. Overall, CFA first-order and higher order factor results support the CSS-Observer Total, Composite, and subscales. Suggestions for future measurement development efforts are outlined. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  4. Assessment of dependency, agreeableness, and their relationship.

    PubMed

    Lowe, Jennifer Ruth; Edmundson, Maryanne; Widiger, Thomas A

    2009-12-01

    Agreeableness is central to the 5-factor model conceptualization of dependency. However, 4 meta-analyses of the relationship of agreeableness with dependency have failed to identify a consistent relationship. It was the hypothesis of the current study that these findings might be due in part to an emphasis on the assessment of adaptive, rather than maladaptive, variants of agreeableness. This hypothesis was tested by using experimentally altered NEO Personality Inventory-Revised (Costa & McCrae, 1992) items that were reversed with respect to their implications for maladaptiveness. The predicted correlations were confirmed with the experimentally altered version with measures of dependent personality disorder, measures of trait dependency (including 2 measures of adaptive dependency), and measures of dependency from alternative dimensional models of personality disorder. The theoretical implications of the findings and suggestions for future research are discussed.

  5. At the intersection of micro and macro: opportunities and challenges for physician-patient communication research.

    PubMed

    Cline, Rebecca J Welch

    2003-05-01

    The health care relationship model is undergoing dramatic change. Micro-level communication patterns yield health care relationship models (e.g. paternalism, mutual participation, consumerism). At the same time, macro-level systems appear increasingly likely to influence the nature of micro-level interaction. The intersections of micro-level and macro-level health care communication phenomena provide important venues for research and interventions. This essay identifies theoretical premises regarding the relationships between communication and health-related behavior; explores three prominent and growing macro-level phenomena that observers argue are likely to influence the physician-patient relationship and communication therein: complementary and alternative medicine, the Internet, and direct-to-consumer advertising of prescription drugs; and offers a research agenda for exploring macro-level influences on micro-level physician-patient communication.

  6. Econophysics: A challenge to econometricians

    NASA Astrophysics Data System (ADS)

    Zapart, Christopher A.

    2015-02-01

    The study contrasts mainstream economics, which operates on time scales of hours and days, with behavioural finance, econophysics and high-frequency trading, which are more applicable to short-term time scales of the order of minutes and seconds. We show how the central theoretical assumption underpinning prevailing economic theories is violated on small time scales. We also demonstrate how an alternative behavioural econophysics can model reactions of market participants to short-term movements in foreign exchange markets and, in direct contradiction of orthodox economics, design a rudimentary IsingFX automated trading system. By replacing costly human forex dealers with banks of Field-Programmable Gate Array (FPGA) devices that implement in hardware high-frequency behavioural trading models of the type described here, brokerages and forex liquidity providers can expect to gain significant reductions in operating costs.

  7. Estimation of species extinction: what are the consequences when total species number is unknown?

    PubMed

    Chen, Youhua

    2014-12-01

    The species-area relationship (SAR) is known to overestimate species extinction, but the underlying mechanisms remain unclear to a great extent. Here, I show that when the total species number in an area is unknown, the SAR model exaggerates the estimation of species extinction. It is proposed that, to accurately estimate species extinction caused by habitat destruction, one of the principal prerequisites is to accurately total the species numbers present in the whole study area. One can better evaluate and compare alternative theoretical SAR models on the accurate estimation of species loss only when the exact total species number for the whole area is known. This presents an opportunity for ecologists to stimulate more research on accurately estimating Whittaker's gamma diversity for the purpose of better predicting species loss.
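
    For readers unfamiliar with the calculation, the extinction estimate implied by the power-law SAR, S = cA^z, scales directly with the assumed total species number, which is the point of the abstract. The short Python sketch below uses made-up values for z, the areas and the species totals.

```python
# Hedged worked example of the classic power-law SAR, S = c * A**z, and the
# extinction estimate it implies after habitat loss. All numbers are
# illustrative, not from the paper.
z = 0.25
A_total, A_remaining = 1000.0, 500.0            # 50% habitat loss
fraction_remaining = (A_remaining / A_total) ** z
print(f"fraction of species expected to persist: {fraction_remaining:.3f}")

# If the true regional richness is 800 species but a survey only totals 500,
# the same SAR fraction is applied to the wrong baseline, changing the
# predicted number of extinctions proportionally.
for S_total in (800, 500):
    extinct = S_total * (1.0 - fraction_remaining)
    print(f"assumed total = {S_total:4d}  ->  predicted extinctions ~ {extinct:.0f}")
```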

  8. Damage Mechanics Approach to Penetration of Water-filled Surface Crevasses

    NASA Astrophysics Data System (ADS)

    Duddu, R.; Jimenez, S. K.; Bassis, J. N.

    2017-12-01

    Iceberg calving is a natural process that occurs when crevasses penetrate the entire thickness of an ice shelf or a glacier, leading to the detachment (birth) of icebergs. Calving from marine-terminating glaciers and floating ice shelves accounts for nearly 50% of the mass lost from both the Greenland and Antarctic ice sheets, which can directly or indirectly contribute to sea-level rise. A widely accepted hypothesis is that crevasses in ice form due to brittle mode I fracture under the action of tensile stresses. Existing theoretical approaches for modeling crevasse propagation based on this hypothesis include the Nye zero stress and fracture mechanics approaches. These theoretical approaches assume idealized geometry and boundary conditions, and ignore the effects of viscous creep deformation in ice over longer time scales; nevertheless, they have produced interesting results that match well with the sparse field observations available. An alternative is to use the continuum damage mechanics approach for modeling crevasse propagation, which is more easily incorporated into numerical ice sheet models that consider realistic geometries, boundary conditions and viscous creep effects. In this presentation, we describe the damage mechanics approach to penetration of dry and water-filled surface crevasses using the principles of poromechanics and compare our results with those from existing theoretical approaches. We investigate the upper limits on crevasse penetration depth in relation to ice thickness, water depth in the surface crevasse, seawater depth at the ice terminus and ice rheology (i.e., elastic vs. viscous). Our studies on idealized glaciers show that the damage mechanics approach is consistent with the fracture mechanics approach when the seawater depth at the ice terminus is low, but is inconsistent with the theoretical approaches when the seawater depth at the ice terminus is high (i.e., near flotation). Our studies also indicate that the upper limit on surface crevasse penetration depth is minimally sensitive to ice rheology when glacier geometry changes are ignored. However, viscous flow can cause geometry changes and induce stresses (e.g., due to bending), leading to deeper crevasse penetration in numerical ice sheet models.
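
    As a point of reference for the comparison described above, the Nye zero-stress estimate of crevasse penetration depth has a simple closed form. The sketch below states the commonly used version with a water-filling term (Rxx is the tensile resistive stress, d_w the height of the water column in the crevasse); the numbers are illustrative, not the presentation's results.

```python
# Minimal sketch of the Nye zero-stress estimate that damage mechanics results
# are often compared against: the crevasse penetrates to the depth where the
# tensile resistive stress plus the water pressure balances the ice overburden,
# d = Rxx / (rho_i * g) + (rho_w / rho_i) * d_w.
rho_i, rho_w, g = 917.0, 1000.0, 9.81   # ice density, water density, gravity

def nye_crevasse_depth(Rxx, water_column=0.0):
    """Zero-stress crevasse depth [m] for tensile stress Rxx [Pa]."""
    return Rxx / (rho_i * g) + (rho_w / rho_i) * water_column

print("dry crevasse, Rxx = 100 kPa :", round(nye_crevasse_depth(100e3), 1), "m")
print("same stress, 20 m of water  :", round(nye_crevasse_depth(100e3, 20.0), 1), "m")
```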

  9. Breaking Through the Glass Ceiling: Recent Experimental Approaches to Probe the Properties of Supercooled Liquids near the Glass Transition.

    PubMed

    Smith, R Scott; Kay, Bruce D

    2012-03-15

    Experimental measurements of the properties of supercooled liquids at temperatures near their glass transition temperatures, Tg, are requisite for understanding the behavior of glasses and amorphous solids. Unfortunately, many supercooled molecular liquids rapidly crystallize at temperatures far above their Tg, making such measurements difficult to nearly impossible. In this Perspective, we discuss some recent alternative approaches to obtain experimental data in the temperature regime near Tg. These new approaches may yield the additional experimental data necessary to test current theoretical models of the dynamical slowdown that occurs in supercooled liquids approaching the glass transition.

  10. Calculating binding free energies for protein-carbohydrate complexes.

    PubMed

    Hadden, Jodi A; Tessier, Matthew B; Fadda, Elisa; Woods, Robert J

    2015-01-01

    A variety of computational techniques may be applied to compute theoretical binding free energies for protein-carbohydrate complexes. Elucidation of the intermolecular interactions, as well as the thermodynamic effects, that contribute to the relative strength of receptor binding can shed light on biomolecular recognition, and the resulting initiation or inhibition of a biological process. Three types of free energy methods are discussed here, including MM-PB/GBSA, thermodynamic integration, and a non-equilibrium alternative utilizing SMD. Throughout this chapter, the well-known concanavalin A lectin is employed as a model system to demonstrate the application of these methods to the special case of carbohydrate binding.
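
    Of the three method families mentioned, thermodynamic integration has the simplest final step: a quadrature of the ensemble-averaged dU/dλ over the coupling parameter. The Python sketch below shows that step only, with placeholder λ windows and averages standing in for simulation output.

```python
import numpy as np

# Minimal sketch of the thermodynamic integration (TI) estimate:
# Delta G = integral over lambda of <dU/dlambda>_lambda.
# The lambda windows and ensemble averages below are made-up placeholders for
# values that would come from simulations of the coupled and decoupled states.
lambdas = np.array([0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0])
dU_dlambda = np.array([-45.2, -38.7, -26.1, -10.4, 2.3, 8.9, 12.5])  # kcal/mol

delta_G = np.trapz(dU_dlambda, lambdas)   # trapezoidal quadrature over lambda
print(f"TI estimate of the free energy contribution: {delta_G:.1f} kcal/mol")
```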

  11. Choosing the best index for the average score intraclass correlation coefficient.

    PubMed

    Shieh, Gwowen

    2016-09-01

    The intraclass correlation coefficient (ICC)(2) index from a one-way random effects model is widely used to describe the reliability of mean ratings in behavioral, educational, and psychological research. Despite its apparent utility, the essential property of ICC(2) as a point estimator of the average score intraclass correlation coefficient is seldom mentioned. This article considers several potential measures and compares their performance with ICC(2). Analytical derivations and numerical examinations are presented to assess the bias and mean square error of the alternative estimators. The results suggest that more advantageous indices can be recommended over ICC(2) for their theoretical implication and computational ease.
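
    For context, the conventional ICC(2) point estimator discussed above is computed from the one-way random effects ANOVA mean squares as (MSB − MSW)/MSB, the reliability of the mean of k ratings per target. A minimal Python sketch on simulated ratings (illustrative sample sizes and variances) follows; it shows the baseline estimator, not the alternative indices proposed in the article.

```python
import numpy as np

# Simulate ratings: each of n_targets is scored by k raters with noise.
rng = np.random.default_rng(1)
n_targets, k = 30, 4
true_scores = rng.normal(0.0, 1.0, size=(n_targets, 1))
ratings = true_scores + rng.normal(0.0, 0.8, size=(n_targets, k))

row_means = ratings.mean(axis=1, keepdims=True)
grand_mean = ratings.mean()
msb = k * np.sum((row_means - grand_mean) ** 2) / (n_targets - 1)   # between targets
msw = np.sum((ratings - row_means) ** 2) / (n_targets * (k - 1))    # within targets
icc2 = (msb - msw) / msb
print(f"ICC(2) estimate of average-score reliability: {icc2:.3f}")
```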

  12. Set points, settling points and some alternative models: theoretical options to understand how genes and environments combine to regulate body adiposity

    PubMed Central

    Speakman, John R.; Levitsky, David A.; Allison, David B.; Bray, Molly S.; de Castro, John M.; Clegg, Deborah J.; Clapham, John C.; Dulloo, Abdul G.; Gruer, Laurence; Haw, Sally; Hebebrand, Johannes; Hetherington, Marion M.; Higgs, Susanne; Jebb, Susan A.; Loos, Ruth J. F.; Luckman, Simon; Luke, Amy; Mohammed-Ali, Vidya; O’Rahilly, Stephen; Pereira, Mark; Perusse, Louis; Robinson, Tom N.; Rolls, Barbara; Symonds, Michael E.; Westerterp-Plantenga, Margriet S.

    2011-01-01

    The close correspondence between energy intake and expenditure over prolonged time periods, coupled with an apparent protection of the level of body adiposity in the face of perturbations of energy balance, has led to the idea that body fatness is regulated via mechanisms that control intake and energy expenditure. Two models have dominated the discussion of how this regulation might take place. The set point model is rooted in physiology, genetics and molecular biology, and suggests that there is an active feedback mechanism linking adipose tissue (stored energy) to intake and expenditure via a set point, presumably encoded in the brain. This model is consistent with many of the biological aspects of energy balance, but struggles to explain the many significant environmental and social influences on obesity, food intake and physical activity. More importantly, the set point model does not effectively explain the ‘obesity epidemic’ – the large increase in body weight and adiposity of a large proportion of individuals in many countries since the 1980s. An alternative model, called the settling point model, is based on the idea that there is passive feedback between the size of the body stores and aspects of expenditure. This model accommodates many of the social and environmental characteristics of energy balance, but struggles to explain some of the biological and genetic aspects. The shortcomings of these two models reflect their failure to address the gene-by-environment interactions that dominate the regulation of body weight. We discuss two additional models – the general intake model and the dual intervention point model – that address this issue and might offer better ways to understand how body fatness is controlled. PMID:22065844

  13. Probing particle and nuclear physics models of neutrinoless double beta decay with different nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fogli, G. L.; Rotunno, A. M.; Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Via Orabona 4, 70126 Bari

    2009-07-01

    Half-life estimates for neutrinoless double beta decay depend on particle physics models for lepton-flavor violation, as well as on nuclear physics models for the structure and transitions of candidate nuclei. Different models considered in the literature can be contrasted, via prospective data, with a 'standard' scenario characterized by light Majorana neutrino exchange and by the quasiparticle random phase approximation, for which the theoretical covariance matrix has been recently estimated. We show that, assuming future half-life data in four promising nuclei ({sup 76}Ge, {sup 82}Se, {sup 130}Te, and {sup 136}Xe), the standard scenario can be distinguished from a few nonstandard physics models, while being compatible with alternative state-of-the-art nuclear calculations (at 95% C.L.). Future signals in different nuclei may thus help to discriminate at least some decay mechanisms, without being spoiled by current nuclear uncertainties. Prospects for possible improvements are also discussed.

  14. Structure and mechanism of diet specialisation: testing models of individual variation in resource use with sea otters

    USGS Publications Warehouse

    Tinker, M. Tim; Guimarães, Paulo R.; Novak, Mark; Marquitti, Flavia Maria Darcie; Bodkin, James L.; Staedler, Michelle; Bentall, Gena B.; Estes, James A.

    2012-01-01

    Studies of consumer-resource interactions suggest that individual diet specialisation is empirically widespread and theoretically important to the organisation and dynamics of populations and communities. We used weighted networks to analyze the resource use by sea otters, testing three alternative models for how individual diet specialisation may arise. As expected, individual specialisation was absent when otter density was low, but increased at high-otter density. A high-density emergence of nested resource-use networks was consistent with the model assuming individuals share preference ranks. However, a density-dependent emergence of a non-nested modular network for ‘core’ resources was more consistent with the ‘competitive refuge’ model. Individuals from different diet modules showed predictable variation in rank-order prey preferences and handling times of core resources, further supporting the competitive refuge model. Our findings support a hierarchical organisation of diet specialisation and suggest individual use of core and marginal resources may be driven by different selective pressures.

  15. Atmospheric Fragmentation of the Canyon Diablo Meteoroid

    NASA Technical Reports Server (NTRS)

    Pierazzo, E.; Artemieva, N. A.

    2005-01-01

    About 50 kyr ago the impact of an iron meteoroid excavated Meteor Crater, Arizona, the first terrestrial structure widely recognized as a meteorite impact crater. Recent studies of ballistically dispersed impact melts from Meteor Crater indicate a compositionally unusually heterogeneous impact melt with high SiO2 and exceptionally high (10 to 25% on average) levels of projectile contamination. These are observations that must be explained by any theoretical modeling of the impact event. Simple atmospheric entry models for an iron meteorite similar to Canyon Diablo indicate that the surface impact speed should have been around 12 km/s [Melosh, personal comm.], not the 15-20 km/s generally assumed in previous impact models. This may help explain the unusual characteristics of the impact melt at Meteor Crater. We present alternative initial estimates of the motion in the atmosphere of an iron projectile similar to Canyon Diablo, to constrain the initial conditions of the impact event that generated Meteor Crater.

  16. A Nonlinear Diffusion Equation-Based Model for Ultrasound Speckle Noise Removal

    NASA Astrophysics Data System (ADS)

    Zhou, Zhenyu; Guo, Zhichang; Zhang, Dazhi; Wu, Boying

    2018-04-01

    Ultrasound images are contaminated by speckle noise, which brings difficulties to further image analysis and clinical diagnosis. In this paper, we address this problem from the perspective of nonlinear diffusion equation theory. We develop a nonlinear diffusion equation-based model that takes into account not only the gradient information of the image, but also information about its gray levels. By utilizing the region indicator as the variable exponent, we can adaptively control the diffusion type, which alternates between Perona-Malik diffusion and Charbonnier diffusion according to the image gray levels. Furthermore, we analyze the theoretical and numerical properties of the proposed model. Experiments show that the proposed method achieves much better speckle suppression and edge preservation when compared with traditional despeckling methods, especially in low gray level and low-contrast regions.
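
    The mechanism described, a gray-level indicator that switches the diffusivity between Perona-Malik and Charbonnier forms, can be sketched as a single explicit diffusion step. The Python fragment below is a schematic reading of that idea with an assumed gray-level threshold, contrast parameter and time step; it is not the authors' exact variable-exponent model.

```python
import numpy as np

def diffusion_step(img, K=0.1, tau=0.1, low_gray=0.3):
    """One explicit step of variable-coefficient diffusion on a 2D image."""
    gx, gy = np.gradient(img)
    grad_mag2 = gx ** 2 + gy ** 2
    g_pm = 1.0 / (1.0 + grad_mag2 / K ** 2)            # Perona-Malik diffusivity
    g_ch = 1.0 / np.sqrt(1.0 + grad_mag2 / K ** 2)     # Charbonnier diffusivity
    g = np.where(img < low_gray, g_ch, g_pm)           # gray-level region switch
    fx, fy = g * gx, g * gy
    div = np.gradient(fx)[0] + np.gradient(fy)[1]      # divergence of g*grad(img)
    return img + tau * div

# Toy multiplicative (speckle-like) noise on a ramp image, then 30 steps.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
denoised = speckled.copy()
for _ in range(30):
    denoised = diffusion_step(denoised)
print("noise std before/after:", round(np.std(speckled - clean), 3),
      round(np.std(denoised - clean), 3))
```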

  17. A Reconstruction Method for the Estimation of Temperatures of Multiple Sources Applied for Nanoparticle-Mediated Hyperthermia.

    PubMed

    Steinberg, Idan; Tamir, Gil; Gannot, Israel

    2018-03-16

    Solid malignant tumors are one of the leading causes of death worldwide. In many cases complete removal is not possible and alternative methods such as focused hyperthermia are used. Precise control of the hyperthermia process is imperative for the successful application of such treatment. To that end, this research presents a fast method that enables the estimation of deep tissue heat distribution by capturing and processing the transient temperature at the boundary based on a bio-heat transfer model. The theoretical model is rigorously developed and thoroughly validated by a series of experiments. A 10-fold improvement is demonstrated in resolution and visibility on tissue-mimicking phantoms. The inverse problem is demonstrated as well, with a successful application of the model for imaging deep-tissue embedded heat sources, thereby giving the physician the ability to dynamically evaluate the hyperthermia treatment's efficiency in real time.

  18. Social modernization and the increase in the divorce rate.

    PubMed

    Esser, H

    1993-03-01

    The author develops a micro-model of marital interactions that is used to analyze factors affecting the divorce rate in modern industrialized societies. The core of the model is the concept of production of marital gain and mutual control of this production. "The increase of divorce rates, then, is explained by a steady decrease of institutional and social embeddedness, which helps to solve this kind of an 'assurance game.' The shape of the individual risk is explained by the typical form of change of the 'production functions' of marriages within the first period of adaptation. The inconsistent results concerning women's labor market participation in linear regression models are explained as a consequence of the (theoretical and statistical) 'interaction' of decreases in embeddedness and increases in external alternatives for women." Comments are included by Karl-Dieter Opp (pp. 278-82) and Ulrich Witt (pp. 283-5). excerpt

  19. Democracy and sustainable development--what is the alternative to cost-benefit analysis?

    PubMed

    Söderbaum, Peter

    2006-04-01

    Cost-benefit analysis (CBA) is part of neoclassical economics, a specific paradigm, or theoretical perspective. In searching for alternatives to CBA, competing theoretical frameworks in economics appear to be a natural starting point. Positional analysis (PA) as an alternative to CBA is built on institutional theory and a different set of assumptions about human beings, organizations, markets, etc. Sustainable development (SD) is a multidimensional concept that includes social and ecological dimensions in addition to monetary aspects. If the political commitment to SD in the European Union and elsewhere is taken seriously, then approaches to decision making should be chosen that first open the door for multidimensional analysis rather than close it. Sustainable development suggests a direction for development in a broad sense but is still open to different interpretations. Each such interpretation is political in kind, and a second criterion for judging different approaches is whether they are ideologically open rather than closed. Although methods for decision making have traditionally been connected with mathematical objective functions and optimization, the purpose of PA is to illuminate a decision situation in a many-sided way with respect to possibly relevant ideological orientations, alternatives, and consequences. Decisions are understood in terms of matching the ideological orientation of each decision maker with the expected effects profile of each alternative considered. Appropriateness and pattern recognition are other concepts in understanding this process.

  20. Methylene blue binding to DNA with alternating AT base sequence: minor groove binding is favored over intercalation.

    PubMed

    Rohs, Remo; Sklenar, Heinz

    2004-04-01

    The results presented in this paper on methylene blue (MB) binding to DNA with AT alternating base sequence complement the data obtained in two former modeling studies of MB binding to GC alternating DNA. In the light of the large amount of experimental data for both systems, this theoretical study is focused on a detailed energetic analysis and comparison in order to understand their different behavior. Since experimental high-resolution structures of the complexes are not available, the analysis is based on energy minimized structural models of the complexes in different binding modes. For both sequences, four different intercalation structures and two models for MB binding in the minor and major groove have been proposed. Solvent electrostatic effects were included in the energetic analysis by using electrostatic continuum theory, and the dependence of MB binding on salt concentration was investigated by solving the non-linear Poisson-Boltzmann equation. We find that the relative stability of the different complexes is similar for the two sequences, in agreement with the interpretation of spectroscopic data. Subtle differences, however, are seen in energy decompositions and can be attributed to the change from symmetric 5'-YpR-3' intercalation to minor groove binding with increasing salt concentration, which is experimentally observed for the AT sequence at lower salt concentration than for the GC sequence. According to our results, this difference is due to the significantly lower non-electrostatic energy for the minor groove complex with AT alternating DNA, whereas the slightly lower binding energy to this sequence is caused by a higher deformation energy of DNA. The energetic data are in agreement with the conclusions derived from different spectroscopic studies and can also be structurally interpreted on the basis of the modeled complexes. The simple static modeling technique and the neglect of entropy terms and of non-electrostatic solute-solvent interactions, which are assumed to be nearly constant for the compared complexes of MB with DNA, seem to be justified by the results.

  1. An alternative covariance estimator to investigate genetic heterogeneity in populations.

    PubMed

    Heslot, Nicolas; Jannink, Jean-Luc

    2015-11-26

    For genomic prediction and genome-wide association studies (GWAS) using mixed models, covariance between individuals is estimated using molecular markers. Based on the properties of mixed models, using available molecular data for prediction is optimal if this covariance is known. Under this assumption, adding individuals to the analysis should never be detrimental. However, some empirical studies showed that increasing training population size decreased prediction accuracy. Recently, results from theoretical models indicated that even if marker density is high and the genetic architecture of traits is controlled by many loci with small additive effects, the covariance between individuals, which depends on relationships at causal loci, is not always well estimated by the whole-genome kinship. We propose an alternative covariance estimator named K-kernel, to account for potential genetic heterogeneity between populations that is characterized by a lack of genetic correlation, and to limit the information flow between a priori unknown populations in a trait-specific manner. This is similar to a multi-trait model; parameters are estimated by REML and, in extreme cases, the model can allow for an independent genetic architecture between populations. As such, K-kernel is useful to study the problem of the design of training populations. K-kernel was compared to other covariance estimators or kernels to examine its fit to the data, cross-validated accuracy and suitability for GWAS on several datasets. It provides a significantly better fit to the data than the genomic best linear unbiased prediction model and, in some cases, it performs better than other kernels such as the Gaussian kernel, as shown by an empirical null distribution. In GWAS simulations, alternative kernels control type I errors as well as or better than the classical whole-genome kinship and increase statistical power. No or small gains were observed in cross-validated prediction accuracy. This alternative covariance estimator can be used to gain insight into trait-specific genetic heterogeneity by identifying relevant sub-populations that lack genetic correlation between them. Genetic correlation can be zero between the identified sub-populations, which are obtained by automatic selection of the relevant sets of individuals to be included in the training population. It may also increase statistical power in GWAS.
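
    For readers less familiar with the baseline being compared against, the whole-genome kinship referred to above is commonly the VanRaden genomic relationship matrix, G = ZZ'/(2Σp(1−p)), with genotypes centred by twice the allele frequency. A minimal Python sketch on simulated 0/1/2 genotypes follows; the K-kernel estimator itself is not reproduced here.

```python
import numpy as np

# Simulated genotype matrix (individuals x markers), coded 0/1/2; in a real
# analysis these would come from observed markers.
rng = np.random.default_rng(42)
n_ind, n_markers = 100, 500
p = rng.uniform(0.1, 0.9, size=n_markers)                 # allele frequencies
geno = rng.binomial(2, p, size=(n_ind, n_markers)).astype(float)

Z = geno - 2.0 * p                                         # centre by 2p
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))                # whole-genome kinship
print("mean diagonal of G (expected near 1):", round(np.mean(np.diag(G)), 3))
```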

  2. The importance of functional form in optimal control solutions of problems in population dynamics

    USGS Publications Warehouse

    Runge, M.C.; Johnson, F.A.

    2002-01-01

    Optimal control theory is finding increased application in both theoretical and applied ecology, and it is a central element of adaptive resource management. One of the steps in an adaptive management process is to develop alternative models of system dynamics, models that are all reasonable in light of available data, but that differ substantially in their implications for optimal control of the resource. We explored how the form of the recruitment and survival functions in a general population model for ducks affected the patterns in the optimal harvest strategy, using a combination of analytical, numerical, and simulation techniques. We compared three relationships between recruitment and population density (linear, exponential, and hyperbolic) and three relationships between survival during the nonharvest season and population density (constant, logistic, and one related to the compensatory harvest mortality hypothesis). We found that the form of the component functions had a dramatic influence on the optimal harvest strategy and the ultimate equilibrium state of the system. For instance, while it is commonly assumed that a compensatory hypothesis leads to higher optimal harvest rates than an additive hypothesis, we found this to depend on the form of the recruitment function, in part because of differences in the optimal steady-state population density. This work has strong direct consequences for those developing alternative models to describe harvested systems, but it is relevant to a larger class of problems applying optimal control at the population level. Often, different functional forms will not be statistically distinguishable in the range of the data. Nevertheless, differences between the functions outside the range of the data can have an important impact on the optimal harvest strategy. Thus, development of alternative models by identifying a single functional form, then choosing different parameter combinations from extremes on the likelihood profile may end up producing alternatives that do not differ as importantly as if different functional forms had been used. We recommend that biological knowledge be used to bracket a range of possible functional forms, and robustness of conclusions be checked over this range.
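
    The kind of functional-form contrast discussed above can be made concrete with three recruitment-versus-density curves that are similar over a data range but diverge outside it, as in the short Python sketch below (illustrative parameter values only, not those of the waterfowl model).

```python
import numpy as np

# Three alternative recruitment-versus-density functional forms.
def recruits_linear(N, a=1.2, b=0.0006):
    return np.maximum(a - b * N, 0.0) * N

def recruits_exponential(N, a=1.2, b=0.0008):
    return a * np.exp(-b * N) * N

def recruits_hyperbolic(N, a=1.2, K=900.0):
    return a * N / (1.0 + N / K)

# Similar at moderate densities, clearly different at 1500 (outside the "data").
for N in (200.0, 600.0, 1500.0):
    print(N, round(float(recruits_linear(N)), 1),
             round(float(recruits_exponential(N)), 1),
             round(float(recruits_hyperbolic(N)), 1))
```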

  3. Minimum-dissipation scalar transport model for large-eddy simulation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Abkar, Mahdi; Bae, Hyun J.; Moin, Parviz

    2016-08-01

    Minimum-dissipation models are a simple alternative to the Smagorinsky-type approaches to parametrize the subfilter turbulent fluxes in large-eddy simulation. A recently derived model of this type for subfilter stress tensor is the anisotropic minimum-dissipation (AMD) model [Rozema et al., Phys. Fluids 27, 085107 (2015), 10.1063/1.4928700], which has many desirable properties. It is more cost effective than the dynamic Smagorinsky model, it appropriately switches off in laminar and transitional flows, and it is consistent with the exact subfilter stress tensor on both isotropic and anisotropic grids. In this study, an extension of this approach to modeling the subfilter scalar flux is proposed. The performance of the AMD model is tested in the simulation of a high-Reynolds-number rough-wall boundary-layer flow with a constant and uniform surface scalar flux. The simulation results obtained from the AMD model show good agreement with well-established empirical correlations and theoretical predictions of the resolved flow statistics. In particular, the AMD model is capable of accurately predicting the expected surface-layer similarity profiles and power spectra for both velocity and scalar concentration.

  4. Rethinking family-centred care for the child and family in hospital.

    PubMed

    Tallon, Mary M; Kendall, Garth E; Snider, Paul D

    2015-05-01

    This paper presents and discusses an alternative model of family-centred care (FCC) that focuses on optimising the health and developmental outcomes of children through the provision of appropriate support to the child's family. The relevance, meaning and effectiveness of FCC have been challenged recently. Studies show that parents in hospital often feel unsupported, judged by hospital staff and uncertain about what care they should give to their child. With no convincing evidence relating FCC to improved child health outcomes, it has been suggested that FCC should be replaced with a new improved model to guide the care of children in hospital. This integrative review discusses theory and evidence-based literature that supports the practice of an alternative model of FCC that is focused on the health and developmental outcomes of children who are seriously ill, rather than the organisational requirements of children's hospitals. Theories and research findings in a wide range of disciplines including epidemiology, psychology, sociology, anthropology and neuroscience were accessed for this discussion. Nursing literature regarding partnership building, communication and FCC was also accessed. This paper discusses the benefits of applying a bioecological model of human development, the family and community resource framework, the concepts of allostatic load and biological embedding, empowerment theory, and the nurse-family partnership model to FCC. While there is no direct evidence showing that the implementation of this alternative model of FCC in the hospital setting improves the health and developmental outcomes of children who are seriously ill, there is a great deal of evidence from community nursing practice that suggests it is very likely to do so. Application of these theoretical concepts to practice has the potential to underpin a theory of nursing that is relevant for all nurses irrespective of the age of those they care for and the settings within which they work. © 2015 John Wiley & Sons Ltd.

  5. A Game-Theoretic Measure of Presence for Assessing Aircraft Carrier Options.

    DTIC Science & Technology

    1982-10-01

    [OCR residue from the DTIC report documentation page; no abstract is recoverable. Report AD-A121 599. Keywords: game theory, zero-sum games, Colonel Blotto games, aircraft carriers, alternatives.]

  6. Resolving the double tension: Toward a new approach to measurement modeling in cross-national research

    NASA Astrophysics Data System (ADS)

    Medina, Tait Runnfeldt

    The increasing global reach of survey research provides sociologists with new opportunities to pursue theory building and refinement through comparative analysis. However, comparison across a broad array of diverse contexts introduces methodological complexities related to the development of constructs (i.e., measurement modeling) that if not adequately recognized and properly addressed undermine the quality of research findings and cast doubt on the validity of substantive conclusions. The motivation for this dissertation arises from a concern that the availability of cross-national survey data has outpaced sociologists' ability to appropriately analyze and draw meaningful conclusions from such data. I examine the implicit assumptions and detail the limitations of three commonly used measurement models in cross-national analysis---summative scale, pooled factor model, and multiple-group factor model with measurement invariance. Using the orienting lens of the double tension I argue that a new approach to measurement modeling that incorporates important cross-national differences into the measurement process is needed. Two such measurement models---multiple-group factor model with partial measurement invariance (Byrne, Shavelson and Muthen 1989) and the alignment method (Asparouhov and Muthen 2014; Muthen and Asparouhov 2014)---are discussed in detail and illustrated using a sociologically relevant substantive example. I demonstrate that the former approach is vulnerable to an identification problem that arbitrarily impacts substantive conclusions. I conclude that the alignment method is built on model assumptions that are consistent with theoretical understandings of cross-national comparability and provides an approach to measurement modeling and construct development that is uniquely suited for cross-national research. The dissertation makes three major contributions: First, it provides theoretical justification for a new cross-national measurement model and explicates a link between theoretical conceptions of cross-national comparability and a statistical method. Second, it provides a clear and detailed discussion of model identification in multiple-group confirmatory factor analysis that is missing from the literature. This discussion sets the stage for the introduction of the identification problem within multiple-group confirmatory factor analysis with partial measurement invariance and the alternative approach to model identification employed by the alignment method. Third, it offers the first pedagogical presentation of the alignment method using a sociologically relevant example.

  7. Modelling ADHD: A review of ADHD theories through their predictions for computational models of decision-making and reinforcement learning.

    PubMed

    Ziegler, Sigurd; Pedersen, Mads L; Mowinckel, Athanasia M; Biele, Guido

    2016-12-01

    Attention deficit hyperactivity disorder (ADHD) is characterized by altered decision-making (DM) and reinforcement learning (RL), for which competing theories propose alternative explanations. Computational modelling contributes to the understanding of DM and RL by integrating behavioural and neurobiological findings, and could elucidate pathogenic mechanisms behind ADHD. This review of neurobiological theories of ADHD describes their predictions for the effect of ADHD on DM and RL, as formalized by the drift-diffusion model of DM (DDM) and a basic RL model. Empirical studies employing these models are also reviewed. While theories often agree on how ADHD should be reflected in model parameters, each theory implies a unique combination of predictions. Empirical studies agree with the theories' assumption of a lowered DDM drift rate in ADHD, while findings are less conclusive for boundary separation. The few studies employing RL models support a lower choice sensitivity in ADHD, but not an altered learning rate. The discussion outlines research areas for further theoretical refinement in the ADHD field. Copyright © 2016 Elsevier Ltd. All rights reserved.
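
    The "basic RL model" referred to above is typically a delta-rule value update combined with a softmax choice rule, whose learning-rate and choice-sensitivity parameters carry the theories' predictions. A minimal Python sketch of such an agent on a two-armed bandit is given below; the task structure and parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_agent(alpha=0.2, beta=3.0, p_reward=(0.8, 0.2), n_trials=200, seed=0):
    """Delta-rule learner with softmax choice; returns rate of choosing option 0."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)
    choices = np.zeros(n_trials, dtype=int)
    for t in range(n_trials):
        probs = np.exp(beta * q) / np.sum(np.exp(beta * q))   # softmax choice rule
        choice = rng.choice(2, p=probs)
        reward = float(rng.random() < p_reward[choice])
        q[choice] += alpha * (reward - q[choice])              # delta-rule update
        choices[t] = choice
    return np.mean(choices == 0)                               # "better" option rate

print("high choice sensitivity :", simulate_agent(beta=5.0))
print("low choice sensitivity  :", simulate_agent(beta=1.0))
```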

  8. Response to “Comments on ‘A theoretical model of the pressure distributions arising from asymmetric intraglottal flows applied to a two-mass model of the vocal folds’” [J. Acoust. Soc. Am. 130, 389–403 (2011)]

    PubMed Central

    Erath, Byron D.; Peterson, Sean D.; Zañartu, Matías; Wodicka, George R.; Stewart, Kelley C.; Plesniak, Michael W.

    2013-01-01

    Hirschberg [J. Acoust. Soc. Am. 134, 9-12 (2013)] presents a commentary on and criticisms of the viscous flow model presented by Erath et al. [J. Acoust. Soc. Am. 130, 389–403 (2011)], which solves for the asymmetric pressure loading on the vocal fold walls. This pressure loading arises from asymmetric flow attachment to one vocal fold wall when the glottal channel forms a divergent configuration. Hirschberg proposes an alternative model for the asymmetric loading based upon inviscid flow curvature at the glottal inlet. In this manuscript, further evidence is provided in support of the model of Erath et al. and its underlying assumptions, and it is demonstrated that the primary criticisms presented by Hirschberg are unwarranted. The model presented by Hirschberg is compared with the model from the original paper by Erath et al., and it is shown that each model describes different and complementary aspects of divergent glottal flows. PMID:23927090

  9. Alternative theories: Pregnancy and immune tolerance.

    PubMed

    Bonney, Elizabeth A

    2017-09-01

    For some time, reproductive immunologists have worked to understand the balance between maternal tolerance of the fetus, maternal health, and fetal protection which leads to successful pregnancy in mammalian species. We have always understood the potential importance of multiple factors, including nutrition, genetics, anatomy, hormonal regulation, environmental insult and many others. Yet, we still struggle to combine our knowledge of these factors and immunology to finally understand complex diseases of pregnancy, such as preeclampsia. Data, and potentially other factors (e.g. politics, economics), support the work to fit pregnancy into classical immune theory driven by the concept of self-non-self-discrimination. However, based on data, many classical theorists call pregnancy "a special case." This review is a first-pass suggestion to attempt to view three models of immune system activation and tolerance as potential alternatives to classical self-non-self-discrimination and to propose a theoretical framework to view them in the context of pregnancy. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. An integrated framework for the optimisation of sport and athlete development: a practitioner approach.

    PubMed

    Gulbin, Jason P; Croser, Morag J; Morley, Elissa J; Weissensteiner, Juanita R

    2013-01-01

    This paper introduces a new sport and athlete development framework that has been generated by multidisciplinary sport practitioners. By combining current theoretical research perspectives with extensive empirical observations from one of the world's leading sport agencies, the proposed FTEM (Foundations, Talent, Elite, Mastery) framework offers broad utility to researchers and sporting stakeholders alike. FTEM is unique in comparison with alternative models and frameworks, because it: integrates general and specialised phases of development for participants within the active lifestyle, sport participation and sport excellence pathways; typically doubles the number of developmental phases (n = 10) in order to better understand athlete transition; avoids chronological and training prescriptions; more optimally establishes a continuum between participation and elite; and allows full inclusion of many developmental support drivers at the sport and system levels. The FTEM framework offers a viable and more flexible alternative for those sporting stakeholders interested in managing, optimising, and researching sport and athlete development pathways.

  11. Photonic band gap and defects modes in inorganic/organic photonic crystal based on Si and HMDSO layers deposited by sputtering and PECVD

    NASA Astrophysics Data System (ADS)

    Amri, R.; Sahel, S.; Gamra, D.; Lejeune, M.; Clin, M.; Zellama, K.; Bouchriha, H.

    2018-02-01

    A hybrid inorganic/organic one-dimensional photonic crystal based on alternating layers of Si/HMDSO is elaborated. The inorganic silicon is deposited by radiofrequency magnetron sputtering and the organic HMDSO is deposited by the PECVD technique. Since the Si refractive index is n = 3.4 and the refractive index of the HMDSO layer depends on the deposition conditions, to obtain a photonic crystal with a good contrast between high and low refractive indices we varied the radiofrequency power of the PECVD process so as to obtain an HMDSO layer with a low refractive index (n = 1.45). The photonic band gap of this hybrid structure is obtained from the transmission and reflection spectra and appears after 9 alternating layers of Si/HMDSO. The introduction of defects in our photonic crystal leads to the emergence of localized modes within the photonic band gap. Our results are interpreted by using a theoretical model based on the transfer matrix.
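
    The transfer-matrix interpretation mentioned at the end can be sketched compactly for normal incidence: each layer contributes a characteristic matrix, and the product of these matrices yields the transmission spectrum whose stop band is the photonic band gap. The Python sketch below uses illustrative indices, quarter-wave thicknesses and a design wavelength, not the deposited stack's measured values.

```python
import numpy as np

n_hi, n_lo = 3.4, 1.45                             # Si-like and HMDSO-like indices
lam0 = 1550e-9                                     # assumed design wavelength [m]
d_hi, d_lo = lam0 / (4 * n_hi), lam0 / (4 * n_lo)  # quarter-wave thicknesses
n_in, n_out = 1.0, 1.0                             # air on both sides

def layer_matrix(n, d, lam):
    """Characteristic matrix of a single homogeneous layer at normal incidence."""
    delta = 2.0 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmission(lam, n_pairs=5):
    """Power transmission of n_pairs alternating hi/lo layers."""
    M = np.eye(2, dtype=complex)
    for _ in range(n_pairs):
        M = M @ layer_matrix(n_hi, d_hi, lam) @ layer_matrix(n_lo, d_lo, lam)
    t = 2.0 * n_in / (n_in * M[0, 0] + n_in * n_out * M[0, 1]
                      + M[1, 0] + n_out * M[1, 1])
    return (n_out / n_in) * np.abs(t) ** 2

for lam in (1100e-9, 1550e-9, 2000e-9):
    print(f"{lam*1e9:.0f} nm  T = {transmission(lam):.3f}")   # minimum inside the gap
```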

  12. Reward rate optimization in two-alternative decision making: empirical tests of theoretical predictions.

    PubMed

    Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D

    2009-12-01

    The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
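
    The reward-rate predictions tested above follow from standard closed-form expressions for the symmetric, unbiased DDM: ER = 1/(1 + exp(2Az/c^2)) and DT = (z/A)*tanh(Az/c^2), with drift A, noise c and threshold z. The Python sketch below evaluates the resulting reward rate over a range of thresholds for assumed values of drift, noise, non-decision time and response-stimulus interval, and reports the maximizing threshold; it is a schematic of the calculation, not the paper's fitted parameters.

```python
import numpy as np

def reward_rate(z, A=1.0, c=1.0, T0=0.3, RSI=1.0):
    """Reward rate for a symmetric DDM with threshold z (assumed task timing)."""
    er = 1.0 / (1.0 + np.exp(2.0 * A * z / c ** 2))     # error rate
    dt = (z / A) * np.tanh(A * z / c ** 2)              # mean decision time
    return (1.0 - er) / (dt + T0 + RSI)

thresholds = np.linspace(0.05, 3.0, 300)
rates = np.array([reward_rate(z) for z in thresholds])
z_opt = thresholds[np.argmax(rates)]
print(f"reward-rate-maximizing threshold: z ~ {z_opt:.2f}")
```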

  13. A worthy self is a caring self: Examining the developmental relations between self-esteem and self-compassion in adolescents.

    PubMed

    Donald, James N; Ciarrochi, Joseph; Parker, Philip D; Sahdra, Baljinder K; Marshall, Sarah L; Guo, Jiesi

    2017-08-18

    Self-compassion has been framed as a healthy alternative to self-esteem, as it is nonevaluative. However, rather than being alternatives, it may be that the two constructs develop in a mutually reinforcing way. The present study tested this possibility among adolescents. A large adolescent sample (N = 2,809; 49.8% female) reported levels of trait self-esteem and self-compassion annually for 4 years. Autoregressive cross-lagged structural equation models were used to estimate the reciprocal longitudinal relations between the two constructs. Self-esteem consistently predicted changes in self-compassion across the 4 years of the study, but not vice versa. Self-esteem appears to be an important antecedent of the development of self-compassion, perhaps because the capacity to extend compassion toward the self depends on one's appraisals of worthiness. These findings add important insights to our theoretical understanding of the development of self-compassion. © 2017 Wiley Periodicals, Inc.

  14. Plant Uptake of Organic Pollutants from Soil: A Critical Review of Bioconcentration Estimates Based on Models and Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKone, Thomas E.; Maddalena, Randy L.

    2007-01-01

    The role of terrestrial vegetation in transferring chemicals from soil and air into specific plant tissues (stems, leaves, roots, etc.) is still not well characterized. We provide here a critical review of plant-to-soil bioconcentration ratio (BCR) estimates based on models and experimental data. This review includes the conceptual and theoretical formulations of the bioconcentration ratio, constructing and calibrating empirical and mathematical algorithms to describe this ratio, and the experimental data used to quantify BCRs and calibrate the model performance. We first evaluate the theoretical basis for the BCR concept and BCR models and consider how lack of knowledge and data limits the reliability and consistency of BCR estimates. We next consider alternate modeling strategies for BCR. A key focus of this evaluation is the relative contributions to overall uncertainty from model uncertainty versus variability in the experimental data used to develop and test the models. As a case study, we consider a single chemical, hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX), and focus on variability of bioconcentration measurements obtained from 81 experiments with different plant species, different plant tissues, different experimental conditions, and different methods for reporting concentrations in the soil and plant tissues. We use these observations to evaluate the magnitude of experimental variability in plant bioconcentration and compare this to model uncertainty. Among these 81 measurements, the variation of the plant/soil BCR has a geometric standard deviation (GSD) of 3.5 and a coefficient of variability (CV, the ratio of arithmetic standard deviation to mean) of 1.7. These variations are significant but low relative to model uncertainties, which have an estimated GSD of 10 with a corresponding CV of 14.
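
    For orientation, a small sketch of how a geometric standard deviation and coefficient of variation of the kind quoted above can be computed from a set of measurements; the sample below is synthetic, not the 81 RDX measurements from the review.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a set of plant/soil BCR measurements (lognormal-like);
# NOT the 81 RDX measurements from the paper.
bcr = rng.lognormal(mean=0.0, sigma=1.25, size=81)

gsd = np.exp(np.std(np.log(bcr), ddof=1))   # geometric standard deviation
cv = np.std(bcr, ddof=1) / np.mean(bcr)     # coefficient of variation
print(f"GSD = {gsd:.2f}, CV = {cv:.2f}")
```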

  15. Doubly robust nonparametric inference on the average treatment effect.

    PubMed

    Benkeser, D; Carone, M; Laan, M J Van Der; Gilbert, P B

    2017-12-01

    Doubly robust estimators are widely used to draw inference about the average effect of a treatment. Such estimators are consistent for the effect of interest if either one of two nuisance parameters is consistently estimated. However, if flexible, data-adaptive estimators of these nuisance parameters are used, double robustness does not readily extend to inference. We present a general theoretical study of the behaviour of doubly robust estimators of an average treatment effect when one of the nuisance parameters is inconsistently estimated. We contrast different methods for constructing such estimators and investigate the extent to which they may be modified to also allow doubly robust inference. We find that while targeted minimum loss-based estimation can be used to solve this problem very naturally, common alternative frameworks appear to be inappropriate for this purpose. We provide a theoretical study and a numerical evaluation of the alternatives considered. Our simulations highlight the need for and usefulness of these approaches in practice, while our theoretical developments have broad implications for the construction of estimators that permit doubly robust inference in other problems.
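
    As background to the estimators discussed above, a minimal sketch of the classical augmented inverse-probability-weighted (AIPW) doubly robust estimator of the average treatment effect with simple parametric nuisance models; this is the textbook construction, not the targeted minimum loss-based procedure the paper analyses, and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(X, A, Y):
    """Augmented IPW (doubly robust) estimate of the average treatment effect:
    consistent if either the outcome regressions or the propensity model is
    correctly specified."""
    ps = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
    mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
    mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)
    psi1 = A * (Y - mu1) / ps + mu1
    psi0 = (1 - A) * (Y - mu0) / (1 - ps) + mu0
    return np.mean(psi1 - psi0)

# Tiny synthetic example with a true average treatment effect of 2.0
rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))
A = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
Y = 2.0 * A + X @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=n)
print(f"AIPW ATE estimate: {aipw_ate(X, A, Y):.3f}")
```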

  16. A theoretical model to explain the smart technology adoption behaviors of elder consumers (Elderadopt).

    PubMed

    Golant, Stephen M

    2017-08-01

    A growing global population of older adults is potential consumers of a category of products referred to as smart technologies, but also known as telehealth, telecare, information and communication technologies, robotics, and gerontechnology. This paper constructs a theoretical model to explain whether older people will adopt smart technology options to cope with their discrepant individual or environmental circumstances, thereby enabling them to age in place. Its proposed constructs and relationships are drawn from multiple academic disciplines and professional specialties, and an extensive literature focused on the factors influencing the acceptance of these smart technologies. It specifically examines whether older adults will substitute these new technologies for traditional coping solutions that rely on informal and formal care assistance and low technology related products. The model argues that older people will more positively evaluate smart technology alternatives when they feel more stressed because of their unmet needs, have greater resilience (stronger perceptions of self-efficacy and greater openness to new information), and are more strongly persuaded by their sources of outside messaging (external information) and their past experiences (internal information). It proposes that older people distinguish three attributes of these coping options when they appraise them: perceived efficaciousness, perceived usability, and perceived collateral damages. The more positively older people evaluate these attributes, the more likely that they will adopt these smart technology products. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Validation of a metabolic network for Saccharomyces cerevisiae using mixed substrate studies.

    PubMed

    Vanrolleghem, P A; de Jong-Gubbels, P; van Gulik, W M; Pronk, J T; van Dijken, J P; Heijnen, S

    1996-01-01

    Setting up a metabolic network model for respiratory growth of Saccharomyces cerevisiae requires the estimation of only two (energetic) stoichiometric parameters: (1) the operational PO ratio and (2) a growth-related maintenance factor k. It is shown, both theoretically and practically, how chemostat cultivations with different mixtures of two substrates allow unique values to be given to these unknowns of the proposed metabolic model. For the yeast and model considered, an effective PO ratio of 1.09 mol of ATP/mol of O (95% confidence interval 1.07-1.11) and a k factor of 0.415 mol of ATP/C-mol of biomass (0.385-0.445) were obtained from biomass substrate yield data on glucose/ethanol mixtures. Symbolic manipulation software proved very valuable in this study as it supported the proof of theoretical identifiability and significantly reduced the necessary computations for parameter estimation. In the transition from 100% glucose to 100% ethanol in the feed, four metabolic regimes occur. Switching between these regimes is determined by cessation of an irreversible reaction and initiation of an alternative reaction. Metabolic network predictions of these metabolic switches compared well with activity measurements of key enzymes. As a second validation of the network, the biomass yield of S. cerevisiae on acetate was also compared to the network prediction. An excellent agreement was found for a network in which acetate transport was modeled with a proton symport, while passive diffusion of acetate gave significantly higher yield predictions.

  18. Theoretical modeling of yields for proton-induced reactions on natural and enriched molybdenum targets.

    PubMed

    Celler, A; Hou, X; Bénard, F; Ruth, T

    2011-09-07

    The recent acute shortage of medical radioisotopes prompted investigations into alternative methods of production, and the use of a cyclotron with the ¹⁰⁰Mo(p,2n)(99m)Tc reaction has been considered. In this context, the production yields of (99m)Tc and of the various other radioactive and stable isotopes created in the process have to be investigated, as these may affect the diagnostic outcome and radiation dosimetry in human studies. Reaction conditions (beam and target characteristics, and irradiation and cooling times) need to be optimized in order to maximize the amount of (99m)Tc and minimize impurities. Although ultimately careful experimental verification of these conditions must be performed, theoretical calculations can provide initial guidance allowing for extensive investigations at little cost. We report the results of theoretically determined reaction yields for (99m)Tc and other radioactive isotopes created when natural and enriched molybdenum targets are irradiated by protons. The cross-section calculations were performed using the computer program EMPIRE for the proton energy range 6-30 MeV. A computer graphical user interface for automatic calculation of production yields, taking into account various reaction channels leading to the same final product, has been created. The proposed approach allows us to theoretically estimate the amount of (99m)Tc and its ratio relative to (99g)Tc and to other radioisotopes which must be considered reaction contaminants, potentially contributing to additional patient dose in diagnostic studies.
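
    A hedged sketch of the standard thin-slab integration that converts an excitation function and a stopping power into a thick-target production rate and end-of-bombardment activity. The sigma_mb and stopping_power functions below are crude hypothetical placeholders, not EMPIRE cross sections or tabulated stopping powers, and the printed numbers are purely illustrative.

```python
import numpy as np

E_CHARGE = 1.602e-19   # C
N_AVOGADRO = 6.022e23  # 1/mol

def sigma_mb(E):
    """Hypothetical (p,2n) excitation function in millibarn (placeholder,
    NOT an EMPIRE result): a crude bump peaking near 15 MeV."""
    return 250.0 * np.exp(-((E - 15.0) / 5.0) ** 2)

def stopping_power(E):
    """Hypothetical proton stopping power in Mo, MeV/cm (placeholder)."""
    return 2500.0 / E + 60.0

def production_rate(E_in, E_out, current_A, rho=10.2, mol_mass=100.0, n_pts=2000):
    """Atoms produced per second in a thick target that slows protons from
    E_in to E_out (MeV): R = (I/e) * n_target * integral sigma/(dE/dx) dE."""
    E = np.linspace(E_out, E_in, n_pts)
    n_target = N_AVOGADRO * rho / mol_mass               # target atoms per cm^3
    integrand = sigma_mb(E) * 1e-27 / stopping_power(E)  # cm^2 / (MeV/cm) = cm^3/MeV
    per_proton = n_target * np.trapz(integrand, E)       # reactions per incident proton
    return (current_A / E_CHARGE) * per_proton

rate = production_rate(E_in=24.0, E_out=10.0, current_A=100e-6)
lam = np.log(2.0) / (6.0 * 3600.0)                       # 99mTc decay constant (T1/2 ~ 6 h)
activity_GBq = rate * (1.0 - np.exp(-lam * 6.0 * 3600.0)) / 1e9
print(f"production rate ~ {rate:.2e} atoms/s, activity after 6 h ~ {activity_GBq:.1f} GBq")
```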

  19. The Ethics of Human Freedom and Healthcare Policy: A Nursing Theoretical Perspective.

    PubMed

    Milton, Constance L

    2015-07-01

    Global healthcare and healthcare policies are evolving with change at a swift pace. Inherent in the discussions of a person's right to choose health is the notion of freedom. The author in this column compares and contrasts bioethical views of freedom and autonomy with alternative views and possibilities by examining an ethic of freedom grounded from a different paradigm, the humanbecoming nursing theoretical perspective. © The Author(s) 2015.

  20. Experimental and Theoretical Study of 4H-SiC JFET Threshold Voltage Body Bias Effect from 25 C to 500 C

    NASA Technical Reports Server (NTRS)

    Neudeck, Philip G.; Spry, David J.; Chen, Liangyu

    2015-01-01

    This work reports a theoretical and experimental study of 4H-SiC JFET threshold voltage as a function of substrate body bias, device position on the wafer, and temperature from 25 C (298K) to 500 C (773K). Based on these results, an alternative approach to SPICE circuit simulation of body effect for SiC JFETs is proposed.

  1. Effective model approach to the dense state of QCD matter

    NASA Astrophysics Data System (ADS)

    Fukushima, Kenji

    2011-12-01

    The first-principle approach to the dense state of QCD matter, i.e. the lattice-QCD simulation at finite baryon density, is not under theoretical control for the moment. The effective model study based on QCD symmetries is a practical alternative. However the model parameters that are fixed by hadronic properties in the vacuum may have unknown dependence on the baryon chemical potential. We propose a new prescription to constrain the effective model parameters by the matching condition with the thermal Statistical Model. In the transitional region where thermal quantities blow up in the Statistical Model, deconfined quarks and gluons should smoothly take over the relevant degrees of freedom from hadrons and resonances. We use the Polyakov-loop coupled Nambu-Jona-Lasinio (PNJL) model as an effective description in the quark side and show how the matching condition is satisfied by a simple ansatz on the Polyakov loop potential. Our results favor a phase diagram with the chiral phase transition located at slightly higher temperature than deconfinement which stays close to the chemical freeze-out points.

  2. Theoretical and experimental studies on alpha/epsilon-hybrid peptides: design of a 14/12-helix from peptides with alternating (S)-C-linked carbo-epsilon-amino acid [(S)-epsilon-Caa((x))] and L-ala.

    PubMed

    Sharma, Gangavaram V M; Babu, Bommagani Shoban; Chatterjee, Deepak; Ramakrishna, Kallaganti V S; Kunwar, Ajit C; Schramm, Peter; Hofmann, Hans-Jörg

    2009-09-04

    An (S)-C-linked carbo-epsilon-amino acid [(S)-epsilon-Caa((x))] was prepared from the known (S)-delta-Caa. This monomer was utilized together with l-Ala to give novel alpha/epsilon-hybrid peptides in 1:1 alternation. Conformational analysis on penta- and hexapeptides by NMR (in CDCl(3)), CD, and MD studies led to the identification of robust 14/12-mixed helices. This is in agreement with the data from a theoretical conformational analysis on the basis of ab initio MO theory providing a complete overview on all formally possible hydrogen-bonded helix patterns of alpha/epsilon-hybrid peptides with 1:1 backbone alternation. The "new motif" of a mixed 14/12-helix was predicted as most stable in vacuum. Obviously, the formation of ordered secondary structures is also possible in peptide foldamers with amino acid constituents of considerable backbone lengths. Thus, alpha/epsilon-hybrid peptides expand the domain of foldamers and allow the introduction of desired functionalities via the alpha-amino acid constituents.

  3. A diffusion modeling approach to understanding contextual cueing effects in children with ADHD.

    PubMed

    Weigard, Alexander; Huang-Pollock, Cynthia

    2014-12-01

    Strong theoretical models suggest implicit learning deficits may exist among children with Attention Deficit Hyperactivity Disorder (ADHD). We examine implicit contextual cueing (CC) effects among children with ADHD (n = 72) and non-ADHD Controls (n = 36). Using Ratcliff's drift diffusion model, we found that among Controls, the CC effect is due to improvements in attentional guidance and to reductions in response threshold. Children with ADHD did not show a CC effect; although they were able to use implicitly acquired information to deploy attentional focus, they had more difficulty adjusting their response thresholds. Improvements in attentional guidance and reductions in response threshold together underlie the CC effect. Results are consistent with neurocognitive models of ADHD that posit subcortical dysfunction but intact spatial attention, and encourage the use of alternative data analytic methods when dealing with reaction time data. © 2014 The Authors. Journal of Child Psychology and Psychiatry. © 2014 Association for Child and Adolescent Mental Health.

  4. Modeling the Capacitive Deionization Process in Dual-Porosity Electrodes

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2016-04-28

    In many areas of the world, there is a need to increase water availability. Capacitive deionization (CDI) is an electrochemical water treatment process that can be a viable alternative for treating water and for saving energy. A model is presented to simulate the CDI process in heterogeneous porous media comprising two different pore sizes. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without Faradaic reactions or specific adsorption of ions. A two-step volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. A one-equation model based on the principle of local equilibrium is derived. The constraints determining the range of application of the one-equation model are presented. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. The source terms that appear in the averaged equations are calculated using theoretical derivations. The global diffusivity is calculated by solving the closure problem.

  5. Towards Automated Bargaining in Electronic Markets: A Partially Two-Sided Competition Model

    NASA Astrophysics Data System (ADS)

    Gatti, Nicola; Lazaric, Alessandro; Restelli, Marcello

    This paper focuses on the prominent issue of automating bargaining agents within electronic markets. Models of bargaining in the literature deal with settings in which there are only two agents; no model satisfactorily captures settings with competition among multiple buyers and, analogously, among multiple sellers. In this paper, we extend the principal bargaining protocol, i.e. the alternating-offers protocol, to capture bargaining in markets. The model we propose is such that, in the presence of a single buyer and a single seller, the agents' equilibrium strategies are those of the original protocol. Moreover, we study the resulting game game-theoretically and provide the following results: in the presence of one-sided competition (several buyers and one seller, or vice versa) we derive the agents' equilibrium strategies for all values of the parameters; in the presence of two-sided competition (several buyers and several sellers) we provide an algorithm that produces the agents' equilibrium strategies for a large set of parameter values, and we experimentally evaluate its effectiveness.
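
    For reference, the two-player alternating-offers (Rubinstein) benchmark that the extended market model reduces to when there is a single buyer and a single seller; the discount factors below are illustrative.

```python
def rubinstein_shares(delta1, delta2):
    """Subgame-perfect split of a unit surplus in the two-player
    alternating-offers game (player 1 proposes first):
    x1 = (1 - delta2) / (1 - delta1 * delta2), player 2 gets the rest."""
    x1 = (1.0 - delta2) / (1.0 - delta1 * delta2)
    return x1, 1.0 - x1

for d1, d2 in [(0.9, 0.9), (0.95, 0.8), (0.8, 0.95)]:
    x1, x2 = rubinstein_shares(d1, d2)
    print(f"delta1={d1:.2f}, delta2={d2:.2f} -> proposer {x1:.3f}, responder {x2:.3f}")
```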

  6. Interpretation of Trace Gas Data Using Inverse Methods and Global Chemical Transport Models

    NASA Technical Reports Server (NTRS)

    Prinn, Ronald G.

    1997-01-01

    This is a theoretical research project aimed at: (1) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for long lived gases important in ozone depletion and climate forcing, (2) utilization of inverse methods to determine these source/sink strengths which use the NCAR/Boulder CCM2-T42 3-D model and a global 3-D Model for Atmospheric Transport and Chemistry (MATCH) which is based on analyzed observed wind fields (developed in collaboration by MIT and NCAR/Boulder), (3) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple titrating gases, and, (4) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3-D models. Important goals include determination of regional source strengths of methane, nitrous oxide, and other climatically and chemically important biogenic trace gases and also of halocarbons restricted by the Montreal Protocol and its follow-on agreements and hydrohalocarbons used as alternatives to the restricted halocarbons.

  7. Analysis of simplified heat transfer models for thermal property determination of nano-film by TDTR method

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Chen, Zhe; Sun, Fangyuan; Zhang, Hang; Jiang, Yuyan; Tang, Dawei

    2018-03-01

    Heat transfer in nanostructures is of critical importance for a wide range of applications such as functional materials and thermal management of electronics. Time-domain thermoreflectance (TDTR) has been proved to be a reliable measurement technique for the thermal property determination of nanoscale structures. However, it is difficult to determine more than three thermal properties at the same time. Heat transfer model simplifications can reduce the number of fitting variables and provide an alternative way for thermal property determination. In this paper, two simplified models are investigated and analyzed by the transfer matrix method and simulations. TDTR measurements are performed on Al-SiO2-Si samples with different SiO2 thicknesses. Both theoretical and experimental results show that the simplified tri-layer model (STM) is reliable and suitable for thin film samples with a wide range of thicknesses. Furthermore, the STM can also extract the intrinsic thermal conductivity and interfacial thermal resistance from serial samples with different thicknesses.

  8. Predicting the threshold of pulse-train electrical stimuli using a stochastic auditory nerve model: the effects of stimulus noise.

    PubMed

    Xu, Yifang; Collins, Leslie M

    2004-04-01

    The incorporation of low levels of noise into an electrical stimulus has been shown to improve auditory thresholds in some human subjects (Zeng et al., 2000). In this paper, thresholds for noise-modulated pulse-train stimuli are predicted utilizing a stochastic neural-behavioral model of ensemble fiber responses to bi-phasic stimuli. The neural refractory effect is described using a Markov model for a noise-free pulse-train stimulus and a closed-form solution for the steady-state neural response is provided. For noise-modulated pulse-train stimuli, a recursive method using the conditional probability is utilized to track the neural responses to each successive pulse. A neural spike count rule has been presented for both threshold and intensity discrimination under the assumption that auditory perception occurs via integration over a relatively long time period (Bruce et al., 1999). An alternative approach originates from the hypothesis of the multilook model (Viemeister and Wakefield, 1991), which argues that auditory perception is based on several shorter time integrations and may suggest an NofM model for prediction of pulse-train threshold. This motivates analyzing the neural response to each individual pulse within a pulse train, which is considered to be the brief look. A logarithmic rule is hypothesized for pulse-train threshold. Predictions from the multilook model are shown to match trends in psychophysical data for noise-free stimuli that are not always matched by the long-time integration rule. Theoretical predictions indicate that threshold decreases as noise variance increases. Theoretical models of the neural response to pulse-train stimuli not only reduce calculational overhead but also facilitate utilization of signal detection theory and are easily extended to multichannel psychophysical tasks.

  9. Information-theoretic measures of hydrogen-like ions in weakly coupled Debye plasmas

    NASA Astrophysics Data System (ADS)

    Zan, Li Rong; Jiao, Li Guang; Ma, Jia; Ho, Yew Kam

    2017-12-01

    Recent development of information theory provides researchers an alternative and useful tool to quantitatively investigate the variation of the electronic structure when atoms interact with the external environment. In this work, we make systematic studies on the information-theoretic measures for hydrogen-like ions immersed in weakly coupled plasmas modeled by Debye-Hückel potential. Shannon entropy, Fisher information, and Fisher-Shannon complexity in both position and momentum spaces are quantified in high accuracy for the hydrogen atom in a large number of stationary states. The plasma screening effect on embedded atoms can significantly affect the electronic density distributions, in both conjugate spaces, and it is quantified by the variation of information quantities. It is shown that the composite quantities (the Shannon entropy sum and the Fisher information product in combined spaces and Fisher-Shannon complexity in individual space) give a more comprehensive description of the atomic structure information than single ones. The nodes of wave functions play a significant role in the changes of composite information quantities caused by plasmas. With the continuously increasing screening strength, all composite quantities in circular states increase monotonously, while in higher-lying excited states where nodal structures exist, they first decrease to a minimum and then increase rapidly before the bound state approaches the continuum limit. The minimum represents the most reduction of uncertainty properties of the atom in plasmas. The lower bounds for the uncertainty product of the system based on composite information quantities are discussed. Our research presents a comprehensive survey in the investigation of information-theoretic measures for simple atoms embedded in Debye model plasmas.
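
    A small numerical sketch of the position-space Shannon entropy and Fisher information for the unscreened hydrogen 1s density, checked against the analytic values S = 3 + ln π and I = 4 (atomic units); the Debye-screened case studied in the paper would replace this density with a numerically solved screened orbital.

```python
import numpy as np

# Field-free hydrogen 1s position density in atomic units: rho(r) = exp(-2r) / pi
r = np.linspace(1e-6, 30.0, 200_000)
rho = np.exp(-2.0 * r) / np.pi
drho = np.gradient(rho, r)
w = 4.0 * np.pi * r**2                       # spherical volume element

S = -np.trapz(w * rho * np.log(rho), r)      # Shannon entropy, analytic: 3 + ln(pi)
I = np.trapz(w * drho**2 / rho, r)           # Fisher information, analytic: 4
print(f"S = {S:.4f}  (analytic {3.0 + np.log(np.pi):.4f})")
print(f"I = {I:.4f}  (analytic 4.0000)")
```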

  10. Numerical modelling in friction lap joining of aluminium alloy and carbon-fiber-reinforced-plastic sheets

    NASA Astrophysics Data System (ADS)

    Das, A.; Bang, H. S.; Bang, H. S.

    2018-05-01

    Multi-material combinations of aluminium alloy and carbon-fiber-reinforced-plastics (CFRP) have gained attention in automotive and aerospace industries to enhance fuel efficiency and the strength-to-weight ratio of components. Various limitations of laser beam welding, adhesive bonding and mechanical fasteners make these processes inefficient for joining metal and CFRP sheets. Friction lap joining is an alternative choice for the same. Comprehensive studies in friction lap joining of aluminium to CFRP sheets are essential and scarce in the literature. The present work reports a combined theoretical and experimental study of the joining of AA5052 and CFRP sheets using the friction lap joining process. A three-dimensional finite element based heat transfer model is developed to compute the temperature fields and thermal cycles. The computed results are validated extensively with the corresponding experimentally measured results.

  11. Teaching Through Interactions in Secondary School Classrooms: Revisiting the Factor Structure and Practical Application of the Classroom Assessment Scoring System–Secondary

    PubMed Central

    Hafen, Christopher A.; Hamre, Bridget K.; Allen, Joseph P.; Bell, Courtney A.; Gitomer, Drew H.; Pianta, Robert C.

    2017-01-01

    Valid measurement of how students’ experiences in secondary school classrooms lead to gains in learning requires a developmental approach to conceptualizing classroom processes. This article presents a potentially useful theoretical model, the Teaching Through Interactions framework, which posits teacher-student interactions as a central driver for student learning and that teacher-student interactions can be organized into three major domains. Results from 1,482 classrooms provide evidence for distinct emotional, organizational, and instructional domains of teacher-student interaction. It also appears that a three-factor structure is a better fit to observational data than alternative one- and two-domain models of teacher-student classroom interactions, and that the three-domain structure is generalizable from 6th through 12th grade. Implications for practitioners, stakeholders, and researchers are discussed. PMID:28232770

  12. Measurement of Muon Antineutrino Quasielastic Scattering on a Hydrocarbon Target at E ν~3.5 GeV

    DOE PAGES

    Fields, L.; Chvojka, J.; Aliaga, L.; ...

    2013-07-11

    We have isolated ν¯μ charged-current quasielastic (QE) interactions occurring in the segmented scintillator tracking region of the MINERvA detector running in the NuMI neutrino beam at Fermilab. We measure the flux-averaged differential cross section, dσ/dQ², and compare to several theoretical models of QE scattering. Good agreement is obtained with a model where the nucleon axial mass, M_A, is set to 0.99 GeV/c² but the nucleon vector form factors are modified to account for the observed enhancement, relative to the free nucleon case, of the cross section for the exchange of transversely polarized photons in electron-nucleus scattering. Our data at higher Q² favor this interpretation over an alternative in which the axial mass is increased.

  13. Conflicts of interest improve collective computation of adaptive social structures

    PubMed Central

    Brush, Eleanor R.; Krakauer, David C.; Flack, Jessica C.

    2018-01-01

    In many biological systems, the functional behavior of a group is collectively computed by the system’s individual components. An example is the brain’s ability to make decisions via the activity of billions of neurons. A long-standing puzzle is how the components’ decisions combine to produce beneficial group-level outputs, despite conflicts of interest and imperfect information. We derive a theoretical model of collective computation from mechanistic first principles, using results from previous work on the computation of power structure in a primate model system. Collective computation has two phases: an information accumulation phase, in which (in this study) pairs of individuals gather information about their fighting abilities and make decisions about their dominance relationships, and an information aggregation phase, in which these decisions are combined to produce a collective computation. To model information accumulation, we extend a stochastic decision-making model—the leaky integrator model used to study neural decision-making—to a multiagent game-theoretic framework. We then test alternative algorithms for aggregating information—in this study, decisions about dominance resulting from the stochastic model—and measure the mutual information between the resultant power structure and the “true” fighting abilities. We find that conflicts of interest can improve accuracy to the benefit of all agents. We also find that the computation can be tuned to produce different power structures by changing the cost of waiting for a decision. The successful application of a similar stochastic decision-making model in neural and social contexts suggests general principles of collective computation across substrates and scales. PMID:29376116

  14. Evaluation of Alternative Conceptual Models Using Interdisciplinary Information: An Application in Shallow Groundwater Recharge and Discharge

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Bajcsy, P.; Valocchi, A. J.; Kim, C.; Wang, J.

    2007-12-01

    Natural systems are complex, thus extensive data are needed for their characterization. However, data acquisition is expensive; consequently we develop models using sparse, uncertain information. When all uncertainties in the system are considered, the number of alternative conceptual models is large. Traditionally, the development of a conceptual model has relied on subjective professional judgment. Good judgment is based on experience in coordinating and understanding auxiliary information which is correlated to the model but difficult to be quantified into the mathematical model. For example, groundwater recharge and discharge (R&D) processes are known to relate to multiple information sources such as soil type, river and lake location, irrigation patterns and land use. Although hydrologists have been trying to understand and model the interaction between each of these information sources and R&D processes, it is extremely difficult to quantify their correlations using a universal approach due to the complexity of the processes, the spatiotemporal distribution and uncertainty. There is currently no single method capable of estimating R&D rates and patterns for all practical applications. Chamberlin (1890) recommended use of "multiple working hypotheses" (alternative conceptual models) for rapid advancement in understanding of applied and theoretical problems. Therefore, cross analyzing R&D rates and patterns from various estimation methods and related field information will likely be superior to using only a single estimation method. We have developed the Pattern Recognition Utility (PRU), to help GIS users recognize spatial patterns from noisy 2D image. This GIS plug-in utility has been applied to help hydrogeologists establish alternative R&D conceptual models in a more efficient way than conventional methods. The PRU uses numerical methods and image processing algorithms to estimate and visualize shallow R&D patterns and rates. It can provide a fast initial estimate prior to planning labor intensive and time consuming field R&D measurements. Furthermore, the Spatial Pattern 2 Learn (SP2L) was developed to cross analyze results from the PRU with ancillary field information, such as land coverage, soil type, topographic maps and previous estimates. The learning process of SP2L cross examines each initially recognized R&D pattern with the ancillary spatial dataset, and then calculates a quantifiable reliability index for each R&D map using a supervised machine learning technique called decision tree. This JAVA based software package is capable of generating alternative R&D maps if the user decides to apply certain conditions recognized by the learning process. The reliability indices from SP2L will improve the traditionally subjective approach to initiating conceptual models by providing objectively quantifiable conceptual bases for further probabilistic and uncertainty analyses. Both the PRU and SP2L have been designed to be user-friendly and universal utilities for pattern recognition and learning to improve model predictions from sparse measurements by computer-assisted integration of spatially dense geospatial image data and machine learning of model dependencies.

  15. White zein colloidal particles: synthesis and characterization of their optical properties on the single particle level and in concentrated suspensions.

    PubMed

    de Boer, F Y; Kok, R N U; Imhof, A; Velikov, K P

    2018-04-18

    Growing interest in using natural, biodegradable ingredients for food products leads to an increase in research for alternative sources of functional ingredients. One alternative is zein, a water-insoluble protein from corn. Here, a method to investigate the optical properties of white zein colloidal particles is presented in both diluted and concentrated suspensions. The particles are synthesized, after purification of zein, by anti-solvent precipitation. Mean particle diameters ranged from 35 to 135 nm based on dynamic light scattering. The value of these particles as white colorant is examined by measuring their optical properties. Dilute suspensions are prepared to measure the extinction cross section of individual particles and this was combined with Mie theory to determine a refractive index (RI) of 1.49 ± 0.01 for zein particles dispersed in water. This value is used to further model the optical properties of concentrated suspensions. To obtain full opacity of the suspension, comparable to 0.1-0.2 wt% suspensions of TiO2, concentrations of 2 to 3.3 wt% of zein particles are sufficient. The optimal size for maximal scattering efficiency is explored by modeling dilute and concentrated samples with RI's matching those of zein and TiO2 particles in water. The transport mean free path of light was determined experimentally and theoretically and the agreement between the transport mean free path calculated from the model and the measured value is better than 30%. Such particles have the potential to be an all-natural edible alternative for TiO2 as white colorant in wet food products.

  16. Sequence information gain based motif analysis.

    PubMed

    Maynou, Joan; Pairó, Erola; Marco, Santiago; Perera, Alexandre

    2015-11-09

    The detection of regulatory regions in candidate sequences is essential for the understanding of the regulation of a particular gene and the mechanisms involved. This paper proposes a novel methodology based on information theoretic metrics for finding regulatory sequences in promoter regions. This methodology (SIGMA) has been tested on genomic sequence data for Homo sapiens and Mus musculus. SIGMA has been compared with different publicly available alternatives for motif detection, such as MEME/MAST, Biostrings (Bioconductor package), MotifRegressor, and previous work such as Qresiduals projections or information-theoretic detectors. Comparative results, in the form of Receiver Operating Characteristic curves, show how, in 70% of the studied Transcription Factor Binding Sites, the SIGMA detector has a better performance and behaves more robustly than the methods compared, while having a similar computational time. The performance of SIGMA can be explained by its parametric simplicity in the modelling of the non-linear co-variability in the binding motif positions. Sequence Information Gain based Motif Analysis is a generalisation of a non-linear model of cis-regulatory sequence detection based on Information Theory. This generalisation allows us to detect transcription factor binding sites with maximum performance disregarding the covariability observed in the positions of the training set of sequences. SIGMA is freely available to the public at http://b2slab.upc.edu.
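
    As a simple point of comparison for information-based motif scoring, a sketch of the classic per-position information content of aligned DNA sites relative to a uniform background. SIGMA itself models non-linear co-variability between positions, which this position-independent sketch does not capture; the sites below are toy examples.

```python
import numpy as np

BASES = "ACGT"

def position_information(sites, pseudocount=0.5):
    """Per-position information content (bits) of aligned DNA sites relative
    to a uniform background: IC_i = 2 + sum_b p_ib * log2(p_ib)."""
    length = len(sites[0])
    ic = np.zeros(length)
    for i in range(length):
        counts = np.array([sum(s[i] == b for s in sites) for b in BASES],
                          dtype=float) + pseudocount
        p = counts / counts.sum()
        ic[i] = 2.0 + np.sum(p * np.log2(p))
    return ic

sites = ["TATAAT", "TATGAT", "TACAAT", "TATAGT", "TATACT"]   # toy motif instances
for i, bits in enumerate(position_information(sites)):
    print(f"position {i}: {bits:.2f} bits")
```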

  17. A theoretical study on directivity control of multiple-loudspeaker system with a quadrupole radiation pattern in low frequency range

    NASA Astrophysics Data System (ADS)

    Irwansyah; Kuse, Naoyuki; Usagawa, Tsuyoshi

    2017-08-01

    An ordinary loudspeaker becomes more directive at higher frequencies. However, because a single loudspeaker tends to radiate uniformly in all directions at low frequencies, reverberation from surrounding building walls may affect speech intelligibility when a multiple-loudspeaker system is installed at a crossroads. As an alternative, a sharply directive sound source is recommended, but in many cases the directivity of an ordinary loudspeaker is less sharp at lower frequencies. Therefore, in order to overcome this limitation, this paper discusses the possibility of using four loudspeakers under active control to realize a quadrupole radiation pattern in the low frequency range. In this study, the radiation pattern of a primary loudspeaker and three secondary loudspeakers has been modelled. By placing the loudspeakers close together in the directions of 0°, 90°, 180°, and 270°, it is theoretically demonstrated that a quadrupole radiation pattern can be shaped in the target frequency range up to 600 Hz by controlling the directivity in only three of the four directions 45°, 135°, 225°, and 315°. Although the radiation pattern model is far from realistic configurations and conditions, it is possible to realize a quadrupole radiation pattern in the low frequency range.

  18. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    NASA Astrophysics Data System (ADS)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either from empirical studies using animals as experimental subjects or derived from mathematical equations. However, determining the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to reproduce real tissue behavior. The mathematical relationship between the optical density (OD) and the optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. Optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method for extended calibration curve studies using other wavelength pairs.
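
    A hedged sketch of the single-path Beer-Lambert relation that links an optical density ratio R to SaO2, i.e., the kind of theoretical calibration curve discussed above; the extinction coefficients are approximate literature values used only for illustration, and this is not the paper's Monte Carlo-derived curve.

```python
import numpy as np

# Approximate molar extinction coefficients (cm^-1 / M); illustrative values only
EPS = {            # (red ~660 nm, infrared ~940 nm)
    "Hb":   (3226.0, 693.0),
    "HbO2": (319.0, 1214.0),
}

def sao2_from_ratio(R):
    """Invert the single-path Beer-Lambert model relating the optical density
    ratio R = OD_red / OD_ir to the oxygen saturation SaO2."""
    hb_r, hb_ir = EPS["Hb"]
    ox_r, ox_ir = EPS["HbO2"]
    return (hb_r - R * hb_ir) / ((hb_r - ox_r) + R * (ox_ir - hb_ir))

for R in np.linspace(0.4, 3.0, 7):
    print(f"R = {R:.2f}  ->  SaO2 = {100.0 * sao2_from_ratio(R):5.1f} %")
```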

  19. The Nature of Science Instrument-Elementary (NOSI-E): Using Rasch principles to develop a theoretically grounded scale to measure elementary student understanding of the nature of science

    NASA Astrophysics Data System (ADS)

    Peoples, Shelagh

    The purpose of this study was to determine which of three competing models would provide reliable, interpretable, and responsive measures of elementary students' understanding of the nature of science (NOS). The Nature of Science Instrument-Elementary (NOSI-E), a 28-item Rasch-based instrument, was used to assess students' NOS understanding. The NOS construct was conceptualized using five construct dimensions (Empirical, Inventive, Theory-laden, Certainty and Socially & Culturally Embedded). The competing models represent three internal models for the NOS construct. One postulate is that the NOS construct is unidimensional, with one latent construct explaining the relationships among the 28 items of the NOSI-E. Alternatively, the NOS construct is composed of five independent unidimensional constructs (the consecutive approach). Lastly, the NOS construct is multidimensional and composed of five inter-related but separate dimensions. A validity argument was developed that hypothesized that the internal structure of the NOS construct is best represented by the multidimensional Rasch model. Four sets of analyses were performed in which the three representations were compared. These analyses addressed five validity aspects (content, substantive, generalizability, structural and external) of construct validity. The vast body of evidence supported the claim that the NOS construct is composed of five separate but inter-related dimensions and is best represented by the multidimensional Rasch model. The results of the multidimensional analyses indicated that the items of the five subscales were of excellent technical quality, exhibited no differential item functioning (based on gender), had an item hierarchy that conformed to theoretical expectations, and together formed subscales of reasonable reliability (> 0.7 on each subscale) that were responsive to change in the construct. Theory-laden scores from the multidimensional model predicted students' science achievement, and scores from all five NOS dimensions significantly predicted students' perceptions of the constructivist nature of their classroom learning environment. The NOSI-E instrument is a theoretically grounded scale that can measure elementary students' NOS understanding and appears suitable for use in science education research.

  20. A critical review of classical bouncing cosmologies

    NASA Astrophysics Data System (ADS)

    Battefeld, Diana; Peter, Patrick

    2015-04-01

    Given the proliferation of bouncing models in recent years, we gather and critically assess these proposals in a comprehensive review. The PLANCK data shows an unmistakably red, quasi scale-invariant, purely adiabatic primordial power spectrum and no primary non-Gaussianities. While these observations are consistent with inflationary predictions, bouncing cosmologies aspire to provide an alternative framework to explain them. Such models face many problems, both of the purely theoretical kind, such as the necessity of violating the NEC and instabilities, and at the cosmological application level, as exemplified by the possible presence of shear. We provide a pedagogical introduction to these problems and also assess the fitness of different proposals with respect to the data. For example, many models predict a slightly blue spectrum and must be fine-tuned to generate a red spectral index; as a side effect, large non-Gaussianities often result. We highlight several promising attempts to violate the NEC without introducing dangerous instabilities at the classical and/or quantum level. If primordial gravitational waves are observed, certain bouncing cosmologies, such as the cyclic scenario, are in trouble, while others remain valid. We conclude that, while most bouncing cosmologies are far from providing an alternative to the inflationary paradigm, a handful of interesting proposals have surfaced, which warrant further research. The constraints and lessons learned as laid out in this review might guide future research.

  1. Dynamics of epidemics outbreaks in heterogeneous populations

    NASA Astrophysics Data System (ADS)

    Brockmann, Dirk; Morales-Gallardo, Alejandro; Geisel, Theo

    2007-03-01

    The dynamics of epidemic outbreaks have been investigated in recent years within two alternative theoretical paradigms. The key parameter of mean-field-type models such as the SIR model is the basic reproduction number R0, the average number of secondary infections caused by one infected individual. Recently, scale-free network models have received much attention as they account for the high variability in the number of social contacts involved. These models predict an infinite basic reproduction number in some cases. We investigate the impact of heterogeneities of contact rates in a generic model for epidemic outbreaks. We present a system in which both the time periods of being infectious and the time periods between transmissions are Poissonian processes. The heterogeneities are introduced by means of strongly variable contact rates. In contrast to scale-free network models we observe a finite basic reproduction number and, counterintuitively, a smaller overall epidemic outbreak as compared to the homogeneous system. Our study thus reveals that heterogeneities in contact rates do not necessarily facilitate the spread of infectious disease but may well attenuate it.
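
    A minimal illustration, not the authors' model: an individual-based SIR simulation in which contact-rate heterogeneity enters through individual susceptibility shows one generic way that strong heterogeneity can, counterintuitively, shrink the final outbreak relative to a homogeneous population with the same mean rate. All parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def final_size(rel_susceptibility, beta=2.0, gamma=1.0, n_seed=5, dt=0.01, steps=5000):
    """Well-mixed stochastic SIR in which individual i's infection hazard is
    rel_susceptibility[i] * beta * prevalence; infectiousness is uniform.
    Returns the fraction of the population ever infected."""
    n = len(rel_susceptibility)
    state = np.zeros(n, dtype=int)                 # 0 = S, 1 = I, 2 = R
    state[rng.choice(n, n_seed, replace=False)] = 1
    for _ in range(steps):
        n_inf = np.count_nonzero(state == 1)
        if n_inf == 0:
            break
        foi = rel_susceptibility * beta * n_inf / n
        new_inf = (state == 0) & (rng.random(n) < 1.0 - np.exp(-foi * dt))
        recov = (state == 1) & (rng.random(n) < 1.0 - np.exp(-gamma * dt))
        state[new_inf] = 1
        state[recov] = 2
    return np.mean(state > 0)

n = 10_000
homogeneous = np.ones(n)
heterogeneous = rng.gamma(shape=0.5, scale=2.0, size=n)   # same mean, much higher variance
print("final size, homogeneous  :", round(final_size(homogeneous), 3))
print("final size, heterogeneous:", round(final_size(heterogeneous), 3))
```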

  2. Group percolation in interdependent networks

    NASA Astrophysics Data System (ADS)

    Wang, Zexun; Zhou, Dong; Hu, Yanqing

    2018-03-01

    In many real network systems, nodes usually cooperate with each other and form groups to enhance their robustness to risks. This motivates us to study an alternative type of percolation, group percolation, in interdependent networks under attack. In this model, nodes belonging to the same group survive or fail together. We develop a theoretical framework for this group percolation and find that the formation of groups can improve the resilience of interdependent networks significantly. However, the percolation transition is always of first order, regardless of the distribution of group sizes. As an application, we map the interdependent networks with intersimilarity structures, which have attracted much attention recently, onto the group percolation and confirm the nonexistence of continuous phase transitions.

  3. How Mean is the Mean?

    PubMed Central

    Speelman, Craig P.; McGann, Marek

    2013-01-01

    In this paper we voice concerns about the uncritical manner in which the mean is often used as a summary statistic in psychological research. We identify a number of implicit assumptions underlying the use of the mean and argue that the fragility of these assumptions should be more carefully considered. We examine some of the ways in which the potential violation of these assumptions can lead us into significant theoretical and methodological error. Illustrations of alternative models of research already extant within Psychology are used to explore methods of research less mean-dependent and suggest that a critical assessment of the assumptions underlying its use in research play a more explicit role in the process of study design and review. PMID:23888147

  4. Network coding multiuser scheme for indoor visible light communications

    NASA Astrophysics Data System (ADS)

    Zhang, Jiankun; Dang, Anhong

    2017-12-01

    Visible light communication (VLC) is a unique alternative for indoor data transfer and is developing beyond point-to-point links. However, for realizing high-capacity networks, VLC faces challenges including the constrained bandwidth of the optical access point and random occlusion. A network coding scheme for VLC (NC-VLC) is proposed, with increased throughput and system robustness. Based on the Lambertian illumination model, the theoretical decoding failure probability of the multiuser NC-VLC system is derived, and the impact of the system parameters on the performance is analyzed. Experiments successfully demonstrate the proposed scheme in the indoor multiuser scenario. These results indicate that the NC-VLC system performs well under link loss and random occlusion.
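
    For context, a minimal sketch of the classic two-user XOR network-coding exchange that schemes of this kind build on: one coded broadcast can repair both users' missing packets. The scenario and packet contents are hypothetical, and the paper's NC-VLC design is not reproduced here.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Packets of equal length destined for user 1 and user 2 (hypothetical payloads)
p1 = b"payload-for-user-1"
p2 = b"payload-for-user-2"

# Assume user 1 is missing p1 but has overheard p2, and user 2 is missing p2
# but has overheard p1; the access point then broadcasts a single coded packet.
coded = xor_bytes(p1, p2)

recovered_p1 = xor_bytes(coded, p2)   # user 1: coded XOR p2 = p1
recovered_p2 = xor_bytes(coded, p1)   # user 2: coded XOR p1 = p2
assert recovered_p1 == p1 and recovered_p2 == p2
print("one coded broadcast repaired both users' missing packets")
```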

  5. A novel boundary layer sensor utilizing domain switching in ferroelectric liquid crystals

    NASA Technical Reports Server (NTRS)

    Parmar, D. S.

    1991-01-01

    This paper describes the design and the principles of operation of a novel sensor for the optical detection of a shear stress field induced by air or gas flow on a rigid surface. The detection relies on the effects of shear-induced optical switching in ferroelectric liquid crystals. It is shown that the method overcomes many of the limitations of similar measuring techniques including those using cholesteric liquid crystals. The present method offers a preferred alternative for flow visualization and skin friction measurements in wind-tunnel experiments on laminar boundary layer transition investigations. A theoretical model for the optical response to shear stress is presented together with a schematic diagram of the experimental setup.

  6. Scaling laws for AC gas breakdown and implications for universality

    NASA Astrophysics Data System (ADS)

    Loveless, Amanda M.; Garner, Allen L.

    2017-10-01

    The reduced dependence on secondary electron emission and electrode surface properties makes radiofrequency (RF) and microwave (MW) plasmas advantageous over direct current (DC) plasmas for various applications, such as microthrusters. Theoretical models relating molecular constants to alternating current (AC) breakdown often fail due to incomplete understanding of both the constants and the mechanisms involved. This work derives simple analytic expressions for RF and MW breakdown, demonstrating the transition between these regimes at their high and low frequency limits, respectively. We further show that the limiting expressions for DC, RF, and MW breakdown voltage all have the same universal scaling dependence on pressure and gap distance at high pressure, agreeing with experiment.

  7. Theoretical analysis of the transition-state spectrum of the cyclooctatetraene unimolecular reaction: Three degree-of-freedom model calculations

    NASA Astrophysics Data System (ADS)

    Yoshida, Takahiko; Tokizaki, Chihiro; Takayanagi, Toshiyuki

    2015-08-01

    A three degree-of-freedom potential energy surface of the cyclooctatetraene (COT) unimolecular reaction that can describe both ring-inversion (D2d ↔ D2d) and double bond-alternation (D4h ↔ D4h) processes was constructed using complete active space self-consistent field calculations. The potential energy surface was used to simulate the experimentally measured transition-state spectrum by calculating the photodetachment spectrum of the COT anion with time-dependent wave packet formalism. The calculated spectrum reproduces the experimental result well. We also analyzed wavefunction properties at spectral peak positions to understand the COT unimolecular reaction dynamics.

  8. In defence of inclusive fitness theory.

    PubMed

    Herre, Edward Allen; Wcislo, William T

    2011-03-24

    Arising from M. A. Nowak, C. E. Tarnita & E. O. Wilson 466, 1057-1062 (2010); Nowak et al. reply. Arguably the defining characteristic of the scientific process is its capacity for self-criticism and correction. Nowak et al. challenge proposed connections between relatedness and the evolution of eusociality, suggest instead that defensible nests and "spring-loaded" traits are key, and present alternative modelling approaches. They then dismiss the utility of Hamilton's insight that relatedness has a profound evolutionary effect, formalized in his widely accepted inclusive fitness theory as Hamilton's rule ("Rise and fall of inclusive fitness theory"). However, we believe that Nowak et al. fail to make their case for logical, theoretical and empirical reasons.

  9. Anomalous vibrational modes in acetanilide: A F. D. S. incoherent inelastic neutron scattering study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barthes, M.; Moret, J.; Eckert, J.

    1991-01-01

    The origin of the anomalous infra-red and Raman modes in acetanilide (C₆H₅NHCOCH₃, or ACN) remains a subject of considerable controversy. One family of theoretical models involves Davydov-like solitons, nonlinear vibrational coupling, or "polaronic" localized modes. An alternative interpretation of the extra bands in terms of a Fermi resonance was proposed, and recently the existence of slightly non-degenerate hydrogen atom configurations in the H-bond was suggested as an explanation for the anomalies. In this paper we report some new results on the anomalous vibrational modes in ACN that were obtained by inelastic incoherent neutron scattering (INS).

  10. Alternative mechanisms alter the emergent properties of self-organization in mussel beds

    PubMed Central

    Liu, Quan-Xing; Weerman, Ellen J.; Herman, Peter M. J.; Olff, Han; van de Koppel, Johan

    2012-01-01

    Theoretical models predict that spatial self-organization can have important, unexpected implications by affecting the functioning of ecosystems in terms of resilience and productivity. Whether and how these emergent effects depend on specific formulations of the underlying mechanisms are questions that are often ignored. Here, we compare two alternative models of regular spatial pattern formation in mussel beds that have different mechanistic descriptions of the facilitative interactions between mussels. The first mechanism involves a reduced mussel loss rate at high density owing to mutual protection between the mussels, which is the basis of prior studies on pattern formation in mussels. The second mechanism assumes, based on novel experimental evidence, that mussels feed more efficiently on top of mussel-generated hummocks. Model simulations point out that the second mechanism produces very similar types of spatial patterns in mussel beds. Yet the two mechanisms predict strikingly contrasting effects of these spatial patterns on ecosystem functioning, in terms of productivity and resilience. In the first model, where high mussel densities reduce mussel loss rates, patterns are predicted to strongly increase productivity and decrease the recovery time of the bed following a disturbance. When pattern formation is generated by increased feeding efficiency on hummocks, only minor emergent effects of pattern formation on ecosystem functioning are predicted. Our results provide a warning against predictions of the implications and emergent properties of spatial self-organization when the mechanisms that underlie self-organization are incompletely understood and not grounded in experimental study.

  11. Organizational strategy, structure, and process.

    PubMed

    Miles, R E; Snow, C C; Meyer, A D; Coleman, H J

    1978-07-01

    Organizational adaptation is a topic that has received only limited and fragmented theoretical treatment. Any attempt to examine organizational adaptation is difficult, since the process is highly complex and changeable. The proposed theoretical framework deals with alternative ways in which organizations define their product-market domains (strategy) and construct mechanisms (structures and processes) to pursue these strategies. The framework is based on interpretation of existing literature and continuing studies in four industries (college textbook publishing, electronics, food processing, and health care).

  12. The theoretical tools of experimental gravitation

    NASA Technical Reports Server (NTRS)

    Will, C. M.

    1972-01-01

    Theoretical frameworks for testing relativistic gravity are presented in terms of a system for analyzing theories of gravity invented as alternatives to Einstein. The parametrized post-Newtonian (PPN) formalism, based on the Dicke framework and the Eotvos-Dicke-Braginsky experiment, is discussed in detail. The metric theories of gravity, and their post-Newtonian limits are reviewed, and PPN equations of motion are derived. These equations are used to analyze specific effects and experimental tests in the solar system.

  13. Delay discounting of food by rhesus monkeys: Cocaine and food choice in isomorphic and allomorphic situations.

    PubMed

    Huskinson, Sally L; Woolverton, William L; Green, Leonard; Myerson, Joel; Freeman, Kevin B

    2015-06-01

    Research on delay discounting has focused largely on nondrug reinforcers in an isomorphic context in which choice is between alternatives that involve the same type of reinforcer. Less often, delay discounting has been studied with drug reinforcers in a more ecologically valid allomorphic context where choice is between alternatives involving different types of reinforcers. The present experiment is the first to examine discounting of drug and nondrug reinforcers in both isomorphic and allomorphic situations using a theoretical model (i.e., the hyperbolic discounting function) that allows for comparisons of discounting rates between reinforcer types and amounts. The goal of the current experiment was to examine discounting of a delayed, nondrug reinforcer (food) by male rhesus monkeys when the immediate alternative was either food (isomorphic situation) or cocaine (allomorphic situation). In addition, we sought to determine whether there was a magnitude effect with delayed food in the allomorphic situation. Choice of immediate food and immediate cocaine increased with amount and dose, respectively. Choice functions for immediate food and cocaine generally shifted leftward as delay increased. Compared to isomorphic situations in which food was the immediate alternative, delayed food was discounted more steeply in allomorphic situations where cocaine was the immediate alternative. Notably, discounting was not affected by the magnitude of the delayed reinforcer. These data indicate that how steeply a delayed nondrug reinforcer is discounted may depend more on the qualitative characteristics of the immediate reinforcer and less on the magnitude of the delayed one. (c) 2015 APA, all rights reserved.
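
    As an aside, the hyperbolic discounting function named in this abstract has a simple closed form, V = A / (1 + kD). The sketch below evaluates it for illustrative values only; the amounts, delays, and discounting rates are not taken from the study.

```python
# Minimal sketch of the hyperbolic discounting function named in the abstract,
# V = A / (1 + k * D), where A is the delayed amount, D the delay, and k the
# discounting rate. All parameter values below are illustrative, not taken
# from the study.
import numpy as np

def discounted_value(amount, delay, k):
    """Subjective value of a reinforcer of size `amount` delivered after `delay`."""
    return amount / (1.0 + k * delay)

delays = np.array([0, 10, 30, 60, 120])   # arbitrary delay units
k_isomorphic = 0.02                        # hypothetical rate when both options are food
k_allomorphic = 0.08                       # hypothetical steeper rate when the immediate option is cocaine

for k, label in [(k_isomorphic, "isomorphic "), (k_allomorphic, "allomorphic")]:
    values = discounted_value(100.0, delays, k)
    print(label, np.round(values, 1))
```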

  14. An Investigation of Certain Thermodynamic Losses in Minature Cryocoolers

    DTIC Science & Technology

    2005-01-17

    ...theoretical work which will enable efficiencies to be increased not just in Stirling-type coolers, but also in pulse tubes and linear alternators. ...Investigation of how these losses scale to a geometry closer to that in a full Stirling or pulse tube cooler. This will involve the addition of a...

  15. Linearized Alternating Direction Method of Multipliers for Constrained Nonconvex Regularized Optimization

    DTIC Science & Technology

    2016-11-22

    ...structure of the graph, we replace the ℓ1-norm by the nonconvex Capped-ℓ1 norm, and obtain the Generalized Capped-ℓ1 regularized logistic regression... X. M. Yuan. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Mathematics of Computation, 82(281):301... better approximations of the ℓ0-norm, theoretically and computationally, beyond the ℓ1-norm, for example, in compressive sensing (Xiao et al., 2011). The...

  16. Electrocatalysis of fuel cell reactions: Investigation of alternate electrolytes

    NASA Technical Reports Server (NTRS)

    Chin, D. T.; Hsueh, K. L.; Chang, H. H.

    1983-01-01

    Oxygen reduction and transport properties of the electrolyte in the phosphoric acid fuel cell are studied. A theoretical expression for the rotating ring-disk electrode technique; the intermediate reaction rate constants for oxygen reduction on platinum in phosphoric acid electrolyte; oxygen reduction mechanism in trifluoromethanesulfonic acid (TFMSA), considered as an alternate electrolyte for the acid fuel cells; and transport properties of the phosphoric acid electrolyte at high concentrations and temperatures are covered.

  17. MMA, A Computer Code for Multi-Model Analysis

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternate models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on model fit to observations, the number of observations and estimated parameters, and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that it may be expected that one parameter value will be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA.
Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program. MMA is constructed using
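
    A common way to turn information criteria such as AIC, AICc, BIC, or KIC into posterior model probabilities is to weight each model by exp(-Δ/2), where Δ is the criterion difference from the best-ranked model. The sketch below shows that conversion on made-up criterion values; it is not MMA's own code.

```python
# Sketch of how information criteria can be converted into posterior model
# probabilities (Akaike-type weights), in the spirit of the multi-model ranking
# MMA performs. The criterion values below are made up; MMA itself computes
# AIC, AICc, BIC, and KIC from calibrated nonlinear-regression output.
import math

def model_weights(criterion_values):
    """Convert criterion values (smaller is better) into normalized weights."""
    best = min(criterion_values)
    deltas = [c - best for c in criterion_values]
    raw = [math.exp(-0.5 * d) for d in deltas]
    total = sum(raw)
    return [r / total for r in raw]

aic = {"model_A": 210.4, "model_B": 212.9, "model_C": 221.0}  # hypothetical AIC values
weights = model_weights(list(aic.values()))
for name, w in zip(aic, weights):
    print(f"{name}: posterior probability ~ {w:.3f}")
```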

  18. Sharing Resources In Mobile/Satellite Communications

    NASA Technical Reports Server (NTRS)

    Yan, Tsun-Yee; Sue, Miles K.

    1992-01-01

    Report presents preliminary theoretical analysis of several alternative schemes for allocation of satellite resource among terrestrial subscribers of landmobile/satellite communication system. Demand-access and random-access approaches under code-division and frequency-division concepts compared.

  19. A general modeling framework for describing spatially structured population dynamics

    USGS Publications Warehouse

    Sample, Christine; Fryxell, John; Bieri, Joanna; Federico, Paula; Earl, Julia; Wiederholt, Ruscena; Mattsson, Brady; Flockhart, Tyler; Nicol, Sam; Diffendorfer, James E.; Thogmartin, Wayne E.; Erickson, Richard A.; Norris, D. Ryan

    2017-01-01

    Variation in movement across time and space fundamentally shapes the abundance and distribution of populations. Although a variety of approaches model structured population dynamics, they are limited to specific types of spatially structured populations and lack a unifying framework. Here, we propose a unified network-based framework sufficiently novel in its flexibility to capture a wide variety of spatiotemporal processes including metapopulations and a range of migratory patterns. It can accommodate different kinds of age structures, forms of population growth, dispersal, nomadism and migration, and alternative life-history strategies. Our objective was to link three general elements common to all spatially structured populations (space, time and movement) under a single mathematical framework. To do this, we adopt a network modeling approach. The spatial structure of a population is represented by a weighted and directed network. Each node and each edge has a set of attributes which vary through time. The dynamics of our network-based population is modeled with discrete time steps. Using both theoretical and real-world examples, we show how common elements recur across species with disparate movement strategies and how they can be combined under a unified mathematical framework. We illustrate how metapopulations, various migratory patterns, and nomadism can be represented with this modeling approach. We also apply our network-based framework to four organisms spanning a wide range of life histories, movement patterns, and carrying capacities. General computer code to implement our framework is provided, which can be applied to almost any spatially structured population. This framework contributes to our theoretical understanding of population dynamics and has practical management applications, including understanding the impact of perturbations on population size, distribution, and movement patterns. By working within a common framework, there is less chance that comparative analyses are colored by model details rather than general principles
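
    The bookkeeping described here, nodes carrying local abundances and directed weighted edges moving individuals between them in discrete time steps, can be sketched in a few lines. The growth rates and transition weights below are invented for illustration; this snippet is not the general computer code the authors provide.

```python
# Toy illustration of a network-based spatial population model: nodes carry
# local populations, directed weighted edges move a fraction of individuals
# between nodes, and dynamics advance in discrete time steps. All rates and
# weights are hypothetical.
import numpy as np

population = np.array([100.0, 20.0, 0.0])      # initial abundance at each node
growth = np.array([1.05, 1.10, 0.95])          # per-step local growth factors (hypothetical)

# movement[i, j] = fraction of node i's post-growth population moving to node j;
# each row sums to <= 1, the remainder stays at node i.
movement = np.array([[0.0, 0.2, 0.1],
                     [0.1, 0.0, 0.0],
                     [0.0, 0.3, 0.0]])

for t in range(5):
    population = population * growth                    # local demography
    outflow = movement * population[:, None]            # individuals leaving each node
    population = population - outflow.sum(axis=1) + outflow.sum(axis=0)
    print(f"step {t + 1}: {np.round(population, 1)}")
```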

  20. From theory to experimental design-Quantifying a trait-based theory of predator-prey dynamics.

    PubMed

    Laubmeier, A N; Wootton, Kate; Banks, J E; Bommarco, Riccardo; Curtsdotter, Alva; Jonsson, Tomas; Roslin, Tomas; Banks, H T

    2018-01-01

    Successfully applying theoretical models to natural communities and predicting ecosystem behavior under changing conditions is the backbone of predictive ecology. However, the experiments required to test these models are dictated by practical constraints, and models are often opportunistically validated against data for which they were never intended. Alternatively, we can inform and improve experimental design by an in-depth pre-experimental analysis of the model, generating experiments better targeted at testing the validity of a theory. Here, we describe this process for a specific experiment. Starting from food web ecological theory, we formulate a model and design an experiment to optimally test the validity of the theory, supplementing traditional design considerations with model analysis. The experiment itself will be run and described in a separate paper. The theory we test is that trophic population dynamics are dictated by species traits, and we study this in a community of terrestrial arthropods. We depart from the Allometric Trophic Network (ATN) model and hypothesize that including habitat use, in addition to body mass, is necessary to better model trophic interactions. We therefore formulate new terms which account for micro-habitat use as well as intra- and interspecific interference in the ATN model. We design an experiment and an effective sampling regime to test this model and the underlying assumptions about the traits dominating trophic interactions. We arrive at a detailed sampling protocol to maximize information content in the empirical data obtained from the experiment and, relying on theoretical analysis of the proposed model, explore potential shortcomings of our design. Consequently, since this is a "pre-experimental" exercise aimed at improving the links between hypothesis formulation, model construction, experimental design and data collection, we hasten to publish our findings before analyzing data from the actual experiment, thus setting the stage for strong inference.

  1. Advancing nursing practice: redefining the theoretical and practical integration of knowledge.

    PubMed

    Christensen, Martin

    2011-03-01

    The aim of this paper is to offer an alternative knowing-how knowing-that framework of nursing knowledge, which in the past has been accepted as the provenance of advanced practice. The concept of advancing practice is central to the development of nursing practice and has been seen to take on many different forms depending on its use in context. To many it has become synonymous with the work of the advanced or expert practitioner; others have viewed it as a process of continuing professional development and skills acquisition. Moreover, it is becoming closely linked with practice development. However, there is much discussion as to what constitutes the knowledge necessary for advancing and advanced practice, and it has been suggested that theoretical and practical knowledge form the cornerstone of advanced knowledge. The design of this article takes a discursive approach as to the meaning and integration of knowledge within the context of advancing nursing practice. A thematic analysis of the current discourse relating to knowledge integration models in an advancing and advanced practice arena was used to identify concurrent themes relating to the knowing-how knowing-that framework which is commonly used to classify the knowledge necessary for advanced nursing practice. There is a dichotomy as to what constitutes knowledge for advanced and advancing practice. Several authors have offered a variety of differing models, yet it is the application and integration of theoretical and practical knowledge that defines and develops the advancement of nursing practice. An alternative framework offered here may allow differences in the way that nursing knowledge important for advancing practice is perceived, developed and coordinated. What has inevitably been neglected is that there are various other variables which, when transposed into the existing knowing-how knowing-that framework, allow for advanced knowledge to be better defined. One of the more notable variables is pattern recognition, which became the focus of Benner's work on expert practice. Therefore, if this is included in the knowing-how knowing-that framework, the knowing-how becomes the knowledge that contributes to advancing and advanced practice and the knowing-that becomes the governing action based on a deeper understanding of the problem or issue. © 2011 Blackwell Publishing Ltd.

  2. A comprehensive iso-octane combustion model with improved thermochemistry and chemical kinetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atef, Nour; Kukkadapu, Goutham; Mohamed, Samah Y.

    Iso-Octane (2,2,4-trimethylpentane) is a primary reference fuel and an important component of gasoline fuels. Furthermore, it is a key component used in surrogates to study the ignition and burning characteristics of gasoline fuels. This paper presents an updated chemical kinetic model for iso-octane combustion. Specifically, the thermodynamic data and reaction kinetics of iso-octane have been re-assessed based on new thermodynamic group values and recently evaluated rate coefficients from the literature. The adopted rate coefficients were either experimentally measured or determined by analogy to theoretically calculated values. New alternative isomerization pathways for peroxy-alkyl hydroperoxide (ȮOQOOH) radicals were added to the reaction mechanism. The updated kinetic model was compared against new ignition delay data measured in rapid compression machines (RCM) and a high-pressure shock tube. Our experiments were conducted at pressures of 20 and 40 atm, at equivalence ratios of 0.4 and 1.0, and at temperatures in the range of 632–1060 K. The updated model was further compared against shock tube ignition delay times, jet-stirred reactor oxidation speciation data, premixed laminar flame speeds, counterflow diffusion flame ignition, and shock tube pyrolysis speciation data available in the literature. Finally, the updated model was used to investigate the importance of alternative isomerization pathways in the low temperature oxidation of highly branched alkanes. When compared to available models in the literature, the present model represents the current state-of-the-art in fundamental thermochemistry and reaction kinetics of iso-octane, and thus provides the best prediction of wide-ranging experimental data and fundamental insights into iso-octane combustion chemistry.

  3. A comprehensive iso-octane combustion model with improved thermochemistry and chemical kinetics

    DOE PAGES

    Atef, Nour; Kukkadapu, Goutham; Mohamed, Samah Y.; ...

    2017-02-05

    Iso-Octane (2,2,4-trimethylpentane) is a primary reference fuel and an important component of gasoline fuels. Furthermore, it is a key component used in surrogates to study the ignition and burning characteristics of gasoline fuels. This paper presents an updated chemical kinetic model for iso-octane combustion. Specifically, the thermodynamic data and reaction kinetics of iso-octane have been re-assessed based on new thermodynamic group values and recently evaluated rate coefficients from the literature. The adopted rate coefficients were either experimentally measured or determined by analogy to theoretically calculated values. New alternative isomerization pathways for peroxy-alkyl hydroperoxide (ȮOQOOH) radicals were added to the reaction mechanism. The updated kinetic model was compared against new ignition delay data measured in rapid compression machines (RCM) and a high-pressure shock tube. Our experiments were conducted at pressures of 20 and 40 atm, at equivalence ratios of 0.4 and 1.0, and at temperatures in the range of 632–1060 K. The updated model was further compared against shock tube ignition delay times, jet-stirred reactor oxidation speciation data, premixed laminar flame speeds, counterflow diffusion flame ignition, and shock tube pyrolysis speciation data available in the literature. Finally, the updated model was used to investigate the importance of alternative isomerization pathways in the low temperature oxidation of highly branched alkanes. When compared to available models in the literature, the present model represents the current state-of-the-art in fundamental thermochemistry and reaction kinetics of iso-octane, and thus provides the best prediction of wide-ranging experimental data and fundamental insights into iso-octane combustion chemistry.

  4. Design Issues for Using Magnetic Materials in Radiation Environments at Elevated Temperature

    NASA Technical Reports Server (NTRS)

    Bowman, Cheryl L.

    2013-01-01

    One of the challenges of designing motors and alternators for use in nuclear powered space missions is accounting for the effects of radiation. Terrestrial reactor power plants use distance and shielding to minimize radiation damage but space missions must economize volume and mass. Past studies have shown that sufficiently high radiation levels can affect the magnetic response of hard and soft magnetic materials. Theoretical models explaining the radiation-induced degradation have been proposed but not verified. This paper reviews the literature and explains the cumulative effects of temperature, magnetic-load, and radiation-level on the magnetic properties of component materials. Magnetic property degradation is very specific to alloy choice and processing history, since magnetic properties are very much entwined with specific chemistry and microstructural features. However, there is basic theoretical as well as supportive experimental evidence that the negative impact to magnetic properties will be minimal if the bulk temperature of the material is less than fifty percent of the Curie temperature, the radiation flux is low, and the demagnetization field is small. Keywords: Magnets, Permanent Magnets, Power Converters, Nuclear Electric Power Generation, Radiation Tolerance.

  5. Generalization of the Poincare sphere to process 2D displacement signals

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Lamberti, Luciano

    2017-06-01

    Traditionally the multiple phase method has been considered as an essential tool for phase information recovery. The in-quadrature phase method, which theoretically is an alternative pathway to achieve the same goal, failed in actual applications. The authors, in a previous paper dealing with 1D signals, have shown that, properly implemented, the in-quadrature method yields phase values with the same accuracy as the multiple phase method. The present paper extends the methodology developed in 1D to 2D. This extension is not a straightforward process and requires the introduction of a number of additional concepts and developments. The concept of monogenic function provides the necessary tools required for the extension process. The monogenic function has a graphic representation through the Poincare sphere familiar in the field of Photoelasticity and, through the developments introduced in this paper, is connected to the analysis of displacement fringe patterns. The paper is illustrated with examples of application that show that the multiple phase method and the in-quadrature method are two aspects of the same basic theoretical model.
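
    A common way to construct the monogenic signal referred to in this abstract is through the Riesz transform evaluated in the Fourier domain. The sketch below applies that generic construction to a synthetic fringe pattern; it is not the authors' implementation and the test pattern is arbitrary.

```python
# Generic sketch of a monogenic signal computed with the Riesz transform in the
# Fourier domain, the construction the abstract appeals to for 2D phase
# analysis. The test pattern is synthetic; this is not the authors' algorithm.
import numpy as np

def monogenic_phase(image):
    """Return local amplitude and local phase of a 2D signal via the Riesz transform."""
    rows, cols = image.shape
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0                                   # avoid division by zero at DC
    F = np.fft.fft2(image)
    r1 = np.real(np.fft.ifft2(F * (-1j * u / radius)))   # first Riesz component
    r2 = np.real(np.fft.ifft2(F * (-1j * v / radius)))   # second Riesz component
    amplitude = np.sqrt(image**2 + r1**2 + r2**2)
    phase = np.arctan2(np.sqrt(r1**2 + r2**2), image)
    return amplitude, phase

x = np.linspace(0, 8 * np.pi, 256)
fringes = np.cos(x)[None, :] * np.ones((256, 1))         # synthetic vertical fringe pattern
amp, phi = monogenic_phase(fringes - fringes.mean())
print(phi.min(), phi.max())
```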

  6. Mechanism change in a simulation of peer review: from junk support to elitism.

    PubMed

    Paolucci, Mario; Grimaldo, Francisco

    2014-01-01

    Peer review works as the hinge of the scientific process, mediating between research and the awareness/acceptance of its results. While it might seem obvious that science would regulate itself scientifically, the consensus on peer review is eroding; a deeper understanding of its workings and potential alternatives is sorely needed. Employing a theoretical approach supported by agent-based simulation, we examined computational models of peer review, performing what we propose to call redesign, that is, the replication of simulations using different mechanisms. Here, we show that we are able to obtain the high sensitivity to rational cheating that is present in the literature. In addition, we also show how this result appears to be fragile against small variations in mechanisms. Therefore, we argue that exploration of the parameter space is not enough if we want to support theoretical statements with simulation, and that exploration at the level of mechanisms is needed. These findings also support prudence in the application of simulation results based on single mechanisms, and endorse the use of complex agent platforms that encourage experimentation with diverse mechanisms.

  7. Solvating additives drive solution-mediated electrochemistry and enhance toroid growth in non-aqueous Li-O2 batteries

    NASA Astrophysics Data System (ADS)

    Aetukuri, Nagaphani B.; McCloskey, Bryan D.; García, Jeannette M.; Krupp, Leslie E.; Viswanathan, Venkatasubramanian; Luntz, Alan C.

    2015-01-01

    Given their high theoretical specific energy, lithium-oxygen batteries have received enormous attention as possible alternatives to current state-of-the-art rechargeable Li-ion batteries. However, the maximum discharge capacity in non-aqueous lithium-oxygen batteries is limited to a small fraction of its theoretical value due to the build-up of insulating lithium peroxide (Li2O2), the battery’s primary discharge product. The discharge capacity can be increased if Li2O2 forms as large toroidal particles rather than as a thin conformal layer. Here, we show that trace amounts of electrolyte additives, such as H2O, enhance the formation of Li2O2 toroids and result in significant improvements in capacity. Our experimental observations and a growth model show that the solvating properties of the additives prompt a solution-based mechanism that is responsible for the growth of Li2O2 toroids. We present a general formalism describing an additive’s tendency to trigger the solution process, providing a rational design route for electrolytes that afford larger lithium-oxygen battery capacities.

  8. The effect of testing versus restudy on retention: a meta-analytic review of the testing effect.

    PubMed

    Rowland, Christopher A

    2014-11-01

    Engaging in a test over previously studied information can serve as a potent learning event, a phenomenon referred to as the testing effect. Despite a surge of research in the past decade, existing theories have not yet provided a cohesive account of testing phenomena. The present study uses meta-analysis to examine the effects of testing versus restudy on retention. Key results indicate support for the role of effortful processing as a contributor to the testing effect, with initial recall tests yielding larger testing benefits than recognition tests. Limited support was found for existing theoretical accounts attributing the testing effect to enhanced semantic elaboration, indicating that consideration of alternative mechanisms is warranted in explaining testing effects. Future theoretical accounts of the testing effect may benefit from consideration of episodic and contextually derived contributions to retention resulting from memory retrieval. Additionally, the bifurcation model of the testing effect is considered as a viable framework from which to characterize the patterns of results present across the literature. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
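
    For readers unfamiliar with the pooling step in such a meta-analysis, the sketch below applies standard DerSimonian-Laird random-effects weighting to invented effect sizes; neither the data nor the exact analysis of this study are reproduced.

```python
# Sketch of standard random-effects pooling (DerSimonian-Laird) of the kind used
# to aggregate testing-versus-restudy effect sizes. The effect sizes and
# variances below are invented; they are not the data from this meta-analysis.
import numpy as np

g = np.array([0.55, 0.70, 0.30, 0.95, 0.50])       # hypothetical per-study effect sizes
v = np.array([0.04, 0.06, 0.03, 0.08, 0.05])       # hypothetical within-study variances

w_fixed = 1.0 / v
mean_fixed = np.sum(w_fixed * g) / np.sum(w_fixed)
Q = np.sum(w_fixed * (g - mean_fixed) ** 2)                     # heterogeneity statistic
C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (len(g) - 1)) / C)                          # between-study variance

w_random = 1.0 / (v + tau2)
mean_random = np.sum(w_random * g) / np.sum(w_random)
se_random = np.sqrt(1.0 / np.sum(w_random))
print(f"pooled g = {mean_random:.2f} +/- {1.96 * se_random:.2f} (tau^2 = {tau2:.3f})")
```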

  9. Some guidelines for structural equation modelling in cognitive neuroscience: the case of Charlton et al.'s study on white matter integrity and cognitive ageing.

    PubMed

    Penke, Lars; Deary, Ian J

    2010-09-01

    Charlton et al. (2008) (Charlton, R.A., Landua, S., Schiavone, F., Barrick, T.R., Clark, C.A., Markus, H.S., Morris, R.G.A., 2008. Structural equation modelling investigation of age-related variance in executive function and DTI-measured white matter change. Neurobiol. Aging 29, 1547-1555) presented a model that suggests a specific age-related effect of white matter integrity on working memory. We illustrate potential pitfalls of structural equation modelling by criticizing their model for (a) its neglect of latent variables, (b) its complexity, (c) its questionable causal assumptions, (d) the use of empirical model reduction, (e) the mix-up of theoretical perspectives, and (f) the failure to compare alternative models. We show that a more parsimonious model, based solely on the well-established general factor of cognitive ability, fits their data at least as well. Importantly, when modelled this way there is no support for a role of white matter integrity in cognitive aging in this sample, indicating that their conclusion is strongly dependent on how the data are analysed. We suggest that evidence from more conclusive study designs is needed. Copyright 2009 Elsevier Inc. All rights reserved.

  10. Quantitative aspects of vibratory mobilization and break-up of non-wetting fluids in porous media

    NASA Astrophysics Data System (ADS)

    Deng, Wen

    Seismic stimulation is a promising technology aimed to mobilize the entrapped non-wetting fluids in the subsurface. The applications include enhanced oil recovery or, alternatively, facilitation of movement of immiscible/partly-miscible gases far into porous media, for example, for CO2 sequestration. This work is devoted to detailed quantitative studies of the two basic pore-scale mechanisms standing behind seismic stimulation: the mobilization of bubbles or drops entrapped in pore constrictions by capillary forces and the break-up of continuous long bubbles or drops. In typical oil-production operations, oil is produced by the natural reservoir-pressure drive during the primary stage and by artificial water flooding at the secondary stage. Capillary forces act to retain a substantial residual fraction of reservoir oil even after water flooding. The seismic stimulation is an unconventional technology that serves to overcome capillary barriers in individual pores and liberate the entrapped oil by adding an oscillatory inertial forcing to the external pressure gradient. According to our study, the effect of seismic stimulation on oil mobilization is highly dependent on the frequencies and amplitudes of the seismic waves. Generally, the lower the frequency and the larger the amplitude, more effective is the mobilization. To describe the mobilization process, we developed two theoretical hydrodynamics-based models and justified both using computational fluid dynamics (CFD). Our theoretical models have a significant advantage over CFD in that they reduce the computational time significantly, while providing correct practical guidance regarding the required field parameters of vibroseismic stimulation, such as the amplitude and frequency of the seismic field. The models also provide important insights into the basic mechanisms governing the vibration-driven two-phase flow in constricted capillaries. In a waterflooded reservoir, oil can be recovered most efficiently by forming continuous streams from isolated droplets. The longer the continuous oil phase under a certain pressure gradient, the more easily it overcomes its capillary barrier. However, surface tension between water and oil causes the typically non-wetting oil, constituting the core phase in the channels, to break up at the pore constriction into isolated beads, which inhibits further motion. The break-up thus counteracts the mobilization. We developed a theoretical model that provides an exact quantitative description of the dynamics of the oil-snap-off process. It also formulates a purely geometric criterion that controls, based on pore geometry only, whether the oil core phase stays continuous or disintegrates into droplets. Both the theoretical model and the break-criterion have been validated against CFD simulations. The work completed elucidates the basic physical mechanisms behind the enhanced oil recovery by seismic waves and vibrations. This creates a theoretical foundation for the further development of corresponding field technologies.

  11. Validation of alternative methods for toxicity testing.

    PubMed Central

    Bruner, L H; Carr, G J; Curren, R D; Chamberlain, M

    1998-01-01

    Before nonanimal toxicity tests may be officially accepted by regulatory agencies, it is generally agreed that the validity of the new methods must be demonstrated in an independent, scientifically sound validation program. Validation has been defined as the demonstration of the reliability and relevance of a test method for a particular purpose. This paper provides a brief review of the development of the theoretical aspects of the validation process and updates current thinking about objectively testing the performance of an alternative method in a validation study. Validation of alternative methods for eye irritation testing is a specific example illustrating important concepts. Although discussion focuses on the validation of alternative methods intended to replace current in vivo toxicity tests, the procedures can be used to assess the performance of alternative methods intended for other uses. PMID:9599695

  12. Enhanced model-based design of a high-throughput three dimensional micromixer driven by alternating-current electrothermal flow.

    PubMed

    Wu, Yupan; Ren, Yukun; Jiang, Hongyuan

    2017-01-01

    We propose a 3D microfluidic mixer based on the alternating current electrothermal (ACET) flow. The ACET vortex is produced by 3D electrodes embedded in the sidewall of the microchannel and is used to stir the fluidic sample throughout the entire channel depth. An optimized geometrical structure of the proposed 3D micromixer device is obtained based on the enhanced theoretical model of ACET flow and natural convection. We quantitatively analyze the flow field driven by the ACET, and a pattern of electrothermal microvortex is visualized by the micro-particle imaging velocimetry. Then, the mixing experiment is conducted using dye solutions with varying solution conductivities. Mixing efficiency can exceed 90% for electrolytes with 0.2 S/m (1 S/m) when the flow rate is 0.364 μL/min (0.728 μL/min) and the imposed peak-to-peak voltage is 52.5 V (35 V). A critical analysis of our micromixer in comparison with different mixer designs using a comparative mixing index is also performed. The ACET micromixer embedded with sidewall 3D electrodes can achieve a highly effective mixing performance and can generate high throughput in the continuous-flow condition. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. An alternative interpretation for cosmic ray peaks

    DOE PAGES

    Kim, Doojin; Park, Jong -Chul

    2015-10-03

    We propose an alternative mechanism based upon a dark matter (DM) interpretation for anomalous peak signatures in cosmic ray measurements, assuming an extended dark sector with two DM species. This is contrasted with previous efforts to explain various line-like cosmic-ray excesses in the context of DM models where the relevant DM candidate directly annihilates into Standard Model (SM) particles. The heavier DM is assumed to annihilate to an on-shell intermediate state. As the simplest choice, it decays directly into the lighter DM along with an unstable particle which in turn decays to a pair of SM states corresponding to the interesting cosmic anomaly. We show that a sharp continuum energy peak can be readily generated under the proposed DM scenario, depending on dark sector particle mass spectra. Remarkably, such a peak is robustly identified as half the mass of the unstable particle. Furthermore, other underlying mass parameters are analytically related to the shape of the energy spectrum. We apply this idea to the two well-known line excesses in the cosmic photon spectrum: the 130 GeV γ-ray line and the 3.5 keV X-ray line. As a result, each observed peak spectrum is well-reproduced by theoretical expectation predicated upon our suggested mechanism, and moreover, our resulting best fits provide rather improved χ2 values.
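
    The kinematic origin of the peak can be illustrated numerically: for any boost of the unstable particle, a two-body decay to massless states yields a flat "box" energy spectrum whose edges move with the boost but which always contains E = m/2, so the stacked spectrum peaks there. The mass (chosen so that m/2 lies near 130 GeV) and the boost distribution below are purely illustrative.

```python
# Kinematic sketch of why the stacked photon spectrum peaks at half the mass of
# the unstable intermediate particle: for each boost, the two-photon decay gives
# a flat "box" spectrum whose edges move, but every box contains E = m/2.
# Mass and boost distribution are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
m = 260.0                                   # hypothetical mass of the unstable particle (GeV)
n = 200_000

gamma = 1.0 + rng.exponential(0.5, size=n)  # invented distribution of Lorentz boosts
beta = np.sqrt(1.0 - 1.0 / gamma**2)
cos_theta = rng.uniform(-1.0, 1.0, size=n)  # isotropic decay in the particle rest frame

E_photon = 0.5 * m * gamma * (1.0 + beta * cos_theta)   # lab-frame photon energy

hist, edges = np.histogram(E_photon, bins=100, range=(0, 2 * m))
peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(f"spectrum peaks near {peak:.0f} GeV (m/2 = {m / 2:.0f} GeV)")
```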

  14. Potential for the dynamics of pedestrians in a socially interacting group

    NASA Astrophysics Data System (ADS)

    Zanlungo, Francesco; Ikeda, Tetsushi; Kanda, Takayuki

    2014-01-01

    We introduce a simple potential to describe the dynamics of the relative motion of two pedestrians socially interacting in a walking group. We show that the proposed potential, based on basic empirical observations and theoretical considerations, can qualitatively describe the statistical properties of pedestrian behavior. In detail, we show that the two-dimensional probability distribution of the relative distance is determined by the proposed potential through a Boltzmann distribution. After calibrating the parameters of the model on the two-pedestrian group data, we apply the model to three-pedestrian groups, showing that it describes qualitatively and quantitatively well their behavior. In particular, the model predicts that three-pedestrian groups walk in a V-shaped formation and provides accurate values for the position of the three pedestrians. Furthermore, the model correctly predicts the average walking velocity of three-person groups based on the velocity of two-person ones. Possible extensions to larger groups, along with alternative explanations of the social dynamics that may be implied by our model, are discussed at the end of the paper.
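
    The Boltzmann relation invoked in this abstract, p(r) proportional to exp(-U(r)/T), is easy to evaluate on a grid once a potential is specified. The potential form and "temperature" below are invented stand-ins, not the calibrated potential of the paper.

```python
# Sketch of the Boltzmann relation the abstract invokes: the probability of a
# relative position r between two group members is p(r) ~ exp(-U(r)/T).
# The potential below (a preferred side-by-side spacing) and the "temperature"
# are invented for illustration and are not the authors' calibrated form.
import numpy as np

def potential(dx, dy, preferred=0.75):
    """Hypothetical pair potential: minimum at |r| = preferred, penalty for walking in file."""
    r = np.sqrt(dx**2 + dy**2)
    return (r - preferred) ** 2 + 0.3 * dy**2   # dy = separation along the walking direction

T = 0.05                                        # effective "temperature" (arbitrary units)
dx, dy = np.meshgrid(np.linspace(-2, 2, 201), np.linspace(-2, 2, 201))
p = np.exp(-potential(dx, dy) / T)
p /= p.sum()                                    # normalized 2D probability map

iy, ix = np.unravel_index(np.argmax(p), p.shape)
print(f"most probable relative position: dx = {dx[iy, ix]:.2f} m, dy = {dy[iy, ix]:.2f} m")
```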

  15. On Measuring the Sixth Basic Personality Dimension: A Comparison Between HEXACO Honesty-Humility and Big Six Honesty-Propriety.

    PubMed

    Thielmann, Isabel; Hilbig, Benjamin E; Zettler, Ingo; Moshagen, Morten

    2017-12-01

    Recent developments in personality research led to the proposition of two alternative six-factor trait models, the HEXACO model and the Big Six model. However, given the lack of direct comparisons, it is unclear whether the HEXACO and Big Six factors are distinct or essentially equivalent, that is, whether corresponding inventories measure similar or distinct personality traits. Using Structural Equation Modeling (Study 1), we found substantial differences between the traits as measured via the HEXACO-60 and the 30-item Questionnaire Big Six (30QB6), particularly for Honesty-Humility and Honesty-Propriety (both models' critical difference from the Big Five approach). This distinction was further supported by Study 2, showing differential capabilities of the HEXACO-60 and the 30QB6 to account for several criteria representing the theoretical core of Honesty-Humility and/or Honesty-Propriety. Specifically, unlike the indicator of Honesty-Humility, the indicator of Honesty-Propriety showed low predictive power for some conceptually relevant criteria, suggesting a limited validity of the 30QB6.

  16. Alternative hypotheses to explain why biodiversity-ecosystem functioning relationships are concave-up in some natural ecosystems but concave-down in manipulative experiments.

    PubMed

    Mora, Camilo; Danovaro, Roberto; Loreau, Michel

    2014-06-25

    Recent studies of the relationship between biodiversity and functioning in marine ecosystems have yielded non-saturating patterns that contrast sharply with the results of experimental studies, where ecosystem functioning rapidly saturates with increases in biodiversity. Here we provide a simple theoretical framework of three alternative hypotheses that, individually or combined, are likely to explain this contrast: i) the use of functional richness instead of species richness, ii) an increased production efficiency of species in producing biomass when more ecological interactions are present, and iii) the fact that communities are likely assembled in an ordered succession of species from low to high ecological efficiency. Our results provide theoretical support for concave-up biodiversity-ecosystem functioning relationships in natural ecosystems and confirm that the loss of species can have substantially larger effects on the functioning of natural ecosystems than anticipated from controlled manipulative experiments.
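
    The contrast at issue can be illustrated with two generic functional forms: a saturating (concave-down) curve of the kind typically fitted to manipulative experiments and an accelerating (concave-up) power law. Both forms and their parameters are stand-ins, not the authors' theoretical framework.

```python
# Generic illustration of the contrast discussed in the abstract: a saturating
# (concave-down) biodiversity-functioning curve versus an accelerating
# (concave-up) one. Functional forms and parameters are illustrative only.
import numpy as np

species = np.arange(1, 51)

saturating = 100.0 * species / (5.0 + species)   # Michaelis-Menten-like, saturates quickly
accelerating = 0.5 * species ** 1.5              # power law with exponent > 1, concave-up

for s in (1, 10, 25, 50):
    print(f"S = {s:2d}: saturating = {saturating[s - 1]:6.1f}, accelerating = {accelerating[s - 1]:6.1f}")
```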

  17. Alternative hypotheses to explain why biodiversity-ecosystem functioning relationships are concave-up in some natural ecosystems but concave-down in manipulative experiments

    PubMed Central

    Mora, Camilo; Danovaro, Roberto; Loreau, Michel

    2014-01-01

    Recent studies of the relationship between biodiversity and functioning in marine ecosystems have yielded non-saturating patterns that contrast sharply with the results of experimental studies, where ecosystem functioning rapidly saturates with increases in biodiversity. Here we provide a simple theoretical framework of three alternative hypotheses that, individually or combined, are likely to explain this contrast: i) the use of functional richness instead of species richness, ii) an increased production efficiency of species in producing biomass when more ecological interactions are present, and iii) the fact that communities are likely assembled in an ordered succession of species from low to high ecological efficiency. Our results provide theoretical support for concave-up biodiversity-ecosystem functioning relationships in natural ecosystems and confirm that the loss of species can have substantially larger effects on the functioning of natural ecosystems than anticipated from controlled manipulative experiments. PMID:24962477

  18. [Sustainable Implementation of Evidence-Based Programmes in Health Promotion: A Theoretical Framework and Concept of Interactive Knowledge to Action].

    PubMed

    Rütten, A; Wolff, A; Streber, A

    2016-03-01

    This article discusses 2 current issues in the field of public health research: (i) transfer of scientific knowledge into practice and (ii) sustainable implementation of good practice projects. It also supports integration of scientific and practice-based evidence production. Furthermore, it supports utilisation of interactive models that transcend deductive approaches to the process of knowledge transfer. Existing theoretical approaches, pilot studies and thoughtful conceptual considerations are incorporated into a framework showing the interplay of science, politics and prevention practice, which fosters a more sustainable implementation of health promotion programmes. The framework depicts 4 key processes of interaction between science and prevention practice: interactive knowledge to action, capacity building, programme adaptation and adaptation of the implementation context. Ensuring sustainability of health promotion programmes requires a concentrated process of integrating scientific and practice-based evidence production in the context of implementation. Central to the integration process is the approach of interactive knowledge to action, which especially benefits from capacity building processes that facilitate participation and systematic interaction between relevant stakeholders. Intense cooperation also induces a dynamic interaction between multiple actors and components such as health promotion programmes, target groups, relevant organisations and social, cultural and political contexts. The reciprocal adaptation of programmes and key components of the implementation context can foster effectiveness and sustainability of programmes. Sustainable implementation of evidence-based health promotion programmes requires alternatives to recent deductive models of knowledge transfer. Interactive approaches prove to be promising alternatives. Simultaneously, they change the responsibilities of science, policy and public health practice. Existing boundaries within disciplines and sectors are overcome by arranging transdisciplinary teams as well as by developing common agendas and procedures. Such approaches also require adaptations of the structure of research projects such as extending the length of funding. © Georg Thieme Verlag KG Stuttgart · New York.

  19. Estimation of Leaf Area Index and Plant Area Index of a Submerged Macrophyte Canopy Using Digital Photography

    PubMed Central

    Zhao, Dehua; Xie, Dong; Zhou, Hengjie; Jiang, Hao; An, Shuqing

    2012-01-01

    Non-destructive estimation using digital cameras is a common approach for estimating leaf area index (LAI) of terrestrial vegetation. However, no attempt has been made so far to develop non-destructive approaches to LAI estimation for aquatic vegetation. Using the submerged plant species Potamogeton malainus, the objective of this study was to determine whether the gap fraction derived from vertical photographs could be used to estimate LAI of aquatic vegetation. Our results suggested that upward-oriented photographs taken from beneath the water surface were more suitable for distinguishing vegetation from other objects than were downward-oriented photographs taken from above the water surface. Exposure settings had a substantial influence on the identification of vegetation in upward-oriented photographs. Automatic exposure performed nearly as well as the optimal trial exposure, making it a good choice for operational convenience. Similar to terrestrial vegetation, our results suggested that photographs taken for the purpose of distinguishing gap fraction in aquatic vegetation should be taken under diffuse light conditions. Significant logarithmic relationships were observed between the vertical gap fraction derived from upward-oriented photographs and plant area index (PAI) and LAI derived from destructive harvesting. The model we developed to depict the relationship between PAI and gap fraction was similar to the modified theoretical Poisson model, with coefficients of 1.82 and 1.90 for our model and the theoretical model, respectively. This suggests that vertical upward-oriented photographs taken from below the water surface are a feasible alternative to destructive harvesting for estimating PAI and LAI for the submerged aquatic plant Potamogeton malainus. PMID:23226557
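
    Gap-fraction methods of this kind rest on a Poisson (Beer-Lambert) relation, gap fraction = exp(-k · PAI), which can be inverted to estimate PAI. The extinction coefficient below is a generic placeholder; the paper's fitted coefficients (reported near 1.8 and 1.9) are not reproduced here.

```python
# Sketch of the Poisson (Beer-Lambert) inversion that underlies gap-fraction
# methods: if the canopy gap fraction is P0 = exp(-k * PAI), then
# PAI = -ln(P0) / k. The extinction coefficient k is a generic placeholder,
# not the coefficient fitted in the study.
import math

def pai_from_gap_fraction(gap_fraction, k=0.5):
    """Invert the Poisson gap-fraction model to estimate plant area index."""
    return -math.log(gap_fraction) / k

for gap in (0.6, 0.3, 0.1):
    print(f"gap fraction {gap:.1f} -> PAI ~ {pai_from_gap_fraction(gap):.2f}")
```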

  20. Excavating black hole continuum spectrum: Possible signatures of scalar hairs and of higher dimensions

    NASA Astrophysics Data System (ADS)

    Banerjee, Indrani; Chakraborty, Sumanta; SenGupta, Soumitra

    2017-10-01

    The continuum spectrum from a black hole accretion disc holds enormous information regarding the strong gravity regime around the black hole and hence about the nature of gravitational interaction in extreme situations. Since in such a strong gravity regime the dynamics of gravity should be modified from the Einstein-Hilbert one, its effect should be imprinted on the continuum spectrum originating from the black hole accretion. To explore the effects of these alternative theories on the black hole continuum spectrum in an explicit manner, we have discussed three alternative gravitational models having their origin in three distinct paradigms—(a) higher dimensions, (b) higher curvature gravity, and (c) generalized Horndeski theories. All of them can have signatures sculptured on the black hole continuum spectrum, distinct from the standard general relativistic scenario. Interestingly, all these models exhibit black hole solutions with a tidal charge parameter, which in these alternative gravity scenarios can become negative, in sharp contrast with the Reissner-Nordström black hole. Using the observational data of optical luminosity for eighty Palomar-Green quasars we have illustrated that the difference between the theoretical estimates and the observational results gets minimized for negative values of the tidal charge parameter. As a quantitative estimate of this result we concentrate on several error estimators, including reduced χ2, Nash-Sutcliffe efficiency, and index of agreement. Remarkably, all of them indicate a negative value of the tidal charge parameter, signaling the possibility of higher dimensions as well as scalar charge at play in those high gravity regimes.
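
    The error estimators named in this abstract have standard definitions, shown below on made-up observed and modelled values; the quasar data themselves are not reproduced.

```python
# Standard definitions of the goodness-of-fit measures named in the abstract,
# evaluated on made-up observed/modelled values (not the quasar data).
import numpy as np

obs = np.array([1.2, 0.8, 1.5, 2.0, 1.1])        # hypothetical observed values
mod = np.array([1.1, 0.9, 1.4, 1.8, 1.2])        # hypothetical model predictions
sigma = np.full_like(obs, 0.2)                    # hypothetical measurement errors
n_free_params = 1

chi2_reduced = np.sum(((obs - mod) / sigma) ** 2) / (len(obs) - n_free_params)

nse = 1.0 - np.sum((obs - mod) ** 2) / np.sum((obs - obs.mean()) ** 2)   # Nash-Sutcliffe efficiency

d = 1.0 - np.sum((obs - mod) ** 2) / np.sum(
    (np.abs(mod - obs.mean()) + np.abs(obs - obs.mean())) ** 2)          # Willmott index of agreement

print(f"reduced chi^2 = {chi2_reduced:.2f}, NSE = {nse:.2f}, index of agreement = {d:.2f}")
```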

  1. Radiobiological equivalent of low/high dose rate brachytherapy and evaluation of tumor and normal responses to the dose.

    PubMed

    Manimaran, S

    2007-06-01

    The aim of this study was to compare the biological equivalent of low-dose-rate (LDR) and high-dose-rate (HDR) brachytherapy in terms of the more recent linear quadratic (LQ) model, which leads to theoretical estimation of biological equivalence. One of the key features of the LQ model is that it allows a more systematic radiobiological comparison between different types of treatment because the main parameters alpha/beta and mu are tissue-specific. Such comparisons also allow assessment of the likely change in the therapeutic ratio when switching between LDR and HDR treatments. The main application of LQ methodology, prompted by the increasing availability of remote afterloading units, has been to design fractionated HDR treatments that can replace existing LDR techniques. In this study, LDR treatment (39 Gy in 48 h) was found to be equivalent to 11 fractions of HDR irradiation. At the experimental level, there are increasing reports of reproducible animal models that may be used to investigate the biological basis of brachytherapy and to help confirm theoretical predictions. This is a timely development owing to the nonavailability of sufficient retrospective patient data analysis. It appears that HDR brachytherapy is likely to be a viable alternative to LDR only if it is delivered without a prohibitively large number of fractions (e.g., fewer than 11). With increased scientific understanding and technological capability, the prospect of a dose equivalent to HDR brachytherapy will allow greater utilization of the concepts discussed in this article.
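
    The LQ comparison rests on standard biologically effective dose (BED) expressions: BED = nd(1 + d/(alpha/beta)) for n well-separated HDR fractions, and a protraction-corrected form for continuous LDR irradiation. The tissue parameters and the HDR dose per fraction in the sketch below are illustrative choices, not values taken from this study.

```python
# Standard linear-quadratic (LQ) biologically effective dose (BED) expressions
# used to compare LDR and fractionated HDR schedules. Tissue parameters
# (alpha/beta, repair rate mu) and the HDR dose per fraction are illustrative
# choices, not values from the study.
import math

def bed_hdr(n_fractions, dose_per_fraction, alpha_beta):
    """BED for n well-separated HDR fractions of size d: n*d*(1 + d/(alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

def bed_ldr(total_dose, duration_h, alpha_beta, mu=0.46):
    """BED for continuous LDR irradiation with a Dale-type protraction correction."""
    rate = total_dose / duration_h
    protraction = 1.0 - (1.0 - math.exp(-mu * duration_h)) / (mu * duration_h)
    return total_dose * (1.0 + (2.0 * rate / (mu * alpha_beta)) * protraction)

alpha_beta_tumour = 10.0                                    # Gy, illustrative
print(f"LDR 39 Gy in 48 h : BED ~ {bed_ldr(39.0, 48.0, alpha_beta_tumour):.1f} Gy")
print(f"HDR 11 x 3.5 Gy   : BED ~ {bed_hdr(11, 3.5, alpha_beta_tumour):.1f} Gy")
```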

  2. Green Function Calculations of Properties for the Magnetocaloric Layered Structures Based Upon FeMnAsP

    NASA Astrophysics Data System (ADS)

    Schilling, Osvaldo F.

    2016-11-01

    The alternating Fe-Mn layered structures of the compounds FeMnAsxP1-x display properties which have been demonstrated experimentally as very promising as far as commercial applications of the magnetocaloric effect are concerned. However, the theoretical literature on this and other families of magnetocaloric compounds still adopts simple molecular-field models in the description of important statistical mechanical properties like the entropy variation that accompanies applied isothermal magnetic field cycling, as well as the temperature variation following adiabatic magnetic field cycles. In the present paper, a random phase approximation Green function theoretical treatment is applied to such structures. The advantages of such approach are well known since the details of the crystal structure are easily incorporated in the model, as well as a precise description of correlations between neighbor spins can be obtained. We focus on a simple one-exchange parameter Heisenberg model, and the observed first-order phase transitions are reproduced by the introduction of a biquadratic term in the Hamiltonian whose origin is related both to the magnetoelastic coupling with the phonon spectrum in these compounds as well as with the values of spins in the Fe and Mn ions. The calculations are compared with experimental magnetocaloric data for the FeMnAsxP1-x compounds. In particular, the magnetic field dependence for the entropy variation at the transition temperature predicted from the Landau theory of continuous phase transitions is reproduced even in the case of discontinuous transitions.

  3. Knowledge diffusion in social work: a new approach to bridging the gap.

    PubMed

    Herie, Marilyn; Martin, Garth W

    2002-01-01

    The continuing gap between research and practice has long been a problem in social work. A great deal of the empirical practice literature has emphasized practice evaluation (usually in the form of single-case methodologies) at the expense of research dissemination and utilization. An alternative focus for social work researchers can be found in the extensive theoretical and research literature on knowledge diffusion, technology transfer, and social marketing. Knowledge diffusion and social marketing theory is explored in terms of its relevance to social work education and practice, including a consideration of issues of culture and power. The authors present an integrated dissemination model for social work and use a case example to illustrate the practical application of the model. The OPTIONS (OutPatient Treatment In ONtario Services) project is an example of the effective dissemination of two research-based addiction treatment modalities to nearly 1,000 direct practice clinicians in Ontario, Canada.

  4. Anomalous Diffusion of Single Particles in Cytoplasm

    PubMed Central

    Regner, Benjamin M.; Vučinić, Dejan; Domnisoru, Cristina; Bartol, Thomas M.; Hetzer, Martin W.; Tartakovsky, Daniel M.; Sejnowski, Terrence J.

    2013-01-01

    The crowded intracellular environment poses a formidable challenge to experimental and theoretical analyses of intracellular transport mechanisms. Our measurements of single-particle trajectories in cytoplasm and their random-walk interpretations elucidate two of these mechanisms: molecular diffusion in crowded environments and cytoskeletal transport along microtubules. We employed acousto-optic deflector microscopy to map out the three-dimensional trajectories of microspheres migrating in the cytosolic fraction of a cellular extract. Classical Brownian motion (BM), continuous time random walk, and fractional BM were alternatively used to represent these trajectories. The comparison of the experimental and numerical data demonstrates that cytoskeletal transport along microtubules and diffusion in the cytosolic fraction exhibit anomalous (non-Fickian) behavior and possess statistically distinct signatures. Among the three random-walk models used, continuous time random walk provides the best representation of diffusion, whereas microtubular transport is accurately modeled with fractional BM. PMID:23601312
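
    Anomalous (non-Fickian) transport of the kind reported here is usually quantified by fitting the exponent alpha in MSD(t) ~ t^alpha. The sketch below estimates alpha from synthetic ordinary Brownian trajectories (so alpha should come out near 1); the experimental trajectories and model fits of the study are not reproduced.

```python
# Sketch of how anomalous (non-Fickian) transport is commonly quantified: fit the
# exponent alpha in MSD(t) ~ t**alpha from particle trajectories. The trajectories
# here are synthetic ordinary Brownian motion, so alpha should be close to 1.
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_steps, dt = 200, 500, 0.01
steps = rng.normal(scale=np.sqrt(2 * dt), size=(n_particles, n_steps, 3))
trajectories = np.cumsum(steps, axis=1)                 # 3D random walks

lags = np.arange(1, 100)
msd = np.array([np.mean(np.sum((trajectories[:, lag:, :] - trajectories[:, :-lag, :]) ** 2, axis=2))
                for lag in lags])

alpha, log_prefactor = np.polyfit(np.log(lags * dt), np.log(msd), 1)
print(f"fitted anomalous exponent alpha ~ {alpha:.2f}")
```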

  5. Imbedded-Fracture Formulation of THMC Processes in Fractured Media

    NASA Astrophysics Data System (ADS)

    Yeh, G. T.; Tsai, C. H.; Sung, R.

    2016-12-01

    Fractured media consist of porous materials and fracture networks. There exist four approaches to mathematically formulating THMC (Thermal-Hydrology-Mechanics-Chemistry) process models in such systems: (1) Equivalent Porous Media, (2) Dual Porosity or Dual Continuum, (3) Heterogeneous Media, and (4) Discrete Fracture Network. The first approach cannot explicitly explore the interactions between porous materials and fracture networks. The second approach introduces too many extra parameters (namely, exchange coefficients) between the two media. The third approach may make the problems too stiff because the degree of material heterogeneity may be too great. The fourth approach ignores the interaction between porous materials and fracture networks. This talk presents an alternative approach in which fracture networks are modeled with a lower dimension than the surrounding porous materials. Theoretical derivation of the mathematical formulations will be given. An example will be illustrated to show the feasibility of this approach.

  6. Aminostratigraphy and faunal correlations of late Quaternary marine terraces, Pacific Coast, USA

    USGS Publications Warehouse

    Kennedy, G.L.; Lajoie, K.R.; Wehmiller, J.F.

    1982-01-01

    Recent studies using the extent of racemization of amino acids to date fossil mollusc shells in the Arctic [1], the British Isles [2] and on the Atlantic [3,4] and Pacific [5-13] coasts of North America have relied mainly on theoretical kinetic models of racemization. Ages generated in this fashion are highly model dependent and require estimates of integrated long-term diagenetic temperatures. We present here an alternative, empirical approach to aminostratigraphy in which we plot amino acid enantiomeric ratios versus latitude (for localities along the Pacific coast of the United States), and generate isochronal correlations by connecting data points of geographically proximal localities that have similar D:L ratios and zoogeographic aspect. Isochrons are calibrated at a few localities by independent radiometric dates. The diagenetic temperature effect on racemization is reflected in the slope of the isochrons, but the need to quantify temperature is eliminated. © 1982 Nature Publishing Group.

  7. Real-Time UV-Visible Spectroscopy Analysis of Purple Membrane-Polyacrylamide Film Formation Taking into Account Fano Line Shapes and Scattering

    PubMed Central

    Gomariz, María; Blaya, Salvador; Acebal, Pablo; Carretero, Luis

    2014-01-01

    We theoretically and experimentally analyze the formation of thick Purple Membrane (PM) polyacrylamide (PA) films by means of optical spectroscopy by considering the absorption of bacteriorhodopsin and scattering. We have applied semiclassical quantum mechanical techniques for the calculation of absorption spectra by taking into account the Fano effects on the ground state of bacteriorhodopsin. A model of the formation of PM-polyacrylamide films has been proposed based on the growth of polymeric chains around purple membrane. Experimentally, the temporal evolution of the polymerization process of acrylamide has been studied as function of the pH solution, obtaining a good correspondence to the proposed model. Thus, due to the formation of intermediate bacteriorhodopsin-doped nanogel, by controlling the polymerization process, an alternative methodology for the synthesis of bacteriorhodopsin-doped nanogels can be provided. PMID:25329473
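
    The Fano line shape invoked in this abstract has the standard form sigma(eps) = (q + eps)^2 / (1 + eps^2) with eps = 2(E - E0)/Gamma. The resonance position, width, and asymmetry parameter below are illustrative values, not fitted bacteriorhodopsin parameters.

```python
# The standard Fano line-shape profile referred to in the abstract:
# sigma(eps) = (q + eps)^2 / (1 + eps^2), with eps = 2*(E - E0)/Gamma.
# Resonance position, width, and asymmetry parameter q are illustrative only.
import numpy as np

def fano_profile(energy, e0, gamma, q):
    """Fano absorption profile as a function of photon energy."""
    eps = 2.0 * (energy - e0) / gamma
    return (q + eps) ** 2 / (1.0 + eps ** 2)

energy = np.linspace(2.0, 2.6, 7)        # eV, illustrative spectral window
profile = fano_profile(energy, e0=2.3, gamma=0.1, q=1.5)
print(np.round(profile, 2))
```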

  8. Real-time UV-visible spectroscopy analysis of purple membrane-polyacrylamide film formation taking into account Fano line shapes and scattering.

    PubMed

    Gomariz, María; Blaya, Salvador; Acebal, Pablo; Carretero, Luis

    2014-01-01

    We theoretically and experimentally analyze the formation of thick Purple Membrane (PM) polyacrylamide (PA) films by means of optical spectroscopy, considering both the absorption of bacteriorhodopsin and scattering. We have applied semiclassical quantum mechanical techniques to calculate absorption spectra, taking into account Fano effects on the ground state of bacteriorhodopsin. A model of the formation of PM-polyacrylamide films is proposed, based on the growth of polymeric chains around the purple membrane. Experimentally, the temporal evolution of the acrylamide polymerization process has been studied as a function of the solution pH, showing good agreement with the proposed model. Because an intermediate bacteriorhodopsin-doped nanogel forms during polymerization, controlling the polymerization process offers an alternative methodology for the synthesis of bacteriorhodopsin-doped nanogels.

  9. Individual differences in processing styles: validity of the Rational-Experiential Inventory.

    PubMed

    Björklund, Fredrik; Bäckström, Martin

    2008-10-01

    In Study 1 (N = 203) the factor structure of a Swedish translation of Pacini and Epstein's Rational-Experiential Inventory (REI-40) was investigated using confirmatory factor analysis. The hypothesized model with rationality and experientiality as orthogonal factors had satisfactory fit to the data, significantly better than alternative models (with two correlated factors or a single factor). Inclusion of "ability" and "favorability" subscales for rationality and experientiality increased fit further. It was concluded that the structural validity of the REI is adequate. In Study 2 (N = 72) the REI factors were shown to have theoretically meaningful correlations to other personality traits, indicating convergent and discriminant validity. Finally, scores on the rationality scale were negatively related to risky choice framing effects in Kahneman and Tversky's Asian disease task, indicating concurrent validity. On the basis of these findings it was concluded that the test has satisfactory psychometric properties.

  10. Other People’s Money: The Role of Reciprocity and Social Uncertainty in Decisions for Others

    PubMed Central

    2017-01-01

    Many important decisions are taken not by the person who will ultimately gain or lose from the outcome, but on their behalf, by somebody else. We examined economic decision-making about risk and time in situations in which deciders chose for others who also chose for them. We propose that this unique setting, which has not been studied before, elicits a perception of reciprocity that prompts a unique bias in preferences. We found that decision-makers are less patient (more discounting), and more risk averse for losses than gains, with other people's money, especially when their choices for others are more uncertain. Those results were derived by exploiting a computational modeling framework that has been shown to account for the underlying psychological and neural decision processes. We propose a novel theoretical mechanism—precautionary preferences under social uncertainty—which explains the findings. Implications for future research and alternative models are also discussed. PMID:29456782

  11. Mean-field and linear regime approach to magnetic hyperthermia of core-shell nanoparticles: can tiny nanostructures fight cancer?

    PubMed

    Carrião, Marcus S; Bakuzis, Andris F

    2016-04-21

    The phenomenon of heat dissipation by magnetic materials interacting with an alternating magnetic field, known as magnetic hyperthermia, is an emergent and promising therapy for many diseases, mainly cancer. Here, a magnetic hyperthermia model for core-shell nanoparticles is developed. The theoretical calculation, different from previous models, highlights the importance of heterogeneity by identifying the role of surface and core spins on nanoparticle heat generation. We found that the most efficient nanoparticles should be obtained by selecting materials to reduce the surface to core damping factor ratio, increasing the interface exchange parameter and tuning the surface to core anisotropy ratio for each material combination. From our results we propose a novel heat-based hyperthermia strategy with the focus on improving the heating efficiency of small sized nanoparticles instead of larger ones. This approach might have important implications for cancer treatment and could help improving clinical efficacy.
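    For orientation, in the linear response (Rosensweig) regime referenced in the title, the volumetric heat dissipated by a ferrofluid in an alternating field of amplitude $H_0$ and frequency $f$ is commonly written as (this is the standard single-phase expression, not the core-shell generalization derived in the paper):

    $$P = \pi \mu_0\, \chi''\, f\, H_0^2,
    \qquad \chi'' = \chi_0\,\frac{2\pi f \tau}{1 + (2\pi f \tau)^2},$$

    with $\tau$ the effective (Néel/Brownian) relaxation time; the paper's contribution is to resolve how surface and core spins separately shape the effective susceptibility and damping.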

  12. Numerical simulation of mechanical properties tests of tungsten mud waste geopolymer

    NASA Astrophysics Data System (ADS)

    Paszek, Natalia; Krystek, Małgorzata

    2018-03-01

    Geopolymers are believed to become an environmentally friendly alternative to concrete in the future. The low CO2 emission during the production process and the possibility of ecological management of industrial wastes are mentioned as the main advantages of geopolymers. The main drawback hindering the application of geopolymers as a building material is the lack of a theoretical material model. This problem is currently being addressed by a group of scientists from the Silesian University of Technology. A series of laboratory tests is being carried out within the European research project REMINE. The paper presents numerical analyses of tungsten mud waste geopolymer samples performed in the Atena software on the basis of the laboratory tests. Numerical models of bent and compressed samples of different shapes are presented in the paper. The results obtained in the Atena software were compared with results obtained in the Abaqus and Mafem3D software.

  13. Exploring unimolecular dissociation kinetics of ethyl dibromide through electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Gulvi, Nitin R.; Patel, Priyanka; Badani, Purav M.

    2018-04-01

    The dissociation pathways of multihalogenated alkyls are known to be competitive between molecular and atomic elimination products, and factors such as molecular structure, temperature and pressure influence this competition. The present work therefore explores the mechanism and kinetics of atomic (Br) and molecular (HBr and Br2) elimination upon pyrolysis of 1,1- and 1,2-ethyl dibromide (EDB). For this purpose, electronic structure calculations were performed at the DFT and CCSD(T) levels of theory. In addition to the concerted mechanism, an alternative, energetically efficient isomerisation pathway was explored for molecular elimination. The energy calculations are complemented by a detailed kinetic investigation over a wide range of temperatures and pressures, using models such as Canonical Transition State Theory, the Statistical Adiabatic Channel Model and Troe's formalism. Our calculations suggest a high branching ratio for the dehydrohalogenation reaction from both isomers of EDB. The fall-off curve shows good agreement between theoretically estimated and experimentally reported values.
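    As a reminder of the rate framework used, canonical transition state theory gives the high-pressure-limit rate constant (the SACM and Troe treatments in the paper refine the pressure dependence):

    $$k(T) = \kappa(T)\,\frac{k_B T}{h}\,\frac{Q^{\ddagger}(T)}{Q_R(T)}\,
    \exp\!\left(-\frac{E_0}{k_B T}\right),$$

    where $Q^{\ddagger}$ and $Q_R$ are the partition functions of the transition state and the reactant, $E_0$ is the barrier height including zero-point energy, and $\kappa(T)$ is a tunnelling correction.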

  14. Arsenic removal from water employing a combined system: photooxidation and adsorption.

    PubMed

    Lescano, Maia; Zalazar, Cristina; Brandi, Rodolfo

    2015-03-01

    A combined system employing photochemical oxidation (UV/H2O2) and adsorption for arsenic removal from water was designed and evaluated. A bench-scale photochemical annular reactor was developed and connected alternately to a pair of adsorption columns filled with titanium dioxide (TiO2) and granular ferric hydroxide (GFH). The experiments were performed by varying the As(III)/As(V) weight ratio at constant hydrogen peroxide concentration and incident radiation. Experimental oxidation results were compared with theoretical predictions using a previously obtained intrinsic kinetic model. In addition, the effectiveness of the process was evaluated using a groundwater sample. A mathematical model of the entire system was developed; it can be used as an effective tool for the design and prediction of the behaviour of these types of systems. The combined technology is efficient and promising for arsenic removal at small and medium scales.

  15. Theoretical analysis and modeling of Thickness-Expansion Mode (TEM) sensors for fluid characterization.

    PubMed

    Elvira, Luis; Resa, Pablo; Castro, Pedro

    2013-03-01

    In this paper, the principles of Thickness-Expansion Mode (TEM) resonators for the characterization of fluids are described. From the measurement of the resonance parameters of a TEM piezoelectric transducer, the compressional acoustic impedance of gases and liquids can be determined. Since the propagation of mechanical waves into the fluid is not necessary, information in a wide range of frequencies can be obtained. Alternatively, these sensors can be driven in combination with other ultrasonic techniques to simultaneously determine the density, speed of sound and viscosity of samples. Some potential applications include the probe monitoring of processes and the characterization of fluids under harsh conditions. The main experimental criteria for the design and construction of high-resolution impedance meters (such as piezoelectric material, protective coating or thermal response) have been studied using equivalent electrical circuit modeling and finite element analysis. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Electrical transport engineering of semiconductor superlattice structures

    NASA Astrophysics Data System (ADS)

    Shokri, Aliasghar

    2014-04-01

    We theoretically investigate the influence of doping concentration on the electronic band structure and electrical transmission in a typical aperiodic semiconductor superlattice consisting of quantum well and barrier layers. For this purpose, we assume that each unit cell of the superlattice contains two alternating materials, GaAs (as the well) and GaAlAs (as the barrier), with six sublayers of the two materials. Our calculations are based on the generalized Kronig-Penney (KP) model and the transfer matrix method within the framework of the parabolic conduction-band effective mass approximation in the coherent regime. This model reduces the numerical calculation time and enables us to use the transfer matrix method to investigate transport in the superlattices. We show that by varying the doping concentration and geometrical parameters, one can easily block the transmission of electrons. The numerical results may be useful in the design of nano energy-filter devices.
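    The transfer-matrix bookkeeping behind such calculations fits in a few lines. The sketch below is a minimal constant-effective-mass, piecewise-flat-potential version (not the authors' generalized Kronig-Penney implementation, and the barrier heights, thicknesses and effective mass are illustrative numbers only):

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
ME   = 9.1093837015e-31  # free-electron mass, kg
EV   = 1.602176634e-19   # J per eV

def wavevector(E, V, m):
    """Complex wavevector in a region of potential V (eV) at energy E (eV)."""
    return np.sqrt(2.0 * m * (E - V) * EV + 0j) / HBAR

def transmission(E, layers, m_rel=0.067):
    """
    Transmission probability through a stack of flat layers.
    layers: list of (V_eV, thickness_m); the first and last entries act as
    identical semi-infinite leads (their thickness is ignored).
    m_rel: effective mass in units of the free-electron mass (GaAs ~ 0.067).
    """
    m = m_rel * ME
    ks = [wavevector(E, V, m) for V, _ in layers]
    S = np.eye(2, dtype=complex)
    for j in range(len(layers) - 1):
        kj, kj1 = ks[j], ks[j + 1]
        d = 0.0 if j == 0 else layers[j][1]
        P = np.diag([np.exp(1j * kj * d), np.exp(-1j * kj * d)])   # propagation
        Dj  = np.array([[1, 1], [kj, -kj]], dtype=complex)          # matching of
        Dj1 = np.array([[1, 1], [kj1, -kj1]], dtype=complex)        # psi, psi'
        S = np.linalg.inv(Dj1) @ Dj @ P @ S
    t = np.linalg.det(S) / S[1, 1]          # transmitted amplitude
    return abs(t) ** 2                       # identical leads: T = |t|^2

# Example: GaAs well between two Ga(Al)As barriers (illustrative numbers only)
stack = [(0.0, 0.0), (0.3, 2e-9), (0.0, 5e-9), (0.3, 2e-9), (0.0, 0.0)]
for E in (0.05, 0.10, 0.15, 0.20):
    print(f"E = {E:.2f} eV  ->  T = {transmission(E, stack):.3f}")
```

    Sweeping the well and barrier thicknesses (or adding more sublayers per unit cell) with this kind of routine is how transmission windows and blocked energy ranges are mapped out.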

  17. Standard Clock in primordial density perturbations and cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Chen, Xingang; Namjoo, Mohammad Hossein

    2014-12-01

    Standard Clocks in the primordial epoch leave a special type of features in the primordial perturbations, which can be used to directly measure the scale factor of the primordial universe as a function of time a (t), thus discriminating between inflation and alternatives. We have started to search for such signals in the Planck 2013 data using the key predictions of the Standard Clock. In this Letter, we summarize the key predictions of the Standard Clock and present an interesting candidate example in Planck 2013 data. Motivated by this candidate, we construct and compute full Standard Clock models and use the more complete prediction to make more extensive comparison with data. Although this candidate is not yet statistically significant, we use it to illustrate how Standard Clocks appear in Cosmic Microwave Background (CMB) and how they can be further tested by future data. We also use it to motivate more detailed theoretical model building.

  18. Signal Recovery and System Calibration from Multiple Compressive Poisson Measurements

    DOE PAGES

    Wang, Liming; Huang, Jiaji; Yuan, Xin; ...

    2015-09-17

    The measurement matrix employed in compressive sensing typically cannot be known precisely a priori and must be estimated via calibration. One may take multiple compressive measurements, from which the measurement matrix and underlying signals may be estimated jointly. This is of interest as well when the measurement matrix may change as a function of the details of what is measured. This problem has been considered recently for Gaussian measurement noise, and here we develop this idea with application to Poisson systems. A collaborative maximum likelihood algorithm and alternating proximal gradient algorithm are proposed, and associated theoretical performance guarantees are established based on newly derived concentration-of-measure results. A Bayesian model is then introduced, to improve flexibility and generality. Connections between the maximum likelihood methods and the Bayesian model are developed, and example results are presented for a real compressive X-ray imaging system.
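    To make the Poisson measurement model concrete, here is a minimal maximum-likelihood recovery sketch. It uses the classical EM (Richardson-Lucy) multiplicative update rather than the paper's collaborative ML or alternating proximal gradient algorithms, and the sensing matrix is assumed known (the calibration step is omitted); all sizes and data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_ml_em(y, A, n_iter=200):
    """
    Maximum-likelihood recovery of a nonnegative signal x from Poisson
    counts y ~ Poisson(A @ x), using the classical EM (Richardson-Lucy)
    multiplicative update.  Assumes A has nonnegative entries.
    """
    m, n = A.shape
    col_sums = np.clip(A.sum(axis=0), 1e-12, None)
    x = np.full(n, y.sum() / col_sums.sum())        # flat initial guess
    for _ in range(n_iter):
        Ax = np.clip(A @ x, 1e-12, None)
        x *= (A.T @ (y / Ax)) / col_sums            # multiplicative EM step
    return x

# Tiny synthetic demo: sparse nonnegative signal, random nonnegative sensing matrix
m, n = 60, 120
A = rng.uniform(0.0, 1.0, size=(m, n)) * (rng.random((m, n)) < 0.3)
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = rng.uniform(20.0, 50.0, 6)
y = rng.poisson(A @ x_true).astype(float)
x_hat = poisson_ml_em(y, A)
print("relative L2 error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

    The same negative log-likelihood, sum(A @ x - y * log(A @ x)), is the data-fit term the paper's proximal algorithms minimize jointly over the signal and the unknown measurement matrix.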

  19. How linear response shaped models of neural circuits and the quest for alternatives.

    PubMed

    Herfurth, Tim; Tchumatchenko, Tatjana

    2017-10-01

    In the past decades, many mathematical approaches for solving complex nonlinear systems in physics have been successfully applied to neuroscience. One of these tools is the concept of linear response functions. However, phenomena observed in the brain emerge from fundamentally nonlinear interactions and feedback loops rather than from a composition of linear filters. Here, we review the successes achieved by applying the linear response formalism to topics such as rhythm generation and synchrony, and by incorporating it into models that combine linear and nonlinear transformations. We also discuss the challenges encountered in linear response applications and argue that new theoretical concepts are needed to tackle the feedback loops and non-equilibrium dynamics that are experimentally observed in neural networks but lie outside the validity regime of the linear response formalism. Copyright © 2017 Elsevier Ltd. All rights reserved.
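    The object under discussion is the linear response (first-order Volterra) kernel: for a stationary firing rate $r_0$ and a weak stimulus or input perturbation $s(t)$, the rate is approximated as

    $$r(t) \approx r_0 + \int_0^{\infty} K(\tau)\, s(t-\tau)\, d\tau,$$

    i.e. a linear filter $K$ around an operating point; the review's argument is that strong feedback and non-equilibrium dynamics push neural circuits outside the regime where this first-order truncation is adequate.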

  20. Simple Process-Based Simulators for Generating Spatial Patterns of Habitat Loss and Fragmentation: A Review and Introduction to the G-RaFFe Model

    PubMed Central

    Pe'er, Guy; Zurita, Gustavo A.; Schober, Lucia; Bellocq, Maria I.; Strer, Maximilian; Müller, Michael; Pütz, Sandro

    2013-01-01

    Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model “G-RaFFe” generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature. PMID:23724108

  1. Simple process-based simulators for generating spatial patterns of habitat loss and fragmentation: a review and introduction to the G-RaFFe model.

    PubMed

    Pe'er, Guy; Zurita, Gustavo A; Schober, Lucia; Bellocq, Maria I; Strer, Maximilian; Müller, Michael; Pütz, Sandro

    2013-01-01

    Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model "G-RaFFe" generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature.
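    The roads-and-fields idea is simple enough to caricature in a few lines. The following is a deliberately stripped-down toy in the spirit of G-RaFFe, not the published model: roads are straight transects and fields are rectangles seeded on road cells, grown until a target habitat loss is reached (grid size, road counts and field caps are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_landscape(size=100, n_roads=4, max_field=200, target_loss=0.4):
    """Toy roads-and-fields habitat-loss generator (1 = forest, 0 = converted)."""
    grid = np.ones((size, size), dtype=int)

    # 1. Roads: full-width horizontal or vertical transects of converted cells.
    road_cells = []
    for _ in range(n_roads):
        pos = int(rng.integers(0, size))
        if rng.random() < 0.5:
            grid[pos, :] = 0
            road_cells += [(pos, c) for c in range(size)]
        else:
            grid[:, pos] = 0
            road_cells += [(r, pos) for r in range(size)]

    # 2. Fields: rectangles seeded at randomly chosen road cells, each capped
    #    at max_field cells, added until the target habitat loss is reached.
    target_forest = int((1.0 - target_loss) * size * size)
    while grid.sum() > target_forest:
        r0, c0 = road_cells[int(rng.integers(len(road_cells)))]
        h = int(rng.integers(1, int(np.sqrt(max_field)) + 1))
        w = max(1, min(max_field // h, size))
        grid[r0:r0 + h, c0:c0 + w] = 0   # numpy clips the slice at the border
    return grid

land = generate_landscape()
print("remaining habitat cover:", land.mean())
```

    This toy only captures two of the three dominant factors named in the abstract (accessibility via n_roads and ownership via max_field); field disconnection from roads is omitted for brevity.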

  2. Model of heterogeneous material dissolution in simulated biological fluid

    NASA Astrophysics Data System (ADS)

    Knyazeva, A. G.; Gutmanas, E. Y.

    2015-11-01

    In orthopedic research, increasing attention is being paid to bioresorbable/biodegradable implants as an alternative to permanent metallic bone healing devices. Biodegradable metal based implants possessing high strength and ductility can potentially be used in load bearing sites. Biodegradable Mg and Fe are ductile and Fe possesses high strength, but Mg degrades too fast and Fe degrades too slowly; because Ag is a noble metal, it should cause galvanic corrosion of the more active metallic iron, and thus the corrosion rate of Fe can be increased. Nanostructuring should result in higher strength and can result in a higher rate of dissolution/degradation from grain boundaries. In this work, a simple dissolution model of a heterogeneous three-phase nanocomposite material is considered, the two solid phases being the metals Fe and Ag and the third being nanopores. An analytical solution for the model is presented. Calculations demonstrate that the changes in the relative amount of each phase depend on the mass exchange and diffusion coefficients. The theoretical results agree with preliminary experimental results.

  3. The antigenic evolution of influenza: drift or thrift?

    PubMed Central

    Wikramaratna, Paul S.; Sandeman, Michi; Recker, Mario; Gupta, Sunetra

    2013-01-01

    It is commonly assumed that antibody responses against the influenza virus are polarized in the following manner: strong antibody responses are directed at highly variable antigenic epitopes, which consequently undergo ‘antigenic drift’, while weak antibody responses develop against conserved epitopes. As the highly variable epitopes are in a constant state of flux, current antibody-based vaccine strategies are focused on the conserved epitopes in the expectation that they will provide some level of clinical protection after appropriate boosting. Here, we use a theoretical model to suggest the existence of epitopes of low variability, which elicit a high degree of both clinical and transmission-blocking immunity. We show that several epidemiological features of influenza and its serological and molecular profiles are consistent with this model of ‘antigenic thrift’, and that identifying the protective epitopes of low variability predicted by this model could offer a more viable alternative to regularly update the influenza vaccine than exploiting responses to weakly immunogenic conserved regions. PMID:23382423

  4. Large strain cruciform biaxial testing for FLC detection

    NASA Astrophysics Data System (ADS)

    Güler, Baran; Efe, Mert

    2017-10-01

    Selection of a proper test method, specimen design and analysis method are key issues for studying the formability of sheet metals and detecting their forming limit curves (FLC). Materials with complex microstructures may need additional micro-mechanical investigation and accurate modelling. The cruciform biaxial test stands as an alternative to standard tests as it achieves frictionless, in-plane, multi-axial stress states with a single sample geometry. In this study, we introduce a small-scale (less than 10 cm) cruciform sample allowing micro-mechanical investigation at stress states ranging from plane strain to equibiaxial. With successful specimen design and surface finish, large forming limit strains are obtained in the test region of the sample. The large forming limit strains obtained by experiments are compared to the values obtained from the Marciniak-Kuczynski (M-K) local necking model and the Cockcroft-Latham damage model. This comparison shows that the experimental limiting strains are beyond the theoretical values, approaching the fracture strain of the two test materials: Al-6061-T6 aluminum alloy and DC-04 high formability steel.

  5. Alternative Measures of Between-Study Heterogeneity in Meta-Analysis: Reducing the Impact of Outlying Studies

    PubMed Central

    Lin, Lifeng; Chu, Haitao; Hodges, James S.

    2016-01-01

    Meta-analysis has become a widely used tool to combine results from independent studies. The collected studies are homogeneous if they share a common underlying true effect size; otherwise, they are heterogeneous. A fixed-effect model is customarily used when the studies are deemed homogeneous, while a random-effects model is used for heterogeneous studies. Assessing heterogeneity in meta-analysis is critical for model selection and decision making. Ideally, if heterogeneity is present, it should permeate the entire collection of studies, instead of being limited to a small number of outlying studies. Outliers can have great impact on conventional measures of heterogeneity and the conclusions of a meta-analysis. However, no widely accepted guidelines exist for handling outliers. This article proposes several new heterogeneity measures. In the presence of outliers, the proposed measures are less affected than the conventional ones. The performance of the proposed and conventional heterogeneity measures is compared theoretically, by studying their asymptotic properties, and empirically, using simulations and case studies. PMID:27167143
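    The sensitivity of the conventional measures to a single outlier is easy to demonstrate. The snippet below computes Cochran's Q and Higgins' I² (the robust measures proposed in the paper are not reproduced here) on eleven illustrative effect sizes, ten of them homogeneous and one outlying:

```python
import numpy as np

def cochran_Q_I2(effects, variances):
    """Conventional heterogeneity measures: Cochran's Q and Higgins' I^2."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    y_bar = np.sum(w * y) / np.sum(w)              # fixed-effect pooled estimate
    Q = np.sum(w * (y - y_bar) ** 2)
    df = len(y) - 1
    I2 = max(0.0, (Q - df) / Q) if Q > 0 else 0.0
    return Q, I2

# Ten homogeneous studies plus one outlying study (illustrative numbers)
effects   = [0.10, 0.12, 0.08, 0.11, 0.09, 0.13, 0.10, 0.07, 0.12, 0.11, 0.80]
variances = [0.01] * 11

Q_all, I2_all = cochran_Q_I2(effects, variances)
Q_no, I2_no   = cochran_Q_I2(effects[:-1], variances[:-1])
print(f"with outlier:    Q = {Q_all:6.1f}, I^2 = {I2_all:.2f}")
print(f"without outlier: Q = {Q_no:6.1f}, I^2 = {I2_no:.2f}")
```

    With the outlier included, I² suggests substantial heterogeneity; dropping that single study sends Q below its degrees of freedom and I² to zero, which is exactly the instability the proposed measures aim to reduce.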

  6. Thinking as the control of imagination: a conceptual framework for goal-directed systems.

    PubMed

    Pezzulo, Giovanni; Castelfranchi, Cristiano

    2009-07-01

    This paper offers a conceptual framework which (re)integrates goal-directed control, motivational processes, and executive functions, and suggests a developmental pathway from situated action to higher level cognition. We first illustrate a basic computational (control-theoretic) model of goal-directed action that makes use of internal modeling. We then show that by adding the problem of selection among multiple action alternatives motivation enters the scene, and that the basic mechanisms of executive functions such as inhibition, the monitoring of progresses, and working memory, are required for this system to work. Further, we elaborate on the idea that the off-line re-enactment of anticipatory mechanisms used for action control gives rise to (embodied) mental simulations, and propose that thinking consists essentially in controlling mental simulations rather than directly controlling behavior and perceptions. We conclude by sketching an evolutionary perspective of this process, proposing that anticipation leveraged cognition, and by highlighting specific predictions of our model.

  7. Growth of finiteness in the third year of life: replication and predictive validity.

    PubMed

    Hadley, Pamela A; Rispoli, Matthew; Holt, Janet K; Fitzgerald, Colleen; Bahnsen, Alison

    2014-06-01

    The authors of this study investigated the validity of tense and agreement productivity (TAP) scoring in diverse sentence frames obtained during conversational language sampling as an alternative measure of finiteness for use with young children. Longitudinal language samples were used to model TAP growth from 21 to 30 months of age for 37 typically developing toddlers. Empirical Bayes (EB) linear and quadratic growth coefficients and child sex were then used to predict elicited grammar composite scores on the Test of Early Grammatical Impairment (TEGI; Rice & Wexler, 2001) at 36 months. A random-effects quadratic model with no intercept best characterized TAP growth, replicating the findings of Rispoli, Hadley, and Holt (2009). The combined regression model was significant, with the 3 variables accounting for 55.5% of the variance in the TEGI composite scores. These findings establish TAP growth as a valid metric of finiteness in the 3rd year of life. Developmental and theoretical implications are discussed.
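    Read literally, a "random-effects quadratic model with no intercept" for the TAP score of child $i$ at age $t_{ij}$ can be written as below (the exact age centering and random-effects covariance structure here are assumptions for illustration, not taken from the paper):

    $$\mathrm{TAP}_{ij} = (\beta_1 + b_{1i})\,t_{ij} + (\beta_2 + b_{2i})\,t_{ij}^2 + \varepsilon_{ij},
    \qquad (b_{1i}, b_{2i}) \sim \mathcal{N}(\mathbf{0}, \Sigma),\quad \varepsilon_{ij}\sim\mathcal{N}(0,\sigma^2),$$

    with the child-level empirical Bayes estimates of the linear and quadratic coefficients then entering the regression that predicts the TEGI composite at 36 months.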

  8. Interstellar matter in early-type galaxies. II - The relationship between gaseous components and galaxy types

    NASA Technical Reports Server (NTRS)

    Bregman, Joel N.; Hogg, David E.; Roberts, Morton S.

    1992-01-01

    Interstellar components of early-type galaxies are established by galactic type and luminosity in order to search for relationships between the different interstellar components and to test the predictions of theoretical models. Some of the data include observations of neutral hydrogen, carbon monoxide, and radio continuum emission. An alternative distance model which yields LX varies as LB sup 2.45, a relation which is in conflict with simple cooling flow models, is discussed. The dispersion of the X-ray luminosity about this regression line is unlikely to result from stripping. The striking lack of clear correlations between hot and cold interstellar components, taken together with their morphologies, suggests that the cold gas is a disk phenomenon while the hot gas is a bulge phenomenon, with little interaction between the two. The progression of galaxy type from E to Sa is not only a sequence of decreasing stellar bulge-to-disk ratio, but also of hot-to-cold-gas ratio.

  9. Interaction of the sonic boom with atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Rusak, Zvi; Cole, Julian D.

    1994-01-01

    Theoretical research was carried out to study the effect of free-stream turbulence on sonic boom pressure fields. A new transonic small-disturbance model to analyze the interactions of random disturbances with a weak shock was developed. The model equation has an extended form of the classic small-disturbance equation for unsteady transonic aerodynamics. An alternative approach shows that the pressure field may be described by an equation that has an extended form of the classic nonlinear acoustics equation that describes the propagation of sound beams with narrow angular spectrum. The model shows that diffraction effects, nonlinear steepening effects, focusing and caustic effects and random induced vorticity fluctuations interact simultaneously to determine the development of the shock wave in space and time and the pressure field behind it. A finite-difference algorithm to solve the mixed type elliptic-hyperbolic flows around the shock wave was also developed. Numerical calculations of shock wave interactions with various deterministic and random fluctuations will be presented in a future report.

  10. Infrared metamaterial by RF magnetron sputtered ZnO/Al:ZnO multilayers

    NASA Astrophysics Data System (ADS)

    Santiago, Kevin C.; Mundle, Rajeh; White, Curtis; Bahoura, Messaoud; Pradhan, Aswini K.

    2018-03-01

    Hyperbolic metamaterials create artificial anisotropy using metallic wires suspended in dielectric media or alternating layers of a metal and a dielectric (Type I or Type II). In this study we fabricated ZnO/Al:ZnO (AZO) multilayers by the RF magnetron sputtering deposition technique. Our fabricated multilayers satisfy the requirements for a Type II hyperbolic metamaterial. The optical response of individual AZO and ZnO films, as well as of the multilayered film, was investigated via UV-vis-IR transmittance and spectroscopic ellipsometry. The optical response of the multilayered system was calculated using the nonlocal-corrected Effective Medium Approximation (EMA). The spectroscopic ellipsometry data of the multilayered system were modeled using a uniaxial material model and the EMA model. Both theoretical and experimental studies confirm that the fabricated multilayers undergo a hyperbolic transition at a wavelength of 2.2 μm. To our knowledge, this is the first AZO/ZnO Type II hyperbolic metamaterial system fabricated by the magnetron sputtering deposition method.
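    For context, the local (uncorrected) effective medium approximation for a metal/dielectric multilayer with metal fill fraction $f$ gives the two principal permittivities (the study uses a nonlocal-corrected version of this, which is not reproduced here):

    $$\varepsilon_{\parallel} = f\,\varepsilon_{m} + (1-f)\,\varepsilon_{d},
    \qquad
    \varepsilon_{\perp} = \left(\frac{f}{\varepsilon_{m}} + \frac{1-f}{\varepsilon_{d}}\right)^{-1},$$

    with the stacking direction taken as the optical axis. The multilayer behaves as a Type II hyperbolic medium where $\varepsilon_{\parallel} < 0$ and $\varepsilon_{\perp} > 0$, which is the transition reported here near 2.2 μm.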

  11. Cosmological Implications of the Electron-Positron Aether

    NASA Astrophysics Data System (ADS)

    Rothwarf, Allen

    1997-04-01

    An aether is not prohibited on theoretical or experimental grounds; only a credible physical model for it is lacking. By assuming that the particles and anti-particles created during the "big-bang" origin of the universe have not annihilated one another, but instead form a bound-state plasma, we have a model for a real aether. This aether is dominated by electron-positron pairs at very high density (10**30/cm3), in close analogy with electron-hole droplets formed in laser-irradiated semiconductors. The Fermi velocity of this plasma is the speed of light, and the plasma expands at this speed. This gives results for the expanding universe in agreement with the Einstein-de Sitter result for a universe dominated by radiation. The speed of light varies with time, as do the other fundamental constants. This leads to an alternate explanation for cosmological redshifts. Independent mini big bangs can occur and account for observed anomalous redshifts. The model can be tested using LIGO apparatus.

  12. A novel color image encryption scheme using alternate chaotic mapping structure

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Zhao, Yuanyuan; Zhang, Huili; Guo, Kang

    2016-07-01

    This paper proposes a color image encryption algorithm using an alternate chaotic mapping structure. Initially, the R, G and B components are used to form a matrix. Then a one-dimensional logistic map and a two-dimensional logistic map are used to generate a chaotic matrix, and the two chaotic maps are iterated alternately to permute the matrix. In every iteration, an XOR operation is adopted to encrypt the plain-image matrix, which is then further transformed to diffuse the matrix. Finally, the encrypted color image is obtained from the confused matrix. Theoretical analysis and experimental results show that the cryptosystem is secure and practical, and that it is suitable for encrypting color images.
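    A toy version of the confusion/diffusion loop conveys the idea. The sketch below uses two 1-D logistic streams (one to permute pixel positions, one as an XOR keystream) rather than the paper's alternating 1-D/2-D construction, and the key values are arbitrary:

```python
import numpy as np

def logistic_stream(x0, r, n):
    """Iterate the logistic map x -> r*x*(1-x) and return n values in (0, 1)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, key=(0.3456, 3.99, 0.7123, 3.97)):
    """Permute pixel positions with one chaotic stream, XOR with a second."""
    flat = img.reshape(-1, 3)
    n = flat.shape[0]
    x0a, ra, x0b, rb = key
    perm = np.argsort(logistic_stream(x0a, ra, n))                 # confusion
    keystream = (logistic_stream(x0b, rb, 3 * n) * 256).astype(np.uint8)
    return (flat[perm] ^ keystream.reshape(n, 3)).reshape(img.shape)  # diffusion

def decrypt(cipher, key=(0.3456, 3.99, 0.7123, 3.97)):
    flat = cipher.reshape(-1, 3)
    n = flat.shape[0]
    x0a, ra, x0b, rb = key
    perm = np.argsort(logistic_stream(x0a, ra, n))
    keystream = (logistic_stream(x0b, rb, 3 * n) * 256).astype(np.uint8)
    plain = np.empty_like(flat)
    plain[perm] = flat ^ keystream.reshape(n, 3)   # undo XOR, then un-permute
    return plain.reshape(cipher.shape)

img = np.random.default_rng(0).integers(0, 256, (32, 32, 3), dtype=np.uint8)
assert np.array_equal(decrypt(encrypt(img)), img)
```

    Because both the permutation and the keystream are regenerated deterministically from the key, decryption needs nothing but the key and the ciphertext, mirroring the structure described in the abstract.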

  13. Evaluation of effectiveness of Er,Cr:YSGG laser for root canal disinfection: theoretical simulation of temperature elevations in root dentin.

    PubMed

    Zhu, L; Tolba, M; Arola, D; Salloum, M; Meza, F

    2009-07-01

    Erbium, chromium: yttrium, scandium, gallium, garnet (Er,Cr:YSGG) lasers are currently being investigated for disinfecting the root canal system. Prior to using laser therapy, it is important to understand the temperature distribution and to assess thermal damage to the surrounding tissue. In this study, a theoretical simulation using the Pennes bioheat equation is conducted to evaluate how heat spreads from the canal surface using an Er,Cr:YSGG laser. Results of the investigation show that some of the proposed treatment protocols for killing bacteria in the deep dentin are ineffective, even for long heating durations. Based on the simulation, an alternative treatment protocol is identified that has improved effectiveness and is less likely to introduce collateral damage to the surrounding tissue. The alternative protocol uses 350 mW laser power with repeating laser tip movement to achieve bacterial disinfection in the deep dentin (800 microm lateral from the canal surface), while avoiding thermal damage to the surrounding tissue (T<47 degrees C). The alternative treatment protocol has the potential to not only achieve bacterial disinfection of deep dentin but also shorten the treatment time, thereby minimizing potential patient discomfort during laser procedures.
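    The simulation rests on the Pennes bioheat equation, which for the dentin temperature $T$ reads

    $$\rho c\,\frac{\partial T}{\partial t} = \nabla\cdot(k\nabla T) + \rho_b c_b \omega_b\,(T_a - T) + Q_{\text{laser}},$$

    where $\rho c$ and $k$ are the tissue heat capacity and conductivity, the middle term is the blood-perfusion sink ($\omega_b$ perfusion rate, $T_a$ arterial temperature), and $Q_{\text{laser}}$ is the absorbed laser power density; the 47 °C damage criterion quoted above is then checked against the computed temperature field.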

  14. The Application of Piagetian and Neo-Piagetian Ideas to Further and Higher Education.

    ERIC Educational Resources Information Center

    Sutherland, Peter

    1999-01-01

    Outlines theoretical perspectives of neo-Piagetians (Kohlberg, Peel, Labouvie-Vief), synthesizers (Kolb, Biggs, Pascual-Leone), and alternative theorists (Perry, Gilligan). Considers their applicability to adults and the implications for adult and higher education. (SK)

  15. On Hierarchical Threshold Access Structures

    DTIC Science & Technology

    2010-11-01

    One of the recent generalizations of (t, n) secret sharing for hierarchical threshold access structures is given by Tassa, where he answers the...of theoretical background. We give a conceptually simpler alternative for the understanding of the realization of hierarchical threshold access

  16. Uncovering state-dependent relationships in shallow lakes using Bayesian latent variable regression.

    PubMed

    Vitense, Kelsey; Hanson, Mark A; Herwig, Brian R; Zimmer, Kyle D; Fieberg, John

    2018-03-01

    Ecosystems sometimes undergo dramatic shifts between contrasting regimes. Shallow lakes, for instance, can transition between two alternative stable states: a clear state dominated by submerged aquatic vegetation and a turbid state dominated by phytoplankton. Theoretical models suggest that critical nutrient thresholds differentiate three lake types: highly resilient clear lakes, lakes that may switch between clear and turbid states following perturbations, and highly resilient turbid lakes. For effective and efficient management of shallow lakes and other systems, managers need tools to identify critical thresholds and state-dependent relationships between driving variables and key system features. Using shallow lakes as a model system for which alternative stable states have been demonstrated, we developed an integrated framework using Bayesian latent variable regression (BLR) to classify lake states, identify critical total phosphorus (TP) thresholds, and estimate steady state relationships between TP and chlorophyll a (chl a) using cross-sectional data. We evaluated the method using data simulated from a stochastic differential equation model and compared its performance to k-means clustering with regression (KMR). We also applied the framework to data comprising 130 shallow lakes. For simulated data sets, BLR had high state classification rates (median/mean accuracy >97%) and accurately estimated TP thresholds and state-dependent TP-chl a relationships. Classification and estimation improved with increasing sample size and decreasing noise levels. Compared to KMR, BLR had higher classification rates and better approximated the TP-chl a steady state relationships and TP thresholds. We fit the BLR model to three different years of empirical shallow lake data, and managers can use the estimated bifurcation diagrams to prioritize lakes for management according to their proximity to thresholds and chance of successful rehabilitation. Our model improves upon previous methods for shallow lakes because it allows classification and regression to occur simultaneously and inform one another, directly estimates TP thresholds and the uncertainty associated with thresholds and state classifications, and enables meaningful constraints to be built into models. The BLR framework is broadly applicable to other ecosystems known to exhibit alternative stable states in which regression can be used to establish relationships between driving variables and state variables. © 2017 by the Ecological Society of America.
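    Schematically (this is a generic two-state latent regression, not necessarily the authors' exact specification), the framework assigns each lake $i$ a latent state $z_i$ and a state-specific TP-chl a relationship:

    $$z_i \in \{\text{clear},\,\text{turbid}\},\qquad
    \log(\text{chl}\,a_i)\mid z_i \;\sim\; \mathcal{N}\!\big(\alpha_{z_i} + \beta_{z_i}\,\log(\text{TP}_i),\ \sigma_{z_i}^2\big),$$

    with constraints that make the clear state the only possibility below a lower TP threshold and the turbid state the only possibility above an upper threshold, so that the thresholds, the state labels and the two regression lines are estimated jointly.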

  17. Abrupt climate change and extinction events

    NASA Technical Reports Server (NTRS)

    Crowley, Thomas J.

    1988-01-01

    There is a growing body of theoretical and empirical support for the concept of instabilities in the climate system, and indications that abrupt climate change may in some cases contribute to abrupt extinctions. Theoretical indications of instabilities can be found in a broad spectrum of climate models (energy balance models, a thermohaline model of deep-water circulation, atmospheric general circulation models, and coupled ocean-atmosphere models). Abrupt transitions can be of several types and affect the environment in different ways. There is increasing evidence for abrupt climate change in the geologic record and involves both interglacial-glacial scale transitions and the longer-term evolution of climate over the last 100 million years. Records from the Cenozoic clearly show that the long-term trend is characterized by numerous abrupt steps where the system appears to be rapidly moving to a new equilibrium state. The long-term trend probably is due to changes associated with plate tectonic processes, but the abrupt steps most likely reflect instabilities in the climate system as the slowly changing boundary conditions caused the climate to reach some threshold critical point. A more detailed analysis of abrupt steps comes from high-resolution studies of glacial-interglacial fluctuations in the Pleistocene. Comparison of climate transitions with the extinction record indicates that many climate and biotic transitions coincide. The Cretaceous-Tertiary extinction is not a candidate for an extinction event due to instabilities in the climate system. It is quite possible that more detailed comparisons and analysis will indicate some flaws in the climate instability-extinction hypothesis, but at present it appears to be a viable candidate as an alternate mechanism for causing abrupt environmental changes and extinctions.

  18. Widening higher education participation in rural communities in England: An anchor institution model

    NASA Astrophysics Data System (ADS)

    Elliott, Geoffrey

    2018-02-01

    Against a United Kingdom policy background of attempts to widen higher education participation in a socially inclusive direction, this article analyses theory, policy and practice to understand why past efforts have had limited success and to propose an alternative: an "anchor institution" model. A university and a private training provider were the principal partners in this venture, known as the South-West Partnership (pseudonym); the model was developed by them to meet the particular needs of mature female students who want and/or need to study part-time in a rural, coastal and isolated area of south-west England. While the concept of "anchor institutions" has previously been used in government social policy, and in higher education to promote knowledge transfer, it has not yet been adopted as a method for widening participation. The research study presented in this article investigated the effectiveness of the model in widening higher education participation in the context of the South-West Partnership. The study was conducted within an interpretivist theoretical framework. It accessed student voices to illustrate the character of education required to widen participation in vocational higher education by mature female students in rural communities, through semi-structured qualitative interviews on a range of topics identified from relevant theoretical literature, and by drawing on the research team's professional knowledge and experience. These topics included student aspirations and career destinations, motivations, access, learning experiences, and peer and tutor support. It is hoped the findings will inform the future development of adult vocational higher education provision in rural areas, where opportunities have been limited, and encourage further application of the anchor institution model for widening participation elsewhere.

  19. A review and evaluation of the internal structure and consistency of the Approaches to Teaching Inventory

    NASA Astrophysics Data System (ADS)

    Harshman, Jordan; Stains, Marilyne

    2017-05-01

    This study presents a review of 39 studies that provide evidence for the structural validity and internal consistency of the Approaches to Teaching Inventory (ATI). In addition to this review, we evaluate many alternative factor structures on a sample of 267 first- and second-year chemistry faculty members participating in a professional development program, a sample of instructors for whom the ATI was originally designed. A total of 26 unique factor structures were evaluated. Through robust checking of assumptions, compilation of existing evidence, and new exploratory and confirmatory analyses, we found that there is greater evidence for the structural validity and internal consistency of the 22-item ATI than of the 16-item ATI. Additionally, the evidence supporting the original two-factor and four-factor structures proposed by the ATI authors (focusing on information transmission and conceptual change) was not reproducible, and while alternative models were empirically viable, more theoretical justification is warranted. Recommendations for ATI use and general comments regarding best practices of reporting psychometrics in educational research contexts are discussed.

  20. Education for change in a post-modern world: redefining revolution.

    PubMed

    Cameron, P; Willis, K; Crack, G

    1995-10-01

    Social and behavioural sciences are established components in the curriculum of undergraduate nursing degrees. The purpose is to introduce future and practising nurses to the social and political influences which inform their workplaces and practices. Inevitably an awareness of the structural barriers and the powerful political interests involved in health can lead to feelings of powerlessness and despair of achieving change. Yet the skills of critical analysis and political awareness developed in study such as this are essential for health workers in the increasingly complex and politically charged domain in which they work. This paper will explore problems and barriers encountered in development of curriculum and teaching social and behavioural sciences in health. It will propose an alternative conceptual model, based on post-structuralism, as one way of addressing these barriers. This approach shifts the focus from meta-theoretical sociological concepts such as class, gender and culture, to one of examining subject positions, discourse, contestation and local action, thus enabling the exploration and development of possibilities for change. The paper will also provide a case study to illustrate this alternative approach.
