Sample records for randomized multifactor equation

  1. Probabilistic lifetime strength of aerospace materials via computational simulation

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Keating, Jerome P.; Lovelace, Thomas B.; Bast, Callie C.

    1991-01-01

    The results of a second year effort of a research program are presented. The research included development of methodology that provides probabilistic lifetime strength of aerospace materials via computational simulation. A probabilistic phenomenological constitutive relationship, in the form of a randomized multifactor interaction equation, is postulated for strength degradation of structural components of aerospace propulsion systems subjected to a number of effects of primitive variables. These primitive variables often originate in the environment and may include stress from loading, temperature, chemical, or radiation attack. This multifactor interaction constitutive equation is included in the computer program, PROMISS. Also included in the research is the development of methodology to calibrate the constitutive equation using actual experimental materials data together with the multiple linear regression of that data.

  2. Computational simulation of coupled material degradation processes for probabilistic lifetime strength of aerospace materials

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Bast, Callie C.

    1992-01-01

    The research included ongoing development of methodology that provides probabilistic lifetime strength of aerospace materials via computational simulation. A probabilistic material strength degradation model, in the form of a randomized multifactor interaction equation, is postulated for strength degradation of structural components of aerospace propulsion systems subjected to a number of effects or primitive variables. These primitive variables may include high temperature, fatigue, or creep. In most cases, strength is reduced as a result of the action of a variable. This multifactor interaction strength degradation equation has been randomized and is included in the computer program, PROMISS. Also included in the research is the development of methodology to calibrate the above-described constitutive equation using actual experimental materials data together with linear regression of that data, thereby predicting values for the empirical material constants for each effect or primitive variable. This regression methodology is included in the computer program, PROMISC. Actual experimental materials data were obtained from the open literature for materials typically of interest to those studying aerospace propulsion system components. Material data for Inconel 718 were analyzed using the developed methodology.

  3. Computational simulation of probabilistic lifetime strength for aerospace materials subjected to high temperature, mechanical fatigue, creep and thermal fatigue

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Bast, Callie C.; Trimble, Greg A.

    1992-01-01

    This report presents the results of a fourth year effort of a research program, conducted for NASA-LeRC by the University of Texas at San Antonio (UTSA). The research included on-going development of methodology that provides probabilistic lifetime strength of aerospace materials via computational simulation. A probabilistic material strength degradation model, in the form of a randomized multifactor interaction equation, is postulated for strength degradation of structural components of aerospace propulsion systems subject to a number of effects or primitive variables. These primitive variables may include high temperature, fatigue, or creep. In most cases, strength is reduced as a result of the action of a variable. This multifactor interaction strength degradation equation has been randomized and is included in the computer program, PROMISS. Also included in the research is the development of methodology to calibrate the above-described constitutive equation using actual experimental materials data together with regression analysis of that data, thereby predicting values for the empirical material constants for each effect or primitive variable. This regression methodology is included in the computer program, PROMISC. Actual experimental materials data were obtained from industry and the open literature for materials typically used in aerospace propulsion system components. Material data for Inconel 718 have been analyzed using the developed methodology.

  4. Computational simulation of probabilistic lifetime strength for aerospace materials subjected to high temperature, mechanical fatigue, creep, and thermal fatigue

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Bast, Callie C.; Trimble, Greg A.

    1992-01-01

    The results of a fourth year effort of a research program conducted for NASA-LeRC by The University of Texas at San Antonio (UTSA) are presented. The research included on-going development of methodology that provides probabilistic lifetime strength of aerospace materials via computational simulation. A probabilistic material strength degradation model, in the form of a randomized multifactor interaction equation, is postulated for strength degradation of structural components of aerospace propulsion systems subjected to a number of effects or primitive variables. These primitive variables may include high temperature, fatigue, or creep. In most cases, strength is reduced as a result of the action of a variable. This multifactor interaction strength degradation equation was randomized and is included in the computer program, PROMISS. Also included in the research is the development of methodology to calibrate the above-described constitutive equation using actual experimental materials data together with regression analysis of that data, thereby predicting values for the empirical material constants for each effect or primitive variable. This regression methodology is included in the computer program, PROMISC. Actual experimental materials data were obtained from industry and the open literature for materials typically used in aerospace propulsion system components. Material data for Inconel 718 were analyzed using the developed methodology.

  5. Probabilistic Material Strength Degradation Model for Inconel 718 Components Subjected to High Temperature, High-Cycle and Low-Cycle Mechanical Fatigue, Creep and Thermal Fatigue Effects

    NASA Technical Reports Server (NTRS)

    Bast, Callie C.; Boyce, Lola

    1995-01-01

    The development of methodology for a probabilistic material strength degradation model is described. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes five effects that typically reduce lifetime strength: high temperature, high-cycle mechanical fatigue, low-cycle mechanical fatigue, creep, and thermal fatigue. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing predictions of high-cycle mechanical fatigue and high temperature effects with experiments are presented. Results from this limited verification study strongly supported the conclusion that material degradation can be represented by randomized multifactor interaction models.
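    For readers unfamiliar with this class of model, the following is a minimal Monte Carlo sketch of a randomized multifactor interaction equation, assuming a product form S/S0 = prod_i [(A_iF - A_i)/(A_iF - A_i0)]^a_i in which each primitive variable A_i is sampled from a distribution. The effects, parameter values, and distributions below are illustrative assumptions, not the calibrated PROMISS constants.

      # Minimal sketch (not the PROMISS code): a randomized multifactor interaction
      # equation of assumed product form
      #   S/S0 = prod_i [(A_iF - A_i) / (A_iF - A_i0)]**a_i
      # where A_i is the current value of primitive variable i, A_i0 its reference
      # value, A_iF its ultimate (failure) value, and a_i an empirical exponent.
      # Effect definitions and parameter values below are illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 100_000  # Monte Carlo samples

      # (name, current mean, current std dev, reference A0, ultimate AF, exponent a)
      effects = [
          ("temperature_K",     900.0, 25.0, 294.0, 1600.0, 0.5),
          ("hcf_cycles_log10",    5.0,  0.2,   0.0,    8.0, 0.6),
          ("creep_time_h",      200.0, 30.0,   0.0, 1000.0, 0.4),
      ]

      ratio = np.ones(N)
      for _, mean, std, a0, af, expo in effects:
          a = rng.normal(mean, std, N)          # randomized primitive variable
          term = (af - a) / (af - a0)           # fraction of "life" remaining
          ratio *= np.clip(term, 0.0, None) ** expo

      # Empirical CDF of the lifetime strength ratio S/S0
      ratio.sort()
      for q in (0.01, 0.5, 0.99):
          print(f"P{int(q * 100):02d} of S/S0 = {ratio[int(q * (N - 1))]:.3f}")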

  6. Fatigue failure of materials under broad band random vibrations

    NASA Technical Reports Server (NTRS)

    Huang, T. C.; Lanz, R. W.

    1971-01-01

    The fatigue life of materials under the multifactor influence of broad-band random excitations has been investigated. Parameters which affect the fatigue life are postulated to be peak stress, variance of stress, and the natural frequency of the system. Experimental data were processed by a hybrid computer. Based on the experimental results and regression analysis, a best predictive model was found. All values of the experimental fatigue lives are within the 95% confidence intervals of the predictive equation.

  7. On the Interface of Probabilistic and PDE Methods in a Multifactor Term Structure Theory

    ERIC Educational Resources Information Center

    Mamon, Rogemar S.

    2004-01-01

    Within the general framework of a multifactor term structure model, the fundamental partial differential equation (PDE) satisfied by a default-free zero-coupon bond price is derived via a martingale-oriented approach. Using this PDE, a result characterizing a model belonging to an exponential affine class is established using only a system of…

  8. Bullying among adolescents in North Cyprus and Turkey: testing a multifactor model.

    PubMed

    Bayraktar, Fatih

    2012-04-01

    Peer bullying has been studied since the 1970s. Therefore, a vast literature has accumulated about the various predictors of bullying. However, to date there has been no study which has combined individual-, peer-, parental-, teacher-, and school-related predictors of bullying within a model. In this sense, the main aim of this study was to test a multifactor model of bullying among adolescents in North Cyprus and Turkey. A total of 1,052 adolescents (554 girls, 498 boys) aged between 13 and 18 (M = 14.7, SD = 1.17) were recruited from North Cyprus and Turkey. Before testing the multifactor models, the measurement models were tested according to structural equation modeling propositions. Both models indicated that the psychological climate of the school, teacher attitudes within the classroom, peer relationships, parental acceptance-rejection, and individual social competence factors had significant direct effects on bullying behaviors. Goodness-of-fit indexes indicated that the proposed multifactor model fitted both datasets well. The strongest predictors of bullying were the psychological climate of the school, followed by individual social competence factors and teacher attitudes within the classroom, in both samples. All of the latent variables explained 44% and 51% of the variance in bullying in North Cyprus and Turkey, respectively.

  9. Probabilistic Multi-Factor Interaction Model for Complex Material Behavior

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2010-01-01

    Complex material behavior is represented by a single equation of product form to account for interaction among the various factors. The factors are selected by the physics of the problem and the environment that the model is to represent. For example, different factors will be required to represent temperature, moisture, erosion, corrosion, etc. It is important that the equation accurately represent the physics of the behavior in its entirety. The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the external launch tanks. The multi-factor form has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points: the initial and final points. The exponent describes a monotonic path from the initial condition to the final condition. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated. The problem lies in how to represent the divot weight with a single equation. A unique solution to this problem is a multi-factor equation of product form. Each factor is of the form (1 - x_i/x_f)^(e_i), where x_i is the initial value, usually at ambient conditions, x_f the final value, and e_i the exponent that shapes the represented curve so that it meets the initial and final values. The exponents are evaluated either from test data or by technical judgment. A minor disadvantage may be the selection of exponents in the absence of any empirical data. This form has been used successfully in describing the foam ejected in simulated space environmental conditions. Seven factors were required to represent the ejected foam. The exponents were evaluated by the least-squares method from experimental data. The equation can also represent multiple factors in other problems; for example, evaluation of fatigue life, creep life, fracture toughness, and structural fracture, as well as optimization functions. The software is rather simple. Required inputs are the initial value, final value, and an exponent for each factor. The number of factors is open-ended. The value is updated as each factor is evaluated. If a factor goes to zero, the previous value is used in the evaluation.
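    As one reading of the product form described above, the sketch below evaluates factors of the assumed form (1 - x_i/x_f)^(e_i), interpreting x_i as a test condition relative to its final value x_f, and fits the exponents by least squares on log-transformed data. The factor definitions, synthetic data, and noise level are illustrative assumptions rather than the foam-divot data of the paper.

      # Minimal sketch (assumptions flagged): a product-form MFIM, reading each
      # factor as (1 - x_i / x_f)**e_i and the response (e.g., divot weight) as a
      # reference value times the product of factors.  The synthetic "test" data
      # and factor definitions are purely illustrative.
      import numpy as np

      rng = np.random.default_rng(1)

      n_factors, n_tests = 3, 40
      x_final = np.array([10.0, 5.0, 2.0])       # assumed final values x_f
      true_e = np.array([0.8, 1.5, 0.3])         # "unknown" exponents e_i
      w_ref = 2.0                                # reference response

      x = rng.uniform(0.1, 0.9, (n_tests, n_factors)) * x_final  # test conditions
      terms = 1.0 - x / x_final                  # (1 - x_i/x_f) for each factor
      w = w_ref * np.prod(terms ** true_e, axis=1)
      w *= np.exp(rng.normal(0.0, 0.05, n_tests))  # measurement scatter

      # Least-squares fit of the exponents, as the abstract describes:
      # log w = log w_ref + sum_i e_i * log(1 - x_i/x_f)   (linear in e_i)
      A = np.column_stack([np.ones(n_tests), np.log(terms)])
      coef, *_ = np.linalg.lstsq(A, np.log(w), rcond=None)
      print("fitted reference value =", np.exp(coef[0]))
      print("fitted exponents       =", coef[1:])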

  10. Simulation of probabilistic wind loads and building analysis

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Chamis, Christos C.

    1991-01-01

    Probabilistic wind loads likely to occur on a structure during its design life are predicted. Described here is a suitable multifactor interactive equation (MFIE) model and its use in the Composite Load Spectra (CLS) computer program to simulate the wind pressure cumulative distribution functions on four sides of a building. The simulated probabilistic wind pressure load was applied to a building frame, and cumulative distribution functions of sway displacements and reliability against overturning were obtained using NESSUS (Numerical Evaluation of Stochastic Structure Under Stress), a stochastic finite element computer code. The geometry of the building and the properties of building members were also considered as random in the NESSUS analysis. The uncertainties of wind pressure, building geometry, and member section properties were quantified in terms of their respective sensitivities on the structural response.

  11. Efficacy and safety of a multifactor intervention to improve therapeutic adherence in patients with chronic obstructive pulmonary disease (COPD): protocol for the ICEPOC study.

    PubMed

    Barnestein-Fonseca, Pilar; Leiva-Fernández, José; Vidal-España, Francisca; García-Ruiz, Antonio; Prados-Torres, Daniel; Leiva-Fernández, Francisca

    2011-02-14

    Low therapeutic adherence to medication is very common. Clinical effectiveness is related to dose rate and route of administration, and so poor therapeutic adherence can reduce the clinical benefit of treatment. The therapeutic adherence of patients with chronic obstructive pulmonary disease (COPD) is extremely poor according to most studies. Research on COPD adherence has mainly focused on quantifying its effect, and few studies have examined the factors that affect non-adherence. Our study will evaluate the effectiveness of a multifactor intervention to improve the therapeutic adherence of COPD patients. A randomized controlled clinical trial with 140 COPD-diagnosed patients selected by a non-probabilistic sampling method. Subjects will be randomly allocated into two groups, using the block randomization technique. Every patient in each group will be visited four times during the year of the study. The intervention addresses motivational aspects related to adherence (beliefs and behaviour) through group and individual interviews, cognitive aspects through information about the illness, and skills through inhalation technique training. Reinforcement of the cognitive-emotional aspects and inhalation technique training will be carried out in all visits of the intervention group. Adherence to a prescribed treatment involves a behavioural change. Cognitive, emotional and motivational aspects influence this change, and so we consider that the best intervention procedure to improve adherence would be a cognitive and emotional strategy which could be applied in daily clinical practice. Our hypothesis is that the application of a multifactor intervention (COPD information, dose reminders and reinforcing audiovisual material, motivational aspects and inhalation technique training) to COPD patients taking inhaled treatment will give a 25% increase in the number of patients showing therapeutic adherence in this group compared to the control group. We will evaluate the effectiveness of this multifactor intervention on patient adherence to inhaled drugs, assessing whether it is appropriate and feasible in the clinical practice context. Current Controlled Trials ISRCTN18841601.

  12. An overview of a multifactor-system theory of personality and individual differences: III. Life span development and the heredity-environment issue.

    PubMed

    Powell, A; Royce, J R

    1981-12-01

    In Part III of this three-part series on multifactor-system theory, multivariate, life-span development is approached from the standpoint of a quantitative and qualitative analysis of the ontogenesis of factors in each of the six systems. The pattern of quantitative development (described via the Gompertz equation and three developmental parameters) involves growth, stability, and decline, and qualitative development involves changes in the organization of factors (e.g., factor differentiation and convergence). Hereditary and environmental sources of variation are analyzed via the factor gene model and the concept of heredity-dominant factors, and the factor-learning model and environment-dominant factors. It is hypothesized that the sensory and motor systems are heredity dominant, that the style and value systems are environment dominant, and that the cognitive and affective systems are partially heredity dominant.

  13. Teacher Perceptions of Principals' Leadership Qualities: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Hauserman, Cal P.; Ivankova, Nataliya V.; Stick, Sheldon L.

    2013-01-01

    This mixed methods sequential explanatory study utilized the Multi-factor Leadership Questionnaire, responses to open-ended questions, and in-depth interviews to identify transformational leadership qualities that were present among principals in Alberta, Canada. The first quantitative phase consisted of a random sample of 135 schools (with…

  14. Probabilistic sizing of laminates with uncertainties

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Liaw, D. G.; Chamis, C. C.

    1993-01-01

    A reliability-based design methodology for laminate sizing and configuration for a special case of composite structures is described. The methodology combines probabilistic composite mechanics with probabilistic structural analysis. The uncertainties of the constituent materials (fiber and matrix) are simulated using probabilistic theory to predict macroscopic behavior. Uncertainties in the degradation of composite material properties are included in this design methodology. A multi-factor interaction equation is used to evaluate load- and environment-dependent degradation of the composite material properties at the micromechanics level. The methodology is integrated into the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures). Versatility of this design approach is demonstrated by performing a multi-level probabilistic analysis to size the laminates for design structural reliability of random-type structures. The results show that laminate configurations can be selected to improve the structural reliability from three failures in 1000 to no failures in one million. Results also show that the laminates with the highest reliability are the least sensitive to the loading conditions.

  15. Probabilistic evaluation of uncertainties and risks in aerospace components

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Shiao, M. C.; Nagpal, V. K.; Chamis, C. C.

    1992-01-01

    This paper summarizes a methodology developed at NASA Lewis Research Center which computationally simulates the structural, material, and load uncertainties associated with Space Shuttle Main Engine (SSME) components. The methodology was applied to evaluate the scatter in static, buckling, dynamic, fatigue, and damage behavior of the SSME turbopump blade. Also calculated are the probability densities of typical critical blade responses, such as effective stress, natural frequency, damage initiation, most probable damage path, etc. Risk assessments were performed for different failure modes, and the effect of material degradation on the fatigue and damage behaviors of a blade was calculated using a multi-factor interaction equation. Failure probabilities for different fatigue cycles were computed, and the uncertainties associated with damage initiation and damage propagation due to different load cycles were quantified. Evaluations of the effects of mistuned blades on a rotor were made; uncertainties in the excitation frequency were found to significantly amplify the blade responses of a mistuned rotor. The effects of the number of blades on a rotor were studied. The autocorrelation function of displacements and the probability density function of the first passage time for deterministic and random barriers for structures subjected to random processes also were computed. A brief discussion was included on the future direction of probabilistic structural analysis.

  16. [Chemical and sensory characterization of tea (Thea sinensis) consumed in Chile].

    PubMed

    Wittig de Penna, Emma; José Zúñiga, María; Fuenzalida, Regina; López-Planes, Reinaldo

    2005-03-01

    By means of descriptive analysis, four varieties of tea (Thea sinensis) were assessed: Argentinean OP (orange pekoe) tea (black), Brazilian OP tea (black), Ceylon OP tea (black), and Darjeeling OP tea (green). The appearance of the dry tea leaves was qualitatively characterized by comparison with a dry-leaf standard. The attributes colour, form, regularity of the leaves, fibre, and stem cutting were evaluated. The differences obtained were related to the differences produced by the effect of the fermentation process. Flavour and aroma descriptors of the tea liquor were generated by a trained panel. Colour and astringency were evaluated in comparison with qualified standards using unstructured linear scales. In order to relate the sensory analysis to the chemical composition of the different varieties of tea, the following determinations were made: moisture, dry matter, aqueous extract, tannins, and caffeine. Through multifactor regression analysis, equations relating the sensory attributes to the chemical parameters were determined: dry matter, aqueous extract, and tannins for colour, and moisture, dry matter, and aqueous extract for astringency, respectively. Statistical analysis through ANOVA (three variation sources: samples, judges, and replications) showed, for samples, four significantly different groups for astringency and three different groups for colour. No significant differences between judges or replications were found. Multifactor regression of both colour and astringency on the chemical results was carried out in order to establish the corresponding equations.

  17. Prediction of passenger ride quality in a multifactor environment

    NASA Technical Reports Server (NTRS)

    Dempsey, T. K.; Leatherwood, J. D.

    1976-01-01

    A model being developed permits the understanding and prediction of passenger discomfort in a multifactor environment, with particular emphasis upon combined noise and vibration. The model has general applicability to diverse transportation systems and provides a means of developing ride quality design criteria as well as a diagnostic tool for identifying the vibration and/or noise stimuli causing discomfort. Presented are: (1) a review of the basic theoretical and mathematical computations associated with the model, (2) a discussion of methodological and criteria investigations for both the vertical and roll axes of vibration, (3) a description of within-axis masking of discomfort responses for the vertical axis, thereby allowing prediction of the total discomfort due to any random vertical vibration, (4) a discussion of initial data on between-axis masking, and (5) a discussion of a study directed toward extension of the vibration model to the more general case of predicting ride quality in combined noise and vibration environments.

  18. Estimation and analysis of multifactor productivity in truck transportation : 1987 - 2003

    DOT National Transportation Integrated Search

    2009-02-01

    The analysis has three objectives: 1) to estimate multifactor productivity (MFP) in truck transportation during 1987-2003; 2) to examine changes in multifactor productivity in U.S. truck transportation, over time, and to compare these changes...

  19. Multi-factor challenge/response approach for remote biometric authentication

    NASA Astrophysics Data System (ADS)

    Al-Assam, Hisham; Jassim, Sabah A.

    2011-06-01

    Although biometric authentication is perceived to be more reliable than traditional authentication schemes, it becomes vulnerable to many attacks when it comes to remote authentication over open networks and raises serious privacy concerns. This paper proposes a biometric-based challenge-response approach to be used for remote authentication between two parties A and B over open networks. In the proposed approach, a remote authenticator system B (e.g. a bank) challenges its client A, who wants to authenticate himself/herself to the system, by sending a one-time public random challenge. The client A responds by employing the random challenge along with secret information obtained from a password and a token to produce a one-time cancellable representation of his/her freshly captured biometric sample. The one-time biometric representation, which is based on multiple factors, is then sent back to B for matching. Here, we argue that eavesdropping of the one-time random challenge and/or the resulting one-time biometric representation does not compromise the security of the system, and no information about the original biometric data is leaked. In addition to securing biometric templates, the proposed protocol offers a practical solution for the replay attack on biometric systems. Moreover, we propose a new scheme for generating password-based pseudo-random numbers/permutations to be used as a building block in the proposed approach. The proposed scheme is also designed to provide protection against repudiation. We illustrate the viability and effectiveness of the proposed approach by experimental results based on two biometric modalities: fingerprint and face biometrics.

  20. A Guide to the Multifactored Evaluation (MFE).

    ERIC Educational Resources Information Center

    Ohio Coalition for the Education of Children with Disabilities, Marion.

    This guide provides Ohio parents of children with disabilities with information on multifactored evaluations. It begins by discussing the Intervention Assistance Team and what occurs at the assistance team meeting. It also explains that to begin the multifactored evaluation process, the parent must complete a "Request for Parent Consent for…

  1. Research on accuracy analysis of laser transmission system based on Zemax and Matlab

    NASA Astrophysics Data System (ADS)

    Chen, Haiping; Liu, Changchun; Ye, Haixian; Xiong, Zhao; Cao, Tingfen

    2017-05-01

    The laser transmission system is important in high-power solid-state laser facilities, and its function is to transfer and focus the light beam in accordance with the physical function of the facility. This system is mainly composed of transmission mirror modules and a wedge lens module. In order to realize precision alignment of the system, the overall alignment precision must be decomposed into allowable ranges of calibration error for each module. The traditional method is to analyze the error factors of the modules separately and then carry out a linear synthesis to obtain the combined influence of the multiple modules and factors. In order to analyze the effect of the alignment error of each module on the beam center and focus more accurately, this paper combines Monte Carlo random trials with ray tracing to analyze the influence of the multiple modules and factors on the beam center, and to evaluate and optimize the results of the accuracy decomposition.
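    As an illustration of the Monte Carlo side of such an analysis (without the ray tracing), the sketch below propagates assumed random mirror tilt errors to a beam-center shift using the simplified geometric rule that a mirror tilt of theta deflects the reflected beam by 2*theta over a lever arm; the module names, tolerances, and lever arms are hypothetical.

      # Minimal sketch (illustrative, not the paper's Zemax/Matlab analysis):
      # Monte Carlo tolerance run for beam-center error under a simplified
      # geometric model.  Modules, 1-sigma tilt errors, and lever arms are assumed.
      import numpy as np

      rng = np.random.default_rng(5)
      N = 200_000

      # (module, 1-sigma tilt error [rad], lever arm to focal plane [m])
      modules = [
          ("transport_mirror_1", 5e-6, 12.0),
          ("transport_mirror_2", 5e-6, 8.0),
          ("final_fold_mirror",  3e-6, 3.0),
      ]

      shift = np.zeros(N)
      for _, tilt_sigma, lever in modules:
          tilt = rng.normal(0.0, tilt_sigma, N)   # random alignment error per trial
          shift += 2.0 * tilt * lever             # deflection 2*theta times lever arm

      abs_shift_um = np.abs(shift) * 1e6
      print(f"95th percentile beam-center shift = {np.percentile(abs_shift_um, 95):.1f} um")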

  2. Multi-factor authentication using quantum communication

    DOEpatents

    Hughes, Richard John; Peterson, Charles Glen; Thrasher, James T.; Nordholt, Jane E.; Yard, Jon T.; Newell, Raymond Thorson; Somma, Rolando D.

    2018-02-06

    Multi-factor authentication using quantum communication ("QC") includes stages for enrollment and identification. For example, a user enrolls for multi-factor authentication that uses QC with a trusted authority. The trusted authority transmits device factor information associated with a user device (such as a hash function) and user factor information associated with the user (such as an encrypted version of a user password). The user device receives and stores the device factor information and user factor information. For multi-factor authentication that uses QC, the user device retrieves its stored device factor information and user factor information, then transmits the user factor information to the trusted authority, which also retrieves its stored device factor information. The user device and trusted authority use the device factor information and user factor information (more specifically, information such as a user password that is the basis of the user factor information) in multi-factor authentication that uses QC.

  3. The Four-Factor Model of Depressive Symptoms in Dementia Caregivers: A Structural Equation Model of Ethnic Differences

    PubMed Central

    Roth, David L.; Ackerman, Michelle L.; Okonkwo, Ozioma C.; Burgio, Louis D.

    2008-01-01

    Previous studies have suggested that 4 latent constructs (depressed affect, well-being, interpersonal problems, somatic symptoms) underlie the item responses on the Center for Epidemiological Studies Depression (CES-D) Scale. This instrument has been widely used in dementia caregiving research, but the fit of this multifactor model and the explanatory contributions of multifactor models have not been sufficiently examined for caregiving samples. The authors subjected CES-D data (N = 1,183) from the initial Resources for Enhancing Alzheimer’s Caregiver Health Study to confirmatory factor analysis methods and found that the 4-factor model provided excellent fit to the observed data. Invariance analyses suggested only minimal item-loading differences across race subgroups and supported the validity of race comparisons on the latent factors. Significant race differences were found on 3 of the 4 latent factors both before and after controlling for demographic covariates. African Americans reported less depressed affect and better well-being than White caregivers, who reported better well-being and fewer interpersonal problems than Hispanic caregivers. These findings clarify and extend previous studies of race differences in depression among diverse samples of dementia caregivers. PMID:18808246

  4. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
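    A minimal sketch of the balanced one-factor random-effects estimation the note describes might look as follows, with patients as the random factor and repeated setup measurements within each patient; the simulated error magnitudes are assumed values, not data from the note.

      # Minimal sketch (not the paper's code): ANOVA-based variance component
      # estimation for a balanced one-factor random-effects layout, read here as
      # patients (between) and repeated setup measurements per patient (within).
      # The data are simulated; sigma_sys and sigma_rand are assumed values.
      import numpy as np

      rng = np.random.default_rng(2)
      p, n = 20, 10                        # patients, measurements per patient
      sigma_sys, sigma_rand = 2.0, 1.0     # assumed true systematic / random SD (mm)

      patient_mean = rng.normal(0.0, sigma_sys, p)
      data = patient_mean[:, None] + rng.normal(0.0, sigma_rand, (p, n))

      grand = data.mean()
      ms_between = n * ((data.mean(axis=1) - grand) ** 2).sum() / (p - 1)
      ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (p * (n - 1))

      var_random = ms_within                                # within-patient component
      var_system = max((ms_between - ms_within) / n, 0.0)   # between-patient component
      print(f"estimated systematic SD = {var_system ** 0.5:.2f} mm")
      print(f"estimated random SD     = {var_random ** 0.5:.2f} mm")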

  5. Effect of five enological practices and of the general phenolic composition on fermentation-related aroma compounds in Mencia young red wines.

    PubMed

    Añón, Ana; López, Jorge F; Hernando, Diego; Orriols, Ignacio; Revilla, Eugenio; Losada, Manuel M

    2014-04-01

    The effects of five technological procedures and of the contents of total anthocyanins and condensed tannins on 19 fermentation-related aroma compounds of young red Mencia wines were studied. Multifactor ANOVA revealed that levels of those volatiles changed significantly over the length of storage in bottles and, to a lesser extent, due to other technological factors considered; total anthocyanins and condensed tannins also changed significantly as a result of the five practices assayed. Five aroma compounds possessed an odour activity value >1 in all wines, and another four in some wines. Linear correlation among volatile compounds and general phenolic composition revealed that total anthocyanins were highly related to 14 different aroma compounds. Multifactor ANOVA, considering the content of total anthocyanins as a sixth random factor, revealed that this parameter affected significantly the contents of ethyl lactate, ethyl isovalerate, 1-pentanol and ethyl octanoate. Thus, the aroma of young red Mencia wines may be affected by levels of total anthocyanins. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. A primer on multifactor productivity : description, benefits, and uses

    DOT National Transportation Integrated Search

    2008-04-01

    This primer presents a description of multifactor productivity (MFP) and its calculation. Productivity is an important measure of the state of the economy at various levels: firm, industry, sectoral, and macroeconomic. The method describe...

  7. Long-duration effect of multi-factor stresses on the cellular biochemistry, oil-yielding performance and morphology of Nannochloropsis oculata

    PubMed Central

    Wei, Likun; Huang, Xuxiong

    2017-01-01

    Microalga Nannochloropsis oculata is a promising alternative feedstock for biodiesel. Elevating its oil-yielding capacity is conducive to cost-saving biodiesel production. However, the regulatory processes of multi-factor collaborative stresses (MFCS) on the oil-yielding performance of N. oculata are unclear. The duration effects of MFCS (high irradiation, nitrogen deficiency and elevated iron supplementation) on N. oculata were investigated in an 18-d batch culture. Despite the reduction in cell division, the biomass concentration increased, resulting from the large accumulation of the carbon/energy-reservoir. However, different storage forms were found in different cellular storage compounds, and both the protein content and pigment composition swiftly and drastically changed. The analysis of four biodiesel properties using pertinent empirical equations indicated their progressive effective improvement in lipid classes and fatty acid composition. The variation curve of neutral lipid productivity was monitored with fluorescent Nile red and was closely correlated to the results from conventional methods. In addition, a series of changes in the organelles (e.g., chloroplast, lipid body and vacuole) and cell shape, dependent on the stress duration, were observed by TEM and LSCM. These changes presumably played an important role in the acclimation of N. oculata to MFCS and accordingly improved its oil-yielding performance. PMID:28346505

  8. An analysis of labor and multifactor productivity in air transportation : 1990 - 2001

    DOT National Transportation Integrated Search

    2002-01-01

    The analysis has two main objectives: 1) to examine labor productivity and multifactor productivity (MFP) in U.S. air transportation during the 1990 to 2001 period and to compare these measures to those of two other transportation subsectors ...

  9. Probabilistic Material Strength Degradation Model for Inconel 718 Components Subjected to High Temperature, Mechanical Fatigue, Creep and Thermal Fatigue Effects

    NASA Technical Reports Server (NTRS)

    Bast, Callie Corinne Scheidt

    1994-01-01

    This thesis presents the on-going development of methodology for a probabilistic material strength degradation model. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes four effects that typically reduce lifetime strength: high temperature, mechanical fatigue, creep, and thermal fatigue. Statistical analysis was conducted on experimental Inconel 718 data obtained from the open literature. This analysis provided regression parameters for use as the model's empirical material constants, thus calibrating the model specifically for Inconel 718. Model calibration was carried out for four variables, namely, high temperature, mechanical fatigue, creep, and thermal fatigue. Methodology to estimate standard deviations of these material constants for input into the probabilistic material strength model was developed. Using the current version of PROMISS, entitled PROMISS93, a sensitivity study for the combined effects of mechanical fatigue, creep, and thermal fatigue was performed. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing a combination of mechanical fatigue and high temperature effects by model to the combination by experiment were conducted. Thus, for Inconel 718, the basic model assumption of independence between effects was evaluated. Results from this limited verification study strongly supported this assumption.

  10. A comparison of linear and nonlinear statistical techniques in performance attribution.

    PubMed

    Chan, N H; Genovese, C R

    2001-01-01

    Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.

  11. Multifactor Screener in the 2000 National Health Interview Survey Cancer Control Supplement: Validation Results

    Cancer.gov

    Risk Factor Assessment Branch (RFAB) staff have assessed the validity of the Multifactor Screener in several studies: NCI's Observing Protein and Energy (OPEN) Study, the Eating at America's Table Study (EATS), and the joint NIH-AARP Diet and Health Study.

  12. Multifactor Screener in the 2000 National Health Interview Survey Cancer Control Supplement: Overview

    Cancer.gov

    The Multifactor Screener may be useful to assess approximate intakes of fruits and vegetables, percentage energy from fat, and fiber. The screener asks respondents to report how frequently they consume foods in 16 categories. The screener also asks one question about the type of milk consumed.

  13. Multifactor Screener in the 2000 National Health Interview Survey Cancer Control Supplement: Uses of Screener Estimates

    Cancer.gov

    Dietary intake estimates derived from the Multifactor Screener are rough estimates of usual intake of fruits and vegetables, fiber, calcium, servings of dairy, and added sugar. These estimates are not as accurate as those from more detailed methods (e.g., 24-hour recalls).

  14. Probabilistic simulation of the human factor in structural reliability

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1993-01-01

    A formal approach is described in an attempt to computationally simulate the probable ranges of uncertainties of the human factor in structural probabilistic assessments. A multi-factor interaction equation (MFIE) model has been adopted for this purpose. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are considered to demonstrate the concept. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Probabilistic sensitivity studies are subsequently performed to assess the suitability of the MFIE and the validity of the whole approach. Results obtained show that the uncertainties range from five to thirty percent for the most optimistic case, assuming 100 percent for no error (perfect performance).

  15. Probabilistic simulation of the human factor in structural reliability

    NASA Astrophysics Data System (ADS)

    Chamis, Christos C.; Singhal, Surendra N.

    1994-09-01

    The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).

  16. Probabilistic Simulation of the Human Factor in Structural Reliability

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Singhal, Surendra N.

    1994-01-01

    The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).

  17. Cautionary Note on Reporting Eta-Squared Values from Multifactor ANOVA Designs

    ERIC Educational Resources Information Center

    Pierce, Charles A.; Block, Richard A.; Aguinis, Herman

    2004-01-01

    The authors provide a cautionary note on reporting accurate eta-squared values from multifactor analysis of variance (ANOVA) designs. They reinforce the distinction between classical and partial eta-squared as measures of strength of association. They provide examples from articles published in premier psychology journals in which the authors…

  18. Forest ecosystems of a Lower Gulf Coastal Plainlandscape: multifactor classification and analysis

    Treesearch

    P. Charles Goebel; Brian J. Palik; L. Katherine Kirkman; Mark B. Drew; Larry West; Dee C. Pederson

    2001-01-01

    The most common forestland classification techniques applied in the southeastern United States are vegetation-based. While not completely ignored, the application of multifactor, hierarchical ecosystem classifications is limited despite their widespread use in other regions of the eastern United States. We present one of the few truly integrated ecosystem...

  19. A Multifactor Ecosystem Assessment of Wetlands Created Using a Novel Dredged Material Placement Technique in the Atchafalaya River, Louisiana: An Engineering With Nature Demonstration Project

    DTIC Science & Technology

    functions. The strategic placement of dredged materials in locations that mimic natural process promoted additional ecological benefits, especially...regarding wading bird and infaunal habitat, thus adhering to Engineering With Nature (EWN) processes. The multifactor approach improved the wetland

  20. Sensitivity Analysis of Mechanical Parameters of Different Rock Layers to the Stability of Coal Roadway in Soft Rock Strata

    PubMed Central

    Zhao, Zeng-hui; Wang, Wei-ming; Gao, Xin; Yan, Ji-xing

    2013-01-01

    According to the geological characteristics of the Xinjiang Ili mine in the western area of China, a physical model of interstratified strata composed of soft rock and a hard coal seam was established. Selecting the tunnel position, deformation modulus, and strength parameters of each layer as influencing factors, the sensitivity coefficient of roadway deformation to each parameter was first analyzed based on a Mohr-Coulomb strain-softening model and nonlinear elastic-plastic finite element analysis. Then the effect laws of the influencing factors which showed high sensitivity were further discussed. Finally, a regression model for the relationship between roadway displacements and multiple factors was obtained by equivalent linear regression under multiple factors. The results show that roadway deformation is highly sensitive to the depth of the coal seam under the floor, which should be considered in the layout of the coal roadway; the deformation modulus and strength of the coal seam and floor have a great influence on the global stability of the tunnel; on the contrary, roadway deformation is not sensitive to the mechanical parameters of the soft roof; and roadway deformation under random combinations of multiple factors can be deduced from the regression model. These conclusions provide theoretical guidance for the arrangement and stability maintenance of coal roadways. PMID:24459447

  1. Multifactor valuation models of energy futures and options on futures

    NASA Astrophysics Data System (ADS)

    Bertus, Mark J.

    The intent of this dissertation is to investigate continuous-time pricing models for commodity derivative contracts that consider mean reversion. The motivation for pricing commodity futures and options on futures contracts leads to improved practical risk management techniques in markets where uncertainty is increasing. In the dissertation, closed-form solutions to mean-reverting one-factor, two-factor, and three-factor Brownian motions are developed for futures contracts. These solutions are obtained through risk-neutral pricing methods that yield tractable expressions for futures prices, which are linear in the state variables, hence making them attractive for estimation. These functions, however, are expressed in terms of latent variables (i.e., spot prices, convenience yield), which complicates the estimation of the futures pricing equation. To address this complication, a discussion of dynamic factor analysis is given. This procedure documents latent variables using a Kalman filter, and illustrations show how this technique may be used for the analysis. In addition to the futures contracts, closed-form solutions for two option models are obtained. Solutions to the one- and two-factor models are tailored solutions of the Black-Scholes pricing model. Furthermore, since these contracts are written on the futures contracts, they too are influenced by the same underlying parameters of the state variables used to price the futures contracts. To conclude, the analysis finishes with an investigation of commodity futures options that incorporate random discrete jumps.
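    To make the mean-reversion idea concrete, the following sketch simulates a one-factor mean-reverting (Ornstein-Uhlenbeck) log-spot process under assumed risk-neutral dynamics and reads off the futures curve as the expected spot price at each maturity; the parameters are illustrative, and the dissertation's closed-form, multi-factor, and jump extensions are not reproduced.

      # Minimal sketch (not the dissertation's model): one-factor mean-reverting
      # log-spot simulated under assumed risk-neutral dynamics, with the futures
      # curve taken as F(0, T) = E[S_T].  Parameter values are illustrative.
      import numpy as np

      rng = np.random.default_rng(4)
      kappa, mu, sigma = 1.5, np.log(60.0), 0.35   # reversion speed, long-run log level, vol
      x0 = np.log(55.0)                            # current log spot price
      n_paths, n_steps, horizon = 50_000, 250, 1.0
      dt = horizon / n_steps

      x = np.full(n_paths, x0)
      futures = []
      for step in range(1, n_steps + 1):
          x += kappa * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
          if step % 50 == 0:                       # record a few maturities
              futures.append((step * dt, np.exp(x).mean()))

      for maturity, price in futures:
          print(f"F(0, T={maturity:.1f}) = {price:.2f}")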

  2. Measurement Invariance of Second-Order Factor Model of the Multifactor Leadership Questionnaire (MLQ) across K-12 Principal Gender

    ERIC Educational Resources Information Center

    Xu, Lihua; Wubbena, Zane; Stewart, Trae

    2016-01-01

    Purpose: The purpose of this paper is to investigate the factor structure and the measurement invariance of the Multifactor Leadership Questionnaire (MLQ) across gender of K-12 school principals (n=6,317) in the USA. Design/methodology/approach: Nine first-order factor models and four second-order factor models were tested using confirmatory…

  3. Potential barriers to the application of multi-factor portfolio analysis in public hospitals: evidence from a pilot study in the Netherlands.

    PubMed

    Pavlova, Milena; Tsiachristas, Apostolos; Vermaeten, Gerhard; Groot, Wim

    2009-01-01

    Portfolio analysis is a business management tool that can assist health care managers to develop new organizational strategies. The application of portfolio analysis to US hospital settings has been frequently reported. In Europe however, the application of this technique has received little attention, especially concerning public hospitals. Therefore, this paper examines the peculiarities of portfolio analysis and its applicability to the strategic management of European public hospitals. The analysis is based on a pilot application of a multi-factor portfolio analysis in a Dutch university hospital. The nature of portfolio analysis and the steps in a multi-factor portfolio analysis are reviewed along with the characteristics of the research setting. Based on these data, a multi-factor portfolio model is developed and operationalized. The portfolio model is applied in a pilot investigation to analyze the market attractiveness and hospital strengths with regard to the provision of three orthopedic services: knee surgery, hip surgery, and arthroscopy. The pilot portfolio analysis is discussed to draw conclusions about potential barriers to the overall adoption of portfolio analysis in the management of a public hospital. Copyright (c) 2008 John Wiley & Sons, Ltd.

  4. Probabilistic Multi-Factor Interaction Model for Complex Material Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2008-01-01

    The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the launch external tanks. The multi-factor has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points, the initial and final points. The exponent describes a monotonic path from the initial condition to the final. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation the data used was obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated.

  5. Probabilistic Multi-Factor Interaction Model for Complex Material Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2008-01-01

    The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the launch external tanks. The multi-factor has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points, the initial and final points. The exponent describes a monotonic path from the initial condition to the final. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used was obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated.

  6. Multi-factor authentication

    DOEpatents

    Hamlet, Jason R; Pierson, Lyndon G

    2014-10-21

    Detection and deterrence of spoofing of user authentication may be achieved by including a cryptographic fingerprint unit within a hardware device for authenticating a user of the hardware device. The cryptographic fingerprint unit includes an internal physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generates a PUF value. Combining logic is coupled to receive the PUF value and combines the PUF value with one or more other authentication factors to generate a multi-factor authentication value. A key generator is coupled to generate a private key and a public key based on the multi-factor authentication value, while a decryptor is coupled to receive an authentication challenge posed to the hardware device and encrypted with the public key, and coupled to output a response to the authentication challenge decrypted with the private key.

  7. A Unique Computational Algorithm to Simulate Probabilistic Multi-Factor Interaction Model Complex Material Point Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2010-01-01

    The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the launch external tanks. The multi-factor has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points--the initial and final points. The exponent describes a monotonic path from the initial condition to the final. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used was obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated.

  8. The Research of Regression Method for Forecasting Monthly Electricity Sales Considering Coupled Multi-factor

    NASA Astrophysics Data System (ADS)

    Wang, Jiangbo; Liu, Junhui; Li, Tiantian; Yin, Shuo; He, Xinhui

    2018-01-01

    Monthly electricity sales forecasting is fundamental work for ensuring the safety of the power system. This paper presents a monthly electricity sales forecasting method which comprehensively considers the coupled multi-factors of temperature, economic growth, electric power replacement, and business expansion. The mathematical model is constructed using a regression method. The simulation results show that the proposed method is accurate and effective.

  9. Biometric Data Safeguarding Technologies Analysis and Best Practices

    DTIC Science & Technology

    2011-12-01

    fuzzy vault” scheme proposed by Juels and Sudan. The scheme was designed to encrypt data such that it could be unlocked by similar but inexact matches... designed transform functions. Multifactor Key Generation Multifactor key generation combines a biometric with one or more other inputs, such as a...cooperative, off-angle iris images.  Since the commercialized system is designed for images acquired from a specific, paired acquisition system

  10. Application of GA-SVM method with parameter optimization for landslide development prediction

    NASA Astrophysics Data System (ADS)

    Li, X. Z.; Kong, J. M.

    2013-10-01

    Prediction of the landslide development process is always a hot issue in landslide research. So far, many methods for landslide displacement series prediction have been proposed. The support vector machine (SVM) has been proved to be a novel algorithm with good performance. However, the performance strongly depends on the right selection of the parameters (C and γ) of the SVM model. In this study, we presented an application of the GA-SVM method with parameter optimization in landslide displacement rate prediction. We selected a typical large-scale landslide in a hydroelectric engineering area of Southwest China as a case. On the basis of analyzing the basic characteristics and monitoring data of the landslide, a single-factor GA-SVM model and a multi-factor GA-SVM model of the landslide were built. Moreover, the models were compared with single-factor and multi-factor SVM models of the landslide. The results show that the four models have high prediction accuracies, but the accuracies of the GA-SVM models are slightly higher than those of the SVM models, and the accuracies of the multi-factor models are slightly higher than those of the single-factor models for landslide prediction. The accuracy of the multi-factor GA-SVM model is the highest, with the smallest RMSE of 0.0009 and the largest RI of 0.9992.
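    The GA-SVM idea can be sketched compactly: a small genetic algorithm searches over (C, γ) in log space, scoring each candidate by the cross-validated error of an SVR model. The data, population size and mutation scale below are assumptions, not the authors' configuration.

```python
# Toy GA over SVM hyperparameters (C, gamma), scored by cross-validation.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.uniform(size=(120, 3))                   # hypothetical monitoring factors
y = np.sin(6 * X[:, 0]) + 0.3 * X[:, 1] + rng.normal(scale=0.05, size=120)

def fitness(log_c, log_gamma):
    """Cross-validated score (negative MSE) of an SVR with candidate parameters."""
    model = SVR(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

pop = rng.uniform(low=[-2.0, -3.0], high=[3.0, 1.0], size=(20, 2))  # genes: log10 C, log10 gamma
for generation in range(15):
    scores = np.array([fitness(*ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                   # keep the fitter half
    mates = parents[rng.permutation(10)]
    children = np.column_stack([parents[:, 0], mates[:, 1]])  # one-point crossover
    children += rng.normal(scale=0.2, size=children.shape)    # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(*ind) for ind in pop])]
print("selected C = %.3g, gamma = %.3g" % (10.0 ** best[0], 10.0 ** best[1]))
```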

  11. Multi-factor evaluation indicator method for the risk assessment of atmospheric and oceanic hazard group due to the attack of tropical cyclones

    NASA Astrophysics Data System (ADS)

    Qi, Peng; Du, Mei

    2018-06-01

    China's southeast coastal areas frequently suffer from storm surge due to the attack of tropical cyclones (TCs) every year. Hazards induced by TCs are complex, such as strong wind, huge waves, storm surge, heavy rain, floods, and so on. The atmospheric and oceanic hazards cause serious disasters and substantial economic losses. This paper, from the perspective of a hazard group, sets up a multi-factor evaluation method for the risk assessment of TC hazards using historical extreme data of the concerned atmospheric and oceanic elements. Based on the natural hazard dynamic process, the multi-factor indicator system is composed of nine natural hazard factors representing intensity and frequency, respectively. Contributing to the indicator system, in order of importance, are maximum wind speed by TCs, attack frequency of TCs, maximum surge height, maximum wave height, frequency of gusts ≥ Scale 8, rainstorm intensity, maximum tidal range, rainstorm frequency, and sea-level rise rate. The first four factors are the most important, whose weights exceed 10% in the indicator system. With normalization processing, all the single-hazard factors are superposed by multiplying their weights to generate a superposed TC hazard. The multi-factor evaluation indicator method was applied to the risk assessment of the typhoon-induced atmospheric and oceanic hazard group in typhoon-prone southeast coastal cities of China.
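    A minimal sketch of the normalization-and-weighting step, assuming placeholder factor values and weights (with the four leading weights above 10%, as in the abstract):

```python
# Weighted superposition of min-max normalized hazard factors for one site.
import numpy as np

factors = np.array([55.0, 8.0, 3.2, 9.5, 12.0, 80.0, 4.1, 6.0, 3.5])   # raw values (placeholders)
f_min   = np.array([20.0, 0.0, 0.5, 2.0,  0.0, 20.0, 1.0, 0.0, 1.0])
f_max   = np.array([70.0, 12.0, 4.0, 12.0, 20.0, 120.0, 5.0, 10.0, 5.0])
weights = np.array([0.18, 0.15, 0.13, 0.12, 0.09, 0.09, 0.09, 0.08, 0.07])  # sum to 1

normalized = (factors - f_min) / (f_max - f_min)         # min-max normalization
superposed_hazard = float(np.sum(weights * normalized))  # single-site hazard index
print(round(superposed_hazard, 3))
```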

  12. Security enhanced multi-factor biometric authentication scheme using bio-hash function.

    PubMed

    Choi, Younsung; Lee, Youngsook; Moon, Jongho; Won, Dongho

    2017-01-01

    With the rapid development of personal information and wireless communication technology, user authentication schemes have been crucial to ensure that wireless communications are secure. As such, various authentication schemes with multi-factor authentication have been proposed to improve the security of electronic communications. Multi-factor authentication involves the use of passwords, smart cards, and various biometrics to provide users with the utmost privacy and data protection. Cao and Ge analyzed various authentication schemes and found that Younghwa An's scheme was susceptible to a replay attack where an adversary masquerades as a legal server and a user masquerading attack where user anonymity is not provided, allowing an adversary to execute a password change process by intercepting the user's ID during login. Cao and Ge improved upon Younghwa An's scheme, but various security problems remained. This study demonstrates that Cao and Ge's scheme is susceptible to a biometric recognition error, slow wrong password detection, off-line password attack, user impersonation attack, ID guessing attack, a DoS attack, and that their scheme cannot provide session key agreement. Then, to address all weaknesses identified in Cao and Ge's scheme, this study proposes a security enhanced multi-factor biometric authentication scheme and provides a security analysis and formal analysis using Burrows-Abadi-Needham logic. Finally, the efficiency analysis reveals that the proposed scheme can protect against several possible types of attacks with only a slightly high computational cost.

  13. Probabilistic Usage of the Multi-Factor Interaction Model

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2008-01-01

    A Multi-Factor Interaction Model (MFIM) is used to predict the insulating foam mass expulsion during the ascent of a space vehicle. The exponents in the MFIM are evaluated by an available approach which consists of least squares and an optimization algorithm. These results were subsequently used to probabilistically evaluate the effects of the uncertainties in each participating factor in the mass expulsion. The probabilistic results show that the surface temperature dominates at high probabilities and the pressure which causes the mass expulsion at low probabilities.
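    The least-squares step can be illustrated by noting that taking logarithms turns the MFIM product into a model that is linear in the exponents. The sketch below fits exponents to synthetic data; the factor ratios and noise level are assumptions.

```python
# Calibrating MFIM exponents by log-linear least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n = 50
ratios = rng.uniform(0.1, 0.9, size=(n, 3))     # (A_f - A)/(A_f - A_0) per factor
true_exponents = np.array([0.5, 1.5, 0.25])
response = np.prod(ratios ** true_exponents, axis=1) * np.exp(rng.normal(scale=0.02, size=n))

# log(response) = sum_i a_i * log(ratio_i): ordinary least squares in the exponents.
fitted_exponents, *_ = np.linalg.lstsq(np.log(ratios), np.log(response), rcond=None)
print(fitted_exponents.round(3))     # should be close to [0.5, 1.5, 0.25]
```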

  14. Security enhanced multi-factor biometric authentication scheme using bio-hash function

    PubMed Central

    Lee, Youngsook; Moon, Jongho

    2017-01-01

    With the rapid development of personal information and wireless communication technology, user authentication schemes have been crucial to ensure that wireless communications are secure. As such, various authentication schemes with multi-factor authentication have been proposed to improve the security of electronic communications. Multi-factor authentication involves the use of passwords, smart cards, and various biometrics to provide users with the utmost privacy and data protection. Cao and Ge analyzed various authentication schemes and found that Younghwa An’s scheme was susceptible to a replay attack where an adversary masquerades as a legal server and a user masquerading attack where user anonymity is not provided, allowing an adversary to execute a password change process by intercepting the user’s ID during login. Cao and Ge improved upon Younghwa An’s scheme, but various security problems remained. This study demonstrates that Cao and Ge’s scheme is susceptible to a biometric recognition error, slow wrong password detection, off-line password attack, user impersonation attack, ID guessing attack, a DoS attack, and that their scheme cannot provide session key agreement. Then, to address all weaknesses identified in Cao and Ge’s scheme, this study proposes a security enhanced multi-factor biometric authentication scheme and provides a security analysis and formal analysis using Burrows-Abadi-Needham logic. Finally, the efficiency analysis reveals that the proposed scheme can protect against several possible types of attacks with only a slightly high computational cost. PMID:28459867

  15. Challenging terrestrial biosphere models with data from the long-term multifactor Prairie Heating and CO2 Enrichment experiment

    NASA Astrophysics Data System (ADS)

    De Kauwe, M. G.; Medlyn, B.; Walker, A.; Zaehle, S.; Pendall, E.; Norby, R. J.

    2017-12-01

    Multifactor experiments are often advocated as important for advancing models, yet to date, such models have only been tested against single-factor experiments. We applied 10 models to the multifactor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multifactor experiments can be used to constrain models and to identify a road map for model improvement. We found models performed poorly in ambient conditions: comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against the observations from single-factors treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen cycle models, nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they overestimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend the growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology, and species composition. As the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. We outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.

  16. A graph-based approach to inequality assessment

    NASA Astrophysics Data System (ADS)

    Palestini, Arsen; Pignataro, Giuseppe

    2016-08-01

    In a population consisting of heterogeneous types, whose income factors are indicated by nonnegative vectors, policies aggregating different factors can be represented by coalitions in a cooperative game, whose characteristic function is a multi-factor inequality index. When it is not possible to form all coalitions, the feasible ones can be indicated by a graph. We redefine Shapley and Banzhaf values on graph games to deduce some properties involving the degrees of the graph vertices and marginal contributions to overall inequality. An example is finally provided based on a modified multi-factor Atkinson index.
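    A toy illustration of this Shapley decomposition is sketched below; for brevity it enumerates all factor orderings on the complete (unrestricted) coalition structure and uses the variance of summed factor incomes as a stand-in characteristic function rather than the modified Atkinson index of the paper.

```python
# Exact Shapley values of income factors for a toy inequality index.
from itertools import permutations
import numpy as np

incomes = np.array([[1.0, 0.2],      # income of each type, split into two factors
                    [2.0, 1.5],
                    [0.5, 3.0]])

def index(coalition):
    """Characteristic function: inequality generated by a set of factors
    (here simply the variance of the summed factor incomes)."""
    if not coalition:
        return 0.0
    return float(np.var(incomes[:, list(coalition)].sum(axis=1)))

n_factors = incomes.shape[1]
shapley = np.zeros(n_factors)
orders = list(permutations(range(n_factors)))
for order in orders:
    seen = []
    for f in order:
        before = index(seen)
        seen.append(f)
        shapley[f] += index(seen) - before   # marginal contribution of factor f
shapley /= len(orders)
print(shapley.round(3), "sum =", round(float(shapley.sum()), 3))  # sums to the full index
```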

  17. Multi-Factor Analysis for Selecting Lunar Exploration Soft Landing Area and the best Cruise Route

    NASA Astrophysics Data System (ADS)

    Mou, N.; Li, J.; Meng, Z.; Zhang, L.; Liu, W.

    2018-04-01

    Selecting the right soft landing area and planning a reasonable cruise route are the basic tasks of lunar exploration. In this paper, the Von Karman crater in the South Pole-Aitken basin on the far side of the moon is used as the study area, and multi-factor analysis is used to evaluate the landing area and cruise route of lunar exploration. The evaluation system mainly includes factors such as the density of craters, the impact area of craters, and the geologic formation of the whole area and of local areas (such as the vertical structure, rock properties and the content of FeO + TiO2), which reflect the significance of scientific exploration factors. The evaluation of scientific exploration is carried out on the basis of safety and feasibility. On the basis of multi-factor superposition analysis, three landing zones A, B and C are selected, and the appropriate cruising route is analyzed through scientific research factors. This study provides a scientific basis for lunar probe landing and cruise route planning, and it provides technical support for subsequent lunar exploration.

  18. The Human Performance Envelope: Past Research, Present Activities and Future Directions

    NASA Technical Reports Server (NTRS)

    Edwards, Tamsyn

    2017-01-01

    Air traffic controllers (ATCOs) must maintain a consistently high level of human performance in order to maintain flight safety and efficiency. In current control environments, performance-influencing factors such as workload, fatigue and situation awareness can co-occur, and interact, to affect performance. However, multifactor influences and their association with performance are under-researched. This study utilized a high-fidelity human-in-the-loop enroute air traffic control simulation to investigate the relationship between workload, situation awareness and ATCO performance. The study aimed to replicate and extend Edwards, Sharples, Wilson and Kirwan's (2012) previous study and confirm multifactor interactions with a participant sample of ex-controllers. The study also aimed to extend Edwards et al.'s previous research by comparing multifactor relationships across 4 automation conditions. Results suggest that workload and SA may interact to produce a cumulative impact on controller performance, although the effect of the interaction on performance may be dependent on the context and amount of automation present. Findings have implications for human-automation teaming in air traffic control, and the potential prediction and support of ATCO performance.

  19. Challenging terrestrial biosphere models with data from the long-term multifactor Prairie Heating and CO2 Enrichment experiment.

    PubMed

    De Kauwe, Martin G; Medlyn, Belinda E; Walker, Anthony P; Zaehle, Sönke; Asao, Shinichi; Guenet, Bertrand; Harper, Anna B; Hickler, Thomas; Jain, Atul K; Luo, Yiqi; Lu, Xingjie; Luus, Kristina; Parton, William J; Shu, Shijie; Wang, Ying-Ping; Werner, Christian; Xia, Jianyang; Pendall, Elise; Morgan, Jack A; Ryan, Edmund M; Carrillo, Yolima; Dijkstra, Feike A; Zelikova, Tamara J; Norby, Richard J

    2017-09-01

    Multifactor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date, such models have only been tested against single-factor experiments. We applied 10 TBMs to the multifactor Prairie Heating and CO 2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multifactor experiments can be used to constrain models and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m -2  yr -1 ). Comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against the observations from single-factors treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the N cycle models, N availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they overestimated the effect of warming on leaf onset and did not allow CO 2 -induced water savings to extend the growing season length. Observed interactive (CO 2  × warming) treatment effects were subtle and contingent on water stress, phenology, and species composition. As the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. We outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change. © 2017 John Wiley & Sons Ltd.

  20. Challenging terrestrial biosphere models with data from the long-term multifactor Prairie Heating and CO 2 enrichment experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Kauwe, Martin G.; Medlyn, Belinda E.; Walker, Anthony P.

    Multi-factor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date such models have only been tested against single-factor experiments. We applied 10 TBMs to the multi-factor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multi-factor experiments can be used to constrain models, and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m-2 yr-1). Comparison with data highlighted model failures particularly in respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against single-factors was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen cycle models, nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they over-estimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend growing season length. Observed interactive (CO2 x warming) treatment effects were subtle and contingent on water stress, phenology and species composition. Since the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. Finally, we outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.

  1. Challenging terrestrial biosphere models with data from the long-term multifactor Prairie Heating and CO 2 enrichment experiment

    DOE PAGES

    De Kauwe, Martin G.; Medlyn, Belinda E.; Walker, Anthony P.; ...

    2017-02-01

    Multi-factor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date such models have only been tested against single-factor experiments. We applied 10 TBMs to the multi-factor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multi-factor experiments can be used to constrain models, and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m-2 yr-1). Comparison with data highlighted model failures particularly in respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against single-factors was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen cycle models, nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they over-estimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend growing season length. Observed interactive (CO2 x warming) treatment effects were subtle and contingent on water stress, phenology and species composition. Since the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. Finally, we outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.

  2. The prevalence and structure of obsessive-compulsive personality disorder in Hispanic psychiatric outpatients

    PubMed Central

    Ansell, Emily B.; Pinto, Anthony; Crosby, Ross D.; Becker, Daniel F.; Añez, Luis M.; Paris, Manuel; Grilo, Carlos M.

    2010-01-01

    This study sought to confirm a multi-factor model of Obsessive-compulsive personality disorder (OCPD) in a Hispanic outpatient sample and to explore associations of the OCPD factors with aggression, depression, and suicidal thoughts. One hundred and thirty monolingual, Spanish-speaking participants were recruited from a community mental health center and were assessed by bilingual doctoral level clinicians. OCPD was highly prevalent (26%) in this sample. Multi-factor models of OCPD were tested and the two factors - perfectionism and interpersonal rigidity - provided the best model fit. Interpersonal rigidity was associated with aggression and anger while perfectionism was associated with depression and suicidal thoughts. PMID:20227063

  3. A Multifactor Secure Authentication System for Wireless Payment

    NASA Astrophysics Data System (ADS)

    Sanyal, Sugata; Tiwari, Ayu; Sanyal, Sudip

    Organizations are deploying wireless-based online payment applications to expand their business globally; this increases the growing need for regulatory protection of confidential data, especially in internet-based financial areas. Existing internet-based authentication systems often use either the Web or the Mobile channel individually to confirm the claimed identity of the remote user. The vulnerability is that access is based on only single-factor authentication, which is not secure enough to protect user data; there is a need for multifactor authentication. This paper proposes a new protocol based on a multifactor authentication system that is both secure and highly usable. It uses a novel approach based on a Transaction Identification Code and SMS to enforce another security level on top of the traditional login/password system. The system provides a highly secure environment that is simple to use and deploy with limited resources and does not require any change in the infrastructure or underlying protocol of the wireless network. This protocol for wireless payment is extended as a two-way authentication system to satisfy the emerging market need for mutual authentication, and it also supports secure B2B communication, which increases the confidence of users and business organizations in wireless financial transactions using mobile devices.

  4. Severe chronic heart failure in patients considered for heart transplantation in Poland.

    PubMed

    Korewicki, Jerzy; Leszek, Przemysław; Zieliński, Tomasz; Rywik, Tomasz; Piotrowski, Walerian; Kurjata, Paweł; Kozar-Kamińska, Katarzyna; Kodziszewska, Katarzyna

    2012-01-01

    Based on the results of clinical trials, the prognosis for patients with severe heart failure (HF) has improved over the last 20 years. However, clinical trials do not reflect 'real life' due to patient selection. Thus, the aim of the POLKARD-HF registry was the analysis of survival of patients with refractory HF referred for orthotopic heart transplantation (OHT). Between 1 November 2003 and 31 October 2007, 983 patients with severe HF, referred for OHT in Poland, were included in the registry. All patients underwent routine clinical and hemodynamic evaluation, with NT-proBNP and hsCRP assessment. Death or an emergency OHT was assumed as the endpoint. The average observation period was 601 days. Kaplan-Meier curves with log-rank tests, together with univariate and multifactor Cox regression models using the stepwise variable selection method, were used to determine the predictive value of the analyzed variables. Among the 983 patients, the probability of surviving for one year was approximately 80%, for two years 70%, and for three years 67%. Etiology of the HF did not significantly influence the prognosis. The patients in NYHA class IV had a three-fold higher risk of death or emergency OHT. The univariate/multifactor Cox regression analysis revealed that NYHA class IV (HR 2.578, p < 0.0001), HFSS score (HR 2.572, p < 0.0001) and NT-proBNP plasma level (HR 1.600, p = 0.0200) proved to influence survival without death or emergency OHT. Despite optimal treatment, the prognosis for patients with refractory HF is still not good. NYHA class IV, NT-proBNP and HFSS score can help define the highest risk group. The results are consistent with the prognosis of patients enrolled in randomized trials.

  5. Enhancing leadership and relationships by implementing a peer mentoring program.

    PubMed

    Gafni Lachter, Liat R; Ruland, Judith P

    2018-03-30

    Peer-mentoring is often described as an effective means to promote professional and leadership skills, yet evidence on practical models of such programs for occupational therapy students is sparse. The purpose of this study was to evaluate the outcomes of a peer-mentoring program designed for graduate occupational therapy students. Forty-seven second-year student volunteers were randomly assigned to individually mentor first-year students in a year-long program. Students met biweekly, virtually or in person, to provide mentorship on everyday student issues according to mentees' needs. Faculty-led group activities prior to and during the peer-mentoring program took place to facilitate the mentorship relationships. Program effectiveness was measured using the Multifactor Leadership Questionnaire (Avolio & Bass, MLQ: Multifactor Leadership Questionnaire, 2004) and an open-ended feedback survey. Results of a multivariate MANOVA for repeated measures indicated significant enhancement in several leadership skills (F(12,46) = 4.0, P = 0.001, η² = 0.579). Qualitative data from the feedback surveys indicated that an opportunity to help, forming relationships, and structure as an enabler were perceived as important participation outcomes. Students expressed high satisfaction and perceived value from their peer-mentoring experience. As we seek ways to promote our profession and the leadership of its members, it is recommended to consider student peer-mentoring to empower students to practice and advance essential career skills from the initial stages of professional development. Evidence found in this study demonstrates that peer-mentoring programs can promote leadership development and establishment of networks in an occupational therapy emerging professional community, at a low cost. The peer-mentoring blueprint and lessons learned are presented with hopes to inspire others to implement peer-mentoring programs in their settings. © 2018 Occupational Therapy Australia.

  6. A Simple and Computationally Efficient Approach to Multifactor Dimensionality Reduction Analysis of Gene-Gene Interactions for Quantitative Traits

    PubMed Central

    Gui, Jiang; Moore, Jason H.; Williams, Scott M.; Andrews, Peter; Hillege, Hans L.; van der Harst, Pim; Navis, Gerjan; Van Gilst, Wiek H.; Asselbergs, Folkert W.; Gilbert-Diamond, Diane

    2013-01-01

    We present an extension of the two-class multifactor dimensionality reduction (MDR) algorithm that enables detection and characterization of epistatic SNP-SNP interactions in the context of a quantitative trait. The proposed Quantitative MDR (QMDR) method handles continuous data by modifying MDR’s constructive induction algorithm to use a T-test. QMDR replaces the balanced accuracy metric with a T-test statistic as the score to determine the best interaction model. We used a simulation to identify the empirical distribution of QMDR’s testing score. We then applied QMDR to genetic data from the ongoing prospective Prevention of Renal and Vascular End-Stage Disease (PREVEND) study. PMID:23805232
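    The core QMDR scoring idea can be sketched for a single SNP pair: each two-locus genotype cell is labelled high or low by comparing its trait mean with the overall mean, and the candidate model is then scored with a T-test between the pooled groups. The simulated data below are illustrative; this is not the authors' released implementation.

```python
# QMDR-style scoring of one SNP pair against a quantitative trait.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
n = 300
snp1, snp2 = rng.integers(0, 3, n), rng.integers(0, 3, n)       # genotypes 0/1/2
trait = 0.8 * (snp1 == 2) * (snp2 == 2) + rng.normal(size=n)    # epistatic signal

overall_mean = trait.mean()
high = np.zeros(n, dtype=bool)
for g1 in range(3):
    for g2 in range(3):
        cell = (snp1 == g1) & (snp2 == g2)
        if cell.any() and trait[cell].mean() > overall_mean:
            high[cell] = True                                    # "high" genotype cell

t_stat, p_value = ttest_ind(trait[high], trait[~high], equal_var=False)
print(round(float(t_stat), 2), float(p_value))
```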

  7. Computational analysis of gene-gene interactions using multifactor dimensionality reduction.

    PubMed

    Moore, Jason H

    2004-11-01

    Understanding the relationship between DNA sequence variations and biologic traits is expected to improve the diagnosis, prevention and treatment of common human diseases. Success in characterizing genetic architecture will depend on our ability to address nonlinearities in the genotype-to-phenotype mapping relationship as a result of gene-gene interactions, or epistasis. This review addresses the challenges associated with the detection and characterization of epistasis. A novel strategy known as multifactor dimensionality reduction that was specifically designed for the identification of multilocus genetic effects is presented. Several case studies that demonstrate the detection of gene-gene interactions in common diseases such as atrial fibrillation, Type II diabetes and essential hypertension are also discussed.

  8. The prevalence and structure of obsessive-compulsive personality disorder in Hispanic psychiatric outpatients.

    PubMed

    Ansell, Emily B; Pinto, Anthony; Crosby, Ross D; Becker, Daniel F; Añez, Luis M; Paris, Manuel; Grilo, Carlos M

    2010-09-01

    This study sought to confirm a multi-factor model of Obsessive-compulsive personality disorder (OCPD) in a Hispanic outpatient sample and to explore associations of the OCPD factors with aggression, depression, and suicidal thoughts. One hundred and thirty monolingual, Spanish-speaking participants were recruited from a community mental health center and were assessed by bilingual doctoral-level clinicians. OCPD was highly prevalent (26%) in this sample. Multi-factor models of OCPD were tested and the two factors - perfectionism and interpersonal rigidity - provided the best model fit. Interpersonal rigidity was associated with aggression and anger while perfectionism was associated with depression and suicidal thoughts. (c) 2010 Elsevier Ltd. All rights reserved.

  9. What mental health teams want in their leaders.

    PubMed

    Corrigan, P W; Garman, A N; Lam, C; Leary, M

    1998-11-01

    The authors present the findings of the first phase of a 3-year study developing a skills training curriculum for mental health team leaders. A factor model empirically generated from clinical team members was compared to Bass' (1990) Multifactor Model of Leadership. Members of mental health teams generated individual responses to questions about effective leaders. Results from this survey were subsequently administered to a sample of mental health team members. Analysis of these data yielded six factors: Autocratic Leadership, Clear Roles and Goals, Reluctant Leadership, Vision, Diversity Issues, and Supervision. Additional analyses suggest Bass' Multifactor Model offers a useful paradigm for developing a curriculum specific to the needs of mental health team leaders.

  10. Probabilistic material strength degradation model for Inconel 718 components subjected to high temperature, high-cycle and low-cycle mechanical fatigue, creep and thermal fatigue effects

    NASA Technical Reports Server (NTRS)

    Bast, Callie C.; Boyce, Lola

    1995-01-01

    This report presents the results of both the fifth and sixth year effort of a research program conducted for NASA-LeRC by The University of Texas at San Antonio (UTSA). The research included on-going development of methodology for a probabilistic material strength degradation model. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes five effects that typically reduce lifetime strength: high temperature, high-cycle mechanical fatigue, low-cycle mechanical fatigue, creep and thermal fatigue. Statistical analysis was conducted on experimental Inconel 718 data obtained from the open literature. This analysis provided regression parameters for use as the model's empirical material constants, thus calibrating the model specifically for Inconel 718. Model calibration was carried out for five variables, namely, high temperature, high-cycle and low-cycle mechanical fatigue, creep and thermal fatigue. Methodology to estimate standard deviations of these material constants for input into the probabilistic material strength model was developed. Using an updated version of PROMISS, entitled PROMISS93, a sensitivity study for the combined effects of high-cycle mechanical fatigue, creep and thermal fatigue was performed. Then using the current version of PROMISS, entitled PROMISS94, a second sensitivity study including the effect of low-cycle mechanical fatigue, as well as the three previous effects, was performed. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing a combination of high-cycle mechanical fatigue and high temperature effects by model to the combination by experiment were conducted. Thus, for Inconel 718, the basic model assumption of independence between effects was evaluated. Results from this limited verification study strongly supported this assumption.
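    In the spirit of the randomized multifactor equation embodied in PROMISS, the Monte Carlo sketch below randomizes both the primitive variables and the empirical exponents of a product-form strength model and builds an empirical cumulative distribution of lifetime strength. All distributions, bounds and exponents are illustrative placeholders, not the calibrated Inconel 718 constants.

```python
# Monte Carlo sketch of a randomized multifactor strength degradation equation.
import numpy as np

rng = np.random.default_rng(4)
n_samples = 20_000

def sampled_effect(current_mean, current_sd, initial, final, exp_mean, exp_sd):
    """One multiplicative term of the randomized multifactor equation, with the
    current value of the primitive variable and its exponent both random."""
    current = rng.normal(current_mean, current_sd, n_samples)
    exponent = rng.normal(exp_mean, exp_sd, n_samples)
    return ((final - current) / (final - initial)) ** exponent

strength_ratio = (
    sampled_effect(900.0, 20.0, 294.0, 1300.0, 0.50, 0.05)   # temperature [K] (placeholder)
    * sampled_effect(1.0e4, 1.0e3, 0.0, 1.0e6, 0.25, 0.02)   # mechanical fatigue cycles
    * sampled_effect(200.0, 30.0, 0.0, 5.0e3, 0.30, 0.03)    # creep exposure [h]
)

cdf_values = np.sort(strength_ratio)                         # empirical CDF of S/S0
print("median S/S0:", round(float(np.median(strength_ratio)), 3))
print("S/S0 at 1% probability:", round(float(cdf_values[int(0.01 * n_samples)]), 3))
```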

  11. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.; Cheung, Shu Fai

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

  12. The emotional memory effect: differential processing or item distinctiveness?

    PubMed

    Schmidt, Stephen R; Saari, Bonnie

    2007-12-01

    A color-naming task was followed by incidental free recall to investigate how emotional words affect attention and memory. We compared taboo, nonthreatening negative-affect, and neutral words across three experiments. As compared with neutral words, taboo words led to longer color-naming times and better memory in both within- and between-subjects designs. Color naming of negative-emotion nontaboo words was slower than color naming of neutral words only during block presentation and at relatively short interstimulus intervals (ISIs). The nontaboo emotion words were remembered better than neutral words following blocked and random presentation and at both long and short ISIs, but only in mixed-list designs. Our results support multifactor theories of the effects of emotion on attention and memory. As compared with neutral words, threatening stimuli received increased attention, poststimulus elaboration, and benefit from item distinctiveness, whereas nonthreatening emotional stimuli benefited only from increased item distinctiveness.

  13. Evolution of basic equations for nearshore wave field

    PubMed Central

    ISOBE, Masahiko

    2013-01-01

    In this paper, a systematic, overall view of theories for periodic waves of permanent form, such as Stokes and cnoidal waves, is described first with their validity ranges. To deal with random waves, a method for estimating directional spectra is given. Then, various wave equations are introduced according to the assumptions included in their derivations. The mild-slope equation is derived for combined refraction and diffraction of linear periodic waves. Various parabolic approximations and time-dependent forms are proposed to include randomness and nonlinearity of waves as well as to simplify numerical calculation. Boussinesq equations are the equations developed for calculating nonlinear wave transformations in shallow water. Nonlinear mild-slope equations are derived as a set of wave equations to predict transformation of nonlinear random waves in the nearshore region. Finally, wave equations are classified systematically for a clear theoretical understanding and appropriate selection for specific applications. PMID:23318680
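    For reference, the combined refraction-diffraction (mild-slope) equation mentioned above is commonly written in the following textbook form, with notation assumed here: c is the phase speed, c_g the group speed, k the local wavenumber from the linear dispersion relation, and η̂ the complex free-surface amplitude.

```latex
% Mild-slope (Berkhoff-type) equation for linear periodic waves over a
% slowly varying depth h(x,y); the dispersion relation \omega^2 = gk\tanh(kh)
% fixes the local wavenumber k.
\nabla \cdot \left( c\, c_g \, \nabla \hat{\eta} \right) + k^{2}\, c\, c_g \, \hat{\eta} = 0
```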

  14. A robust multifactor dimensionality reduction method for detecting gene-gene interactions with application to the genetic analysis of bladder cancer susceptibility

    PubMed Central

    Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.

    2010-01-01

    A central goal of human genetics is to identify and characterize susceptibility genes for common complex human diseases. An important challenge in this endeavor is the modeling of gene-gene interaction or epistasis that can result in non-additivity of genetic effects. The multifactor dimensionality reduction (MDR) method was developed as machine learning alternative to parametric logistic regression for detecting interactions in absence of significant marginal effects. The goal of MDR is to reduce the dimensionality inherent in modeling combinations of polymorphisms using a computational approach called constructive induction. Here, we propose a Robust Multifactor Dimensionality Reduction (RMDR) method that performs constructive induction using a Fisher’s Exact Test rather than a predetermined threshold. The advantage of this approach is that only those genotype combinations that are determined to be statistically significant are considered in the MDR analysis. We use two simulation studies to demonstrate that this approach will increase the success rate of MDR when there are only a few genotype combinations that are significantly associated with case-control status. We show that there is no loss of success rate when this is not the case. We then apply the RMDR method to the detection of gene-gene interactions in genotype data from a population-based study of bladder cancer in New Hampshire. PMID:21091664
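    The screening step of RMDR can be sketched as follows: a genotype cell is labelled high- or low-risk only if Fisher's exact test on its case/control counts is significant, and is left unclassified otherwise. The counts and the 0.05 threshold below are illustrative assumptions.

```python
# Fisher's-exact-test screening of genotype cells before MDR pooling.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(5)
cases = rng.integers(0, 40, size=9)        # case counts in 9 two-locus genotype cells
controls = rng.integers(0, 40, size=9)     # control counts in the same cells
total_cases, total_controls = cases.sum(), controls.sum()

labels = []
for k in range(9):
    table = [[cases[k], controls[k]],
             [total_cases - cases[k], total_controls - controls[k]]]
    _, p = fisher_exact(table)
    if p >= 0.05:
        labels.append("unclassified")      # excluded from the MDR model
    elif cases[k] * total_controls > controls[k] * total_cases:
        labels.append("high-risk")
    else:
        labels.append("low-risk")
print(labels)
```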

  15. A Combinatorial Approach to Detecting Gene-Gene and Gene-Environment Interactions in Family Studies

    PubMed Central

    Lou, Xiang-Yang; Chen, Guo-Bo; Yan, Lei; Ma, Jennie Z.; Mangold, Jamie E.; Zhu, Jun; Elston, Robert C.; Li, Ming D.

    2008-01-01

    Widespread multifactor interactions present a significant challenge in determining risk factors of complex diseases. Several combinatorial approaches, such as the multifactor dimensionality reduction (MDR) method, have emerged as a promising tool for better detecting gene-gene (G × G) and gene-environment (G × E) interactions. We recently developed a general combinatorial approach, namely the generalized multifactor dimensionality reduction (GMDR) method, which can entertain both qualitative and quantitative phenotypes and allows for both discrete and continuous covariates to detect G × G and G × E interactions in a sample of unrelated individuals. In this article, we report the development of an algorithm that can be used to study G × G and G × E interactions for family-based designs, called pedigree-based GMDR (PGMDR). Compared to the available method, our proposed method has several major improvements, including allowing for covariate adjustments and being applicable to arbitrary phenotypes, arbitrary pedigree structures, and arbitrary patterns of missing marker genotypes. Our Monte Carlo simulations provide evidence that the PGMDR method is superior in performance to identify epistatic loci compared to the MDR-pedigree disequilibrium test (PDT). Finally, we applied our proposed approach to a genetic data set on tobacco dependence and found a significant interaction between two taste receptor genes (i.e., TAS2R16 and TAS2R38) in affecting nicotine dependence. PMID:18834969

  16. Random Attractors for the Stochastic Navier-Stokes Equations on the 2D Unit Sphere

    NASA Astrophysics Data System (ADS)

    Brzeźniak, Z.; Goldys, B.; Le Gia, Q. T.

    2018-03-01

    In this paper we prove the existence of random attractors for the Navier-Stokes equations on 2 dimensional sphere under random forcing irregular in space and time. We also deduce the existence of an invariant measure.

  17. Multifactor Screener in OPEN: Scoring Procedures & Results

    Cancer.gov

    Scoring procedures were developed to convert a respondent's screener responses to estimates of individual dietary intake for percentage energy from fat, grams of fiber, and servings of fruits and vegetables.

  18. The living Drake equation of the Tau Zero Foundation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-03-01

    The living Drake equation is our statistical generalization of the Drake equation such that it can take into account any number of factors. This new result opens up the possibility to enrich the equation by inserting more new factors as long as the scientific learning increases. The adjective "Living" refers just to this continuous enrichment of the Drake equation and is the goal of a new research project that the Tau Zero Foundation has entrusted to this author as the discoverer of the statistical Drake equation described hereafter. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the lognormal distribution. Then, the mean value, standard deviation, mode, median and all the moments of this lognormal N can be derived from the means and standard deviations of the seven input random variables. In fact, the seven factors in the ordinary Drake equation now become seven independent positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) distance between any two neighbouring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, this distance now becomes a new random variable. We derive the relevant probability density function, apparently previously unknown (dubbed "Maccone distribution" by Paul Davies). Data Enrichment Principle. It should be noticed that any positive number of random variables in the statistical Drake equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation we call the "Data Enrichment Principle", and regard as the key to more profound, future results in Astrobiology and SETI.
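    The lognormal claim is easy to check numerically: multiplying seven independent, arbitrarily distributed positive random variables and fitting a lognormal to the product reproduces the behaviour described above. The seven uniform input distributions below are invented for the demonstration and are not estimates of the actual Drake factors.

```python
# Monte Carlo check of the statistical Drake equation's lognormal limit.
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
factors = [
    rng.uniform(1.0, 10.0, n),        # R*: star formation rate (placeholder)
    rng.uniform(0.2, 1.0, n),         # f_p
    rng.uniform(0.5, 3.0, n),         # n_e
    rng.uniform(0.1, 1.0, n),         # f_l
    rng.uniform(0.01, 0.5, n),        # f_i
    rng.uniform(0.01, 0.5, n),        # f_c
    rng.uniform(1.0e3, 1.0e5, n),     # L [years]
]
N = np.prod(factors, axis=0)          # number of communicating civilizations

# Lognormal parameters follow from the sum of the logs (the CLT argument).
mu, sigma = np.log(N).mean(), np.log(N).std()
print("median N:", round(float(np.exp(mu))),
      " lognormal mean:", round(float(np.exp(mu + sigma**2 / 2))))
```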

  19. The time-fractional radiative transport equation—Continuous-time random walk, diffusion approximation, and Legendre-polynomial expansion

    NASA Astrophysics Data System (ADS)

    Machida, Manabu

    2017-01-01

    We consider the radiative transport equation in which the time derivative is replaced by the Caputo derivative. Such fractional-order derivatives are related to anomalous transport and anomalous diffusion. In this paper we describe how the time-fractional radiative transport equation is obtained from continuous-time random walk and see how the equation is related to the time-fractional diffusion equation in the asymptotic limit. Then we solve the equation with Legendre-polynomial expansion.

  20. Note on coefficient matrices from stochastic Galerkin methods for random diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Tao, E-mail: tzhou@lsec.cc.ac.c; Tang Tao, E-mail: ttang@hkbu.edu.h

    2010-11-01

    In a recent work by Xiu and Shen [D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, J. Comput. Phys. 228 (2009) 266-281], the Galerkin methods are used to solve stochastic diffusion equations in random media, where some properties for the coefficient matrix of the resulting system are provided. They also posed an open question on the properties of the coefficient matrix. In this work, we will provide some results related to the open question.

  1. The Multi-factor Predictive SEIS & GIS Model of Ecological, Genetical, Population Health Risk and Bio-geodynamic Processes in Geopathogenic Zones

    NASA Astrophysics Data System (ADS)

    Bondarenko, Y.

    I. Goal and Scope. Human birth rate decrease, death-rate growth and increase of mutagenic deviations risk take place in geopathogenic and anthropogenic hazard zones. Such zones create unfavourable conditions for the reproductive process of future generations. These negative trends should be considered as a protective answer of the complex biosocial system to the appearance of natural and anthropogenic risk factors that are unfavourable for human health. The major goals of scientific evaluation and decrease of risk of appearance of hazardous processes on the territory of Dnipropetrovsk, along with creation of the multi-factor predictive Spirit-Energy-Information Space "SEIS" & GIS Model of ecological, genetical and population health risk in connection with dangerous bio-geodynamic processes, were: multi-factor modeling and correlation of natural and anthropogenic environmental changes and those of human health; determination of indicators that show the risk of destruction structures appearance on different levels of organization and functioning of the city ecosystem (geophysical and geochemical fields, soil, hydrosphere, atmosphere, biosphere); and analysis of regularities of natural, anthropogenic, and biological rhythms' interactions.
    II. Methods. The long spatio-temporal researches (Y. Bondarenko, 1996, 2000) have proved that the ecological, genetic and epidemiological processes are in connection with development of dangerous bio-geophysical and bio-geodynamic processes. Mathematical processing of space photos, lithogeochemical and geophysical maps with use of the JEIS and ERDAS computer systems was executed at the first stage of formation of the multi-layer geoinformation model "Dnipropetrovsk ARC View GIS". The multi-factor nonlinear correlation between solar activity and cosmic ray variations, geophysical, geodynamic, geochemical, atmospheric, technological, biological, socio-economical processes and oncologic case rate frequency, general and primary population sickness cases in Dnipropetrovsk City (1.2 million persons) are described by the multi-factor predictive SEIS & GIS model of geopathogenic zones that determines the human health risk and hazards.
    III. Results and Conclusions. We have created the SEIS system and multi-factor predictive SEIS model for the analysis of phase-metric spatio-temporal nonlinear correlation and variations of rhythms of human health, ecological, genetic, epidemiological risks, demographic, socio-economic, bio-geophysical, bio-geodynamic processes in geopathogenic hazard zones. Cosmophotomaps "CPM" of vegetation index, anthropogenic-landscape and landscape-geophysical human health risk of Dnipropetrovsk City present synthesis-based elements of the multi-layer GIS, which include multispectral SPOT images, maps of different geophysical, geochemical, anthropogenic and cytogenic risk factors, and maps of integral oncologic case rate frequency, general and primary population sickness cases for administrative districts. Results of multi-layer spatio-temporal correlation of geophysical field parameters and variations of population sickness rate rhythms have enabled us to state grounds for and to develop a medico-biological and bio-geodynamic classification of geopathogenic zones. The bio-geodynamic model has served to define contours of anthropogenic-landscape and landscape-geophysical human health risk in Dnipropetrovsk City. Biorhythmic variations give foundation for understanding physiological mechanisms of an organism's adaptation to extreme helio-geophysical and bio-geodynamic environmental conditions, which are dictated by changes in the Multi-factor Correlation Stress Field "MCSF" with deformation of 5D SEIS. Interaction between organism and environment results in continuous superpositioning of external (exogenic) Nuclear-Molecular-Crystallic "NMC" MCSF rhythms on internal (endogenic) Nuclear-Molecular-Cellular "NMCl" MCSF rhythms. Their resonance wave (energy-information) integration and disintegration are responsible for the structural and functional state of different physiological systems. Herewith, complex restructurization of defense functions blocks the adaptation process and may turn out to be the primary reason for phase shifting, process and biorhythm hindering, and appearance of different diseases. Interaction of biorhythms with natural and anthropogenic rhythms specifies the peculiar features of environmental adaptation of living species. Such interaction results in correlation of seasonal rhythms in variations of thermo-baro-geodynamic "TBG" parameters of ambient air with toxic concentration and human health risk in Dnipropetrovsk City. Bio-geodynamic analysis of medical and demographic situations has provided for a search of spatio-temporal correlation between rhythms of general and primary population sickness cases and oncologic case rate frequency, other medico-demographic rhythms, natural processes (helio-geophysical, thermodynamic, geodynamic) and anthropogenic processes (industrial and household waste disposal, toxic emissions and their concentration in ambient air). In the year 1986, the minimum of helio-geophysical activity "2G1dG1" and the maximum of anthropogenic processes associated with changes in sickness and death rates of the population of Earth were synchronized. With account of the quantum character of SEIS rhythms, 5 reference levels of desynchronized helio-geophysical and bio-geodynamic processes affecting population sickness rate have been specified within the bio-geodynamic models. The first reference level of SEIS desynchronization includes rhythms with a period of 22.5 years: ... 1958.2; 1980.7; 2003.2; .... The second reference level of SEIS desynchronization includes rhythms with a period of 11.25 years: ... 1980.7; 1992; 2003.2; .... The third reference level covers 5.625-year periodic rhythms: ... 1980.7; 1986.3; 1992; 1997.6; 2003.2; .... The fourth quantum reference level includes rhythms with a period of 2.8125 years: ... 1980.7; 1983.5; 1986.3; 1989.1; 1992; 1994.8; 1997.6; 2000.4; 2003.2; .... Rhythms with a 1.40625-year period fall in the fifth reference level of SEIS desynchronization: ... 1980.7; 1982.1; 1983.5; 1984.9; 1986.3; 1987.7; 1989.1; 1990.5; 1992; 1993.3; 1994.8; 1996.2; 1997.6; 1999; 2000.4; 2001.8; 2003.2; .... Analysis of the alternating medical and demographic situation in Ukraine (1981-1992) and in Dnipropetrovsk (1988-1995) has allowed us to back up the theoretical model of various-level rhythm quanta, with non-linear regularities due to phase-metric spatio-temporal deformation being specified. Application of the new technologies of Risk Analysis, Synthesis and SEIS Modeling to the choice of a burial place for dangerous radioactive wastes in the zone of the Chernobyl nuclear disaster (Shestopalov V., Bondarenko Y., ..., 1998) has shown their very high efficiency in comparison with GIS Analysis.
    IV. Recommendations and Outlook. In order to draw a conclusion regarding bio-geodynamic modeling of the spatio-temporal structure of areas where common childhood sickness rate exists, it is necessary to mention that the only thing that can favour exact predicting of where and when important catastrophes and epidemics will take place is correct and complex bio-geodynamic modeling. Imperfection of present GIS is the result of the lack of interactive facilities for multi-factor modeling of nonlinear natural and anthropogenic processes. Equations' coefficients calculated for some areas are often irrelevant when applied to others. In this connection there arises a number of problems concerning practical application and reliability of GIS-models that are used to carry out efficient ecological monitoring.
    References: Bondarenko Y. (1997) Drawing up Cosmophotomaps and Multi-factor Forecasting of Hazard of Development of Dangerous Geodynamic Processes in Dnipropetrovsk, The Technically-Natural Problems of Failures and Catastrophes in Connection with Development of Dangerous Geological Processes, Kiev, Ukraine. Bondarenko Y. (1997) The Methodology of a State the Value of Quality of the Ground and the House Level them Ecology-Genetic-Toxic of the Human Health Risk Based on a Multi-layer Cartographical Model, Experience of Application of GIS Technologies for Creating Cadastral Systems, Yalta, Ukraine, p. 39-40. Shestopalov V., Bondarenko Y., Zayonts I., Rudenko Y., Bohuslavsky A. (1998) Complexation of Structural-Geodynamical and Hydrogeological Methods of Studying Areas to Reveal Geological Structural Perspectives for Deep Isolation of Radioactive Wastes, Field Testing and Associated Modeling of Potential High-Level Nuclear Waste Geologic Disposal Sites, Berkeley, USA, p. 81-82.

  2. Leveraging Commercially Issued Multi-Factor Identification Credentials

    NASA Technical Reports Server (NTRS)

    Baldridge, Tim W.

    2010-01-01

    This slide presentation reviews the Identity, Credential and Access Management (ICAM) system. This system is a complete system of identity management, access to desktops and applications, use of smartcards, and building access throughout NASA.

  3. Global solutions to random 3D vorticity equations for small initial data

    NASA Astrophysics Data System (ADS)

    Barbu, Viorel; Röckner, Michael

    2017-11-01

    One proves the existence and uniqueness in (L^p(R^3))^3, 3/2 < p < 2, of a global mild solution to random vorticity equations associated to stochastic 3D Navier-Stokes equations with linear multiplicative Gaussian noise of convolution type, for sufficiently small initial vorticity. This resembles some earlier deterministic results of T. Kato [16] and is obtained by treating the equation in vorticity form and reducing the latter to a random nonlinear parabolic equation. The solution has maximal regularity in the spatial variables and is weakly continuous in (L^3 ∩ L^{3p/(4p-6)})^3 with respect to the time variable. Furthermore, we obtain the pathwise continuous dependence of solutions with respect to the initial data. In particular, one gets a locally unique solution of the 3D stochastic Navier-Stokes equation in vorticity form up to some explosion stopping time τ adapted to the Brownian motion.

  4. Inverse random source scattering for the Helmholtz equation in inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Li, Ming; Chen, Chuchu; Li, Peijun

    2018-01-01

    This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.

  5. The one-dimensional asymmetric persistent random walk

    NASA Astrophysics Data System (ADS)

    Rossetto, Vincent

    2018-04-01

    Persistent random walks are intermediate transport processes between a uniform rectilinear motion and a Brownian motion. They are formed by successive steps of random finite lengths and directions travelled at a fixed speed. The isotropic and symmetric 1D persistent random walk is governed by the telegrapher's equation, also called the hyperbolic heat conduction equation. These equations were designed to resolve the paradox of the infinite propagation speed in the heat and diffusion equations. The finiteness of both the speed and the correlation length leads to several classes of random walks: the persistent random walk in one dimension can display anomalies that cannot arise for Brownian motion, such as anisotropy and asymmetries. In this work we focus on the case where the mean free path is anisotropic, the only anomaly leading to a physics that is different from the telegrapher's case. We derive exact expressions for its Green's function, its scattering statistics and its distribution of first-passage times at the origin. The phenomenology of the latter shows a transition for quantities like the escape probability and the residence time.

  6. Cyclic Load Effects on Long Term Behavior of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Chamis, C. C.

    1996-01-01

    A methodology has been developed and demonstrated to compute fatigue life for different ratios, r, of applied stress to laminate strength, based on first-ply failure criteria, combined with thermal cyclic loads. Degradation effects resulting from long term environmental exposure and thermo-mechanical cyclic loads are considered in the simulation process. A unified time-stress dependent multi-factor interaction equation model developed at NASA Lewis Research Center has been used to account for the degradation of material properties caused by cyclic and aging loads. The effect of variation in the thermal cyclic load amplitude on a quasi-symmetric graphite/epoxy laminate has been studied with respect to the impending failure modes. The results show that, for the laminate under consideration, the fatigue life under combined mechanical and low-amplitude thermal cyclic loads is higher than that under mechanical loads only; however, as the thermal amplitude increases, the fatigue life decreases. The failure mode changes from tensile under mechanical loads only to compressive and shear at high mechanical and thermal loads. Implementation of the developed methodology in the design process is also discussed.
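
    The multifactor interaction equation referenced here is commonly written as a product of primitive-variable terms, S/S0 = prod_i [(A_iF - A_i)/(A_iF - A_i0)]^(a_i), where A_i is the current value of the i-th primitive variable (temperature, mechanical cycles, time, ...), A_i0 its reference value, A_iF its final (ultimate) value, and a_i an empirical exponent. The sketch below evaluates this generic form for illustrative numbers; it does not reproduce the specific time-stress dependent variables or calibrated constants used in the study.

```python
import numpy as np

def multifactor_strength_ratio(current, reference, final, exponents):
    """Generic multifactor interaction equation for strength degradation:

        S / S0 = prod_i [ (A_iF - A_i) / (A_iF - A_i0) ] ** a_i

    All arguments are arrays over the primitive variables; the numbers used
    below are illustrative, not calibrated material constants.
    """
    current, reference = np.asarray(current, float), np.asarray(reference, float)
    final, exponents = np.asarray(final, float), np.asarray(exponents, float)
    terms = (final - current) / (final - reference)
    return float(np.prod(terms ** exponents))

# Two primitive variables: temperature [K] and mechanical fatigue cycles.
ratio = multifactor_strength_ratio(
    current=[900.0, 1.0e5],     # current temperature, accumulated cycles
    reference=[300.0, 0.0],     # reference state (room temperature, no cycles)
    final=[1500.0, 1.0e7],      # ultimate temperature, cycles to failure
    exponents=[0.5, 0.25],      # empirical exponents (illustrative)
)
print(f"remaining strength fraction S/S0 = {ratio:.3f}")
```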

  7. Multifactor estimation of ecological risks using numerical simulation

    NASA Astrophysics Data System (ADS)

    Voskoboynikova, G.; Shalamov, K.; Khairetdinov, M.; Kovalevsky, V.

    2017-10-01

    In this paper, the problem of interaction between acoustic waves incident at a given angle on a snow layer on the ground and the seismic waves arising both in this layer and in the ground is considered. A system of differential equations with boundary conditions is constructed and solved for a three-layer air-snow-ground model, describing the incident and reflected acoustic waves in the air and the seismic waves refracted and reflected at the boundaries of the elastic media (snow and ground). The coefficients of reflection and refraction are calculated for an acoustic wave incident on bare ground and on snow-covered ground. The ratio of the energy of the refracted waves to the energy of the incident acoustic wave is obtained. It is noted that snow has a strong influence on the energy transfer into the ground, which can decrease by more than an order of magnitude. The numerical results obtained are consistent with the results of field experiments with a vibrational source performed by the Siberian Branch of the Russian Academy of Sciences.

  8. Applications of Random Differential Equations to Engineering Science. Wave Propagation in Turbulent Media and Random Linear Hyperbolic Systems.

    DTIC Science & Technology

    1981-11-10

    1976), 745-754. 4. (with W. C. Tam) Periodic and traveling wave solutions to Volterra - Lotka equation with diffusion. Bull. Math. Biol. 38 (1976), 643...with applications [17,19,20). (5) A general method for reconstructing the mutual coherent function of a static or moving source from the random

  9. Exact PDF equations and closure approximations for advective-reactive transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venturi, D.; Tartakovsky, Daniel M.; Tartakovsky, Alexandre M.

    2013-06-01

    Mathematical models of advection–reaction phenomena rely on advective flow velocity and (bio)chemical reaction rates that are notoriously random. By using functional integral methods, we derive exact evolution equations for the probability density function (PDF) of the state variables of the advection–reaction system in the presence of random transport velocity and random reaction rates with rather arbitrary distributions. These PDF equations are solved analytically for transport with deterministic flow velocity and a linear reaction rate represented mathematically by a heterogeneous and strongly-correlated random field. Our analytical solution is then used to investigate the accuracy and robustness of the recently proposed large-eddy diffusivity (LED) closure approximation [1]. We find that the solution to the LED-based PDF equation, which is exact for uncorrelated reaction rates, is accurate even in the presence of strong correlations and it provides an upper bound of predictive uncertainty.

  10. The Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2010-12-01

    We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation, we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case, where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billions with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
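
    A quick numerical check of the lognormal claim can be made by drawing each of the seven factors from an arbitrary distribution (uniform here), multiplying them, and inspecting whether log N is approximately Gaussian. The means and spreads below are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 200_000

# Seven independent factors, each uniform around an illustrative mean with a
# given half-width; (mean, half_width) pairs below are placeholders only.
factors = [
    (3.5e11, 1.0e9),   # number of stars in the Galaxy
    (0.5,    0.2),     # fraction of stars with planetary systems
    (1.0,    0.5),     # habitable planets per such system
    (0.5,    0.3),     # fraction on which life appears
    (0.2,    0.1),     # fraction developing intelligence
    (0.2,    0.1),     # fraction developing communication
    (1.0e-6, 5.0e-7),  # civilization lifetime / Galaxy lifetime (placeholder)
]
N = np.ones(n_draws)
for mean, half_width in factors:
    N *= rng.uniform(mean - half_width, mean + half_width, n_draws)

# If N is close to lognormal, log N should be close to Gaussian: compare its
# skewness and excess kurtosis with the Gaussian values (0, 0).
logN = np.log(N)
z = (logN - logN.mean()) / logN.std()
print(f"skewness of log N: {np.mean(z**3):+.3f}   excess kurtosis: {np.mean(z**4) - 3:+.3f}")
print(f"mean of N: {N.mean():.3e}   product of factor means: "
      f"{np.prod([f[0] for f in factors]):.3e}")
```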

  11. Management of suicidal and self-harming behaviors in prisons: systematic literature review of evidence-based activities.

    PubMed

    Barker, Emma; Kõlves, Kairi; De Leo, Diego

    2014-01-01

    The purpose of this study was to systematically analyze existing literature testing the effectiveness of programs involving the management of suicidal and self-harming behaviors in prisons. For the study, 545 English-language articles published in peer reviewed journals were retrieved using the terms "suicid*," "prevent*," "prison," or "correctional facility" in SCOPUS, MEDLINE, PROQUEST, and Web of Knowledge. In total, 12 articles were relevant, with 6 involving multi-factored suicide prevention programs, and 2 involving peer focused programs. Others included changes to the referral and care of suicidal inmates, staff training, legislation changes, and a suicide prevention program for inmates with Borderline Personality Disorder. Multi-factored suicide prevention programs appear most effective in the prison environment. Using trained inmates to provide social support to suicidal inmates is promising. Staff attitudes toward training programs were generally positive.

  12. Entropy measure of credit risk in highly correlated markets

    NASA Astrophysics Data System (ADS)

    Gottschalk, Sylvia

    2017-07-01

    We compare the single and multi-factor structural models of corporate default by calculating the Jeffreys-Kullback-Leibler divergence between their predicted default probabilities when asset correlations are either high or low. Single-factor structural models assume that the stochastic process driving the value of a firm is independent of that of other companies. A multi-factor structural model, on the contrary, is built on the assumption that a single firm's value follows a stochastic process correlated with that of other companies. Our main results show that the divergence between the two models increases in highly correlated, volatile, and large markets, but that it is closer to zero in small markets, when asset correlations are low and firms are highly leveraged. These findings suggest that during periods of financial instability, when asset volatility and correlations increase, one of the models misreports actual default risk.

  13. Application of the multifactor dimensionality reduction method in evaluation of the roles of multiple genes/enzymes in multidrug-resistant acquisition in Pseudomonas aeruginosa strains.

    PubMed

    Yao, Z; Peng, Y; Bi, J; Xie, C; Chen, X; Li, Y; Ye, X; Zhou, J

    2016-03-01

    Multidrug-resistant Pseudomonas aeruginosa (MDRPA) infections are a major threat to healthcare-associated infection control, and the intrinsic molecular mechanisms of MDRPA remain unclear. We examined 348 isolates of P. aeruginosa, including 188 MDRPA and 160 non-MDRPA, obtained from five tertiary-care hospitals in Guangzhou, China. Significant correlations were found between gene/enzyme carriage and increased rates of antimicrobial resistance (P < 0·01). gyrA mutation, OprD loss and metallo-β-lactamase (MBL) presence were identified as crucial molecular risk factors for MDRPA acquisition by a combination of univariate logistic regression and a multifactor dimensionality reduction approach. The MDRPA rate also increased with the number of these three determinants present (P < 0·001). Thus, gyrA mutation, OprD loss and MBL presence may serve as predictors for early screening of MDRPA infections in clinical settings.

  14. Mining nutrigenetics patterns related to obesity: use of parallel multifactor dimensionality reduction.

    PubMed

    Karayianni, Katerina N; Grimaldi, Keith A; Nikita, Konstantina S; Valavanis, Ioannis K

    2015-01-01

    This paper aims to elucidate the complex etiology underlying obesity by analysing data from a large nutrigenetics study, in which nutritional and genetic factors associated with obesity were recorded for around two thousand individuals. In our previous work, these data have been analysed using artificial neural network methods, which identified optimised subsets of factors to predict one's obesity status. These methods did not, however, reveal how the selected factors interact with each other in the obtained predictive models. For that reason, parallel Multifactor Dimensionality Reduction (pMDR) was used here to further analyse the pre-selected subsets of nutrigenetic factors. Within pMDR, predictive models using up to eight factors were constructed, further reducing the input dimensionality, while rules describing the interactive effects of the selected factors were derived. In this way, it was possible to identify specific genetic variations and their interactive effects with particular nutritional factors, which are now under further study.

  15. The Research on Tunnel Surrounding Rock Classification Based on Geological Radar and Probability Theory

    NASA Astrophysics Data System (ADS)

    Xiao Yong, Zhao; Xin, Ji Yong; Shuang Ying, Zuo

    2018-03-01

    In order to effectively classify the surrounding rock types of tunnels, a multi-factor tunnel surrounding rock classification method based on GPR and probability theory is proposed. Geological radar was used to identify the geology of the surrounding rock in front of the face and to evaluate the quality of the rock face. According to the previous survey data, the rock uniaxial compressive strength, integrity index, fissures and groundwater were selected for classification. Probability theory is then used to combine these factors into a multi-factor classification method, and the surrounding rock is classified according to the greatest probability. When this method is used to classify the surrounding rock of the Ma’anshan tunnel, the rock classes obtained are essentially the same as the actual ones, showing that it is a simple, efficient and practical classification method that can be used in tunnel construction.

  16. Spectroscopically Enhanced Method and System for Multi-Factor Biometric Authentication

    NASA Astrophysics Data System (ADS)

    Pishva, Davar

    This paper proposes a spectroscopic method and system for preventing spoofing of biometric authentication. One of its aims is to enhance biometric authentication with a spectroscopic method in a multifactor manner, such that a person's unique ‘spectral signatures’ or ‘spectral factors’ are recorded and compared in addition to a non-spectroscopic biometric signature, reducing the likelihood of an impostor being authenticated. Using the ‘spectral factors’ extracted from reflectance spectra of real fingers together with cluster analysis, it shows how the authentic fingerprint image presented by a real finger can be distinguished from an authentic fingerprint image embossed on an artificial finger or molded on a fingertip cover worn by an impostor. The paper also shows how to augment two widely used biometric systems (fingerprint and iris recognition devices) with spectral biometric capabilities in a practical manner, without creating much overhead or inconveniencing their users.

  17. Machine Learning for Detecting Gene-Gene Interactions

    PubMed Central

    McKinney, Brett A.; Reif, David M.; Ritchie, Marylyn D.; Moore, Jason H.

    2011-01-01

    Complex interactions among genes and environmental factors are known to play a role in common human disease aetiology. There is a growing body of evidence to suggest that complex interactions are ‘the norm’ and, rather than amounting to a small perturbation to classical Mendelian genetics, interactions may be the predominant effect. Traditional statistical methods are not well suited for detecting such interactions, especially when the data are high dimensional (many attributes or independent variables) or when interactions occur between more than two polymorphisms. In this review, we discuss machine-learning models and algorithms for identifying and characterising susceptibility genes in common, complex, multifactorial human diseases. We focus on the following machine-learning methods that have been used to detect gene-gene interactions: neural networks, cellular automata, random forests, and multifactor dimensionality reduction. We conclude with some ideas about how these methods and others can be integrated into a comprehensive and flexible framework for data mining and knowledge discovery in human genetics. PMID:16722772

  18. A survey about methods dedicated to epistasis detection.

    PubMed

    Niel, Clément; Sinoquet, Christine; Dina, Christian; Rocheleau, Ghislain

    2015-01-01

    During the past decade, findings of genome-wide association studies (GWAS) improved our knowledge and understanding of disease genetics. To date, thousands of SNPs have been associated with diseases and other complex traits. Statistical analysis typically looks for association between a phenotype and a SNP taken individually via single-locus tests. However, geneticists admit this is an oversimplified approach to tackle the complexity of underlying biological mechanisms. Interaction between SNPs, namely epistasis, must be considered. Unfortunately, epistasis detection gives rise to analytic challenges since analyzing every SNP combination is at present impractical at a genome-wide scale. In this review, we will present the main strategies recently proposed to detect epistatic interactions, along with their operating principle. Some of these methods are exhaustive, such as multifactor dimensionality reduction, likelihood ratio-based tests or receiver operating characteristic curve analysis; some are non-exhaustive, such as machine learning techniques (random forests, Bayesian networks) or combinatorial optimization approaches (ant colony optimization, computational evolution system).

  19. Interrelationships of metal transfer factor under wastewater reuse and soil pollution.

    PubMed

    Papaioannou, D; Kalavrouziotis, I K; Koukoulakis, P H; Papadopoulos, F; Psoma, P

    2018-06-15

    The transfer of heavy metals under combined soil pollution and wastewater reuse was studied in a greenhouse experiment using a randomized block design with 6 treatments of heavy-metal mixtures composed of Zn, Mn, Cd, Co, Cu, Cr, Ni, and Pb, each metal added to the mixture at 0, 10, 20, 30, 40 or 50 mg/kg, respectively, in four replications. Beta vulgaris L. (beet) was used as the test plant. It was found that the metal transfer factors were statistically significantly related to: (i) the DTPA-extractable soil metals, (ii) the soil pollution level as assessed by the pollution indices, (iii) the soil pH, (iv) the beet dry matter yield and (v) the interactions between the heavy metals in the soil. It was concluded that the transfer factor is subject to multifactor effects, that its real nature is complex, and that further study is strongly needed to understand its role in metal-plant relationships. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. A two-level stochastic collocation method for semilinear elliptic equations with random coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Luoping; Zheng, Bin; Lin, Guang

    In this work, we propose a novel two-level discretization for solving semilinear elliptic equations with random coefficients. Motivated by the two-grid method for deterministic partial differential equations (PDEs) introduced by Xu, our two-level stochastic collocation method utilizes a two-grid finite element discretization in the physical space and a two-level collocation method in the random domain. In particular, we solve semilinear equations on a coarse mesh $\mathcal{T}_H$ with a low level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_P$) and solve linearized equations on a fine mesh $\mathcal{T}_h$ using high level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_p$). We prove that the approximated solution obtained from this method achieves the same order of accuracy as that from solving the original semilinear problem directly by stochastic collocation method with $\mathcal{T}_h$ and $\mathcal{P}_p$. The two-level method is computationally more efficient, especially for nonlinear problems with high random dimensions. Numerical experiments are also provided to verify the theoretical results.

  1. Transformational, transactional, and passive-avoidant leadership characteristics of a surgical resident cohort: analysis using the multifactor leadership questionnaire and implications for improving surgical education curriculums.

    PubMed

    Horwitz, Irwin B; Horwitz, Sujin K; Daram, Pallavi; Brandt, Mary L; Brunicardi, F Charles; Awad, Samir S

    2008-07-01

    Leadership training is increasingly recognized as highly important to improving medical care and should be included in surgical resident education curriculums. Surgical residents (n = 65) completed the 5x-short version of the Multifactor Leadership Questionnaire as a means of identifying leadership areas most in need of training among medical residents. The leadership styles of the residents were measured on 12 leadership scales. Comparisons between gender and postgraduate year (PGY) and comparisons to national norms were conducted. Of the 12 leadership scales, the residents as a whole had significantly higher management by exception active and passive scores than those of the national norm (t = 6.6, P < 0.01, t = 2.8, P < 0.01, respectively), and significantly lower individualized consideration scores than the norm (t = 2.7, P < 0.01). Only one score, management by exception active, was statistically different, being higher among males than females (t = 2.12, P < 0.05). PGY3-5 had significantly lower laissez-faire scores than PGY1-2 (t = 2.20, P < 0.05). Principal component analysis revealed two leadership factors with eigenvalues over 1.0. Hierarchical regression found evidence of an augmentation effect for transformational leadership. Areas of resident leadership strengths and weaknesses were identified. The Multifactor Leadership Questionnaire was demonstrated to be a valuable tool for identifying specific areas where leadership training would be most beneficial in the educational curriculum. The future use of this instrument could prove valuable to surgical education training programs.

  2. Development and validation of a multifactor mindfulness scale in youth: The Comprehensive Inventory of Mindfulness Experiences-Adolescents (CHIME-A).

    PubMed

    Johnson, Catherine; Burke, Christine; Brinkman, Sally; Wade, Tracey

    2017-03-01

    Mindfulness-based interventions show consistent benefits in adults for a range of pathologies, but exploration of these approaches in youth is an emergent field, with limited measures of mindfulness for this population. This study aimed to investigate whether multifactor scales of mindfulness can be used in adolescents. A series of studies are presented assessing the performance of a recently developed adult measure, the Comprehensive Inventory of Mindfulness Experiences (CHIME) in 4 early adolescent samples. Study 1 was an investigation of how well the full adult measure (37 items) was understood by youth (N = 292). Study 2 piloted a revision of items in child friendly language with a small group (N = 48). The refined questionnaire for adolescents (CHIME-A) was then tested in Study 3 in a larger sample (N = 461) and subjected to exploratory factor analysis and a range of external validity measures. Study 4 was a confirmatory factor analysis in a new sample (N = 498) with additional external validity measures. Study 5 tested temporal stability (N = 120). Results supported an 8-factor 25-item measure of mindfulness in adolescents, with excellent model fit indices and sound internal consistency for the 8 subscales. Although the CFA supported an overarching factor, internal reliability of a combined total score was poor. The development of a multifactor measure represents a first step toward testing developmental models of mindfulness in young people. This in turn will aid construction of evidence based interventions that are not simply downward derivations of adult mindfulness programs. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. A detailed view on Model-Based Multifactor Dimensionality Reduction for detecting gene-gene interactions in case-control data in the absence and presence of noise

    PubMed Central

    CATTAERT, TOM; CALLE, M. LUZ; DUDEK, SCOTT M.; MAHACHIE JOHN, JESTINAH M.; VAN LISHOUT, FRANÇOIS; URREA, VICTOR; RITCHIE, MARYLYN D.; VAN STEEN, KRISTEL

    2010-01-01

    SUMMARY Analyzing the combined effects of genes and/or environmental factors on the development of complex diseases is a great challenge from both the statistical and computational perspective, even using a relatively small number of genetic and non-genetic exposures. Several data mining methods have been proposed for interaction analysis, among them, the Multifactor Dimensionality Reduction Method (MDR), which has proven its utility in a variety of theoretical and practical settings. Model-Based Multifactor Dimensionality Reduction (MB-MDR), a relatively new MDR-based technique that is able to unify the best of both non-parametric and parametric worlds, was developed to address some of the remaining concerns that go along with an MDR-analysis. These include the restriction to univariate, dichotomous traits, the absence of flexible ways to adjust for lower-order effects and important confounders, and the difficulty to highlight epistasis effects when too many multi-locus genotype cells are pooled into two new genotype groups. Whereas the true value of MB-MDR can only reveal itself by extensive applications of the method in a variety of real-life scenarios, here we investigate the empirical power of MB-MDR to detect gene-gene interactions in the absence of any noise and in the presence of genotyping error, missing data, phenocopy, and genetic heterogeneity. For the considered simulation settings, we show that the power is generally higher for MB-MDR than for MDR, in particular in the presence of genetic heterogeneity, phenocopy, or low minor allele frequencies. PMID:21158747

  4. Confounding Problems in Multifactor AOV When Using Several Organismic Variables of Limited Reliability

    ERIC Educational Resources Information Center

    Games, Paul A.

    1975-01-01

    A brief introduction is presented on how multiple regression and linear model techniques can handle data analysis situations that most educators and psychologists think of as appropriate for analysis of variance. (Author/BJG)

  5. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
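
    For readers who want to try the marginal-mean approach, a GEE fit with exchangeable working correlation and cluster-robust (sandwich) standard errors can be set up as in the sketch below, which uses the statsmodels library on simulated stepped-wedge-like data. This is only a baseline illustration; the small-sample corrections studied in the paper (e.g., Mancl-DeRouen-type adjustments to the sandwich estimator) are not implemented here and would have to be applied on top of the robust covariance.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated stepped-wedge data: 10 clusters, 5 periods, staggered crossover
# to treatment (all numbers illustrative).
n_clusters, n_periods, n_per_cell = 10, 5, 20
crossover = 1 + np.arange(n_clusters) % (n_periods - 1)   # switch period per cluster
rows = []
for c in range(n_clusters):
    u_c = rng.normal(0.0, 0.3)                            # latent cluster effect
    for t in range(n_periods):
        treat = int(t >= crossover[c])
        p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.1 * t + 0.5 * treat + u_c)))
        for yi in rng.binomial(1, p, n_per_cell):
            rows.append({"y": yi, "cluster": c, "period": t, "treat": treat})
df = pd.DataFrame(rows)

# Marginal (population-average) logistic model with exchangeable working
# correlation; the default fit reports robust (sandwich) standard errors.
model = smf.gee("y ~ C(period) + treat", groups="cluster", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
```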

  6. Improving multilevel Monte Carlo for stochastic differential equations with application to the Langevin equation

    PubMed Central

    Müller, Eike H.; Scheichl, Rob; Shardlow, Tony

    2015-01-01

    This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy. PMID:27547075
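
    As a concrete illustration of the multilevel idea (without the modified-equations analysis, operator splitting, or discrete increments studied in the paper), the sketch below assembles a small multilevel Monte Carlo estimator of E[X_T] for a scalar SDE using Euler-Maruyama, coupling each fine path to its coarse counterpart through shared Brownian increments; the model and sample sizes are illustrative.

```python
import numpy as np

def coupled_level(n_paths, n_fine, T, x0, drift, diffusion, rng):
    """Samples of the fine-minus-coarse payoff X_T on one MLMC level.

    The fine path uses n_fine Euler-Maruyama steps, the coarse path
    n_fine // 2 steps, and both are driven by the same Brownian increments.
    """
    dt = T / n_fine
    x_f = np.full(n_paths, x0)
    x_c = np.full(n_paths, x0)
    for _ in range(n_fine // 2):
        dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
        dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
        x_f += drift(x_f) * dt + diffusion(x_f) * dw1                 # two fine steps
        x_f += drift(x_f) * dt + diffusion(x_f) * dw2
        x_c += drift(x_c) * (2 * dt) + diffusion(x_c) * (dw1 + dw2)   # one coarse step
    return x_f - x_c

# Illustrative SDE: dX = -X dt + 0.5 dW (Ornstein-Uhlenbeck type), E[X_T] = x0 * exp(-T).
drift = lambda x: -x
diffusion = lambda x: 0.5 * np.ones_like(x)
rng = np.random.default_rng(0)
T, x0 = 1.0, 1.0
levels = [2, 4, 8, 16, 32]                     # steps per level (each level doubles)

# Level 0: plain Monte Carlo on the coarsest grid.
x = np.full(100_000, x0)
dt0 = T / levels[0]
for _ in range(levels[0]):
    x += drift(x) * dt0 + diffusion(x) * rng.normal(0.0, np.sqrt(dt0), x.size)
estimate = x.mean()

# Higher levels: telescoping corrections E[P_l - P_{l-1}] with fewer samples.
for n_fine, n_paths in zip(levels[1:], [40_000, 20_000, 10_000, 5_000]):
    estimate += coupled_level(n_paths, n_fine, T, x0, drift, diffusion, rng).mean()

print("MLMC estimate of E[X_T]:", round(estimate, 4), "  exact:", round(x0 * np.exp(-T), 4))
```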

  7. Improving multilevel Monte Carlo for stochastic differential equations with application to the Langevin equation.

    PubMed

    Müller, Eike H; Scheichl, Rob; Shardlow, Tony

    2015-04-08

    This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy.

  8. Connecting to HPC Systems | High-Performance Computing | NREL

    Science.gov Websites

    … one of the following methods, which use multi-factor authentication. First, you will need to set up … If you just need access to a command line on an HPC system, use one of the following methods …

  9. Non-linear continuous time random walk models★

    NASA Astrophysics Data System (ADS)

    Stage, Helena; Fedotov, Sergei

    2017-11-01

    A standard assumption of continuous time random walk (CTRW) processes is that there are no interactions between the random walkers, such that we obtain the celebrated linear fractional equation either for the probability density function of the walker at a certain position and time, or the mean number of walkers. The question arises how one can extend this equation to the non-linear case, where the random walkers interact. The aim of this work is to take into account this interaction under a mean-field approximation where the statistical properties of the random walker depend on the mean number of walkers. The implementation of these non-linear effects within the CTRW integral equations or fractional equations poses difficulties, leading to the alternative methodology we present in this work. We are concerned with non-linear effects which may either inhibit anomalous effects or induce them where they otherwise would not arise. Inhibition of these effects corresponds to a decrease in the waiting times of the random walkers, be this due to overcrowding, competition between walkers or an inherent carrying capacity of the system. Conversely, induced anomalous effects present longer waiting times and are consistent with symbiotic, collaborative or social walkers, or indirect pinpointing of favourable regions by their attractiveness. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.

  10. An Empirical Comparison of Methods for Equating with Randomly Equivalent Groups of 50 to 400 Test Takers. Research Report. ETS RR-10-05

    ERIC Educational Resources Information Center

    Livingston, Samuel A.; Kim, Sooyeon

    2010-01-01

    A series of resampling studies investigated the accuracy of equating by four different methods in a random groups equating design with samples of 400, 200, 100, and 50 test takers taking each form. Six pairs of forms were constructed. Each pair was constructed by assigning items from an existing test taken by 9,000 or more test takers. The…

  11. Cavity master equation for the continuous time dynamics of discrete-spin models.

    PubMed

    Aurell, E; Del Ferraro, G; Domínguez, E; Mulet, R

    2017-05-01

    We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.

  12. Cavity master equation for the continuous time dynamics of discrete-spin models

    NASA Astrophysics Data System (ADS)

    Aurell, E.; Del Ferraro, G.; Domínguez, E.; Mulet, R.

    2017-05-01

    We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.

  13. Fractional Diffusion Processes: Probability Distributions and Continuous Time Random Walk

    NASA Astrophysics Data System (ADS)

    Gorenflo, R.; Mainardi, F.

    A physical-mathematical approach to anomalous diffusion may be based on generalized diffusion equations (containing derivatives of fractional order in space and/or time) and related random walk models. By the space-time fractional diffusion equation we mean an evolution equation obtained from the standard linear diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative of order alpha ∈ (0,2] and skewness theta (|theta| ≤ min{alpha, 2-alpha}), and the first-order time derivative with a Caputo derivative of order beta ∈ (0,1]. The fundamental solution (for the Cauchy problem) of the fractional diffusion equation can be interpreted as a probability density evolving in time of a peculiar self-similar stochastic process. We view it as a generalized diffusion process that we call fractional diffusion process, and present an integral representation of the fundamental solution. A more general approach to anomalous diffusion is however known to be provided by the master equation for a continuous time random walk (CTRW). We show how this equation reduces to our fractional diffusion equation by a properly scaled passage to the limit of compressed waiting times and jump widths. Finally, we describe a method of simulation and display (via graphics) results of a few numerical case studies.
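
    The CTRW limit described above is easy to explore numerically: draw heavy-tailed waiting times (so that the renewal process generates a fractional time derivative of order beta in the limit) and Gaussian jumps, then record walker positions at a fixed observation time. The sketch below uses Pareto-type waiting times as a simple stand-in for the Mittag-Leffler waiting times of the exact theory, so it is a qualitative illustration rather than the simulation method of the paper.

```python
import numpy as np

def ctrw_positions(n_walkers=50_000, t_obs=100.0, beta=0.7, seed=0):
    """Walker positions at time t_obs for a CTRW with power-law waiting times
    (tail exponent beta in (0,1)) and unit-variance Gaussian jumps.

    Pareto-type waiting times, P(W > w) = (1 + w)**(-beta), are used as a
    simple surrogate for Mittag-Leffler waiting times.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        idx = np.flatnonzero(active)
        wait = (1.0 / rng.random(idx.size)) ** (1.0 / beta) - 1.0
        t_new = t[idx] + wait
        jump = rng.normal(0.0, 1.0, idx.size)
        still = t_new <= t_obs            # only renewals before t_obs produce a jump
        x[idx[still]] += jump[still]
        t[idx] = t_new
        active[idx[~still]] = False
    return x

x = ctrw_positions()
# Subdiffusion: the mean-squared displacement grows like t**beta, so at fixed
# time it stays well below that of a walk with finite-mean waiting times.
print("sample mean-squared displacement at t = 100:", round(np.mean(x**2), 2))
```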

  14. Solving a mixture of many random linear equations by tensor decomposition and alternating minimization.

    DOT National Transportation Integrated Search

    2016-09-01

    We consider the problem of solving mixed random linear equations with k components. This is the noiseless setting of mixed linear regression. The goal is to estimate multiple linear models from mixed samples in the case where the labels (which sample...

  15. Equating Multidimensional Tests under a Random Groups Design: A Comparison of Various Equating Procedures

    ERIC Educational Resources Information Center

    Lee, Eunjung

    2013-01-01

    The purpose of this research was to compare the equating performance of various equating procedures for the multidimensional tests. To examine the various equating procedures, simulated data sets were used that were generated based on a multidimensional item response theory (MIRT) framework. Various equating procedures were examined, including…

  16. Outdoor Leaders' Emotional Intelligence and Transformational Leadership

    ERIC Educational Resources Information Center

    Hayashi, Aya; Ewert, Alan

    2006-01-01

    This study explored the concept of outdoor leadership from the perspectives of emotional intelligence and transformational leadership. Levels of emotional intelligence, multifactor leadership, outdoor experience, and social desirability were examined using 46 individuals designated as outdoor leaders. The results revealed a number of unique…

  17. 75 FR 67776 - Comment Request; Review of Productivity Statistics

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-03

    ... DEPARTMENT OF LABOR Bureau of Labor Statistics Comment Request; Review of Productivity Statistics... Statistics (BLS) is responsible for publishing measures of labor productivity and multifactor productivity..., Office of Productivity and Technology, Bureau of Labor Statistics, Room 2150, 2 Massachusetts Avenue, NE...

  18. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    NASA Astrophysics Data System (ADS)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, the systematization of numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: the projection, the mesh and the projection-mesh methods. The possibilities of using these algorithms to solve practically important problems are investigated in detail. The disadvantages of the mesh algorithms, related to the necessity of calculating values of the kernels of the integral equations at fixed points, are identified. In practice, these kernels have integrable singularities, and calculation of their values is impossible. Thus, for applied problems related to solving Fredholm integral equations of the second kind, it is expedient to use not the mesh but the projection and projection-mesh randomized algorithms.
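
    A minimal example of a randomized functional algorithm of the kind classified above estimates the solution of a one-dimensional Fredholm equation of the second kind, u(x) = f(x) + lambda * int_0^1 K(x,y) u(y) dy, at a single point by Monte Carlo sampling of the Neumann series (a von Neumann-Ulam-type walk with Russian-roulette termination). The kernel, free term and lambda below are smooth illustrative choices; the paper's point is precisely that mesh-type variants break down when the kernel is singular.

```python
import numpy as np

def fredholm_point_estimate(x0, f, K, lam, n_samples=200_000, p_stop=0.3, seed=0):
    """Monte Carlo estimate of u(x0) for u(x) = f(x) + lam * int_0^1 K(x,y) u(y) dy.

    Each sample simulates a walk x0 -> x1 -> x2 -> ... with uniform transition
    density on [0, 1]; Russian roulette stops the walk with probability p_stop
    per step, and the surviving weight is divided by (1 - p_stop) to compensate.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        x, w, est = x0, 1.0, f(x0)
        while rng.random() > p_stop:
            y = rng.random()                          # uniform proposal, density 1
            w *= lam * K(x, y) / (1.0 - p_stop)
            est += w * f(y)
            x = y
        total += est
    return total / n_samples

# Smooth illustrative problem: K(x, y) = x*y, f(x) = x, lam = 0.5.
# The exact solution is u(x) = x / (1 - lam/3), so u(0.7) = 0.84.
K = lambda x, y: x * y
f = lambda x: x
u_hat = fredholm_point_estimate(0.7, f, K, lam=0.5)
print("MC estimate of u(0.7):", round(u_hat, 4), "  exact:", round(0.7 / (1 - 0.5 / 3), 4))
```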

  19. Simulation of Stochastic Processes by Coupled ODE-PDE

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2008-01-01

    A document discusses, as a new phenomenon, the emergence of randomness in solutions of coupled, fully deterministic ODE-PDE (ordinary differential equation-partial differential equation) systems due to failure of the Lipschitz condition. It is possible to exploit the special properties of ordinary differential equations (representing an arbitrarily chosen dynamical system) coupled with the corresponding Liouville equations (used to describe the evolution of initial uncertainties in terms of a joint probability distribution) in order to simulate stochastic processes with prescribed probability distributions. The important advantage of the proposed approach is that the simulation does not require a random-number generator.

  20. Freak waves in random oceanic sea states.

    PubMed

    Onorato, M; Osborne, A R; Serio, M; Bertone, S

    2001-06-18

    Freak waves are very large, rare events in a random ocean wave train. Here we study their generation in a random sea state characterized by the Joint North Sea Wave Project spectrum. We assume, to cubic order in nonlinearity, that the wave dynamics are governed by the nonlinear Schrödinger (NLS) equation. We show from extensive numerical simulations of the NLS equation how freak waves in a random sea state are more likely to occur for large values of the Phillips parameter alpha and the enhancement coefficient gamma. Comparison with linear simulations is also reported.
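
    Experiments of the kind described above can be reproduced qualitatively with a split-step Fourier integration of the focusing 1D NLS equation, i u_t + u_xx + 2|u|^2 u = 0 in nondimensional form, starting from a random-phase spectrum and monitoring the largest crest relative to the initial significant height. The Gaussian-shaped spectrum and all parameters below are illustrative placeholders, not the JONSWAP parameterization used in the paper.

```python
import numpy as np

# Split-step Fourier integration of the focusing 1D NLS equation
#     i u_t + u_xx + 2 |u|^2 u = 0
# from a random-phase, Gaussian-shaped spectrum (illustrative, not JONSWAP).
rng = np.random.default_rng(0)
n, L = 1024, 200.0
x = np.linspace(0.0, L, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)

k0, dk, amp = 1.0, 0.1, 0.05
spectrum = np.exp(-((k - k0) ** 2) / (2.0 * dk ** 2))
phases = np.exp(2j * np.pi * rng.random(n))
u = np.fft.ifft(spectrum * phases)
u *= amp / np.sqrt(np.mean(np.abs(u) ** 2))       # normalize the rms amplitude to amp

dt, n_steps = 0.01, 20_000
hs0 = 4.0 * np.std(np.abs(u))                     # proxy for significant height at t = 0
max_rel_amp = 0.0
for _ in range(n_steps):
    u = np.fft.ifft(np.exp(-1j * k ** 2 * dt) * np.fft.fft(u))   # linear step (exact in k-space)
    u *= np.exp(2j * dt * np.abs(u) ** 2)                        # nonlinear step (exact rotation)
    max_rel_amp = max(max_rel_amp, np.abs(u).max() / hs0)

print("largest crest relative to the initial significant height:", round(max_rel_amp, 2))
```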

  1. Universal shocks in the Wishart random-matrix ensemble.

    PubMed

    Blaizot, Jean-Paul; Nowak, Maciej A; Warchoł, Piotr

    2013-05-01

    We show that the derivative of the logarithm of the average characteristic polynomial of a diffusing Wishart matrix obeys an exact partial differential equation valid for an arbitrary value of N, the size of the matrix. In the large N limit, this equation generalizes the simple inviscid Burgers equation that has been obtained earlier for Hermitian or unitary matrices. The solution, through the method of characteristics, presents singularities that we relate to the precursors of shock formation in the Burgers equation. The finite N effects appear as a viscosity term in the Burgers equation. Using a scaling analysis of the complete equation for the characteristic polynomial, in the vicinity of the shocks, we recover in a simple way the universal Bessel oscillations (so-called hard-edge singularities) familiar in random-matrix theory.

  2. You Have What? Personality! Traits That Predict Leadership Styles for Elementary Administrators

    ERIC Educational Resources Information Center

    Garcia, Melinda

    2013-01-01

    This research explored relationships between followers' perceptions of elementary school principals' Big Five Personality Traits, using the "International Personality Item Pool" (IPIP) (Goldberg, 1999), and principals' Leadership Styles, using the "Multi-factor Leadership Questionnaire" (MLQ) (Bass & Avolio, 2004). A sample…

  3. Understanding the Supplemental Instruction Leader

    ERIC Educational Resources Information Center

    James, Adrian; Moore, Lori

    2018-01-01

    This article explored the learning styles and leadership styles of Supplemental Instruction (SI) leaders at Texas A&M University, and the impact of those preferences on recurring attendance to their sessions. The Learning Style Inventory, the Multifactor Leadership Questionnaire, and a demographic instrument were administered to SI leaders…

  4. Factorial Design: An Eight Factor Experiment Using Paper Helicopters

    NASA Technical Reports Server (NTRS)

    Kozma, Michael

    1996-01-01

    The goal of this paper is to present the analysis of the multi-factor experiment (factorial design) conducted in EG490, Junior Design at Loyola College in Maryland. The discussion of this paper concludes the experimental analysis and ties the individual class papers together.

  5. The Multiple Component Alternative for Gifted Education.

    ERIC Educational Resources Information Center

    Swassing, Ray

    1984-01-01

    The Multiple Component Model (MCM) of gifted education includes instruction which may overlap in literature, history, art, enrichment, languages, science, physics, math, music, and dance. The model rests on multifactored identification and requires systematic development and selection of components with ongoing feedback and evaluation. (CL)

  6. Multilinear Graph Embedding: Representation and Regularization for Images.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation remains a major challenge, especially when multiple latent factors underlie the data-generation process. Although multilinear models are widely used to parameterize multifactor images, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but captures local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE, to leverage manifold learning techniques in multilinear models. Our method theoretically links linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.

  7. A Simple and Computationally Efficient Sampling Approach to Covariate Adjustment for Multifactor Dimensionality Reduction Analysis of Epistasis

    PubMed Central

    Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.

    2010-01-01

    Epistasis or gene-gene interaction is a fundamental component of the genetic architecture of complex traits such as disease susceptibility. Multifactor dimensionality reduction (MDR) was developed as a nonparametric and model-free method to detect epistasis when there are no significant marginal genetic effects. However, in many studies of complex disease, other covariates like age of onset and smoking status could have a strong main effect and may potentially interfere with MDR's ability to achieve its goal. In this paper, we present a simple and computationally efficient sampling method to adjust for covariate effects in MDR. We use simulation to show that after adjustment, MDR has sufficient power to detect true gene-gene interactions. We also compare our method with the state-of-the-art technique in covariate adjustment. The results suggest that our proposed method performs similarly, but is more computationally efficient. We then apply this new method to an analysis of a population-based bladder cancer study in New Hampshire. PMID:20924193

  8. A Comparative Study on Multifactor Dimensionality Reduction Methods for Detecting Gene-Gene Interactions with the Survival Phenotype

    PubMed Central

    Lee, Seungyeoun; Kim, Yongkang; Kwon, Min-Seok; Park, Taesung

    2015-01-01

    Genome-wide association studies (GWAS) have extensively analyzed single SNP effects on a wide variety of common and complex diseases and found many genetic variants associated with diseases. However, there is still a large portion of the genetic variants left unexplained. This missing heritability problem might be due to the analytical strategy that limits analyses to only single SNPs. One of the possible approaches to the missing heritability problem is to consider identifying multi-SNP effects or gene-gene interactions. The multifactor dimensionality reduction method has been widely used to detect gene-gene interactions based on constructive induction, classifying high-dimensional genotype combinations into a one-dimensional variable with two attributes, high risk and low risk, for case-control studies. Many modifications of MDR have been proposed and have also been extended to the survival phenotype. In this study, we propose several extensions of MDR for the survival phenotype and compare the proposed extensions with the earlier MDR approaches through comprehensive simulation studies. PMID:26339630
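
    The core MDR step used throughout these studies can be written compactly: for a candidate SNP combination, each multi-locus genotype cell is labelled high or low risk according to whether its case:control ratio exceeds the overall ratio, and the resulting one-dimensional classifier is scored. The sketch below does this for all SNP pairs on synthetic case-control data, scoring by training-set balanced accuracy only; a real analysis would add cross-validation and permutation testing, and this is not any of the cited MDR implementations.

```python
import numpy as np
from itertools import combinations

def mdr_score(geno_pair, case):
    """Label each two-SNP genotype cell high/low risk and return balanced accuracy.

    geno_pair : (n_samples, 2) genotypes coded 0/1/2
    case      : (n_samples,) 0 = control, 1 = case
    """
    overall_ratio = case.sum() / max(1, (1 - case).sum())
    cells = geno_pair[:, 0] * 3 + geno_pair[:, 1]           # 9 possible genotype cells
    high_risk = np.zeros(9, dtype=bool)
    for c in range(9):
        in_cell = cells == c
        n_case, n_ctrl = case[in_cell].sum(), (1 - case[in_cell]).sum()
        high_risk[c] = n_case > overall_ratio * n_ctrl      # cell ratio above overall ratio
    pred = high_risk[cells].astype(int)
    sens = (pred[case == 1] == 1).mean()
    spec = (pred[case == 0] == 0).mean()
    return 0.5 * (sens + spec)

# Synthetic data: 1000 samples, 10 SNPs; SNPs 2 and 7 carry a pure interaction.
rng = np.random.default_rng(0)
n, n_snps = 1000, 10
G = rng.integers(0, 3, size=(n, n_snps))
logit = -0.5 + 1.2 * ((G[:, 2] > 0) & (G[:, 7] > 0))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

scores = {(i, j): mdr_score(G[:, [i, j]], y) for i, j in combinations(range(n_snps), 2)}
best = max(scores, key=scores.get)
print("best SNP pair:", best, "  balanced accuracy:", round(scores[best], 3))
```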

  9. A Study on the Assessment of Multi-Factors Affecting Urban Floods Using Satellite Image: A Case Study in Nakdong Basin, S. Korea

    NASA Astrophysics Data System (ADS)

    Kwak, Youngjoo; Kondoh, Akihiko

    2010-05-01

    Floods are also related to changes in socio-economic conditions and land use. Recently, floods have increased due to rapid urbanization and human activity in lowland areas. Therefore, integrated management of the total basin system is necessary to ensure a secure society. Typhoon ‘Rusa’ swept through eastern and southern parts of South Korea in 2002. This painful experience provided valuable knowledge that can be used to mitigate future flood hazards. The purpose of this study is to construct digital maps of the multiple factors related to urban flooding, concerning geomorphologic characteristics, land cover, and surface wetness. The parameters particularly consider geomorphologic functional units, geomorphologic parameters derived from a DEM (digital elevation model), and land use. The research area is the Nakdong River Basin in S. Korea. As a result of a preliminary analysis for the Pusan area, a vulnerability map and the flood-prone areas can be extracted by applying spatial analysis in a GIS (geographic information system).

  10. Cross-cultural comparisons of university students' science learning self-efficacy: structural relationships among factors within science learning self-efficacy

    NASA Astrophysics Data System (ADS)

    Wang, Ya-Ling; Liang, Jyh-Chong; Tsai, Chin-Chung

    2018-04-01

    Science learning self-efficacy could be regarded as a multi-factor belief which comprises different aspects such as cognitive skills, practical work, and everyday application. However, few studies have investigated the relationships among these factors that compose science learning self-efficacy. Also, culture may play an important role in explaining the relationships among these factors. Accordingly, this study aimed to investigate cultural differences in science learning self-efficacy and examine the relationships within factors constituting science learning self-efficacy by adopting a survey instrument for administration to students in the U.S. and Taiwan. A total of 218 university students (62.40% females) were surveyed in the U.S.A, and 224 university students (49.10% females) in Taiwan were also invited to take part in the study. The results of the structural equation modelling revealed cultural differences in the relationships among the factors of science learning self-efficacy. It was found that U.S. students' confidence in their ability to employ higher-order cognitive skills tended to promote their confidence in their ability to accomplish practical work, strengthening their academic self-efficacy. However, the aforementioned mediation was not found for the Taiwanese participants.

  11. Physical Principle for Generation of Randomness

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2009-01-01

    A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)

  12. Evaluation of the path integral for flow through random porous media

    NASA Astrophysics Data System (ADS)

    Westbroek, Marise J. E.; Coche, Gil-Arnaud; King, Peter R.; Vvedensky, Dimitri D.

    2018-04-01

    We present a path integral formulation of Darcy's equation in one dimension with random permeability described by a correlated multivariate lognormal distribution. This path integral is evaluated with the Markov chain Monte Carlo method to obtain pressure distributions, which are shown to agree with the solutions of the corresponding stochastic differential equation for Dirichlet and Neumann boundary conditions. The extension of our approach to flow through random media in two and three dimensions is discussed.
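
    The underlying stochastic problem can also be sampled directly for comparison: generate correlated lognormal permeability fields (exponential covariance via a Cholesky factor), solve the 1D Darcy equation d/dx(K dp/dx) = 0 with Dirichlet boundary values by finite differences, and accumulate pressure statistics. This plain Monte Carlo sketch, with illustrative correlation length and variance, is a reference point for, not an implementation of, the path-integral/MCMC evaluation in the paper.

```python
import numpy as np

def correlation_cholesky(n, corr_len, sigma):
    """Cholesky factor of an exponential covariance on n grid points in [0, 1]."""
    x = np.linspace(0.0, 1.0, n)
    cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    return np.linalg.cholesky(cov + 1e-10 * np.eye(n))

def darcy_pressure(K, p_left=1.0, p_right=0.0):
    """Solve d/dx ( K(x) dp/dx ) = 0 on [0, 1] with Dirichlet boundary values."""
    n = K.size
    Kf = 2.0 * K[:-1] * K[1:] / (K[:-1] + K[1:])    # harmonic average on cell faces
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0], A[-1, -1] = 1.0, 1.0
    b[0], b[-1] = p_left, p_right
    for i in range(1, n - 1):
        A[i, i - 1] = Kf[i - 1]
        A[i, i + 1] = Kf[i]
        A[i, i] = -(Kf[i - 1] + Kf[i])
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
n, n_mc = 101, 2000
L = correlation_cholesky(n, corr_len=0.2, sigma=1.0)
samples = np.empty((n_mc, n))
for m in range(n_mc):
    permeability = np.exp(L @ rng.standard_normal(n))       # correlated lognormal field
    samples[m] = darcy_pressure(permeability)

mid = n // 2
print("pressure at x = 0.5:  mean %.3f  std %.3f"
      % (samples[:, mid].mean(), samples[:, mid].std()))
```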

  13. The Effectiveness of Circular Equating as a Criterion for Evaluating Equating.

    ERIC Educational Resources Information Center

    Wang, Tianyou; Hanson, Bradley A.; Harris, Deborah J.

    Equating a test form to itself through a chain of equatings, commonly referred to as circular equating, has been widely used as a criterion to evaluate the adequacy of equating. This paper uses both analytical methods and simulation methods to show that this criterion is in general invalid in serving this purpose. For the random groups design done…

  14. Usable Multi-factor Authentication and Risk-based Authorization

    DTIC Science & Technology

    2015-06-01

    acceptance. In the previous section we described user studies that explored risks perceived by individuals using online banking and credit card purchases... iTunes purchases. We note that the fingerprint scanners in the current experiment are very different from what would be available in future. However

  15. Exploring the Relationships between Principals' Life Experiences and Transformational Leadership Behaviours

    ERIC Educational Resources Information Center

    Nash, Steve; Bangert, Art

    2014-01-01

    The primary objective of this research study was to explore the relationships between principals' life experiences and their transformational leadership behaviours. Over 212 public school principals completed both the lifetime leadership inventory (LLI) and the multifactor leadership questionnaire (MLQ). Exploratory and confirmatory factor…

  16. Obesity, hypertension and genetic variation in the TIGER Study

    USDA-ARS?s Scientific Manuscript database

    Obesity and hypertension are multifactoral conditions in which the onset and severity of the conditions are influenced by the interplay of genetic and environmental factors. We hypothesize that multiple genes and environmental factors account for a significant amount of variation in BMI and blood pr...

  17. Emotional Intelligence and the Career Choice Process.

    ERIC Educational Resources Information Center

    Emmerling, Robert J.; Cherniss, Cary

    2003-01-01

    Emotional intelligence as conceptualized by Mayer and Salovey consists of perceiving emotions, using emotions to facilitate thoughts, understanding emotions, and managing emotions to enhance personal growth. The Multifactor Emotional Intelligence Scale has proven a valid and reliable measure that can be used to explore the implications of…

  18. Is there a genetic solution to bovine respiratory disease complex?

    USDA-ARS?s Scientific Manuscript database

    Bovine respiratory disease complex (BRDC) is a complex multi-factor disease, which increases costs and reduces revenue from feedlot cattle. Multiple stressors and pathogens (viral and bacterial) have been implicated in the etiology of BRDC, therefore multiple approaches will be needed to evaluate a...

  19. Experimental and numerical analysis of the constitutive equation of rubber composites reinforced with random ceramic particle

    NASA Astrophysics Data System (ADS)

    Luo, D. M.; Xie, Y.; Su, X. R.; Zhou, Y. L.

    2018-01-01

    Based on four classical models, Mooney-Rivlin (M-R), Yeoh, Ogden and Neo-Hookean (N-H), a large-deformation strain-energy constitutive equation for rubber composites reinforced with random ceramic particles is proposed in this paper from the standpoint of continuum mechanics. By decoupling the interaction between the matrix and the random particles, the strain energy of each phase is obtained to derive the explicit constitutive equation for the rubber composites. The results of uniaxial tensile, pure shear and equibiaxial tensile tests are simulated by the nonlinear finite element method on the ANSYS platform. The finite element results are compared with experiments, the material parameters are determined by fitting results from the different test conditions, and the influence of the radius of the random ceramic particles on the effective mechanical properties is analyzed.
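
    For the two-parameter Mooney-Rivlin baseline mentioned above, the incompressible uniaxial response has the closed form P(lambda) = 2(lambda - lambda^-2)(C10 + C01/lambda) for nominal stress versus stretch, so the constants can be fitted to test data by linear least squares. The sketch below does this for hypothetical data; it does not reproduce the particle-reinforced constitutive equation derived in the paper.

```python
import numpy as np

def mr_nominal_stress(stretch, c10, c01):
    """Nominal (engineering) stress of an incompressible Mooney-Rivlin solid
    in uniaxial tension: P = 2 (lambda - lambda**-2) (C10 + C01 / lambda)."""
    lam = np.asarray(stretch, dtype=float)
    return 2.0 * (lam - lam**-2) * (c10 + c01 / lam)

# Hypothetical uniaxial test data (stretch, nominal stress in MPa).
lam_data = np.array([1.1, 1.3, 1.6, 2.0, 2.5, 3.0])
P_data = np.array([0.35, 0.86, 1.45, 2.10, 2.85, 3.60])

# P is linear in (C10, C01): build the design matrix and solve least squares.
g = 2.0 * (lam_data - lam_data**-2)
X = np.column_stack([g, g / lam_data])
(c10, c01), *_ = np.linalg.lstsq(X, P_data, rcond=None)
print(f"fitted C10 = {c10:.3f} MPa, C01 = {c01:.3f} MPa")
print("model stress at lambda = 2.0:", round(mr_nominal_stress(2.0, c10, c01), 3), "MPa")
```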

  20. A class of generalized Ginzburg-Landau equations with random switching

    NASA Astrophysics Data System (ADS)

    Wu, Zheng; Yin, George; Lei, Dongxia

    2018-09-01

    This paper focuses on a class of generalized Ginzburg-Landau equations with random switching. In our formulation, the nonlinear term is allowed to have higher polynomial growth rate than the usual cubic polynomials. The random switching is modeled by a continuous-time Markov chain with a finite state space. First, an explicit solution is obtained. Then properties such as stochastic-ultimate boundedness and permanence of the solution processes are investigated. Finally, two-time-scale models are examined leading to a reduction of complexity.
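
    To make the switching mechanism concrete, the following sketch samples a path of a continuous-time Markov chain on a small finite state space and uses it to switch a model coefficient over time. The generator matrix and the regime-dependent coefficient values are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch: sample a path of a finite-state continuous-time Markov chain
# and use it to switch a model coefficient, as in Markov-modulated equations.
import numpy as np

rng = np.random.default_rng(1)
Q = np.array([[-0.5, 0.5],       # hypothetical generator matrix (rates per unit time)
              [ 1.0, -1.0]])
coeff = {0: 1.0, 1: -2.0}        # hypothetical coefficient value in each regime

def switching_path(Q, t_end, state=0):
    """Return the jump times and visited states of one CTMC sample path on [0, t_end]."""
    t, times, states = 0.0, [0.0], [state]
    while t < t_end:
        rate = -Q[state, state]
        t += rng.exponential(1.0 / rate)            # exponential holding time in the current state
        probs = Q[state].copy(); probs[state] = 0.0
        state = rng.choice(len(Q), p=probs / rate)  # jump to a new state
        times.append(t); states.append(state)       # last jump may fall just beyond t_end
    return times, states

times, states = switching_path(Q, t_end=10.0)
for t, s in zip(times, states):
    print(f"t = {t:5.2f}  regime = {s}  coefficient = {coeff[s]}")
```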

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krasnobaeva, L. A., E-mail: kla1983@mail.ru; Siberian State Medical University Moscowski Trakt 2, Tomsk, 634050; Shapovalov, A. V.

    Within the formalism of the Fokker–Planck equation, the influence of a nonstationary external force, a random force, and dissipation effects on the dynamics of local conformational perturbations (kinks) propagating along the DNA molecule is investigated. Such waves play an important role in the regulation of key biological processes in living systems at the molecular level. A modified sine-Gordon equation, which simulates the rotational oscillations of bases in one of the DNA chains, is used as the dynamic model of DNA. The equation for the evolution of the kink momentum is obtained in the form of a stochastic differential equation in the Stratonovich sense within the framework of the well-known McLaughlin and Scott energy approach. The corresponding Fokker–Planck equation for the momentum distribution function coincides with the equation describing the Ornstein–Uhlenbeck process with a regular nonstationary external force. The influence of nonlinear stochastic effects on the kink dynamics is considered with the help of the nonlinear Fokker–Planck equation with a drift coefficient dependent on the first moment of the kink momentum distribution function. Expressions are derived for the average value and variance of the momentum. Examples are considered which demonstrate the influence of the external regular and random forces on the evolution of the average value and variance of the kink momentum.
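
    A minimal numerical counterpart of the momentum equation described above is an Ornstein–Uhlenbeck process with a time-dependent regular force, integrated by the Euler–Maruyama scheme. The damping constant, noise amplitude, and forcing below are illustrative assumptions rather than the parameters of the DNA kink model.

```python
# Minimal sketch: Euler-Maruyama integration of an Ornstein-Uhlenbeck-type momentum
# equation  dp = (-beta*p + F(t)) dt + sigma dW,  with ensemble mean and variance.
import numpy as np

rng = np.random.default_rng(2)
beta, sigma = 0.8, 0.3                    # hypothetical damping and noise amplitude
force = lambda t: 0.5 * np.sin(0.4 * t)   # hypothetical nonstationary regular force

dt, n_steps, n_paths = 0.01, 5000, 2000
p = np.zeros(n_paths)
for k in range(n_steps):
    t = k * dt
    p += (-beta * p + force(t)) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

print(f"mean momentum = {p.mean():.4f}")
print(f"variance      = {p.var():.4f}  (stationary value sigma^2/(2*beta) = {sigma**2 / (2 * beta):.4f})")
```

    The regular force shifts the ensemble mean while the stationary variance is set by the balance of noise and damping, which is the behaviour the derived moment expressions describe.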

  2. Occupational therapy practitioners' perceptions of rehabilitation managers' leadership styles and the outcomes of leadership.

    PubMed

    Snodgrass, Jeff; Douthitt, Shannon; Ellis, Rachel; Wade, Shelly; Plemons, Josh

    2008-01-01

    The purpose of this research was to serve as a pilot study to investigate the association between occupational therapy practitioners' perceptions of rehabilitation managers' leadership styles and the outcomes of leadership. Data for this study were collected using the Multifactor Leadership Questionnaire Form 5X and a self-designed demographic questionnaire. The study working sample included 73 occupational therapy practitioners. Major findings from the study indicate that overall, transformational, and transactional leadership styles are associated with leadership outcomes. Transformational leadership had a significant (p < 0.01) positive association with the leadership outcomes, whereas transactional leadership had a significant (p < 0.01) negative association with the leadership outcomes. The contingent reward leadership attribute (although belonging to the transactional leadership construct) was found to be positively associated with leadership outcomes, similar to the transformational leadership constructs. The results of this research suggest that transformational leadership styles have a positive association with leadership outcomes, whereas transactional leadership styles have a negative association, excluding the positive transactional contingent reward attribute. A larger, random sample is recommended as a follow-up study.

  3. Examining Dimensions of Self-Efficacy for Writing

    ERIC Educational Resources Information Center

    Bruning, Roger; Dempsey, Michael; Kauffman, Douglas F.; McKim, Courtney; Zumbrunn, Sharon

    2013-01-01

    A multifactor perspective on writing self-efficacy was examined in 2 studies. Three factors were proposed--self-efficacy for writing ideation, writing conventions, and writing self-regulation--and a scale constructed to reflect these factors. In Study 1, middle school students (N = 697) completed the Self-Efficacy for Writing Scale (SEWS), along…

  4. Do Leadership Styles Influence Organizational Health? A Study in Educational Organizations

    ERIC Educational Resources Information Center

    Toprak, Mustafa; Inandi, Bulent; Colak, Ahmet Levent

    2015-01-01

    This research aims to investigate the effect of leadership styles of school principals on organizational health. Causal-comparative research model was used to analyze the relationships between leadership types and organizational health. For data collection, a Likert type Multifactor Leadership scale questionnaire and Organizational Health scale…

  5. Transformational Leadership and the Leadership Performance of Oregon Secondary School Principals

    ERIC Educational Resources Information Center

    Breaker, Jason Lee

    2009-01-01

    A study of 118 secondary school principals in Oregon was conducted to examine the relationship of transformational leadership to secondary school principals' leadership performance. This study measured the transformational leadership of secondary school principals in Oregon using the "Multifactor Leadership Questionnaire (5X-Short)"…

  6. Appropriate Use Policy | High-Performance Computing | NREL

    Science.gov Websites

    Describes the appropriate use policy for users of the National Renewable Energy Laboratory (NREL) High-Performance Computing (HPC) resources, whether affiliated with a government agency, National Laboratory, university, or private entity, including applicable intellectual property terms and access through a multifactor token, which may be a physical token or a virtual token used with a one-time password.

  7. The Negative Testing Effect and Multifactor Account

    ERIC Educational Resources Information Center

    Peterson, Daniel J.; Mulligan, Neil W.

    2013-01-01

    Across 3 experiments, we investigated the factors that dictate when taking a test improves subsequent memory performance (the "testing effect"). In Experiment 1, participants retrieving a set of targets during a retrieval practice phase ultimately recalled fewer of those targets compared with a group of participants who studied the…

  8. Worker Traits Training Unit. MA Handbook No. 314.

    ERIC Educational Resources Information Center

    Manpower Administration (DOL), Washington, DC.

    This training unit provides persons involved in employment interviewing, vocational counseling, curriculum planning, and other manpower activities with a multifactor approach for obtaining information from an individual and relating the data to job requirements. It is intended to result in the development of the bridge between client potential and…

  9. Organizational Deviance and Multi-Factor Leadership

    ERIC Educational Resources Information Center

    Aksu, Ali

    2016-01-01

    Organizational deviant behaviors can be defined as behaviors that deviate from standards and run counter to the organization's expectations. Because such behaviors are thought to damage the organization, reducing them to a minimum is necessary for a healthy organization. The aim of this research is…

  10. A Multifactor Approach to Research in Instructional Technology.

    ERIC Educational Resources Information Center

    Ragan, Tillman J.

    In a field such as instructional design, explanations of educational outcomes must necessarily consider multiple input variables. To adequately understand the contribution made by the independent variables, it is helpful to have a visual conception of how the input variables interrelate. Two variable models are adequately represented by a two…

  11. Multivariate generalized multifactor dimensionality reduction to detect gene-gene interactions

    PubMed Central

    2013-01-01

    Background Recently, one of the greatest challenges in genome-wide association studies is to detect gene-gene and/or gene-environment interactions for common complex human diseases. Ritchie et al. (2001) proposed multifactor dimensionality reduction (MDR) method for interaction analysis. MDR is a combinatorial approach to reduce multi-locus genotypes into high-risk and low-risk groups. Although MDR has been widely used for case-control studies with binary phenotypes, several extensions have been proposed. One of these methods, a generalized MDR (GMDR) proposed by Lou et al. (2007), allows adjusting for covariates and applying to both dichotomous and continuous phenotypes. GMDR uses the residual score of a generalized linear model of phenotypes to assign either high-risk or low-risk group, while MDR uses the ratio of cases to controls. Methods In this study, we propose multivariate GMDR, an extension of GMDR for multivariate phenotypes. Jointly analysing correlated multivariate phenotypes may have more power to detect susceptible genes and gene-gene interactions. We construct generalized estimating equations (GEE) with multivariate phenotypes to extend generalized linear models. Using the score vectors from GEE we discriminate high-risk from low-risk groups. We applied the multivariate GMDR method to the blood pressure data of the 7,546 subjects from the Korean Association Resource study: systolic blood pressure (SBP) and diastolic blood pressure (DBP). We compare the results of multivariate GMDR for SBP and DBP to the results from separate univariate GMDR for SBP and DBP, respectively. We also applied the multivariate GMDR method to the repeatedly measured hypertension status from 5,466 subjects and compared its result with those of univariate GMDR at each time point. Results Results from the univariate GMDR and multivariate GMDR in two-locus model with both blood pressures and hypertension phenotypes indicate best combinations of SNPs whose interaction has significant association with risk for high blood pressures or hypertension. Although the test balanced accuracy (BA) of multivariate analysis was not always greater than that of univariate analysis, the multivariate BAs were more stable with smaller standard deviations. Conclusions In this study, we have developed multivariate GMDR method using GEE approach. It is useful to use multivariate GMDR with correlated multiple phenotypes of interests. PMID:24565370
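
    The core MDR step described above, collapsing multi-locus genotypes into high-risk and low-risk groups, can be sketched as follows. The synthetic genotypes, the case/control threshold, and the balanced-accuracy check are illustrative assumptions; GMDR would replace the case/control ratio with residual scores from a generalized linear model or GEE.

```python
# Minimal sketch of the MDR idea: label each two-locus genotype combination as
# high-risk or low-risk from the case/control ratio, then score the rule by
# balanced accuracy.  Data are synthetic; GMDR would use residual scores instead.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
snp1 = rng.integers(0, 3, n)           # genotypes coded 0/1/2
snp2 = rng.integers(0, 3, n)
logit = -0.5 + 0.8 * ((snp1 == 2) & (snp2 == 0))   # hypothetical interaction effect
case = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

overall_ratio = case.sum() / (~case).sum()
high_risk = np.zeros((3, 3), dtype=bool)
for g1 in range(3):
    for g2 in range(3):
        cell = (snp1 == g1) & (snp2 == g2)
        n_case, n_ctrl = case[cell].sum(), (~case[cell]).sum()
        high_risk[g1, g2] = (n_case / max(n_ctrl, 1)) > overall_ratio   # MDR high-risk rule

pred = high_risk[snp1, snp2]
sens = (pred & case).sum() / case.sum()
spec = (~pred & ~case).sum() / (~case).sum()
print(f"balanced accuracy = {(sens + spec) / 2:.3f}")
```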

  12. Vector solution for the mean electromagnetic fields in a layer of random particles

    NASA Technical Reports Server (NTRS)

    Lang, R. H.; Seker, S. S.; Levine, D. M.

    1986-01-01

    The mean electromagnetic fields are found in a layer of randomly oriented particles lying over a half space. A matrix-dyadic formulation of Maxwell's equations is employed in conjunction with the Foldy-Lax approximation to obtain equations for the mean fields. A two variable perturbation procedure, valid in the limit of small fractional volume, is then used to derive uncoupled equations for the slowly varying amplitudes of the mean wave. These equations are solved to obtain explicit expressions for the mean electromagnetic fields in the slab region in the general case of arbitrarily oriented particles and arbitrary polarization of the incident radiation. Numerical examples are given for the application to remote sensing of vegetation.

  13. A comparison of numerical solutions of partial differential equations with probabilistic and possibilistic parameters for the quantification of uncertainty in subsurface solute transport.

    PubMed

    Zhang, Kejiang; Achari, Gopal; Li, Hua

    2009-11-03

    Traditionally, uncertainty in parameters is represented by probabilistic distributions and incorporated into groundwater flow and contaminant transport models. With the advent of newer uncertainty theories, it is now understood that stochastic methods cannot properly represent non-random uncertainties. In the groundwater flow and contaminant transport equations, uncertainty in some parameters may be random, whereas that of others may be non-random. The objective of this paper is to develop a fuzzy-stochastic partial differential equation (FSPDE) model to simulate conditions where both random and non-random uncertainties are involved in groundwater flow and solute transport. Three potential solution techniques, namely (a) transforming a probability distribution to a possibility distribution (Method I), so that the FSPDE becomes a fuzzy partial differential equation (FPDE); (b) transforming a possibility distribution to a probability distribution (Method II), so that the FSPDE becomes a stochastic partial differential equation (SPDE); and (c) combining Monte Carlo methods with FPDE solution techniques (Method III), are proposed and compared. The effects of these three methods on the predictive results are investigated using two case studies. The results show that the predictions obtained from Method II are a specific case of those obtained from Method I. When an exact probabilistic result is needed, Method II is suggested. As the loss or gain of information during a probability-possibility (or vice versa) transformation cannot be quantified, its influence on the predictive results is not known. Thus, Method III should probably be preferred for risk assessments.
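
    As one concrete ingredient of the probability-to-possibility step (Method I), the sketch below applies a common transformation that sorts the probabilities in decreasing order and assigns each value the corresponding tail sum. Whether the paper uses exactly this transformation is not stated here, so treat the function and the discretized parameter distribution as assumed illustrations.

```python
# Minimal sketch: convert a discrete probability distribution into a possibility
# distribution by assigning each value the tail sum of the decreasingly sorted
# probabilities (ties broken by sort order).  The example distribution is assumed.
import numpy as np

def probability_to_possibility(p):
    """Return a possibility distribution consistent with the probabilities p (which sum to 1)."""
    p = np.asarray(p, dtype=float)
    order = np.argsort(p)[::-1]                  # indices sorted by decreasing probability
    tails = np.cumsum(p[order][::-1])[::-1]      # tail sums over the sorted values
    pi = np.empty_like(p)
    pi[order] = tails                            # the most probable value gets possibility 1.0
    return pi

# Hypothetical discretized distribution of, e.g., a hydraulic-conductivity parameter.
probs = np.array([0.05, 0.15, 0.40, 0.25, 0.10, 0.05])
print(np.round(probability_to_possibility(probs), 3))
```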

  14. A new fundamental model of moving particle for reinterpreting Schroedinger equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Umar, Muhamad Darwis

    2012-06-20

    The Schroedinger equation is studied based on the hypothesis that every particle must move randomly within a quantum-sized volume. In addition to this random motion, every particle can undergo relative motion through the movement of its quantum-sized volume, and the two motions can coincide. In the proposed model, the random motion is an intrinsic property of the particle. Every change in the speed of the intrinsic random motion and/or in the velocity of the translational motion of the quantum-sized volume represents a transition between two states, and a change in the speed of the intrinsic random motion generates a diffusion process from the Brownian-motion perspective. The diffusion process can take place as backward and forward processes and represents a dissipative system. To derive the Schroedinger equation from this hypothesis, the time operator introduced by Nelson is used. From a fundamental analysis, we find that, naturally, Newton's law F = ma (in vector form) should be viewed not as an external force, but simply as a description of both the presence of intrinsic random motion and the change of the particle energy.

  15. First-Principles Modeling Of Electromagnetic Scattering By Discrete and Discretely Heterogeneous Random Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.

    2016-01-01

    A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell's equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell- Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell-Lorentz equations, we trace the development of the first principles formalism enabling accurate calculations of monochromatic and quasi-monochromatic scattering by static and randomly varying multiparticle groups. We illustrate how this general framework can be coupled with state-of-the-art computer solvers of the Maxwell equations and applied to direct modeling of electromagnetic scattering by representative random multi-particle groups with arbitrary packing densities. This first-principles modeling yields general physical insights unavailable with phenomenological approaches. We discuss how the first-order-scattering approximation, the radiative transfer theory, and the theory of weak localization of electromagnetic waves can be derived as immediate corollaries of the Maxwell equations for very specific and well-defined kinds of particulate medium. These recent developments confirm the mesoscopic origin of the radiative transfer, weak localization, and effective-medium regimes and help evaluate the numerical accuracy of widely used approximate modeling methodologies.

  16. First-principles modeling of electromagnetic scattering by discrete and discretely heterogeneous random media.

    PubMed

    Mishchenko, Michael I; Dlugach, Janna M; Yurkin, Maxim A; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R Lee; Travis, Larry D; Yang, Ping; Zakharova, Nadezhda T

    2016-05-16

    A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ , or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell's equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell-Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell-Lorentz equations, we trace the development of the first-principles formalism enabling accurate calculations of monochromatic and quasi-monochromatic scattering by static and randomly varying multiparticle groups. We illustrate how this general framework can be coupled with state-of-the-art computer solvers of the Maxwell equations and applied to direct modeling of electromagnetic scattering by representative random multi-particle groups with arbitrary packing densities. This first-principles modeling yields general physical insights unavailable with phenomenological approaches. We discuss how the first-order-scattering approximation, the radiative transfer theory, and the theory of weak localization of electromagnetic waves can be derived as immediate corollaries of the Maxwell equations for very specific and well-defined kinds of particulate medium. These recent developments confirm the mesoscopic origin of the radiative transfer, weak localization, and effective-medium regimes and help evaluate the numerical accuracy of widely used approximate modeling methodologies.

  17. First-principles modeling of electromagnetic scattering by discrete and discretely heterogeneous random media

    PubMed Central

    Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.

    2018-01-01

    A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell’s equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell–Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell–Lorentz equations, we trace the development of the first-principles formalism enabling accurate calculations of monochromatic and quasi-monochromatic scattering by static and randomly varying multiparticle groups. We illustrate how this general framework can be coupled with state-of-the-art computer solvers of the Maxwell equations and applied to direct modeling of electromagnetic scattering by representative random multi-particle groups with arbitrary packing densities. This first-principles modeling yields general physical insights unavailable with phenomenological approaches. We discuss how the first-order-scattering approximation, the radiative transfer theory, and the theory of weak localization of electromagnetic waves can be derived as immediate corollaries of the Maxwell equations for very specific and well-defined kinds of particulate medium. These recent developments confirm the mesoscopic origin of the radiative transfer, weak localization, and effective-medium regimes and help evaluate the numerical accuracy of widely used approximate modeling methodologies. PMID:29657355

  18. Dual Dynamically Orthogonal approximation of incompressible Navier Stokes equations with random boundary conditions

    NASA Astrophysics Data System (ADS)

    Musharbash, Eleonora; Nobile, Fabio

    2018-02-01

    In this paper we propose a method for the strong imposition of random Dirichlet boundary conditions in the Dynamical Low Rank (DLR) approximation of parabolic PDEs and, in particular, incompressible Navier Stokes equations. We show that the DLR variational principle can be set in the constrained manifold of all S rank random fields with a prescribed value on the boundary, expressed in low rank format, with rank smaller then S. We characterize the tangent space to the constrained manifold by means of a Dual Dynamically Orthogonal (Dual DO) formulation, in which the stochastic modes are kept orthonormal and the deterministic modes satisfy suitable boundary conditions, consistent with the original problem. The Dual DO formulation is also convenient to include the incompressibility constraint, when dealing with incompressible Navier Stokes equations. We show the performance of the proposed Dual DO approximation on two numerical test cases: the classical benchmark of a laminar flow around a cylinder with random inflow velocity, and a biomedical application for simulating blood flow in realistic carotid artery reconstructed from MRI data with random inflow conditions coming from Doppler measurements.
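
    To visualize the low-rank ansatz behind dynamical low-rank and DO approximations, the sketch below builds a rank-S separated representation u(x, ω) ≈ ū(x) + Σ U_i(x) Y_i(ω) from Monte Carlo snapshots via a truncated SVD. This static illustration, with an assumed random field, does not implement the Dual DO evolution equations or the boundary-constrained manifold used in the paper.

```python
# Minimal sketch: rank-S separated representation of a random field from snapshots,
# u(x, omega_k) ~ mean(x) + sum_i U_i(x) * Y_i(omega_k), obtained by truncated SVD.
# The random field (a sine wave with random amplitude and phase) is an assumed example.
import numpy as np

rng = np.random.default_rng(4)
nx, n_samples, rank = 200, 500, 3
x = np.linspace(0.0, 1.0, nx)

amp = 1.0 + 0.2 * rng.normal(size=n_samples)
phase = 0.3 * rng.normal(size=n_samples)
snapshots = amp[None, :] * np.sin(2 * np.pi * (x[:, None] + phase[None, :]))  # shape (nx, n_samples)

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

modes = U[:, :rank]                      # deterministic modes U_i(x)
coeffs = s[:rank, None] * Vt[:rank]      # stochastic coefficients Y_i(omega_k)
approx = mean_field + modes @ coeffs

rel_err = np.linalg.norm(approx - snapshots) / np.linalg.norm(snapshots)
print(f"relative error of the rank-{rank} representation: {rel_err:.2e}")
```

    In the Dual DO formulation the stochastic coefficients are kept orthonormal instead, and the deterministic modes absorb the boundary data, but the separated structure is the same.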

  19. Chaotic gas turbine subject to augmented Lorenz equations.

    PubMed

    Cho, Kenichiro; Miyano, Takaya; Toriyama, Toshiyuki

    2012-09-01

    Inspired by the chaotic waterwheel invented by Malkus and Howard about 40 years ago, we have developed a gas turbine that randomly switches the sense of rotation between clockwise and counterclockwise. The nondimensionalized expressions for the equations of motion of our turbine are represented as a starlike network of many Lorenz subsystems sharing the angular velocity of the turbine rotor as the central node, referred to as augmented Lorenz equations. We show qualitative similarities between the statistical properties of the angular velocity of the turbine rotor and the velocity field of large-scale wind in turbulent Rayleigh-Bénard convection reported by Sreenivasan et al. [Phys. Rev. E 65, 056306 (2002)]. Our equations of motion achieve the random reversal of the turbine rotor through the stochastic resonance of the angular velocity in a double-well potential and the force applied by rapidly oscillating fields. These results suggest that the augmented Lorenz model is applicable as a dynamical model for the random reversal of turbulent large-scale wind through cessation.
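
    A rough numerical sketch of the "star-like network of Lorenz subsystems sharing a central node" idea is given below. The coupling form, parameter values, and number of subsystems are illustrative assumptions and should not be read as the authors' augmented Lorenz equations.

```python
# Minimal sketch (assumed coupling form): N Lorenz-type subsystems that all share one
# central variable x, loosely mimicking a star network with the rotor angular velocity
# as the central node.  This is NOT the paper's exact augmented Lorenz model.
import numpy as np

sigma, b, r = 10.0, 8.0 / 3.0, 28.0
N, dt, n_steps = 5, 0.002, 50000

def rhs(state):
    x, y, z = state[0], state[1:1 + N], state[1 + N:]
    dx = sigma * (y.mean() - x)              # central node driven by the mean of the y_i
    dy = r * x - y - x * z
    dz = x * y - b * z
    return np.concatenate(([dx], dy, dz))

rng = np.random.default_rng(5)
state = 0.1 * rng.normal(size=1 + 2 * N)
signs = []
for _ in range(n_steps):                     # classical 4th-order Runge-Kutta stepping
    k1 = rhs(state); k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2); k4 = rhs(state + dt * k3)
    state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    signs.append(np.sign(state[0]))

reversals = np.sum(np.abs(np.diff(signs)) > 0)
print(f"sign reversals of the central variable over the run: {reversals}")
```

    The sign changes of the shared variable play the role of the random reversals of the turbine's sense of rotation described in the abstract.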

  20. Interaction of the sonic boom with atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Rusak, Zvi; Cole, Julian D.

    1994-01-01

    Theoretical research was carried out to study the effect of free-stream turbulence on sonic boom pressure fields. A new transonic small-disturbance model to analyze the interactions of random disturbances with a weak shock was developed. The model equation has an extended form of the classic small-disturbance equation for unsteady transonic aerodynamics. An alternative approach shows that the pressure field may be described by an equation that has an extended form of the classic nonlinear acoustics equation that describes the propagation of sound beams with narrow angular spectrum. The model shows that diffraction effects, nonlinear steepening effects, focusing and caustic effects and random induced vorticity fluctuations interact simultaneously to determine the development of the shock wave in space and time and the pressure field behind it. A finite-difference algorithm to solve the mixed type elliptic-hyperbolic flows around the shock wave was also developed. Numerical calculations of shock wave interactions with various deterministic and random fluctuations will be presented in a future report.

  1. Macroscopic damping model for structural dynamics with random polycrystalline configurations

    NASA Astrophysics Data System (ADS)

    Yang, Yantao; Cui, Junzhi; Yu, Yifan; Xiang, Meizhen

    2018-06-01

    In this paper, a macroscopic damping model for the dynamical behavior of structures with random polycrystalline configurations at micro-nano scales is established. First, the global motion equation of a crystal is decomposed into a set of motion equations with independent single degrees of freedom (SDOF) along normal discrete modes, and damping behavior is then introduced into each SDOF motion. Through the interpolation of discrete modes, a continuous representation of the damping effects for the crystal is obtained. Second, the expression for the damping coefficient is derived from the energy conservation law, and an approximate formula for the damping coefficient is given. Next, the continuous damping coefficient for a polycrystalline cluster is expressed, the continuous dynamical equation with a damping term is obtained, and concrete damping coefficients for a polycrystalline Cu sample are shown. Finally, by using a statistical two-scale homogenization method, the macroscopic homogenized dynamical equation containing a damping term is set up for structures with random polycrystalline configurations at micro-nano scales.

  2. Dynamical behavior of the random field on the pulsating and snaking solitons in cubic-quintic complex Ginzburg-Landau equation

    NASA Astrophysics Data System (ADS)

    Bakhtiar, Nurizatul Syarfinas Ahmad; Abdullah, Farah Aini; Hasan, Yahya Abu

    2017-08-01

    In this paper, we consider the dynamical behaviour of a random field on the pulsating and snaking solitons in dissipative systems described by the one-dimensional cubic-quintic complex Ginzburg-Landau equation (cqCGLE). The dynamical behaviour of the random field was simulated by adding a random field to the initial pulse. We then solve the equation numerically, fixing the initial amplitude profile for the pulsating and snaking solitons without losing any generality. In order to create the random field, we choose 0 ≤ ɛ ≤ 1.0. As a result, multiple soliton trains are formed when the random field is applied to a pulse-like initial profile for the parameters of the pulsating and snaking solitons. The results also show the effects of varying the random field on the transient energy peaks of pulsating and snaking solitons.
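
    The "add a random field to the initial pulse" step can be written down directly. The sketch below perturbs a sech-shaped initial amplitude profile with a uniform random field of strength ε; the profile and amplitude are assumed stand-ins for the paper's initial conditions, and the cqCGLE integrator itself is not shown.

```python
# Minimal sketch: perturb a pulse-like initial profile with a random field of
# strength eps, as a stand-in for the initial condition fed into a cqCGLE solver.
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(-20.0, 20.0, 1024)
eps = 0.3                                   # random-field strength, 0 <= eps <= 1.0

pulse = 1.5 / np.cosh(x)                    # assumed pulse-like initial amplitude profile
random_field = eps * (2.0 * rng.random(x.size) - 1.0)   # uniform noise in [-eps, eps]
u0 = (pulse + random_field).astype(complex)

print(f"max |u0| = {np.abs(u0).max():.3f}, mean perturbation energy = {(random_field**2).mean():.4f}")
```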

  3. The Effects of Legumes on Metabolic Features, Insulin Resistance and Hepatic Function Tests in Women with Central Obesity: A Randomized Controlled Trial

    PubMed Central

    Alizadeh, Mohammad; Gharaaghaji, Rasool; Gargari, Bahram Pourghassem

    2014-01-01

    Background: The effect of a high-legume hypocaloric diet on metabolic features in women is unclear. This study provided an opportunity to determine the effects of a high-legume diet on metabolic features in women who consumed high amounts of legumes in the pre-study period. Methods: In this randomized controlled trial, after 2 weeks of a run-in period on an isocaloric diet, 42 premenopausal women with central obesity were randomly assigned into two groups for 6 weeks: (1) a hypocaloric diet enriched in legumes (HDEL) and (2) a hypocaloric diet without legumes (HDWL). The following variables were assessed before the intervention and 3 and 6 weeks after its beginning: waist circumference (WC), systolic blood pressure (SBP), diastolic blood pressure (DBP), fasting serum concentrations of triglyceride (TG), high-density lipoprotein cholesterol, fasting blood sugar (FBS), insulin, homeostasis model of insulin resistance (HOMA-IR), alanine aminotransferase (ALT) and aspartate aminotransferase (AST). We used a multifactor nested repeated-measures multivariate analysis of variance model and t-tests for statistical analysis. Results: HDEL and HDWL significantly reduced the WC. HDEL significantly reduced the SBP and TG. Both HDEL and HDWL significantly increased the fasting concentration of insulin and HOMA-IR after 3 weeks, but their significant effects on insulin disappeared after 6 weeks, and HDEL returned HOMA-IR to basal levels in the subsequent 3 weeks. In the HDEL group, the percentage decrease in AST and ALT between the 3rd and 6th weeks was significant. In the HDWL group, the percentage increase in SBP, DBP, FBS and TG between the 3rd and 6th weeks was significant. Conclusions: The study indicated beneficial effects of hypocaloric legumes on metabolic features. PMID:25013690

  4. A Multi-Factor Analysis of Job Satisfaction among School Nurses

    ERIC Educational Resources Information Center

    Foley, Marcia; Lee, Julie; Wilson, Lori; Cureton, Virginia Young; Canham, Daryl

    2004-01-01

    Although job satisfaction has been widely studied among registered nurses working in traditional health care settings, little is known about the job-related values and perceptions of nurses working in school systems. Job satisfaction is linked to lower levels of job-related stress, burnout, and career abandonment among nurses. This study evaluated…

  5. Investigating Teachers' Organizational Socialization Levels and Perceptions about Leadership Styles of Their Principals

    ERIC Educational Resources Information Center

    Kadi, Aysegül

    2015-01-01

    The purpose of this study is to investigate teachers' organizational socialization levels and perceptions about leadership styles of their principals. Research was conducted with 361 teachers. Research design is determined as survey and correlational. Multi-Factor Leadership Scale originally was developed by Bass (1999) and adapted to Turkish…

  6. Authentic Leadership--Is It More than Emotional Intelligence?

    ERIC Educational Resources Information Center

    Duncan, Phyllis; Green, Mark; Gergen, Esther; Ecung, Wenonah

    2017-01-01

    One of the newest theories to gain widespread interest is authentic leadership. Part of the rationale for developing a model and subsequent instrument to measure authentic leadership was a concern that the more popular theory, the full range model of leadership and its instrument, the Multifactor Leadership Questionnaire (MLQ) (Bass & Avolio,…

  7. Emotional Enhancement Effect of Memory: Removing the Influence of Cognitive Factors

    ERIC Educational Resources Information Center

    Sommer, Tobias; Glascher, Jan; Moritz, Steffen; Buchel, Christian

    2008-01-01

    According to the modulation hypothesis, arousal is the crucial factor in the emotional enhancement of memory (EEM). However, the multifactor theory of the EEM recently proposed that cognitive characteristics of emotional stimuli, e.g., relatedness and distinctiveness, also play an important role. The current study aimed to investigate the…

  8. An ecological classification system for the central hardwoods region: The Hoosier National Forest

    Treesearch

    James E. Van Kley; George R. Parker

    1993-01-01

    In this study, a multifactor ecological classification system using vegetation, soil characteristics, and physiography was developed for the landscape of the Hoosier National Forest in southern Indiana. Measurements of ground flora, saplings, and canopy trees from selected stands older than 80 years were subjected to TWINSPAN classification and DECORANA ordination....

  9. Multifactor Screener in the 2000 National Health Interview Survey Cancer Control Supplement: Scoring Procedures

    Cancer.gov

    Scoring procedures were developed to convert a respondent's screener responses to estimates of individual dietary intake for percentage energy from fat, grams of fiber, and servings of fruits and vegetables, using USDA's 1994-96 Continuing Survey of Food Intakes of Individuals (CSFII 94-96) dietary recall data.

  10. Technical Notes on the Multifactor Method of Elementary School Closing.

    ERIC Educational Resources Information Center

    Puleo, Vincent T.

    This report provides preliminary technical information on a method for analyzing the factors involved in the closing of elementary schools. Included is a presentation of data and a brief discussion bearing on descriptive statistics, reliability, and validity. An intercorrelation matrix is also examined. The method employs 9 factors that have a…

  11. Motivating Peak Performance: Leadership Behaviors That Stimulate Employee Motivation and Performance

    ERIC Educational Resources Information Center

    Webb, Kerry

    2007-01-01

    The impact of leader behaviors on motivation levels of employees was examined in this study. Two hundred twenty-three vice presidents and chief officers from 104 member colleges and universities in the Council for Christian Colleges and Universities were sampled. Leaders were administered the Multifactor Leadership Questionnaire (MLQ-rater…

  12. Faculty Member Perceptions of Academic Leadership Styles at Private Colleges

    ERIC Educational Resources Information Center

    Gidman, Lori Kathleen

    2013-01-01

    The leadership style of academic leaders was studied through the eyes of faculty members. This empirical study looked at faculty perceptions of academic leadership with the use of a numerical survey as the basis for observation. Faculty members at six private liberal arts institutions completed the Multifactor Leadership Questionnaire (MLQ) in…

  13. The Relationship between School Principals' Leadership Styles and Collective Teacher Efficacy

    ERIC Educational Resources Information Center

    Akan, Durdagi

    2013-01-01

    This study aims to determine the relationship between school administrators' leadership styles and the collective teacher efficacy based on teachers' perceptions. In line with this objective, the multifactor leadership style scale and the collective teacher efficacy scale were applied on 223 teachers who were working in the province of Erzurum.…

  14. Predicting plant species diversity in a longleaf pine landscape

    Treesearch

    L. Katherine Kirkman; P. Charles Goebel; Brian J. Palik; Larry T. West

    2004-01-01

    In this study, we used a hierarchical, multifactor ecological classification system to examine how spatial patterns of biodiversity develop in one of the most species-rich ecosystems in North America, the fire-maintained longleaf pine-wiregrass ecosystem and associated depressional wetlands and riparian forests. Our goal was to determine which landscape features are...

  15. Goal Oriented and Risk Taking Behavior: The Roles of Multiple Systems for Caucasian and Arab-American Adolescents

    ERIC Educational Resources Information Center

    Tynan, Joshua J.; Somers, Cheryl L.; Gleason, Jamie H.; Markman, Barry S.; Yoon, Jina

    2015-01-01

    With Bronfenbrenner's (1977) ecological theory and other multifactor models (e.g. Pianta, 1999; Prinstein, Boergers, & Spirito, 2001) underlying this study design, the purpose was to examine, simultaneously, key variables in multiple life contexts (microsystem, mesosystem, exosystem levels) for their individual and combined roles in predicting…

  16. A Study of Secondary School Principals' Leadership Styles and School Dropout Rates

    ERIC Educational Resources Information Center

    Baggerly-Hinojosa, Barbara

    2012-01-01

    This study examined the relationship between the leadership styles of secondary school principals, measured by the self-report "Multifactor Leadership Questionnaire 5X short" (Bass & Avolio, 2000) and the school's dropout rates, as reported by the Texas Education Agency in the Academic Excellence Indicator System (AEIS) report while…

  17. Can Multifactor Models of Teaching Improve Teacher Effectiveness Measures?

    ERIC Educational Resources Information Center

    Lazarev, Valeriy; Newman, Denis

    2014-01-01

    NCLB waiver requirements have led to development of teacher evaluation systems, in which student growth is a significant component. Recent empirical research has been focusing on metrics of student growth--value-added scores in particular--and their relationship to other metrics. An extensive set of recent teacher-evaluation studies conducted by…

  18. Estimating multi-factor cumulative watershed effects on fish populations with an individual-based model

    Treesearch

    Bret C. Harvey; Steven F. Railsback

    2007-01-01

    While the concept of cumulative effects is prominent in legislation governing environmental management, the ability to estimate cumulative effects remains limited. One reason for this limitation is that important natural resources such as fish populations may exhibit complex responses to changes in environmental conditions, particularly to alteration of multiple...

  19. Bureaucratic Abuse and the False Dichotomy between Intentional and Unintentional Child Injuries.

    ERIC Educational Resources Information Center

    Kotch, Jonathan B.; And Others

    This paper examines the arbitrary distinctions between intentional and unintentional child injuries, noting that a careful review of the literature of both child abuse and unintentional child injury revealed similarities among the risk factors associated with the two outcomes. A single, multifactor model of injury etiology, the ecologic model, is…

  20. Gravitational lensing by eigenvalue distributions of random matrix models

    NASA Astrophysics Data System (ADS)

    Martínez Alonso, Luis; Medina, Elena

    2018-05-01

    We propose to use eigenvalue densities of unitary random matrix ensembles as mass distributions in gravitational lensing. The corresponding lens equations reduce to algebraic equations in the complex plane which can be treated analytically. We prove that these models can be applied to describe lensing by systems of edge-on galaxies. We illustrate our analysis with the Gaussian and the quartic unitary matrix ensembles.

  1. Stochastic analysis of three-dimensional flow in a bounded domain

    USGS Publications Warehouse

    Naff, R.L.; Vecchia, A.V.

    1986-01-01

    A commonly accepted first-order approximation of the equation for steady state flow in a fully saturated spatially random medium has the form of Poisson's equation. This form allows for the advantageous use of Green's functions to solve for the random output (hydraulic heads) in terms of a convolution over the random input (the logarithm of hydraulic conductivity). A solution for steady state three-dimensional flow in an aquifer bounded above and below is presented; consideration of these boundaries is made possible by use of Green's functions to solve Poisson's equation. Within the bounded domain the medium hydraulic conductivity is assumed to be a second-order stationary random process as represented by a simple three-dimensional covariance function. Upper and lower boundaries are taken to be no-flow boundaries; the mean flow vector lies entirely in the horizontal dimensions. The resulting hydraulic head covariance function exhibits nonstationary effects resulting from the imposition of boundary conditions. Comparisons are made with existing infinite domain solutions.
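
    The first-order (Poisson-equation) structure mentioned above can be illustrated numerically: with a log-conductivity fluctuation f and a uniform mean head gradient J along x (so the mean head is h0 = -J x), the head perturbation h' satisfies ∇²h' = J ∂f/∂x. The sketch below solves this on a doubly periodic grid with FFTs, which sidesteps the paper's no-flow boundaries and Green's-function treatment, so it is only a schematic of the first-order relation; the assumed spectrum of f is illustrative.

```python
# Minimal sketch: first-order head perturbation for steady flow in a random log-K field,
# solving  laplacian(h') = J * d f / d x  on a periodic grid with FFTs.
# Periodic boundaries are an assumption; the paper treats a bounded (no-flow) domain.
import numpy as np

rng = np.random.default_rng(7)
n, L, J = 128, 100.0, 0.01                       # grid size, domain length, mean head gradient

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2

# Gaussian random log-conductivity fluctuation with an assumed smooth spectrum.
spectrum = 1.0 / (1.0 + 10.0 * k2)
noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
f = np.real(np.fft.ifft2(np.sqrt(spectrum) * noise))
f -= f.mean()

# Solve laplacian(h') = J * df/dx in Fourier space:  -k^2 * h_hat = J * (i kx) * f_hat.
rhs_hat = J * (1j * kx) * np.fft.fft2(f)
k2_safe = np.where(k2 == 0.0, 1.0, k2)
h_hat = -rhs_hat / k2_safe
h_hat[0, 0] = 0.0                                # zero-mean perturbation
h_prime = np.real(np.fft.ifft2(h_hat))

print(f"std of log-K fluctuation: {f.std():.3f}, std of head perturbation: {h_prime.std():.4e}")
```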

  2. Two-time scale subordination in physical processes with long-term memory

    NASA Astrophysics Data System (ADS)

    Stanislavsky, Aleksander; Weron, Karina

    2008-03-01

    We describe dynamical processes in continuous media with a long-term memory. Our consideration is based on a stochastic subordination idea and concerns two physical examples in detail. First we study the temporal evolution of the species concentration in a trapping reaction in which a diffusing reactant is surrounded by a sea of randomly moving traps. The analysis uses the random-variable formalism of anomalous diffusive processes. We find that the empirical trapping-reaction law, according to which the reactant concentration decreases in time as a product of an exponential and a stretched exponential function, can be explained by a two-time-scale subordination of random processes. Another example is connected with a state equation for continuous media with memory. If the pressure and the density of a medium are subordinated in two different random processes, then the ordinary state equation becomes fractional with two time scales. This allows one to arrive at a Bagley-Torvik type of state equation.

  3. Fractional Stochastic Field Theory

    NASA Astrophysics Data System (ADS)

    Honkonen, Juha

    2018-02-01

    Models describing evolution of physical, chemical, biological, social and financial processes are often formulated as differential equations with the understanding that they are large-scale equations for averages of quantities describing intrinsically random processes. Explicit account of randomness may lead to significant changes in the asymptotic behaviour (anomalous scaling) in such models especially in low spatial dimensions, which in many cases may be captured with the use of the renormalization group. Anomalous scaling and memory effects may also be introduced with the use of fractional derivatives and fractional noise. Construction of renormalized stochastic field theory with fractional derivatives and fractional noise in the underlying stochastic differential equations and master equations and the interplay between fluctuation-induced and built-in anomalous scaling behaviour is reviewed and discussed.

  4. Fluid limit of nonintegrable continuous-time random walks in terms of fractional differential equations.

    PubMed

    Sánchez, R; Carreras, B A; van Milligen, B Ph

    2005-01-01

    The fluid limit of a recently introduced family of nonintegrable (nonlinear) continuous-time random walks is derived in terms of fractional differential equations. In this limit, it is shown that the formalism allows for the modeling of the interaction between multiple transport mechanisms with not only disparate spatial scales but also different temporal scales. For this reason, the resulting fluid equations may find application in the study of a large number of nonlinear multiscale transport problems, ranging from the study of self-organized criticality to the modeling of turbulent transport in fluids and plasmas.
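
    As a small companion to the fluid-limit discussion, the sketch below runs a plain Monte Carlo continuous-time random walk with heavy-tailed (Pareto) waiting times and Gaussian jumps and reports how the mean-squared displacement grows. The waiting-time exponent and jump statistics are illustrative assumptions, and the walk is the linear, integrable kind rather than the nonintegrable family treated in the paper.

```python
# Minimal sketch: Monte Carlo CTRW with Pareto waiting times (exponent alpha < 1)
# and Gaussian jumps; the mean-squared displacement then grows roughly like t**alpha.
import numpy as np

rng = np.random.default_rng(8)
alpha, n_walkers = 0.7, 5000
t_obs = np.array([10.0, 100.0, 1000.0])          # observation times

positions = np.zeros((n_walkers, t_obs.size))
for w in range(n_walkers):
    t, x = 0.0, 0.0
    pos = np.zeros(t_obs.size)
    while t < t_obs[-1]:
        wait = (1.0 - rng.random()) ** (-1.0 / alpha)   # Pareto(alpha) waiting time, >= 1
        pos[(t_obs >= t) & (t_obs < t + wait)] = x      # walker sits at x during [t, t + wait)
        t += wait
        x += rng.normal()                               # Gaussian jump after the waiting time
    positions[w] = pos

msd = (positions ** 2).mean(axis=0)
for t, m in zip(t_obs, msd):
    print(f"t = {t:7.1f}   MSD = {m:9.2f}   MSD / t**alpha = {m / t ** alpha:.3f}")
```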

  5. Evolution of the concentration PDF in random environments modeled by global random walk

    NASA Astrophysics Data System (ADS)

    Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter

    2013-04-01

    The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and speeds up the computation by orders of magnitude. The approach is illustrated for the transport of passive scalars in heterogeneous aquifers, with hydraulic conductivity modeled as a random field.
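
    The basic global random walk move, spreading all particles that sit on a lattice site in a single numerical operation, can be shown in one dimension. The diffusion coefficient, grid, and binomial splitting rule below are illustrative assumptions for a passive scalar and do not reproduce the full space-concentration PDF algorithm of the abstract.

```python
# Minimal sketch of a 1-D global random walk (GRW): all particles on a lattice site are
# redistributed at once with binomial draws, instead of moving particles one by one.
import numpy as np

rng = np.random.default_rng(9)
nx, dx, dt, D = 201, 1.0, 0.1, 1.0
r = D * dt / dx**2                      # jump probability to each neighbour (must be <= 0.5)

n = np.zeros(nx, dtype=np.int64)
n[nx // 2] = 1_000_000                  # all particles start at the centre site

def grw_step(n):
    jump_left = rng.binomial(n, r)                       # one draw per site, not per particle
    jump_right = rng.binomial(n - jump_left, r / (1.0 - r))
    stay = n - jump_left - jump_right
    new = stay.copy()
    new[:-1] += jump_left[1:]                            # particles arriving from the right
    new[1:] += jump_right[:-1]                           # particles arriving from the left
    return new

n_steps = 500
for _ in range(n_steps):
    n = grw_step(n)

x = (np.arange(nx) - nx // 2) * dx
var = (n * x**2).sum() / n.sum()
print(f"sample variance = {var:.1f}, theoretical 2*D*t = {2 * D * n_steps * dt:.1f}")
```

    Because each site is updated with a handful of binomial draws regardless of how many particles occupy it, the cost no longer grows linearly with the number of particles, which is the point made in the abstract.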

  6. Parallel capillary-tube-based extension of thermoacoustic theory for random porous media.

    PubMed

    Roh, Heui-Seol; Raspet, Richard; Bass, Henry E

    2007-03-01

    Thermoacoustic theory is extended to stacks made of random bulk media. Characteristics of the porous stack such as the tortuosity and dynamic shape factors are introduced into the thermoacoustic wave equation in the low reduced frequency approximation. Basic thermoacoustic equations for a bulk porous medium are formulated analogously to the equations for a single pore. Use of different dynamic shape factors for the viscous and thermal effects is adopted and scaling using the dynamic shape factors and tortuosity is demonstrated. Comparisons of the calculated and experimentally derived thermoacoustic properties of reticulated vitreous carbon and aluminum foam show good agreement. A consistent mathematical model of sound propagation in a random porous medium with an imposed temperature is developed. This treatment leads to an expression for the coefficient of the temperature gradient in terms of scaled cylindrical thermoviscous functions.

  7. Mathematical nonlinear optics

    NASA Astrophysics Data System (ADS)

    McLaughlin, David W.

    1995-08-01

    The principal investigator, together with post-doctoral fellows Tetsuji Ueda and Xiao Wang, several graduate students, and colleagues, has applied the modern mathematical theory of nonlinear waves to problems in nonlinear optics and to equations directly relevant to nonlinear optics. Projects included the interaction of laser light with nematic liquid crystals and the chaotic, homoclinic, small-dispersion, and random behavior of solutions of the nonlinear Schroedinger equation. In project 1, the extremely strong nonlinear response of a continuous-wave laser beam in a nematic liquid crystal medium has produced striking undulation and filamentation of the laser beam, which has been observed experimentally and explained theoretically. In project 2, qualitative properties of the nonlinear Schroedinger equation (which is the fundamental equation for nonlinear optics) have been identified and studied. These properties include optical shocking behavior in the limit of very small dispersion, chaotic and homoclinic behavior in discretizations of the partial differential equation, and random behavior.
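
    Since the nonlinear Schroedinger equation is the central equation referred to here, a compact split-step Fourier integrator is sketched below. The grid, single-soliton initial condition, and step sizes are illustrative assumptions, and the sketch does not address the small-dispersion or random regimes studied in the projects.

```python
# Minimal sketch: split-step Fourier integration of the focusing NLS
#   i u_t + 0.5 u_xx + |u|^2 u = 0,
# started from a single-soliton profile u(x, 0) = sech(x).
import numpy as np

nx, L, dt, n_steps = 512, 40.0, 0.001, 2000
x = np.linspace(-L / 2, L / 2, nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(nx, d=L / nx)

u = 1.0 / np.cosh(x) + 0j                      # soliton initial condition
linear_half = np.exp(-0.5j * k**2 * (dt / 2))  # half-step of the linear (dispersive) part

for _ in range(n_steps):
    u = np.fft.ifft(linear_half * np.fft.fft(u))   # half linear step
    u *= np.exp(1j * np.abs(u) ** 2 * dt)          # full nonlinear step
    u = np.fft.ifft(linear_half * np.fft.fft(u))   # half linear step

# A single soliton should keep its profile; check the conserved L2 norm and the peak.
norm = (np.abs(u) ** 2).sum() * (L / nx)
print(f"L2 norm = {norm:.4f} (initial ~2.0), peak |u| = {np.abs(u).max():.4f} (initial 1.0)")
```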

  8. An Economical Analytical Equation for the Integrated Vertical Overlap of Cumulus and Stratus

    NASA Astrophysics Data System (ADS)

    Park, Sungsu

    2018-03-01

    By extending the previously proposed heuristic parameterization, the author derived an analytical equation computing the overlap areas between the precipitation (or radiation) areas and the cloud areas in a cloud system consisting of cumulus and stratus. The new analytical equation is accurate and much more efficient than the previous heuristic equation, which suffers from the truncation error in association with the digitalization of the overlap areas. Global test simulations with the new analytical formula in an offline mode showed that the maximum cumulus overlap simulates more surface precipitation flux than the random cumulus overlap. On the other hand, the maximum stratus overlap simulates less surface precipitation flux than random stratus overlap, which is due to the increase in the evaporation rate of convective precipitation from the random to maximum stratus overlap. The independent precipitation approximation (IPA) marginally decreases the surface precipitation flux, implying that IPA works well with other parameterizations. In contrast to the net production rate of precipitation and surface precipitation flux that increase when the cumulus and stratus are maximally and randomly overlapped, respectively, the global mean net radiative cooling and longwave cloud radiative forcing (LWCF) increase when the cumulus and stratus are randomly overlapped. On the global average, the vertical cloud overlap exerts larger impacts on the precipitation flux than on the radiation flux. The radiation scheme taking the subgrid variability of water vapor between the cloud and clear portions into account substantially increases the global mean LWCF in tropical deep convection and midlatitude storm track regions.
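
    The difference between random and maximum overlap referred to above reduces to simple algebra for two cloud fractions. The sketch below computes the combined cover of a cumulus layer and a stratus layer under both assumptions; the paper's full analytical overlap between precipitation (or radiation) areas and cloud areas is more involved, so this is only the basic ingredient.

```python
# Minimal sketch: total projected cloud cover of a cumulus layer and a stratus layer
# under the maximum-overlap and random-overlap assumptions.
def combined_cover(cumulus_frac, stratus_frac):
    maximum = max(cumulus_frac, stratus_frac)                            # layers stacked directly above each other
    random_ = cumulus_frac + stratus_frac - cumulus_frac * stratus_frac  # statistically independent layers
    return maximum, random_

cu, st = 0.3, 0.5   # hypothetical cumulus and stratus fractions in a grid box
mx, rnd = combined_cover(cu, st)
print(f"maximum overlap: {mx:.2f}, random overlap: {rnd:.2f}")
```

    Maximum overlap exposes less combined cloud area, which is consistent with the abstract's finding that the overlap assumption changes how much precipitation evaporates and how much longwave forcing the cloud system exerts.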

  9. A new method for reconstruction of solar irradiance

    NASA Astrophysics Data System (ADS)

    Privalsky, Victor

    2018-07-01

    The purpose of this research is to show how time series should be reconstructed, using as an example data on the total solar irradiance (TSI) of the Earth and sunspot numbers (SSN) since 1749. The traditional approach through regression equation(s) is designed for time-invariant vectors of random variables and is not applicable to time series, which are random functions of time. The autoregressive reconstruction (ARR) method suggested here requires fitting a multivariate stochastic difference equation to the target/proxy time series. The reconstruction is done through the scalar equation for the target time series with the white noise term excluded. The time series approach is shown to provide a better reconstruction of TSI than the correlation/regression method. A reconstruction criterion is introduced which allows one to define in advance the achievable level of success in the reconstruction. The conclusion is that time series, including the total solar irradiance, cannot be reconstructed properly if the data are not treated as sample records of random processes and analyzed in both the time and frequency domains.
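
    The reconstruction idea sketched above, fitting a multivariate autoregressive model to the joint target/proxy record and then running the target equation without its noise term, can be illustrated with a bivariate AR(1) model estimated by least squares. The synthetic data, the model order, and the assumption that the proxy remains fully observed are all illustrative and do not reproduce the ARR method's details.

```python
# Minimal sketch: autoregressive reconstruction of a target series x from a proxy y.
# Fit  x_t = a*x_{t-1} + b*y_{t-1} + e_t  on a calibration period by least squares,
# then reconstruct x over the full period with the noise term excluded.
import numpy as np

rng = np.random.default_rng(10)
n = 600
y = np.zeros(n); x = np.zeros(n)
for t in range(1, n):                          # synthetic coupled AR(1) "truth"
    y[t] = 0.8 * y[t - 1] + rng.normal()
    x[t] = 0.6 * x[t - 1] + 0.3 * y[t - 1] + 0.2 * rng.normal()

calib = slice(400, n)                          # pretend the target is observed only here
A = np.column_stack([x[calib][:-1], y[calib][:-1]])
a, b = np.linalg.lstsq(A, x[calib][1:], rcond=None)[0]

x_rec = np.zeros(n)
x_rec[0] = x[calib].mean()                     # start from the calibration mean
for t in range(1, n):
    x_rec[t] = a * x_rec[t - 1] + b * y[t - 1] # deterministic part only (noise excluded)

corr = np.corrcoef(x[:400], x_rec[:400])[0, 1]
print(f"fitted a = {a:.2f}, b = {b:.2f}; correlation with truth over the reconstruction period: {corr:.2f}")
```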

  10. Conference Attendance Patterns of Outdoor Orientation Program Staff at Four-Year Colleges in the United States

    ERIC Educational Resources Information Center

    Bell, Brent J.

    2009-01-01

    One purpose of professional conference attendance is to enhance social support. Intentionally fostering this support is an important political aim that should be developed. Although many multifactor definitions of social support exist (Cobb, 1979; Cohen & Syme, 1985; Kahn, 1979; Shaefer et al., 1981; Weiss, 1974), all distinguish between an…

  11. Revisiting a Cognitive Framework for Test Design: Applications for a Computerized Perceptual Speed Test.

    ERIC Educational Resources Information Center

    Alderton, David L.

    This paper highlights the need for a systematic, content-aware, and theoretically based approach to test design. The cognitive components approach is endorsed and applied to the development of a computerized perceptual speed test. The psychometric literature is reviewed and shows that every major multi-factor theory includes a clerical/perceptual…

  12. Bullying in Adolescent Residential Care: The Influence of the Physical and Social Residential Care Environment

    ERIC Educational Resources Information Center

    Sekol, Ivana

    2016-01-01

    Background: To date, no study examined possible contributions of environmental factors to bullying and victimization in adolescent residential care facilities. Objective: By testing one part of the Multifactor Model of Bullying in Secure Setting (MMBSS; Ireland in "Int J Adolesc Med Health" 24(1):63-68, 2012), this research examined the…

  13. Academic Administrator Leadership Styles and the Impact on Faculty Job Satisfaction

    ERIC Educational Resources Information Center

    Bateh, Justin; Heyliger, Wilton

    2014-01-01

    This article examines the impact of three leadership styles as a predictor of job satisfaction in a state university system. The Multifactor Leadership Questionnaire was used to identify the leadership style of an administrator as perceived by faculty members. Spector's Job Satisfaction Survey was used to assess a faculty member's level of job…

  14. The Impact of Mentor Leadership Styles on First-Year Adult Student Retention

    ERIC Educational Resources Information Center

    Smith Staley, Charlesetta

    2012-01-01

    This quantitative study explored the leadership styles of mentors for retained first-year adult students to analyze whether the prevalent style had a higher impact on first-year adult student retention. The Multifactor Leadership Questionnaire (MLQ) 5x was used to collect data on the mentors' leadership styles from the perspective of retained…

  15. A Preliminary Study for a New Model of Sense of Community

    ERIC Educational Resources Information Center

    Tartaglia, Stefano

    2006-01-01

    Although Sense of Community (SOC) is usually defined as a multidimensional construct, most SOC scales are unidimensional. To reduce the split between theory and empirical research, the present work identifies a multifactor structure for the Italian Sense of Community Scale (ISCS) that has already been validated as a unitary index of SOC. This…

  16. Creativity in the Structure of Professionalism of a Higher School Teacher

    ERIC Educational Resources Information Center

    Gladilina, Irina Petrovna

    2016-01-01

    In science, owing to the absence of strict and exact criteria for differentiating between creative and non-creative human activities, there is no fully adequate definition of the notion of "creativity," even though the matter has been addressed by many scholars. The multifactor nature of the science of creativity allows interpreting the essence of…

  17. Multifactor Screener in the 2000 National Health Interview Survey Cancer Control Supplement: Definition of Acceptable Dietary Data Values

    Cancer.gov

    We used the U.S. Department of Agriculture's (USDA) 1994-96 Continuing Survey of Food Intakes of Individuals (CSFII) data on reported intakes over two days of 24-hour recall to make judgments about reasonable frequencies of consumption that were reported on a per day basis.

  18. Evaluating the metagenome of two sampling locations in the nasal cavity of cattle with bovine respiratory disease complex

    USDA-ARS?s Scientific Manuscript database

    Bovine respiratory disease complex (BRDC) is a multi-factor disease, and disease incidence may be associated with an animal’s commensal microbiota (metagenome). Evaluation of the animal’s resident microbiota in the nasal cavity may help us to understand the impact of the metagenome on incidence of ...

  19. Evaluating the microbiome of two sampling locations in the nasal cavity of cattle with bovine respiratory disease complex (BRDC)

    USDA-ARS?s Scientific Manuscript database

    Bovine respiratory disease complex (BRDC) is a multi-factor disease, and disease incidence may be associated with an animal’s commensal microbiota (metagenome). Evaluation of the animal’s resident microbiota in the nasal cavity may help us to understand the impact of the metagenome on incidence of ...

  20. The Effects of Transformational Leadership and the Sense of Calling on Job Burnout among Special Education Teachers

    ERIC Educational Resources Information Center

    Gong, Tao; Zimmerli, Laurie; Hoffer, Harry E.

    2013-01-01

    This article examines the effects of transformational leadership of supervisors and the sense of calling on job burnout among special education teachers. A total of 256 special education teachers completed the Maslach Burnout Inventory and rated their supervisors on the Multifactor Leadership Questionnaire. The results reveal that transformational…

  1. Identifying the Best Buys in U.S. Higher Education

    ERIC Educational Resources Information Center

    Eff, E. Anthon; Klein, Christopher C.; Kyle, Reuben

    2012-01-01

    Which U.S. institutions of higher education offer the best value to consumers? To answer this question, we evaluate U.S. institutions relative to a data envelopment analysis (DEA) multi-factor frontier based on 2000-2001 data for 1,179 4-year institutions. The resulting DEA "best buy" scores allow the ranking of institutions by a…

  2. Inside The Zone of Proximal Development: Validating A Multifactor Model Of Learning Potential With Gifted Students And Their Peers

    ERIC Educational Resources Information Center

    Kanevsky, Lannie; Geake, John

    2004-01-01

    Kanevsky (1995b) proposed a model of learning potential based on Vygotsky's notions of "good learning" and the zone of proximal development. This study investigated the contributions of general knowledge, information processing efficiency, and metacognition to differences in the learning potential of gifted and nongifted students.…

  3. Shape of a ponytail and the statistical physics of hair fiber bundles.

    PubMed

    Goldstein, Raymond E; Warren, Patrick B; Ball, Robin C

    2012-02-17

    A general continuum theory for the distribution of hairs in a bundle is developed, treating individual fibers as elastic filaments with random intrinsic curvatures. Applying this formalism to the iconic problem of the ponytail, the combined effects of bending elasticity, gravity, and orientational disorder are recast as a differential equation for the envelope of the bundle, in which the compressibility enters through an "equation of state." From this, we identify the balance of forces in various regions of the ponytail, extract a remarkably simple equation of state from laboratory measurements of human ponytails, and relate the pressure to the measured random curvatures of individual hairs.

  4. Improved estimation of random vibration loads in launch vehicles

    NASA Technical Reports Server (NTRS)

    Mehta, R.; Erwin, E.; Suryanarayan, S.; Krishna, Murali M. R.

    1993-01-01

    Random vibration induced load is an important component of the total design load environment for payload and launch vehicle components and their support structures. The current approach to random vibration load estimation is based, particularly at the preliminary design stage, on the use of Miles' equation which assumes a single degree-of-freedom (DOF) system and white noise excitation. This paper examines the implications of the use of multi-DOF system models and response calculation based on numerical integration using the actual excitation spectra for random vibration load estimation. The analytical study presented considers a two-DOF system and brings out the effects of modal mass, damping and frequency ratios on the random vibration load factor. The results indicate that load estimates based on the Miles' equation can be significantly different from the more accurate estimates based on multi-DOF models.
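    For reference, Miles' equation for the RMS acceleration of a single-degree-of-freedom system under a flat base-excitation PSD is g_rms = sqrt((π/2)·f_n·Q·W(f_n)), where W(f_n) is the input acceleration PSD at the natural frequency. A short sketch (the numerical values below are hypothetical, not from the study):

```python
import math

def miles_grms(natural_freq_hz, q_factor, input_psd_g2_per_hz):
    """Miles' equation: RMS acceleration (in g) of a single-DOF system
    excited by a flat (white) base-acceleration PSD evaluated at f_n."""
    return math.sqrt(0.5 * math.pi * natural_freq_hz * q_factor * input_psd_g2_per_hz)

# Hypothetical component: f_n = 120 Hz, Q = 10, input PSD = 0.04 g^2/Hz at f_n
print(miles_grms(120.0, 10.0, 0.04))   # ~8.7 g RMS; a 3-sigma design load of ~26 g
```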

  5. Regularity of random attractors for fractional stochastic reaction-diffusion equations on Rn

    NASA Astrophysics Data System (ADS)

    Gu, Anhui; Li, Dingshi; Wang, Bixiang; Yang, Han

    2018-06-01

    We investigate the regularity of random attractors for the non-autonomous non-local fractional stochastic reaction-diffusion equations in H^s(R^n) with s ∈ (0, 1). We prove the existence and uniqueness of the tempered random attractor that is compact in H^s(R^n) and attracts all tempered random subsets of L^2(R^n) with respect to the norm of H^s(R^n). The main difficulty is to show the pullback asymptotic compactness of solutions in H^s(R^n) due to the noncompactness of Sobolev embeddings on unbounded domains and the almost sure nondifferentiability of the sample paths of the Wiener process. We establish such compactness by the ideas of uniform tail-estimates and the spectral decomposition of solutions in bounded domains.

  6. Explicit equilibria in a kinetic model of gambling

    NASA Astrophysics Data System (ADS)

    Bassetti, F.; Toscani, G.

    2010-06-01

    We introduce and discuss a nonlinear kinetic equation of Boltzmann type which describes the evolution of wealth in a pure gambling process, where the entire sum of the wealths of two agents is up for gambling and randomly shared between the agents. For this equation the analytical form of the steady states is found for various realizations of the random fraction of the sum that is shared between the agents. Among others, the exponential distribution appears as the steady state in the case of a uniformly distributed random fraction, while the Gamma distribution appears for a random fraction which is Beta distributed. The case in which the gambling game is only conservative-in-the-mean is shown to lead to an explicit heavy-tailed distribution.
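    A Monte Carlo sketch of the pure gambling process described above, assuming a uniformly distributed random fraction (the agent count, step count, and initial wealths are arbitrary choices, not from the paper); the empirical steady state should be close to the exponential distribution quoted as the analytical result:

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_steps = 2_000, 400_000
wealth = np.ones(n_agents)           # start with unit wealth per agent

for _ in range(n_steps):
    i, j = rng.integers(n_agents, size=2)
    if i == j:
        continue
    total = wealth[i] + wealth[j]
    r = rng.random()                 # uniform random fraction of the pooled sum
    wealth[i], wealth[j] = r * total, (1.0 - r) * total

# For a uniform split fraction the steady state is (close to) exponential:
# the mean stays at 1 by construction, and an Exp(1) law has variance 1.
print(wealth.mean(), wealth.var())
```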

  7. Advanced Numerical Methods for Computing Statistical Quantities of Interest from Solutions of SPDES

    DTIC Science & Technology

    2012-01-19

    and related optimization problems; developing numerical methods for option pricing problems in the presence of random arbitrage return. 1. Novel...equations (BSDEs) are connected to nonlinear partial differential equations and non-linear semigroups, to the theory of hedging and pricing of contingent...the presence of random arbitrage return [3] We consider option pricing problems when we relax the condition of no arbitrage in the Black-Scholes

  8. Basis adaptation and domain decomposition for steady partial differential equations with random coefficients

    DOE PAGES

    Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.

    2017-09-04

    In this paper, we present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Lastly, our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.

  9. Basis adaptation and domain decomposition for steady-state partial differential equations with random coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.

    We present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.

  10. Selection of Common Items as an Unrecognized Source of Variability in Test Equating: A Bootstrap Approximation Assuming Random Sampling of Common Items

    ERIC Educational Resources Information Center

    Michaelides, Michalis P.; Haertel, Edward H.

    2014-01-01

    The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…

  11. QCD-inspired spectra from Blue's functions

    NASA Astrophysics Data System (ADS)

    Nowak, Maciej A.; Papp, Gábor; Zahed, Ismail

    1996-02-01

    We use the law of addition in random matrix theory to analyze the spectral distributions of a variety of chiral random matrix models as inspired from QCD whether through symmetries or models. In terms of the Blue's functions recently discussed by Zee, we show that most of the spectral distributions in the macroscopic limit and the quenched approximation, follow algebraically from the discontinuity of a pertinent solution to a cubic (Cardano) or a quartic (Ferrari) equation. We use the end-point equation of the energy spectra in chiral random matrix models to argue for novel phase structures, in which the Dirac density of states plays the role of an order parameter.

  12. The Extended Parabolic Equation Method and Implication of Results for Atmospheric Millimeter-Wave and Optical Propagation

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    2004-01-01

    The extended wide-angle parabolic wave equation applied to electromagnetic wave propagation in random media is considered. A general operator equation is derived which gives the statistical moments of an electric field of a propagating wave. This expression is used to obtain the first and second order moments of the wave field and solutions are found that transcend those which incorporate the full paraxial approximation at the outset. Although these equations can be applied to any propagation scenario that satisfies the conditions of application of the extended parabolic wave equation, the example of propagation through atmospheric turbulence is used. It is shown that in the case of atmospheric wave propagation and under the Markov approximation (i.e., the delta-correlation of the fluctuations in the direction of propagation), the usual parabolic equation in the paraxial approximation is accurate even at millimeter wavelengths. The methodology developed here can be applied to any qualifying situation involving random propagation through turbid or plasma environments that can be represented by a spectral density of permittivity fluctuations.

  13. Diffusion in random networks: Asymptotic properties, and numerical and engineering approximations

    NASA Astrophysics Data System (ADS)

    Padrino, Juan C.; Zhang, Duan Z.

    2016-11-01

    The ensemble phase averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of a set of pockets connected by tortuous channels. Inside a channel, we assume that fluid transport is governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pore mass density. The so-called dual porosity model is found to be equivalent to the leading order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider one-dimensional mass diffusion in a semi-infinite domain, whose solution is sought numerically. Because of the time required to establish the linear concentration profile inside a channel, for early times the similarity variable is x·t^(-1/4) rather than x·t^(-1/2) as in the traditional theory. This early-time sub-diffusive similarity can be explained by random walk theory through the network. In addition, by applying concepts of fractional calculus, we show that, for small time, the governing equation reduces to a fractional diffusion equation with known solution. We recast this solution in terms of special functions that are easier to compute. Comparison of the numerical and exact solutions shows excellent agreement.

  14. Heterogeneity in the Effect of Common Shocks on Healthcare Expenditure Growth.

    PubMed

    Hauck, Katharina; Zhang, Xiaohui

    2016-09-01

    Healthcare expenditure growth is affected by important unobserved common shocks such as technological innovation, changes in sociological factors, shifts in preferences, and the epidemiology of diseases. While common factors impact in principle all countries, their effect is likely to differ across countries. To allow for unobserved heterogeneity in the effects of common shocks, we estimate a panel data model of healthcare expenditure growth in 34 OECD countries over the years 1980 to 2012, where the usual fixed or random effects are replaced by a multifactor error structure. We address model uncertainty with Bayesian model averaging, to identify a small set of robust expenditure drivers from 43 potential candidates. We establish 16 significant drivers of healthcare expenditure growth, including growth in GDP per capita and in insurance premiums, changes in financing arrangements and some institutional characteristics, expenditures on pharmaceuticals, population ageing, costs of health administration, and inpatient care. Our approach allows us to provide robust evidence to policy makers on the drivers that were most strongly associated with the growth in healthcare expenditures over the past 32 years. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Adjunctive dental therapy via tooth plaque reduction and gingivitis treatment by blue light-emitting diodes tooth brushing

    NASA Astrophysics Data System (ADS)

    Genina, Elina A.; Titorenko, Vladimir A.; Belikov, Andrey V.; Bashkatov, Alexey N.; Tuchin, Valery V.

    2015-12-01

    The efficacy of blue light-emitting toothbrushes (B-LETBs) (405 to 420 nm, power density 2 mW/cm2) for reduction of dental plaques and gingival inflammation has been evaluated. Microbiological study has shown the multifactor therapeutic action of the B-LETBs on oral pathological microflora: in addition to partial mechanical removal of bacteria, photodynamic action suppresses them up to 97.5%. In the pilot clinical studies, subjects with mild to moderate gingivitis have been randomly divided into two groups: a treatment group that used the B-LETBs and a control group that used standard toothbrushes. Indices of plaque, gingival bleeding, and inflammation have been evaluated. A significant improvement of all dental indices in comparison with the baseline (by 59%, 66%, and 82% for plaque, gingival bleeding, and inflammation, respectively) has been found. The treatment group has demonstrated up to 50% improvement relative to the control group. We have proposed the B-LETBs to serve for prevention of gingivitis or as an alternative to conventional antibiotic treatment of this disease due to their effectiveness and the absence of drug side effects and bacterial resistance.

  16. Adjunctive dental therapy via tooth plaque reduction and gingivitis treatment by blue light-emitting diodes tooth brushing.

    PubMed

    Genina, Elina A; Titorenko, Vladimir A; Belikov, Andrey V; Bashkatov, Alexey N; Tuchin, Valery V

    2015-01-01

    The efficacy of blue light-emitting toothbrushes (B-LETBs) (405 to 420 nm, power density 2 mW/cm^2) for reduction of dental plaques and gingival inflammation has been evaluated. Microbiological study has shown the multifactor therapeutic action of the B-LETBs on oral pathological microflora: in addition to partial mechanical removal of bacteria, photodynamic action suppresses them up to 97.5%. In the pilot clinical studies, subjects with mild to moderate gingivitis have been randomly divided into two groups: a treatment group that used the B-LETBs and a control group that used standard toothbrushes. Indices of plaque, gingival bleeding, and inflammation have been evaluated. A significant improvement of all dental indices in comparison with the baseline (by 59%, 66%, and 82% for plaque, gingival bleeding, and inflammation, respectively) has been found. The treatment group has demonstrated up to 50% improvement relative to the control group. We have proposed the B-LETBs to serve for prevention of gingivitis or as an alternative to conventional antibiotic treatment of this disease due to their effectiveness and the absence of drug side effects and bacterial resistance.

  17. Polynomial chaos expansion with random and fuzzy variables

    NASA Astrophysics Data System (ADS)

    Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.

    2016-06-01

    A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
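    A minimal sketch of a Legendre polynomial chaos expansion for a single uniform parameter, in which the response mean and variance follow directly from the coefficients (a scalar toy model, not the dynamical system of the paper; the test function below is hypothetical):

```python
import numpy as np
from numpy.polynomial import legendre

def pce_coefficients(model, order, n_quad=32):
    """Project model(xi), xi ~ Uniform(-1, 1), onto Legendre polynomials:
    c_k = (2k+1)/2 * integral of model(xi) P_k(xi) dxi (Gauss-Legendre quadrature)."""
    nodes, weights = legendre.leggauss(n_quad)
    values = model(nodes)
    coeffs = []
    for k in range(order + 1):
        pk = legendre.Legendre.basis(k)(nodes)
        coeffs.append((2 * k + 1) / 2.0 * np.sum(weights * values * pk))
    return np.array(coeffs)

# Hypothetical response: a nonlinear function of one uniform parameter
model = lambda xi: np.exp(0.5 * xi)
c = pce_coefficients(model, order=6)

# Mean and variance follow from the coefficients (the P_k are orthogonal,
# with E[P_k^2] = 1/(2k+1) for a uniform variable on [-1, 1]).
mean = c[0]
var = np.sum(c[1:] ** 2 / (2 * np.arange(1, len(c)) + 1))
print(mean, var)
```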

  18. Application of an Extended Parabolic Equation to the Calculation of the Mean Field and the Transverse and Longitudinal Mutual Coherence Functions Within Atmospheric Turbulence

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    2005-01-01

    Solutions are derived for the generalized mutual coherence function (MCF), i.e., the second order moment, of a random wave field propagating through a random medium within the context of the extended parabolic equation. Here, "generalized" connotes the consideration of both the transverse as well as the longitudinal second order moments (with respect to the direction of propagation). Such solutions will afford a comparison between the results of the parabolic equation within the paraxial approximation and those of the wide-angle extended theory. To this end, a statistical operator method is developed which gives a general equation for an arbitrary spatial statistical moment of the wave field. The generality of the operator method allows one to obtain an expression for the second order field moment in the direction longitudinal to the direction of propagation. Analytical solutions to these equations are derived for the Kolmogorov and Tatarskii spectra of atmospheric permittivity fluctuations within the Markov approximation.

  19. Comparison of sigma(o) obtained from the conventional definition with sigma(o) appearing in the radar equation for randomly rough surfaces

    NASA Technical Reports Server (NTRS)

    Levine, D. M.

    1981-01-01

    A comparison is made of the radar cross section of a rough surface calculated in one case from the conventional definition and obtained in the second case directly from the radar equation. The validity of the conventional definition representing the cross section appearing in the radar equation is determined. The analysis is executed in the special case of perfectly conducting, randomly corrugated surfaces in the physical optics limit. The radar equation is obtained by solving for the radiation scattered from an arbitrary source back to a colocated antenna. The signal out of the receiving antenna is computed from this solution and the result put into a form recognizable as the radar equation. The conventional definition is obtained by solving a similar problem but for backscatter from an incident plane wave. It is shown that these two forms for sigma are the same if the observer is far enough from the surface.

  20. Bullying among Adolescents in North Cyprus and Turkey: Testing a Multifactor Model

    ERIC Educational Resources Information Center

    Bayraktar, Fatih

    2012-01-01

    Peer bullying has been studied since the 1970s. Therefore, a vast literature has accumulated about the various predictors of bullying. However, to date there has been no study which has combined individual-, peer-, parental-, teacher-, and school-related predictors of bullying within a model. In this sense, the main aim of this study was to test a…

  1. Electronic Health Records: Applying Diffusion of Innovation Theory to the Relationship between Multifactor Authentication and EHR Adoption

    ERIC Educational Resources Information Center

    Lockett, Daeron C.

    2014-01-01

    Electronic Health Record (EHR) systems are increasingly becoming accepted as the future direction of medical record management systems. Programs such as the American Recovery and Reinvestment Act have provided incentives to hospitals that adopt EHR systems. In spite of these incentives, the perception of EHR adoption is that it has not achieved the…

  2. Cross-Cultural Comparisons of University Students' Science Learning Self-Efficacy: Structural Relationships among Factors within Science Learning Self-Efficacy

    ERIC Educational Resources Information Center

    Wang, Ya-Ling; Liang, Jyh-Chong; Tsai, Chin-Chung

    2018-01-01

    Science learning self-efficacy could be regarded as a multi-factor belief which comprises different aspects such as cognitive skills, practical work, and everyday application. However, few studies have investigated the relationships among these factors that compose science learning self-efficacy. Also, culture may play an important role in…

  3. [Risk factors in the living environment of early spontaneous abortion pregnant women].

    PubMed

    Liu, Xin-yan; Bian, Xu-ming; Han, Jing-xiu; Cao, Zhao-jin; Fan, Guang-sheng; Zhang, Chao; Zhang, Wen-li; Zhang, Shu-zhen; Sun, Xiao-guang

    2007-10-01

    To study the relationship between early spontaneous abortion and the living environment, and to explore risk factors for spontaneous abortion. We conducted an analysis based on interviews of 200 spontaneous abortion cases and matched controls (age ± 2 years), using multifactor logistic regression analysis. The proportions of watching TV ≥ 10 hours/week, operating a computer ≥ 45 hours/week, using a copy machine, microwave oven, or mobile phone, and having electromagnetic equipment near the dwelling or workplace (e.g., a switch room ≤ 50 m or a transmitting tower ≤ 500 m away) were significantly higher in the cases than in the controls in the single-factor analysis (all P < 0.05). After adjusting for the effect of other risk factors in the multifactor analysis, use of a microwave oven and mobile phone, contact with abnormal odors from fitment (decoration) materials for ≥ 3 months, emotional stress during the first trimester of pregnancy, and a history of spontaneous abortion were significantly associated with the risk of spontaneous abortion. The odds ratios of these risk factors were 2.23 and 4.63, respectively. Use of a microwave oven and mobile phone, contact with abnormal odors from fitment (decoration) materials for ≥ 3 months, emotional stress during the first trimester of pregnancy, and a history of spontaneous abortion are risk factors for early spontaneous abortion.

  4. Interactions between MAOA and SYP polymorphisms were associated with symptoms of attention-deficit/hyperactivity disorder in Chinese Han subjects.

    PubMed

    Gao, Qian; Liu, Lu; Li, Hai-Mei; Tang, Yi-Lang; Wu, Zhao-Min; Chen, Yun; Wang, Yu-Feng; Qian, Qiu-Jin

    2015-01-01

    As candidate genes of attention-deficit/hyperactivity disorder (ADHD), monoamine oxidase A (MAOA) and synaptophysin (SYP) are both on the X chromosome and have been suggested to be associated with the predominantly inattentive subtype (ADHD-I). The present study investigates the potential gene-gene interaction (G × G) between rs5905859 of MAOA and rs5906754 of SYP for ADHD in Chinese Han subjects. For the family-based association study, 177 female trios were included. For the case-control study, 1,462 probands and 807 normal controls were recruited. The ADHD Rating Scale-IV (ADHD-RS-IV) was used to evaluate ADHD symptoms. Pedigree-based generalized multifactor dimensionality reduction (PGMDR) for female ADHD trios indicated a significant gene interaction effect of rs5905859 and rs5906754. Generalized multifactor dimensionality reduction (GMDR) indicated potential gene-gene interplay on ADHD-RS-IV scores in female ADHD-I. No associations were observed in male subjects in the case-control analysis. In conclusion, our findings suggest that the interaction of MAOA and SYP may be involved in the genetic mechanism of the ADHD-I subtype and predict ADHD symptoms. © 2014 Wiley Periodicals, Inc.

  5. Fuzzy comprehensive evaluation of multiple environmental factors for swine building assessment and control.

    PubMed

    Xie, Qiuju; Ni, Ji-Qin; Su, Zhongbin

    2017-10-15

    In confined swine buildings, temperature, humidity, and air quality are all important for animal health and productivity. However, current swine building environmental control is based only on temperature, and evaluation and control methods based on multiple environmental factors are needed. In this paper, fuzzy comprehensive evaluation (FCE) theory was adopted for multi-factor assessment of environmental quality in two commercial swine buildings using real measurement data. An assessment index system and membership functions were established, and predetermined weights were given using the analytic hierarchy process (AHP) combined with expert knowledge. The results show that multiple factors such as temperature, humidity, and concentrations of ammonia (NH3), carbon dioxide (CO2), and hydrogen sulfide (H2S) can be successfully integrated in FCE for swine building environment assessment. The FCE method has a high correlation coefficient of 0.737 compared with the method of single-factor evaluation (SFE). The FCE method can significantly increase the sensitivity and perform an effective and integrative assessment. It can be used as part of environmental control and warning systems for swine building environment management to improve swine production and welfare. Copyright © 2017 Elsevier B.V. All rights reserved.
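    A minimal sketch of the fuzzy comprehensive evaluation step, assuming a weighted-average fuzzy operator: AHP-style factor weights are combined with a factor-by-grade membership matrix (all numbers below are illustrative placeholders, not the paper's calibration):

```python
import numpy as np

# Membership of each factor in the grades (good, fair, poor); rows are factors:
# temperature, humidity, NH3, CO2, H2S. Numbers are illustrative only.
membership = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.2, 0.5, 0.3],
    [0.4, 0.4, 0.2],
    [0.3, 0.4, 0.3],
])

# Factor weights (e.g., from an AHP pairwise-comparison step); they must sum to 1.
weights = np.array([0.30, 0.20, 0.25, 0.10, 0.15])

# Weighted-average fuzzy operator: B = W . R
evaluation = weights @ membership
grade = ["good", "fair", "poor"][int(np.argmax(evaluation))]
print(evaluation, "->", grade)
```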

  6. Effects of Sulfate, Chloride, and Bicarbonate on Iron Stability in a PVC-U Drinking Pipe

    PubMed Central

    Wang, Jiaying; Tao, Tao; Yan, Hexiang

    2017-01-01

    In order to describe iron stability in plastic pipes and to ensure the drinking water security, the influence factors and rules for iron adsorption and release were studied, dependent on the Unplasticized poly (vinyl chloride) (PVC-U) drinking pipes employed in this research. In this paper, sulfate, chloride, and bicarbonate, as well as synthesized models, were chosen to investigate the iron stability on the inner wall of PVC-U drinking pipes. The existence of the three kinds of anions could significantly affect the process of iron adsorption, and a positive association was found between the level of anion concentration and the adsorption rate. However, the scaling formed on the inner surface of the pipes would be released into the water under certain conditions. The Larson Index (LI), used for a synthetic consideration of anion effects on iron stability, was selected to investigate the iron release under multi-factor conditions. Moreover, a well fitted linear model was established to gain a better understanding of iron release under multi-factor conditions. The simulation results demonstrated that the linear model was better fitted than the LI model for the prediction of iron release. PMID:28629192

  7. Fluctuating Navier-Stokes equations for inelastic hard spheres or disks.

    PubMed

    Brey, J Javier; Maynar, P; de Soria, M I García

    2011-04-01

    Starting from the fluctuating Boltzmann equation for smooth inelastic hard spheres or disks, closed equations for the fluctuating hydrodynamic fields to Navier-Stokes order are derived. This requires deriving constitutive relations for both the fluctuating fluxes and the correlations of the random forces. The former are identified as having the same form as the macroscopic average fluxes and involving the same transport coefficients. On the other hand, the random force terms exhibit two peculiarities as compared with their elastic limit for molecular systems. First, they are not white but have some finite relaxation time. Second, their amplitude is not determined by the macroscopic transport coefficients but involves new coefficients. ©2011 American Physical Society

  8. Energy dissipation in a friction-controlled slide of a body excited by random motions of the foundation

    NASA Astrophysics Data System (ADS)

    Berezin, Sergey; Zayats, Oleg

    2018-01-01

    We study a friction-controlled slide of a body excited by random motions of the foundation it is placed on. Specifically, we are interested in such quantities as displacement, traveled distance, and energy loss due to friction. We assume that the random excitation is switched off at some time (possibly infinite) and show that the problem can be treated in an analytic, explicit, manner. Particularly, we derive formulas for the moments of the displacement and distance, and also for the average energy loss. To accomplish that we use the Pugachev-Sveshnikov equation for the characteristic function of a continuous random process given by a system of SDEs. This equation is solved by reduction to a parametric Riemann boundary value problem of complex analysis.
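    A crude Euler-Maruyama sketch of the setting described above: a body with Coulomb friction is driven by white-noise foundation acceleration that is switched off at a finite time, and the friction work is accumulated along each sample path. This is only a numerical stand-in for illustration; the paper's analytical route via the Pugachev-Sveshnikov equation is not reproduced, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, dt = 5_000, 4_000, 1e-3
mu_g = 0.5        # friction deceleration (mu * g), illustrative value
sigma = 1.0       # intensity of the random foundation acceleration
T_off = 2.0       # time at which the random excitation is switched off

v = np.zeros(n_paths)      # velocity of the body relative to the foundation
x = np.zeros(n_paths)      # relative displacement
heat = np.zeros(n_paths)   # energy dissipated by friction (per unit mass)

for k in range(n_steps):
    t = k * dt
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n_paths) if t < T_off else 0.0
    v_new = v + noise - mu_g * np.sign(v) * dt
    # crude stick model: if friction alone would reverse the sign, stop the body
    v_new = np.where(np.sign(v_new) * np.sign(v) < 0, 0.0, v_new)
    x += v * dt
    heat += mu_g * np.abs(v) * dt   # |friction force| * |relative velocity| * dt
    v = v_new

print(x.mean(), np.abs(x).mean(), heat.mean())
```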

  9. Nonlocal operators, parabolic-type equations, and ultrametric random walks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacón-Cortes, L. F., E-mail: fchaconc@math.cinvestav.edu.mx; Zúñiga-Galindo, W. A., E-mail: wazuniga@math.cinvestav.edu.mx

    2013-11-15

    In this article, we introduce a new type of nonlocal operators and study the Cauchy problem for certain parabolic-type pseudodifferential equations naturally associated to these operators. Some of these equations are the p-adic master equations of certain models of complex systems introduced by Avetisov, V. A. and Bikulov, A. Kh., “On the ultrametricity of the fluctuation dynamic mobility of protein molecules,” Proc. Steklov Inst. Math. 265(1), 75–81 (2009) [Tr. Mat. Inst. Steklova 265, 82–89 (2009) (Izbrannye Voprosy Matematicheskoy Fiziki i p-adicheskogo Analiza) (in Russian)]; Avetisov, V. A., Bikulov, A. Kh., and Zubarev, A. P., “First passage time distribution and the number of returns for ultrametric random walks,” J. Phys. A 42(8), 085003 (2009); Avetisov, V. A., Bikulov, A. Kh., and Osipov, V. A., “p-adic models of ultrametric diffusion in the conformational dynamics of macromolecules,” Proc. Steklov Inst. Math. 245(2), 48–57 (2004) [Tr. Mat. Inst. Steklova 245, 55–64 (2004) (Izbrannye Voprosy Matematicheskoy Fiziki i p-adicheskogo Analiza) (in Russian)]; Avetisov, V. A., Bikulov, A. Kh., and Osipov, V. A., “p-adic description of characteristic relaxation in complex systems,” J. Phys. A 36(15), 4239–4246 (2003); Avetisov, V. A., Bikulov, A. H., Kozyrev, S. V., and Osipov, V. A., “p-adic models of ultrametric diffusion constrained by hierarchical energy landscapes,” J. Phys. A 35(2), 177–189 (2002); Avetisov, V. A., Bikulov, A. Kh., and Kozyrev, S. V., “Description of logarithmic relaxation by a model of a hierarchical random walk,” Dokl. Akad. Nauk 368(2), 164–167 (1999) (in Russian). The fundamental solutions of these parabolic-type equations are transition functions of random walks on the n-dimensional vector space over the field of p-adic numbers. We study some properties of these random walks, including the first passage time.

  10. Random deflections of a string on an elastic foundation.

    NASA Technical Reports Server (NTRS)

    Sanders, J. L., Jr.

    1972-01-01

    The paper is concerned with the problem of a taut string on a random elastic foundation subjected to random loads. The boundary value problem is transformed into an initial value problem by the method of invariant imbedding. Fokker-Planck equations for the random initial value problem are formulated and solved in some special cases. The analysis leads to a complete characterization of the random deflection function.

  11. Parabolic equation for nonlinear acoustic wave propagation in inhomogeneous moving media

    NASA Astrophysics Data System (ADS)

    Aver'yanov, M. V.; Khokhlova, V. A.; Sapozhnikov, O. A.; Blanc-Benon, Ph.; Cleveland, R. O.

    2006-12-01

    A new parabolic equation is derived to describe the propagation of nonlinear sound waves in inhomogeneous moving media. The equation accounts for diffraction, nonlinearity, absorption, scalar inhomogeneities (density and sound speed), and vectorial inhomogeneities (flow). A numerical algorithm employed earlier to solve the KZK equation is adapted to this more general case. A two-dimensional version of the algorithm is used to investigate the propagation of nonlinear periodic waves in media with random inhomogeneities. For the case of scalar inhomogeneities, including the case of a flow parallel to the wave propagation direction, a complex acoustic field structure with multiple caustics is obtained. Inclusion of the transverse component of vectorial random inhomogeneities has little effect on the acoustic field. However, when a uniform transverse flow is present, the field structure is shifted without changing its morphology. The impact of nonlinearity is twofold: it produces strong shock waves in focal regions, while, outside the caustics, it produces higher harmonics without any shocks. When the intensity is averaged across the beam propagating through a random medium, it evolves similarly to the intensity of a plane nonlinear wave, indicating that the transverse redistribution of acoustic energy gives no considerable contribution to nonlinear absorption.

  12. The Effects of Schema-Broadening Instruction on Second Graders’ Word-Problem Performance and Their Ability to Represent Word Problems with Algebraic Equations: A Randomized Control Study

    PubMed Central

    Fuchs, Lynn S.; Zumeta, Rebecca O.; Schumacher, Robin Finelli; Powell, Sarah R.; Seethaler, Pamela M.; Hamlett, Carol L.; Fuchs, Douglas

    2010-01-01

    The purpose of this study was to assess the effects of schema-broadening instruction (SBI) on second graders’ word-problem-solving skills and their ability to represent the structure of word problems using algebraic equations. Teachers (n = 18) were randomly assigned to conventional word-problem instruction or SBI word-problem instruction, which taught students to represent the structural, defining features of word problems with overarching equations. Intervention lasted 16 weeks. We pretested and posttested 270 students on measures of word-problem skill; analyses that accounted for the nested structure of the data indicated superior word-problem learning for SBI students. Descriptive analyses of students’ word-problem work indicated that SBI helped students represent the structure of word problems with algebraic equations, suggesting that SBI promoted this aspect of students’ emerging algebraic reasoning. PMID:20539822

  13. The Correlation between Leadership Style and Leader Power

    DTIC Science & Technology

    2016-04-22

    Transformational and Transactional leadership style and leader power. Leadership style was measured by the Multifactor Leadership Questionnaire (MLQ)...between the factors representing Leadership Style and Leader Power. The CFA results are contrary to the developers' theories of both scales, but are

  14. The Development of a Tactical-Level Full Range Leadership Measurement Instrument

    DTIC Science & Technology

    2010-03-01

    full range leadership theory has become established as the predominant and most widely researched theory on leadership. The most commonly used survey...instrument to assess full range leadership theory is the Multifactor Leadership Questionnaire, originally developed by Bass in 1985. Although much...existing literature to develop a new full range leadership theory measurement instrument that effectively targets low- to mid-level supervisors, or

  15. Nonlinear subdiffusive fractional equations and the aggregation phenomenon.

    PubMed

    Fedotov, Sergei

    2013-09-01

    In this article we address the problem of the nonlinear interaction of subdiffusive particles. We introduce a random walk model in which statistical characteristics of a random walker, such as the escape rate and jump distribution, depend on the mean density of particles. We derive a set of nonlinear subdiffusive fractional master equations and consider their diffusion approximations. We show that these equations describe the transition from an intermediate subdiffusive regime to an asymptotically normal advection-diffusion transport regime. This transition is governed by a nonlinear tempering parameter that generalizes the standard linear tempering. We illustrate the general results through examples from cell and population biology. We find that a nonuniform anomalous exponent has a strong influence on the aggregation phenomenon.

  16. Exact Lyapunov exponent of the harmonic magnon modes of one-dimensional Heisenberg-Mattis spin glasses

    NASA Astrophysics Data System (ADS)

    Sepehrinia, Reza; Niry, M. D.; Bozorg, B.; Tabar, M. Reza Rahimi; Sahimi, Muhammad

    2008-03-01

    A mapping is developed between the linearized equation of motion for the dynamics of the transverse modes at T=0 of the Heisenberg-Mattis model of one-dimensional (1D) spin glasses and the (discretized) random wave equation. The mapping is used to derive an exact expression for the Lyapunov exponent (LE) of the magnon modes of spin glasses and to show that it follows anomalous scaling at low magnon frequencies. In addition, through numerical simulations, the differences between the LE and the density of states of the wave equation in a discrete 1D model of randomly disordered media (those with a finite correlation length) and that of continuous media (with a zero correlation length) are demonstrated and emphasized.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.

    Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities of time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
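    A minimal branching-process sketch of a single fission chain, tallying fissions and leaked neutrons per chain (the fission probability and the neutron multiplicity distribution below are illustrative placeholders, not the correlated-moment equations or the Monte Carlo code described above):

```python
import numpy as np

rng = np.random.default_rng(3)

def run_chain(p_fission=0.4, max_neutrons=100_000):
    """Simulate one fission chain. Each neutron induces a fission with
    probability p_fission (emitting nu new neutrons drawn from a simple
    multiplicity distribution) and otherwise leaks out of the system."""
    nu_values = np.array([0, 1, 2, 3, 4])
    nu_probs = np.array([0.03, 0.15, 0.35, 0.32, 0.15])   # illustrative only
    active, fissions, leaked = 1, 0, 0
    while active > 0 and active + fissions < max_neutrons:
        if rng.random() < p_fission:
            fissions += 1
            active += rng.choice(nu_values, p=nu_probs) - 1
        else:
            leaked += 1
            active -= 1
    return fissions, leaked

samples = np.array([run_chain() for _ in range(10_000)])
print("mean fissions per chain:", samples[:, 0].mean())
print("mean leaked neutrons per chain:", samples[:, 1].mean())
```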

  18. Stochastic theory of polarized light in nonlinear birefringent media: An application to optical rotation

    NASA Astrophysics Data System (ADS)

    Tsuchida, Satoshi; Kuratsuji, Hiroshi

    2018-05-01

    A stochastic theory is developed for light transmitted through optical media exhibiting linear and nonlinear birefringence. The starting point is the two-component nonlinear Schrödinger equation (NLSE). On the basis of the ansatz of a “soliton” solution for the NLSE, the evolution equation for the Stokes parameters is derived, which turns out to be a Langevin equation once the randomness and dissipation inherent in the birefringent media are taken into account. The Langevin equation is converted to a Fokker-Planck (FP) equation for the probability distribution by employing the technique of functional integration on the assumption of Gaussian white noise for the random fluctuation. The specific application considered is optical rotation, which is described by the ellipticity (third component of the Stokes parameters) alone: (i) An asymptotic analysis is given for the functional integral, which leads to the transition rate on the Poincaré sphere. (ii) The FP equation is analyzed in the strong coupling approximation, by which diffusive behavior is obtained for the linear and nonlinear birefringence. These would provide a basis for statistical analysis of polarization phenomena in nonlinear birefringent media.

  19. Improving Models of Photosynthetic Thermal Acclimation: Which Parameters are Most Important and How Many Should Be Modified?

    NASA Astrophysics Data System (ADS)

    Stinziano, J. R.; Way, D.; Bauerle, W.

    2017-12-01

    Photosynthetic temperature acclimation could strongly affect coupled vegetation-atmosphere feedbacks in the global carbon cycle, especially as the climate warms. Thermal acclimation of photosynthesis can be modelled as changes in the parameters describing the direct effect of temperature on photosynthetic capacity (activation energy, Ea; deactivation energy, Hd; entropy parameter, ΔS) or in the basal value of photosynthetic capacity (i.e., photosynthetic capacity measured at 25 °C); however, the impact of acclimating these parameters (individually or in combination) on vegetative carbon gain is relatively unexplored. Here we compare the ability of 66 photosynthetic temperature acclimation scenarios to improve predictions of a spatially explicit canopy carbon flux model, MAESTRA, for eddy covariance data from a loblolly pine forest. We show that: 1) incorporating seasonal temperature acclimation of basal photosynthetic capacity improves the model's ability to capture seasonal changes in carbon fluxes; 2) multifactor scenarios of photosynthetic temperature acclimation provide minimal (if any) improvement in model performance over single-factor acclimation scenarios; 3) acclimation of enzyme activation energies should be restricted to the temperature ranges of the data from which the equations are derived; and 4) model performance is strongly affected by the choice of deactivation energy. We suggest that a renewed effort be made to understand the thermal acclimation of enzyme activation and deactivation energies across broad temperature ranges to better understand the mechanisms underlying thermal photosynthetic acclimation.

  20. Vocabulary development in children with hearing loss: the role of child, family, and educational variables.

    PubMed

    Coppens, Karien M; Tellings, Agnes; van der Veld, William; Schreuder, Robert; Verhoeven, Ludo

    2012-01-01

    In the present study we examined the effect of hearing status on reading vocabulary development. More specifically, we examined the change in lexical competence in children with hearing loss over grades 4-7 and the predictors of this change. We used a multi-factor longitudinal design with multiple outcomes, measuring the reading vocabulary knowledge of children with hearing loss from grades 4 and 5, and of children without hearing loss from grade 4, for 3 years with two word tasks: a lexical decision task and a use decision task. With these tasks we measured word form recognition and (in)correct usage recognition, respectively. A GLM repeated measures procedure indicated that scores and growth rates on the two tasks were affected by hearing status. Moreover, with structural equation modeling we observed that the development of lexical competence in children with hearing loss is stable over time, and a child's lexical competence can be explained best by his or her lexical competence assessed on a previous measurement occasion. Looking back, differences in lexical competence among children with hearing loss unfortunately stay the same. Educational placement, use of sign language at home, intelligence, use of hearing devices, and onset of deafness can account for the differences among children with hearing loss. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Random attractor of non-autonomous stochastic Boussinesq lattice system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Min, E-mail: zhaomin1223@126.com; Zhou, Shengfan, E-mail: zhoushengfan@yahoo.com

    2015-09-15

    In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations affected by time-dependent coupled coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of the random attractors as the intensity of the noise approaches zero.

  2. Hidden Statistics of Schroedinger Equation

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2011-01-01

    Work was carried out to determine the mathematical origin of randomness in quantum mechanics and to create a hidden statistics of the Schrödinger equation, i.e., to expose the transitional stochastic process as a "bridge" to the quantum world. The governing equations of the hidden statistics would preserve such properties of quantum physics as superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods.

  3. Calibration of d.b.h.-height equations for southern hardwoods

    Treesearch

    Thomas B. Lynch; A. Gordon Holley; Douglas J. Stevenson

    2006-01-01

    Data from southern hardwood stands in East Texas were used to estimate parameters for d.b.h.-height equations. Mixed model estimation methods were used, so that the stand from which a tree was sampled was considered a random effect. This makes it possible to calibrate these equations using data collected in a local stand of interest, by using d.b.h. and total height...
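    A sketch of this kind of calibration using a random-intercept mixed model in statsmodels, assuming a log-linear d.b.h.-height form and synthetic data (the column names, functional form, and all numeric values are assumptions for illustration, not the authors' fitted equations): the population-level coefficients come from the fixed effects, and calibrating to a local stand amounts to predicting that stand's random effect from a few local d.b.h.-height pairs.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical tree-level data: stand id, d.b.h. (cm), total height (m).
rng = np.random.default_rng(4)
stands = np.repeat(np.arange(20), 30)
dbh = rng.uniform(10, 60, size=stands.size)
stand_effect = rng.normal(0.0, 1.5, size=20)[stands]      # random stand shift
height = 1.3 + 8.0 * np.log(dbh) - 15.0 + stand_effect + rng.normal(0, 1.0, stands.size)
df = pd.DataFrame({"stand": stands, "dbh": dbh, "height": height})

# Random-intercept mixed model: height ~ log(dbh), with stand as a random effect.
model = smf.mixedlm("height ~ np.log(dbh)", data=df, groups=df["stand"])
result = model.fit()
print(result.fe_params)                          # population-level (fixed) coefficients
print(list(result.random_effects.values())[0])   # calibrated offset for the first stand
```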

  4. Pathwise upper semi-continuity of random pullback attractors along the time axis

    NASA Astrophysics Data System (ADS)

    Cui, Hongyong; Kloeden, Peter E.; Wu, Fuke

    2018-07-01

    The pullback attractor of a non-autonomous random dynamical system is a time-indexed family of random sets, typically having the form {A_t(·)}_{t ∈ R} with each A_t(·) a random set. This paper is concerned with the nature of such time-dependence. It is shown that the upper semi-continuity of the mapping t ↦ A_t(ω) for each fixed ω has an equivalence relationship with the uniform compactness of the local union ∪_{s ∈ I} A_s(ω), where I ⊂ R is compact. Applied to a semi-linear degenerate parabolic equation with additive noise and a wave equation with multiplicative noise, we show that, in order to prove the above locally uniform compactness and upper semi-continuity, no additional conditions are required, in which sense the two properties appear to be general properties satisfied by a large number of real models.

  5. Dynamic stability of spinning pretwisted beams subjected to axial random forces

    NASA Astrophysics Data System (ADS)

    Young, T. H.; Gau, C. Y.

    2003-11-01

    This paper studies the dynamic stability of a pretwisted cantilever beam spinning along its longitudinal axis and subjected to an axial random force at the free end. The axial force is assumed as the sum of a constant force and a random process with a zero mean. Due to this axial force, the beam may experience parametric random instability. In this work, the finite element method is first applied to yield discretized system equations. The stochastic averaging method is then adopted to obtain Ito's equations for the response amplitudes of the system. Finally the mean-square stability criterion is utilized to determine the stability condition of the system. Numerical results show that the stability boundary of the system converges as the first three modes are taken into calculation. Before the convergence is reached, the stability condition predicted is not conservative enough.

  6. An asymptotic-preserving stochastic Galerkin method for the radiative heat transfer equations with random inputs and diffusive scalings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu

    2017-04-01

    In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in the cross section, initial data, or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro–macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.

  7. Method of model reduction and multifidelity models for solute transport in random layered porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Tartakovsky, Alexandre M.

    This work presents a hierarchical model for solute transport in bounded layered porous media with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the hierarchical model, we represent the (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. We observe that velocity fluctuations enhance dispersion in a non-monotonic fashion: the dispersion initially increases with correlation length λ, reaches a maximum, and decreases to zero at infinity. The maximum enhancement is obtained at a correlation length of about 0.25 times the size of the porous medium perpendicular to the flow.

  8. Regularization of the big bang singularity with random perturbations

    NASA Astrophysics Data System (ADS)

    Belbruno, Edward; Xue, BingKan

    2018-03-01

    We show how to regularize the big bang singularity in the presence of random perturbations modeled by Brownian motion using stochastic methods. We prove that the physical variables in a contracting universe dominated by a scalar field can be continuously and uniquely extended through the big bang as a function of time to an expanding universe only for a discrete set of values of the equation of state satisfying special co-prime number conditions. This result significantly generalizes a previous result (Xue and Belbruno 2014 Class. Quantum Grav. 31 165002) that did not model random perturbations. This result implies that the extension from a contracting to an expanding universe for the discrete set of co-prime equations of state is robust, which is surprising. Implications for a purely expanding universe are discussed, such as a non-smooth, randomly varying scale factor near the big bang.

  9. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 1. Theory

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Tankersley, Claude D.

    1994-05-01

    Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.

  10. Modeling and statistical analysis of non-Gaussian random fields with heavy-tailed distributions.

    PubMed

    Nezhadhaghighi, Mohsen Ghasemi; Nakhlband, Abbas

    2017-04-01

    In this paper, we investigate and develop an alternative approach to the numerical analysis and characterization of random fluctuations with heavy-tailed probability distribution functions (PDFs), such as turbulent heat flow and solar flare fluctuations. We identify the heavy-tailed random fluctuations based on the scaling properties of the tail exponent of the PDF, the power-law growth of the qth-order correlation function, and the self-similar properties of the contour lines in two-dimensional random fields. Moreover, this work leads to a substitute for the fractional Edwards-Wilkinson (EW) equation that works in the presence of μ-stable Lévy noise. Our proposed model explains the configuration dynamics of systems with heavy-tailed correlated random fluctuations. We also present an alternative solution to the fractional EW equation in the presence of μ-stable Lévy noise in the steady state, which is implemented numerically using the μ-stable fractional Lévy motion. Based on the analysis of the self-similar properties of contour loops, we numerically show that the scaling properties of contour loop ensembles can qualitatively and quantitatively distinguish non-Gaussian random fields from Gaussian random fluctuations.

  11. Light-Induced Changes of the Circadian Clock of Humans: Increasing Duration is More Effective than Increasing Light Intensity

    PubMed Central

    Dewan, Karuna; Benloucif, Susan; Reid, Kathryn; Wolfe, Lisa F.; Zee, Phyllis C.

    2011-01-01

    Study Objectives: To evaluate the effect of increasing the intensity and/or duration of exposure on light-induced changes in the timing of the circadian clock of humans. Design: Multifactorial randomized controlled trial, between- and within-subject design. Setting: General Clinical Research Center (GCRC) of an academic medical center. Participants: 56 healthy young subjects (20-40 years of age). Interventions: Research subjects were admitted for 2 independent stays of 4 nights/3 days for treatment with bright or dim light (randomized order) at a time known to induce phase delays in circadian timing. The intensity and duration of the bright light were determined by random assignment to one of 9 treatment conditions (duration of 1, 2, or 3 hours at 2000, 4000, or 8000 lux). Measurements and Results: Treatment-induced changes in the dim light melatonin onset (DLMO) and dim light melatonin offset (DLMOff) were measured from blood samples collected every 20-30 min throughout baseline and post-treatment nights. Comparison by multi-factor analysis of variance (ANOVA) of light-induced changes in the time of the circadian melatonin rhythm for the 9 conditions revealed that changing the duration of the light exposure from 1 to 3 h increased the magnitude of light-induced delays. In contrast, increasing from moderate (2,000 lux) to high (8,000 lux) intensity light did not alter the magnitude of phase delays of the circadian melatonin rhythm. Conclusions: Results from the present study suggest that for phototherapy of circadian rhythm sleep disorders in humans, a longer period of moderate intensity light may be more effective than a shorter exposure period of high intensity light. Citation: Dewan K; Benloucif S; Reid K; Wolfe LF; Zee PC. Light-induced changes of the circadian clock of humans: increasing duration is more effective than increasing light intensity. SLEEP 2011;34(5):593-599. PMID:21532952

  12. Maximum entropy and equations of state for random cellular structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivier, N.

    Random, space-filling cellular structures (biological tissues, metallurgical grain aggregates, foams, etc.) are investigated. Maximum entropy inference under a few constraints yields structural equations of state, relating the size of cells to their topological shape. These relations are known empirically as Lewis's law in botany, or Desch's relation in metallurgy. Here, the functional form of the constraints is not known a priori, and one takes advantage of this arbitrariness to increase the entropy further. The resulting structural equations of state are independent of priors; they are measurable experimentally and therefore constitute a direct test for the applicability of MaxEnt inference (given that the structure is in statistical equilibrium, a fact which can be tested by another simple relation, Aboav's law). 23 refs., 2 figs., 1 tab.
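
    For reference, commonly quoted textbook forms of the empirical relations named in this record are given below; the exact parametrizations used in the paper may differ.

      Lewis's law:   \bar{A}_{n} = A_{0}\,[\,1 + \lambda (n - 6)\,]      (the mean area of an n-sided cell grows linearly with n)
      Aboav's law:   m(n) = 6 - a + \frac{6a + \mu_{2}}{n}               (m(n) is the mean number of sides of the neighbours of an n-sided cell; \mu_{2} is the variance of the side-number distribution)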

  13. Modern methods for the quality management of high-rate melt solidification

    NASA Astrophysics Data System (ADS)

    Vasiliev, V. A.; Odinokov, S. A.; Serov, M. M.

    2016-12-01

    The quality management of high-rate melt solidification requires a combined solution obtained by methods and approaches adapted to the particular situation. A technological audit is recommended to estimate the capabilities of the process. Statistical methods are proposed, with the choice of key parameters. Numerical methods, which can be used to perform simulation under multifactor technological conditions and to increase the quality of decisions, are of particular importance.

  14. An Evaluation of the Relationship between Supervisory Techniques and Organizational Outcomes among the Supervisors in the Agricultural Extension Service in the Eastern Region Districts of Uganda. Summary of Research 81.

    ERIC Educational Resources Information Center

    Padde, Paul; And Others

    A descriptive study examined the relationship between supervisory techniques and organizational outcomes among supervisors in the agricultural extension service in eight districts in eastern Uganda. Self-rating and rater forms of the Multifactor Leadership Questionnaire were sent to 220 extension agents, 8 field supervisors, and 8 deputy field…

  15. Jackknife for Variance Analysis of Multifactor Experiments.

    DTIC Science & Technology

    1982-05-01

    The variance-covariance matrix is generated by a subroutine named CORAN (UNIVAC, 1969). The jackknife variances are then punched on computer cards in the same...

  16. Multifactor-Dimensionality Reduction Reveals High-Order Interactions among Estrogen-Metabolism Genes in Sporadic Breast Cancer

    PubMed Central

    Ritchie, Marylyn D.; Hahn, Lance W.; Roodi, Nady; Bailey, L. Renee; Dupont, William D.; Parl, Fritz F.; Moore, Jason H.

    2001-01-01

    One of the greatest challenges facing human geneticists is the identification and characterization of susceptibility genes for common complex multifactorial human diseases. This challenge is partly due to the limitations of parametric-statistical methods for detection of gene effects that are dependent solely or partially on interactions with other genes and with environmental exposures. We introduce multifactor-dimensionality reduction (MDR) as a method for reducing the dimensionality of multilocus information, to improve the identification of polymorphism combinations associated with disease risk. The MDR method is nonparametric (i.e., no hypothesis about the value of a statistical parameter is made), is model-free (i.e., it assumes no particular inheritance model), and is directly applicable to case-control and discordant-sib-pair studies. Using simulated case-control data, we demonstrate that MDR has reasonable power to identify interactions among two or more loci in relatively small samples. When it was applied to a sporadic breast cancer case-control data set, in the absence of any statistically significant independent main effects, MDR identified a statistically significant high-order interaction among four polymorphisms from three different estrogen-metabolism genes. To our knowledge, this is the first report of a four-locus interaction associated with a common complex multifactorial disease. PMID:11404819
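
    A minimal sketch of the core MDR pooling step as it is usually described: each multilocus genotype cell is labelled high or low risk by its case:control ratio, collapsing the loci into a single one-dimensional factor. The variable names and the 1.0 threshold are illustrative assumptions, and the published method adds cross-validation and permutation testing on top of this step.

      import pandas as pd

      def mdr_high_risk_cells(df, loci, label_col="case", threshold=1.0):
          """Label each multilocus genotype cell high (1) or low (0) risk
          by comparing its case:control ratio to a threshold."""
          cells = {}
          for genotype, labels in df.groupby(loci)[label_col]:
              cases = (labels == 1).sum()
              controls = (labels == 0).sum()
              ratio = cases / controls if controls > 0 else float("inf")
              cells[genotype] = 1 if ratio > threshold else 0
          return cells

      def mdr_classify(df, loci, cells):
          """Collapse each subject's multilocus genotype into the pooled high/low-risk factor."""
          keys = df[loci].itertuples(index=False, name=None)
          return [cells.get(k, 0) for k in keys]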

  17. Multilevel Factorial Experiments for Developing Behavioral Interventions: Power, Sample Size, and Resource Considerations†

    PubMed Central

    Dziak, John J.; Nahum-Shani, Inbal; Collins, Linda M.

    2012-01-01

    Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions, by helping investigators to screen several candidate intervention components simultaneously and decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or employees within organizations). In this article we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements such as the number of clusters, the number of lower-level units, and the intraclass correlation affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes, because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. PMID:22309956
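
    A minimal Monte Carlo power sketch for a single cluster-level factor in a two-level (clustered) design, analyzed on cluster means; all numbers (cluster counts, effect size, intraclass correlation) are illustrative assumptions, not the authors' settings.

      import numpy as np
      from scipy import stats

      def simulate_power(n_clusters=40, n_per_cluster=20, effect=0.3,
                         icc=0.05, n_sims=1000, alpha=0.05, seed=0):
          """Estimate power for a cluster-randomized factor by simulation."""
          rng = np.random.default_rng(seed)
          sd_between = np.sqrt(icc)          # total outcome variance normalized to 1
          sd_within = np.sqrt(1.0 - icc)
          hits = 0
          for _ in range(n_sims):
              arm = rng.permutation(np.repeat([0, 1], n_clusters // 2))
              cluster_eff = rng.normal(0.0, sd_between, n_clusters)
              means = np.empty(n_clusters)
              for c in range(n_clusters):
                  y = (effect * arm[c] + cluster_eff[c]
                       + rng.normal(0.0, sd_within, n_per_cluster))
                  means[c] = y.mean()
              _, p = stats.ttest_ind(means[arm == 1], means[arm == 0])
              hits += p < alpha
          return hits / n_sims

      # Example: simulate_power(effect=0.3, icc=0.05) -> estimated power in [0, 1]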

  18. Cardiovascular disease prevention and lifestyle interventions: effectiveness and efficacy.

    PubMed

    Haskell, William L

    2003-01-01

    Over the past half century, scientific data have supported the strong relationship between the way a person or population lives and their risk of developing or dying from cardiovascular disease (CVD). While heredity can be a major factor for some people, personal health habits and environmental/cultural exposure are more important factors. CVD is a multifactor process to which a variety of biological and behavioral characteristics of the person contribute, including a number of well-established and emerging risk factors. Not smoking, being physically active, eating a heart-healthy diet, staying reasonably lean, and avoiding major stress and depression are the major components of an effective CVD prevention program. For people at high risk of CVD, medications frequently need to be added to a healthy lifestyle to minimize their risk of a heart attack or stroke, particularly in persons with conditions such as hypertension, hypercholesterolemia, or hyperglycemia. Maintaining an effective CVD prevention program in technologically advanced societies cannot be achieved by many high-risk persons without effective and sustained support from a well-organized health care system. Nurse-provided or nurse-coordinated care management programs using an integrated or multifactor approach have been highly effective in reducing CVD morbidity and mortality of high-risk persons.

  19. Multifactor leadership styles and new exposure to workplace bullying: a six-month prospective study

    PubMed Central

    TSUNO, Kanami; KAWAKAMI, Norito

    2014-01-01

    This study investigated the prospective association between supervisor leadership styles and workplace bullying. Altogether 404 civil servants from a local government in Japan completed baseline and follow-up surveys. The leadership variables and exposure to bullying were measured by the Multifactor Leadership Questionnaire and the Negative Acts Questionnaire-Revised, respectively. The prevalence of workplace bullying was 14.8% at baseline and 15.1% at follow-up. Among respondents who did not experience bullying at baseline (n=216), those who worked under supervisors rated higher in passive laissez-faire leadership had a 4.3 times higher risk of new exposure to bullying. On the other hand, respondents whose supervisors were rated as highly considerate of the individual had a 70% lower risk of new exposure to bullying. In the entire sample (n=317), passive laissez-faire leadership was significantly and positively associated with exposure to bullying, while charisma/inspiration, individual consideration, and contingent reward were negatively associated with it, after adjusting for demographic and occupational characteristics at baseline, life events during follow-up, and exposure to workplace bullying at baseline. Results indicated that passive laissez-faire and low individual consideration leadership styles at baseline were strong predictors of new exposure to bullying, and that high individual consideration leadership of supervisors/managers could be a preventive factor against bullying. PMID:25382384

  20. Multifactor leadership styles and new exposure to workplace bullying: a six-month prospective study.

    PubMed

    Tsuno, Kanami; Kawakami, Norito

    2015-01-01

    This study investigated the prospective association between supervisor leadership styles and workplace bullying. Altogether 404 civil servants from a local government in Japan completed baseline and follow-up surveys. The leadership variables and exposure to bullying were measured by the Multifactor Leadership Questionnaire and the Negative Acts Questionnaire-Revised, respectively. The prevalence of workplace bullying was 14.8% at baseline and 15.1% at follow-up. Among respondents who did not experience bullying at baseline (n=216), those who worked under supervisors rated higher in passive laissez-faire leadership had a 4.3 times higher risk of new exposure to bullying. On the other hand, respondents whose supervisors were rated as highly considerate of the individual had a 70% lower risk of new exposure to bullying. In the entire sample (n=317), passive laissez-faire leadership was significantly and positively associated with exposure to bullying, while charisma/inspiration, individual consideration, and contingent reward were negatively associated with it, after adjusting for demographic and occupational characteristics at baseline, life events during follow-up, and exposure to workplace bullying at baseline. Results indicated that passive laissez-faire and low individual consideration leadership styles at baseline were strong predictors of new exposure to bullying, and that high individual consideration leadership of supervisors/managers could be a preventive factor against bullying.

  1. Application of an Instrumental and Computational Approach for Improving the Vibration Behavior of Structural Panels Using a Lightweight Multilayer Composite

    PubMed Central

    Sánchez, Alberto; García, Manuel; Sebastián, Miguel Angel; Camacho, Ana María

    2014-01-01

    This work presents a hybrid (experimental-computational) application for improving the vibration behavior of structural components using a lightweight multilayer composite. The vibration behavior of a flat steel plate has been improved by gluing on a lightweight composite formed by a core of polyurethane foam and two paper mats placed on its faces. This composite enables the natural frequencies to be increased and the modal density of the plate to be reduced, moving the natural frequencies of the plate out of the excitation range and thereby improving the vibration behavior of the plate. A specific experimental model for measuring the Operating Deflection Shape (ODS) has been developed, which enables an evaluation of the goodness of the natural frequencies obtained with the computational model simulated by the finite element method (FEM). The model of composite + flat steel plate determined by FEM was used to conduct a parametric study, and the most influential factors for the 1st, 2nd and 3rd modes were identified using a multifactor analysis of variance (Multifactor-ANOVA). The presented results can be easily particularized for other cases, as they may be used in cycles of continuous improvement as well as in product development at the material, piece, and complete-system levels. PMID:24618779

  2. Multilevel factorial experiments for developing behavioral interventions: power, sample size, and resource considerations.

    PubMed

    Dziak, John J; Nahum-Shani, Inbal; Collins, Linda M

    2012-06-01

    Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions by helping investigators to screen several candidate intervention components simultaneously and to decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or when employees are nested within organizations). In this article, we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel, multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements, such as the number of clusters, the number of lower-level units, and the intraclass correlation, affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. (c) 2012 APA, all rights reserved

  3. Random scalar fields and hyperuniformity

    NASA Astrophysics Data System (ADS)

    Ma, Zheng; Torquato, Salvatore

    2017-06-01

    Disordered many-particle hyperuniform systems are exotic amorphous states of matter that lie between crystals and liquids. Hyperuniform systems have attracted recent attention because they are endowed with novel transport and optical properties. Recently, the hyperuniformity concept has been generalized to characterize two-phase media, scalar fields, and random vector fields. In this paper, we devise methods to explicitly construct hyperuniform scalar fields. Specifically, we analyze spatial patterns generated from Gaussian random fields, which have been used to model the microwave background radiation and heterogeneous materials, the Cahn-Hilliard equation for spinodal decomposition, and Swift-Hohenberg equations that have been used to model emergent pattern formation, including Rayleigh-Bénard convection. We show that the Gaussian random scalar fields can be constructed to be hyperuniform. We also numerically study the time evolution of spinodal decomposition patterns and demonstrate that they are hyperuniform in the scaling regime. Moreover, we find that labyrinth-like patterns generated by the Swift-Hohenberg equation are effectively hyperuniform. We show that thresholding (level-cutting) a hyperuniform Gaussian random field to produce a two-phase random medium tends to destroy the hyperuniformity of the progenitor scalar field. We then propose guidelines to achieve effectively hyperuniform two-phase media derived from thresholded non-Gaussian fields. Our investigation paves the way for new research directions to characterize the large-structure spatial patterns that arise in physics, chemistry, biology, and ecology. Moreover, our theoretical results are expected to guide experimentalists to synthesize new classes of hyperuniform materials with novel physical properties via coarsening processes and using state-of-the-art techniques, such as stereolithography and 3D printing.

  4. Simulating propagation of coherent light in random media using the Fredholm type integral equation

    NASA Astrophysics Data System (ADS)

    Kraszewski, Maciej; Pluciński, Jerzy

    2017-06-01

    Studying the propagation of light in random scattering materials is important for both basic and applied research. Such studies often require numerical methods for simulating the behavior of light beams in random media. However, if such simulations require consideration of the coherence properties of light, they may become complex numerical problems. There are well-established methods for simulating multiple scattering of light (e.g. radiative transfer theory and Monte Carlo methods), but they do not treat the coherence properties of light directly. Some variations of these methods allow the behavior of coherent light to be predicted, but only for an averaged realization of the scattering medium. This limits their application in studying many physical phenomena connected to a specific distribution of scattering particles (e.g. laser speckle). In general, numerical simulation of coherent light propagation in a specific realization of a random medium is a time- and memory-consuming problem. The goal of the presented research was to develop a new, efficient method for solving this problem. The method, presented in our earlier works, is based on solving a Fredholm-type integral equation which describes the multiple light scattering process. This equation can be discretized and solved numerically using various algorithms, e.g. by directly solving the corresponding linear system of equations, as well as by using iterative or Monte Carlo solvers. Here we present recent developments of this method, including its comparison with well-known analytical results and finite-difference-type simulations. We also present an extension of the method to problems of multiple scattering of polarized light on large spherical particles, which joins the presented mathematical formalism with Mie theory.
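
    As a hedged illustration of the general numerical pattern mentioned (discretize a Fredholm equation of the second kind and solve the resulting linear system), the sketch below uses the Nystrom method with a toy scalar kernel; it is not the authors' vector scattering kernel.

      import numpy as np

      def solve_fredholm_second_kind(kernel, f, a=0.0, b=1.0, lam=0.5, n=200):
          """Nystrom discretization of u(x) = f(x) + lam * integral_a^b K(x, t) u(t) dt."""
          x = np.linspace(a, b, n)
          w = np.full(n, (b - a) / (n - 1))     # trapezoid quadrature weights
          w[0] *= 0.5
          w[-1] *= 0.5
          K = kernel(x[:, None], x[None, :])    # n x n matrix K(x_i, t_j)
          A = np.eye(n) - lam * K * w[None, :]
          return x, np.linalg.solve(A, f(x))

      # Toy usage with a smooth exponential kernel (illustrative only):
      x, u = solve_fredholm_second_kind(lambda x, t: np.exp(-np.abs(x - t)),
                                        lambda x: np.ones_like(x))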

  5. Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions

    DOE PAGES

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...

    2015-11-01

    Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, the accumulation of fissions in time, and the accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting and, from these data, the counting distributions.

  6. Continuous time random walk with local particle-particle interaction

    NASA Astrophysics Data System (ADS)

    Xu, Jianping; Jiang, Guancheng

    2018-05-01

    The continuous time random walk (CTRW) is often applied to the study of particle motion in disordered media. Yet most such applications do not allow for particle-particle (walker-walker) interaction. In this paper, we consider a CTRW with particle-particle interaction; however, for simplicity, we restrain the interaction to be local. The generalized Chapman-Kolmogorov equation is modified by introducing a perturbation function that fluctuates around 1, which models the effect of interaction. Subsequently, a time-fractional nonlinear advection-diffusion equation is derived from this walking system. Under the initial condition of condensed particles at the origin and the free-boundary condition, we numerically solve this equation with both attractive and repulsive particle-particle interactions. Moreover, a Monte Carlo simulation is devised to verify the results of the above numerical work. The equation and the simulation unanimously predict that this walking system converges to the conventional one in the long-time limit. However, for systems where the free-boundary condition and long-time limit are not simultaneously satisfied, this convergence does not hold.
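
    For context, a minimal sketch of the baseline (non-interacting) CTRW that the paper generalizes: heavy-tailed waiting times and Gaussian jumps. The Pareto-type waiting-time construction and the exponent value are illustrative assumptions.

      import numpy as np

      def ctrw_positions(n_walkers=1000, t_max=100.0, alpha=0.7, seed=0):
          """Positions of independent CTRW walkers at (approximately) time t_max."""
          rng = np.random.default_rng(seed)
          pos = np.zeros(n_walkers)
          t = np.zeros(n_walkers)
          active = np.ones(n_walkers, dtype=bool)
          while active.any():
              n = active.sum()
              # Pareto-type waiting times with tail exponent alpha (heavy-tailed for alpha < 1)
              wait = (1.0 - rng.random(n)) ** (-1.0 / alpha) - 1.0
              t[active] += wait
              jump = rng.normal(0.0, 1.0, n)
              idx = np.flatnonzero(active)
              still = t[active] <= t_max
              pos[idx[still]] += jump[still]    # a jump occurs only if the wait ends before t_max
              active[idx[~still]] = False
          return pos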

  7. 4-wave dynamics in kinetic wave turbulence

    NASA Astrophysics Data System (ADS)

    Chibbaro, Sergio; Dematteis, Giovanni; Rondoni, Lamberto

    2018-01-01

    A general Hamiltonian wave system with quartic resonances is considered, in the standard kinetic limit of a continuum of weakly interacting dispersive waves with random phases. The evolution equation for the multimode characteristic function Z is obtained within an "interaction representation" and a perturbation expansion in the small nonlinearity parameter. A frequency renormalization is performed to remove linear terms that do not appear in the 3-wave case. Feynman-Wyld diagrams are used to average over phases, leading to a first order differential evolution equation for Z. A hierarchy of equations, analogous to the Boltzmann hierarchy for low density gases, is derived, which preserves in time the property of random phases and amplitudes. This amounts to a general formalism for both the N-mode and the 1-mode PDF equations for 4-wave turbulent systems, suitable for numerical simulations and for investigating intermittency. Some of the main results which are developed here in detail have been tested numerically in a recent work.

  8. A correlated Walks' theory for DNA denaturation

    NASA Astrophysics Data System (ADS)

    Mejdani, R.

    1994-08-01

    We have shown that, by using a correlated walks' theory for the lattice gas model on a one-dimensional lattice, we can study not only the saturation curves obtained earlier for enzyme kinetics but also the DNA denaturation process. In the limit of no interactions between sites, the equation for the melting curves of DNA reduces to the random model equation. Thus our theory leads naturally to this classical equation in the limiting case.

  9. An Operator Method for Field Moments from the Extended Parabolic Wave Equation and Analytical Solutions of the First and Second Moments for Atmospheric Electromagnetic Wave Propagation

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    2004-01-01

    The extended wide-angle parabolic wave equation applied to electromagnetic wave propagation in random media is considered. A general operator equation is derived which gives the statistical moments of an electric field of a propagating wave. This expression is used to obtain the first and second order moments of the wave field and solutions are found that transcend those which incorporate the full paraxial approximation at the outset. Although these equations can be applied to any propagation scenario that satisfies the conditions of application of the extended parabolic wave equation, the example of propagation through atmospheric turbulence is used. It is shown that in the case of atmospheric wave propagation and under the Markov approximation (i.e., the delta-correlation of the fluctuations in the direction of propagation), the usual parabolic equation in the paraxial approximation is accurate even at millimeter wavelengths. The comprehensive operator solution also allows one to obtain expressions for the longitudinal (generalized) second order moment. This is also considered and the solution for the atmospheric case is obtained and discussed. The methodology developed here can be applied to any qualifying situation involving random propagation through turbid or plasma environments that can be represented by a spectral density of permittivity fluctuations.

  10. A boundary PDE feedback control approach for the stabilization of mortgage price dynamics

    NASA Astrophysics Data System (ADS)

    Rigatos, G.; Siano, P.; Sarno, D.

    2017-11-01

    Several transactions taking place in financial markets are dependent on the pricing of mortgages (loans for the purchase of residences, land or farms). In this article, a method for stabilization of mortgage price dynamics is developed. It is considered that mortgage prices follow a PDE model which is equivalent to a multi-asset Black-Scholes PDE. Actually it is a diffusion process evolving in a 2D assets space, where the first asset is the house price and the second asset is the interest rate. By applying semi-discretization and a finite differences scheme this multi-asset PDE is transformed into a state-space model consisting of ordinary nonlinear differential equations. For the local subsystems, into which the mortgage PDE is decomposed, it becomes possible to apply boundary-based feedback control. The controller design proceeds by showing that the state-space model of the mortgage price PDE stands for a differentially flat system. Next, for each subsystem which is related to a nonlinear ODE, a virtual control input is computed, that can invert the subsystem's dynamics and can eliminate the subsystem's tracking error. From the last row of the state-space description, the control input (boundary condition) that is actually applied to the multi-factor mortgage price PDE system is found. This control input contains recursively all virtual control inputs which were computed for the individual ODE subsystems associated with the previous rows of the state-space equation. Thus, by tracing the rows of the state-space model backwards, at each iteration of the control algorithm, one can finally obtain the control input that should be applied to the mortgage price PDE system so as to assure that all its state variables will converge to the desirable setpoints. By showing the feasibility of such a control method it is also proven that through selected modification of the PDE boundary conditions the price of the mortgage can be made to converge and stabilize at specific reference values.

  11. Nurse executive transformational leadership found in participative organizations.

    PubMed

    Dunham-Taylor, J

    2000-05-01

    The study examined a national sample of 396 randomly selected hospital nurse executives to explore transformational leadership, stage of power, and organizational climate. Results from a few nurse executive studies have found nurse executives were transformational leaders. As executives were more transformational, they achieved better staff satisfaction and higher work group effectiveness. This study integrates Bass' transformational leadership model with Hagberg's power stage theory and Likert's organizational climate theory. Nurse executives (396) and staff reporting to them (1,115) rated the nurse executives' leadership style, staff extra effort, staff satisfaction, and work group effectiveness using Bass and Avolio's Multifactor Leadership Questionnaire. Executives' bosses (360) rated executive work group effectiveness. Executives completed Hagberg's Personal Power Profile and ranked their organizational climate using Likert's Profile of Organizational Characteristics. Nurse executives used transformational leadership fairly often; achieved fairly satisfied staff levels; were very effective according to bosses; were most likely at stage 3 (power by achievement) or stage 4 (power by reflection); and rated their hospital as a Likert System 3 Consultative Organization. Staff satisfaction and work group effectiveness decreased as nurse executives were more transactional. Higher transformational scores tended to occur with higher educational degrees and within more participative organizations. Transformational qualities can be enhanced by further education, by achieving higher power stages, and by being within more participative organizations.

  12. Prioritization of candidate disease genes by combining topological similarity and semantic similarity.

    PubMed

    Liu, Bin; Jin, Min; Zeng, Pan

    2015-10-01

    The identification of gene-phenotype relationships is very important for the treatment of human diseases. Studies have shown that genes causing the same or similar phenotypes tend to interact with each other in a protein-protein interaction (PPI) network. Thus, many identification methods based on the PPI network model have achieved good results. However, in the PPI network, some interactions between the proteins encoded by candidate gene and the proteins encoded by known disease genes are very weak. Therefore, some studies have combined the PPI network with other genomic information and reported good predictive performances. However, we believe that the results could be further improved. In this paper, we propose a new method that uses the semantic similarity between the candidate gene and known disease genes to set the initial probability vector of a random walk with a restart algorithm in a human PPI network. The effectiveness of our method was demonstrated by leave-one-out cross-validation, and the experimental results indicated that our method outperformed other methods. Additionally, our method can predict new causative genes of multifactor diseases, including Parkinson's disease, breast cancer and obesity. The top predictions were good and consistent with the findings in the literature, which further illustrates the effectiveness of our method. Copyright © 2015 Elsevier Inc. All rights reserved.
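
    A minimal sketch of the random-walk-with-restart iteration on a PPI network that this kind of method builds on; the semantic-similarity seeding is specific to the paper, so here the seed vector is simply supplied by the caller, and the adjacency matrix is a dense NumPy array (an assumption made for brevity).

      import numpy as np

      def random_walk_with_restart(adj, seed, restart=0.7, tol=1e-10, max_iter=10000):
          """Iterate p <- (1 - r) * W @ p + r * p0, with W the column-normalized adjacency."""
          col_sums = adj.sum(axis=0, keepdims=True)
          col_sums[col_sums == 0] = 1.0          # guard against isolated nodes
          W = adj / col_sums                      # column-stochastic transition matrix
          p0 = seed / seed.sum()
          p = p0.copy()
          for _ in range(max_iter):
              p_new = (1.0 - restart) * (W @ p) + restart * p0
              if np.abs(p_new - p).sum() < tol:
                  break
              p = p_new
          return p_new                            # candidate genes are ranked by this score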

  13. Predictive models of poly(ethylene-terephthalate) film degradation under multi-factor accelerated weathering exposures

    PubMed Central

    Ngendahimana, David K.; Fagerholm, Cara L.; Sun, Jiayang; Bruckman, Laura S.

    2017-01-01

    Accelerated weathering exposures were performed on poly(ethylene-terephthalate) (PET) films. Longitudinal multi-level predictive models as a function of PET grades and exposure types were developed for the change in yellowness index (YI) and haze (%). Exposures with similar change in YI were modeled using a linear fixed-effects modeling approach. Due to the complex nature of haze formation, measurement uncertainty, and the differences in the samples’ responses, the change in haze (%) depended on individual samples’ responses and a linear mixed-effects modeling approach was used. When compared to fixed-effects models, the addition of random effects in the haze formation models significantly increased the variance explained. For both modeling approaches, diagnostic plots confirmed independence and homogeneity with normally distributed residual errors. Predictive R2 values for true prediction error and predictive power of the models demonstrated that the models were not subject to over-fitting. These models enable prediction under pre-defined exposure conditions for a given exposure time (or photo-dosage in case of UV light exposure). PET degradation under cyclic exposures combining UV light and condensing humidity is caused by photolytic and hydrolytic mechanisms causing yellowing and haze formation. Quantitative knowledge of these degradation pathways enable cross-correlation of these lab-based exposures with real-world conditions for service life prediction. PMID:28498875
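
    A hedged sketch of the per-sample random-effects pattern described above, using statsmodels' mixed linear model; the column names, file name, and the simple linear time trend are assumptions, and the published models are more detailed.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Assumed layout: one row per measurement of a PET sample, with columns
      # haze, exposure_hours, and sample_id (hypothetical file name).
      df = pd.read_csv("pet_weathering.csv")

      # Fixed effect of exposure time, random intercept for each physical sample.
      model = smf.mixedlm("haze ~ exposure_hours", data=df, groups=df["sample_id"])
      result = model.fit()
      print(result.summary())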

  14. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  15. Reproduction of exact solutions of Lipkin model by nonlinear higher random-phase approximation

    NASA Astrophysics Data System (ADS)

    Terasaki, J.; Smetana, A.; Šimkovic, F.; Krivoruchenko, M. I.

    2017-10-01

    It is shown that the random-phase approximation (RPA) method with its nonlinear higher generalization, which was previously considered an approximation except in a very limited case, reproduces the exact solutions of the Lipkin model. The nonlinear higher RPA is based on an equation that is nonlinear in the eigenvectors and includes many-particle-many-hole components in the creation operator of the excited states. We demonstrate the exact character of the solutions analytically for particle number N = 2 and numerically for N = 8. This finding indicates that the nonlinear higher RPA is equivalent to the exact Schrödinger equation.

  16. The distribution of the intervals between neural impulses in the maintained discharges of retinal ganglion cells.

    PubMed

    Levine, M W

    1991-01-01

    Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)

  17. Turbulent solutions of the equations of fluid motion

    NASA Technical Reports Server (NTRS)

    Deissler, R. G.

    1984-01-01

    Some turbulent solutions of the unaveraged Navier-Stokes equations (equations of fluid motion) are reviewed. Those equations are solved numerically in order to study the nonlinear physics of incompressible turbulent flow. Initial three-dimensional cosine velocity fluctuations and periodic boundary conditions are used in most of the work considered. The three components of the mean-square velocity fluctuations are initially equal for the conditions chosen. The resulting solutions show characteristics of turbulence such as the linear and nonlinear excitation of small-scale fluctuations. For the stronger fluctuations, the initially nonrandom flow develops into an apparently random turbulence. Thus randomness or turbulence can arise as a consequence of the structure of the Navier-Stokes equations. The cases considered include turbulence which is statistically homogeneous or inhomogeneous and isotropic or anisotropic. A mean shear is present in some cases. A statistically steady-state turbulence is obtained by using a spatially periodic body force. Various turbulence processes, including the transfer of energy between eddy sizes and between directional components, and the production, dissipation, and spatial diffusion of turbulence, are considered. It is concluded that the physical processes occurring in turbulence can be profitably studied numerically.

  18. Diffusion Processes Satisfying a Conservation Law Constraint

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2014-03-04

    We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner's generalized Dirichlet processes.

  19. Diffusion Processes Satisfying a Conservation Law Constraint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakosi, J.; Ristorcelli, J. R.

    We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner's generalized Dirichlet processes.

  20. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    PubMed

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
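
    For context, a minimal sketch of the Van der Pol benchmark system referred to above, integrated for one fixed parameter value; the random-effects and SAEM estimation layers are beyond this sketch, and the parameter and initial values are assumptions.

      import numpy as np
      from scipy.integrate import solve_ivp

      def van_der_pol(t, state, mu):
          """Van der Pol oscillator x'' - mu * (1 - x**2) * x' + x = 0 as a first-order system."""
          x, v = state
          return [v, mu * (1.0 - x**2) * v - x]

      sol = solve_ivp(van_der_pol, (0.0, 50.0), [2.0, 0.0], args=(1.0,),
                      dense_output=True, rtol=1e-8)
      t = np.linspace(0.0, 50.0, 2000)
      x, v = sol.sol(t)      # trajectories a fitted model would be compared against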

  1. FITTING NONLINEAR ORDINARY DIFFERENTIAL EQUATION MODELS WITH RANDOM EFFECTS AND UNKNOWN INITIAL CONDITIONS USING THE STOCHASTIC APPROXIMATION EXPECTATION–MAXIMIZATION (SAEM) ALGORITHM

    PubMed Central

    Chow, Sy- Miin; Lu, Zhaohua; Zhu, Hongtu; Sherwood, Andrew

    2014-01-01

    The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation–maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed. PMID:25416456

  2. Actor-network Procedures: Modeling Multi-factor Authentication, Device Pairing, Social Interactions

    DTIC Science & Technology

    2011-08-29

    unmodifiable properties of your body; or the capabilities that you cannot convey to others, such as your handwriting. An identity can thus be determined by... network, two principals with the same set of secrets but, say, different computational powers, can be distinguished by timing their responses. Or they... says that configurations are finite sets. Partially ordered multisets, or pomsets, were introduced and extensively studied by Vaughan Pratt and his

  3. A remote sensing-assisted risk rating study to predict oak decline and recovery in the Missouri Ozark Highlands, USA

    Treesearch

    Cuizhen Wang; Hong S. He; John M. Kabrick

    2008-01-01

    Forests in the Ozark Highlands underwent widespread oak decline affected by severe droughts in 1999-2000. In this study, the differential normalized difference water index was calculated to detect crown dieback. A multi-factor risk rating system was built to map risk levels of stands. As a quick response to drought, decline in 2000 mostly occurred in stands at low to...

  4. Local Influence Analysis of Nonlinear Structural Equation Models

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Tang, Nian-Sheng

    2004-01-01

    By regarding the latent random vectors as hypothetical missing data and based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm, we investigate assessment of local influence of various perturbation schemes in a nonlinear structural equation model. The basic building blocks of local influence analysis…

  5. The development of the deterministic nonlinear PDEs in particle physics to stochastic case

    NASA Astrophysics Data System (ADS)

    Abdelrahman, Mahmoud A. E.; Sohaly, M. A.

    2018-06-01

    In the present work, an accurate method, called the Riccati-Bernoulli Sub-ODE technique, is used for solving the deterministic and stochastic cases of the Phi-4 equation and the nonlinear Foam Drainage equation. Also, control of the randomness of the input is studied for the stability of the stochastic process solution.

  6. Examining an Alternative to Score Equating: A Randomly Equivalent Forms Approach. Research Report. ETS RR-08-14

    ERIC Educational Resources Information Center

    Liao, Chi-Wen; Livingston, Samuel A.

    2008-01-01

    Randomly equivalent forms (REF) of tests in listening and reading for nonnative speakers of English were created by stratified random assignment of items to forms, stratifying on item content and predicted difficulty. The study included 50 replications of the procedure for each test. Each replication generated 2 REFs. The equivalence of those 2…
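
    A minimal sketch of stratified random assignment of items to two forms, as described; the column names and the strata definition (content area crossed with a predicted-difficulty band) are illustrative assumptions.

      import pandas as pd

      def assign_randomly_equivalent_forms(items, strata_cols, seed=0):
          """Within each stratum, shuffle items and alternate assignment to forms A and B."""
          assigned = []
          for _, stratum in items.groupby(strata_cols):
              shuffled = stratum.sample(frac=1.0, random_state=seed)
              labels = ["A", "B"] * (len(shuffled) // 2 + 1)
              assigned.append(shuffled.assign(form=labels[: len(shuffled)]))
          return pd.concat(assigned)

      # items is assumed to have columns item_id, content_area, difficulty_band:
      # forms = assign_randomly_equivalent_forms(items, ["content_area", "difficulty_band"])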

  7. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise

    NASA Astrophysics Data System (ADS)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ -stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α . We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ -stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
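
    A minimal numerical sketch of the kind of experiment described: an Edwards-Wilkinson-type lattice growth driven by μ-stable Lévy noise, from which the growth exponent β can be estimated by fitting log(width) against log(time). The ordinary (rather than fractional) Laplacian, the exponent value, and the time step are simplifying assumptions, and scipy's levy_stable generator is assumed available.

      import numpy as np
      from scipy.stats import levy_stable

      def ew_with_levy_noise(L=256, steps=2000, dt=0.05, nu=1.0, mu=1.5, seed=0):
          """Global interface width W(t) of a lattice EW equation with mu-stable noise."""
          rng = np.random.default_rng(seed)
          h = np.zeros(L)
          widths = np.empty(steps)
          for s in range(steps):
              lap = np.roll(h, 1) - 2.0 * h + np.roll(h, -1)        # periodic Laplacian
              noise = levy_stable.rvs(mu, 0.0, size=L, random_state=rng)
              h = h + dt * nu * lap + dt ** (1.0 / mu) * noise      # stable-noise time scaling
              widths[s] = h.std()
          return widths   # fit a power law widths ~ t**beta in the growth regime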

  8. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise.

    PubMed

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.

  9. Development of control systems for space shuttle vehicles. Volume 2: Appendixes

    NASA Technical Reports Server (NTRS)

    Stone, C. R.; Chase, T. W.; Kiziloz, B. M.; Ward, M. D.

    1971-01-01

    A launch phase random normal wind model is presented for delta wing, two-stage, space shuttle control system studies. Equations, data, and simulations for conventional launch studies are given as well as pitch and lateral equations and data for covariance analyses of the launch phase of MSFC vehicle B. Lateral equations and data for North American 130G and 134D are also included along with a high-altitude abort simulation.

  10. Fusion of Imaging and Inertial Sensors for Navigation

    DTIC Science & Technology

    2006-09-01

    combat operations. The Global Positioning System (GPS) was fielded in the 1980s and first used for precision navigation and targeting in combat... equations [37]. Consider the homogeneous nonlinear differential equation ẋ(t) = f[x(t), u(t), t]; x(t0) = x0 (2.4). For a given input function u0(t)... differential equation is a time-varying probability density function. The Kalman filter derivation assumes Gaussian distributions for all random

  11. Local polynomial chaos expansion for linear differential equations with high dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Jakeman, John; Gittelson, Claude

    2015-01-08

    In this paper we present a localized polynomial chaos expansion for partial differential equations (PDEs) with random inputs. In particular, we focus on time independent linear stochastic problems with high dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. Furthermore, the local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low dimensional local problems and can be highly efficient. In our paper we present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.

  12. Case management to reduce risk of cardiovascular disease in a county health care system.

    PubMed

    Ma, Jun; Berra, Kathy; Haskell, William L; Klieman, Linda; Hyde, Shauna; Smith, Mark W; Xiao, Lan; Stafford, Randall S

    2009-11-23

    Case management (CM) is a systematic approach to supplement physician-centered efforts to prevent cardiovascular disease (CVD). Research is limited on its implementation and efficacy in low-income, ethnic minority populations. We conducted a randomized clinical trial to evaluate a nurse- and dietitian-led CM program for reducing major CVD risk factors in low-income, primarily ethnic minority patients in a county health care system, 63.0% of whom had type 2 diabetes mellitus. The primary outcome was the Framingham risk score (FRS). A total of 419 patients at elevated risk of CVD events were randomized and followed up for a mean of 16 months (81.4% retention). The mean FRS was significantly lower for the CM vs usual care group at follow-up (7.80 [95% confidence interval, 7.21-8.38] vs 8.93 [8.36-9.49]; P = .001) after adjusting for baseline FRS. This is equivalent to 5 fewer heart disease events per 1000 individuals per year attributable to the intervention or to 200 individuals receiving the intervention to prevent 1 event per year. The pattern of group differences in the FRS was similar in subgroups defined a priori by sex and ethnicity. The main driver of these differences was lowering the mean (SD) systolic (-4.2 [18.5] vs 2.6 [22.7] mm Hg; P = .003) and diastolic (-6.0 [11.6] vs -3.0 [11.7] mm Hg; P = .02) blood pressures for the CM vs usual care group. Nurse and dietitian CM targeting multifactor risk reduction can lead to modest improvements in CVD risk factors among high-risk patients in low-income, ethnic minority populations receiving care in county health clinics. clinicaltrials.gov Identifier: NCT00128687.

  13. dLocAuth: a dynamic multifactor authentication scheme for mCommerce applications using independent location-based obfuscation

    NASA Astrophysics Data System (ADS)

    Kuseler, Torben; Lami, Ihsan A.

    2012-06-01

    This paper proposes a new technique to obfuscate an authentication-challenge program (named LocProg) using randomly generated data together with a client's current location in real-time. LocProg can be used to enable any handset application on mobile devices (e.g. mCommerce on Smartphones) that requires authentication with a remote authenticator (e.g. bank). The motivation of this novel technique is to a) enhance the security against replay attacks, which is currently based on using real-time nonce(s), and b) add a new security factor, which is location verified by two independent sources, to challenge/response methods for authentication. To assure a secure-live transaction, thus reducing the possibility of replay and other remote attacks, the authors have devised a novel technique to obtain the client's location from two independent sources of GPS on the client's side and the cellular network on the authenticator's side. The algorithm of LocProg is based on obfuscating "random elements plus a client's data" with a location-based key, generated on the bank side. LocProg is then sent to the client and is designed so it will automatically integrate into the target application on the client's handset. The client can then de-obfuscate LocProg if s/he is within a certain range around the location calculated by the bank and if the correct personal data is supplied. LocProg also has features to protect against trial/error attacks. Analysis of LocAuth's security (trust, threat and system models) and trials based on a prototype implementation (on Android platform) prove the viability and novelty of LocAuth.
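
    The location-based key idea can be illustrated with a minimal sketch (an assumption-laden toy, not LocProg's actual algorithm): both parties quantize their independently obtained position fixes to a coarse grid cell and derive a symmetric key from the cell, a fresh nonce and a shared secret; the keys agree only if the two fixes fall in the same cell. The grid resolution, PBKDF2 choice and all names below are illustrative assumptions.

      import hashlib
      import os

      CELL_DEG = 0.01   # grid resolution in degrees (~1 km); assumed tolerance

      def location_cell(lat, lon, cell=CELL_DEG):
          # Quantize a position so nearby fixes (GPS vs. cellular) share one cell.
          return (round(lat / cell), round(lon / cell))

      def location_key(lat, lon, nonce, secret=b"shared-client-bank-secret"):
          # Derive a key from the quantized location, a fresh nonce and a shared secret.
          cx, cy = location_cell(lat, lon)
          material = f"{cx}:{cy}".encode() + nonce
          return hashlib.pbkdf2_hmac("sha256", material, secret, 100_000)

      nonce = os.urandom(16)
      k_bank = location_key(48.8567, 2.3508, nonce)      # network-derived position (bank side)
      k_client = location_key(48.8569, 2.3511, nonce)    # GPS position (client side)
      print("keys match:", k_bank == k_client)           # True only if both fixes share a cell

    A real scheme would also have to handle fixes that straddle a cell boundary (for example by trying neighbouring cells), which this sketch ignores.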

  14. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2013-11-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are usually considered alternatives because the solution of the reaction-diffusion equation is generally a continuous smooth function with exponential decay and infinite support, while the level-set method, a front-tracking technique, generates a sharp function with finite support. However, the two approaches can in fact be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random character that are extremely important in wildland fire propagation; as a consequence, the fire front itself acquires a random character, and a tracking method for random fronts is needed. In particular, the level-set contour is here randomized according to the probability density function of the interface-particle displacement. When the level-set method is developed to track a front interface undergoing random motion, the resulting averaged process turns out to be governed by an evolution equation of reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key, characterizing role it has in the level-set approach. The resulting model is suitable for simulating effects due to turbulent convection, such as flank and backing fire, the faster fire spread caused by hot-air pre-heating and ember landing, and also fire overcoming a firebreak zone, a case not resolved by models based on the level-set method. Moreover, the proposed formulation yields a correction to the rate-of-spread formula accounting for the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour.

  15. Evaluation of the National Research Council (2001) dairy model and derivation of new prediction equations. 1. Digestibility of fiber, fat, protein, and nonfiber carbohydrate.

    PubMed

    White, R R; Roman-Garcia, Y; Firkins, J L; VandeHaar, M J; Armentano, L E; Weiss, W P; McGill, T; Garnett, R; Hanigan, M D

    2017-05-01

    Evaluation of ration balancing systems such as the National Research Council (NRC) Nutrient Requirements series is important for improving predictions of animal nutrient requirements and advancing feeding strategies. This work used a literature data set (n = 550) to evaluate predictions of total-tract digested neutral detergent fiber (NDF), fatty acid (FA), crude protein (CP), and nonfiber carbohydrate (NFC) estimated by the NRC (2001) dairy model. Mean biases suggested that the NRC (2001) lactating cow model overestimated true FA and CP digestibility by 26 and 7%, respectively, and under-predicted NDF digestibility by 16%. All NRC (2001) estimates had notable mean and slope biases and large root mean squared prediction error (RMSPE), and concordance (CCC) ranged from poor to good. Predicting NDF digestibility with independent equations for legumes, corn silage, other forages, and nonforage feeds improved CCC (0.85 vs. 0.76) compared with the re-derived NRC (2001) equation form (NRC equation with parameter estimates re-derived against this data set). Separate FA digestion coefficients were derived for different fat supplements (animal fats, oils, and other fat types) and for the basal diet. This equation returned improved (from 0.76 to 0.94) CCC compared with the re-derived NRC (2001) equation form. Unique CP digestibility equations were derived for forages, animal protein feeds, plant protein feeds, and other feeds, which improved CCC compared with the re-derived NRC (2001) equation form (0.74 to 0.85). New NFC digestibility coefficients were derived for grain-specific starch digestibilities, with residual organic matter assumed to be 98% digestible. A Monte Carlo cross-validation was performed to evaluate repeatability of model fit. In this procedure, data were randomly subsetted 500 times into derivation (60%) and evaluation (40%) data sets, and equations were derived using the derivation data and then evaluated against the independent evaluation data. Models derived with random study effects demonstrated poor repeatability of fit in independent evaluation. Similar equations derived without random study effects showed improved fit against independent data and little evidence of biased parameter estimates associated with failure to include study effects. The equations derived in this analysis provide interesting insight into how NDF, starch, FA, and CP digestibilities are affected by intake, feed type, and diet composition. The Authors. Published by the Federation of Animal Science Societies and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
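
    The Monte Carlo cross-validation step described above is easy to reproduce in outline. The sketch below (illustrative only, with synthetic data rather than the digestibility data set and an ordinary least-squares stand-in for the derived equations) repeats the random 60%/40% derivation/evaluation split 500 times and accumulates the root mean squared prediction error on the held-out portion.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 550                                            # literature data set size from the abstract
      X = rng.normal(size=(n, 3))                        # stand-in predictors (not the real diet variables)
      y = 1.0 + X @ np.array([0.5, -0.3, 0.2]) + rng.normal(0.0, 0.4, n)

      rmspe = []
      for _ in range(500):                               # 500 random derivation/evaluation splits
          idx = rng.permutation(n)
          derive, evaluate = idx[: int(0.6 * n)], idx[int(0.6 * n):]
          A = np.column_stack([np.ones(derive.size), X[derive]])
          coef, *_ = np.linalg.lstsq(A, y[derive], rcond=None)   # refit on the derivation subset
          B = np.column_stack([np.ones(evaluate.size), X[evaluate]])
          resid = y[evaluate] - B @ coef
          rmspe.append(np.sqrt(np.mean(resid ** 2)))

      print(f"RMSPE over 500 splits: {np.mean(rmspe):.3f} +/- {np.std(rmspe):.3f}")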

  16. The Effect of Tutoring With Nonstandard Equations for Students With Mathematics Difficulty.

    PubMed

    Powell, Sarah R; Driver, Melissa K; Julian, Tyler E

    2015-01-01

    Students often misinterpret the equal sign (=) as operational instead of relational. Research indicates misinterpretation of the equal sign occurs because students receive relatively little exposure to equations that promote relational understanding of the equal sign. No study, however, has examined effects of nonstandard equations on the equation solving and equal-sign understanding of students with mathematics difficulty (MD). In the present study, second-grade students with MD (n = 51) were randomly assigned to standard equations tutoring, combined tutoring (standard and nonstandard equations), and no-tutoring control. Combined tutoring students demonstrated greater gains on equation-solving assessments and equal-sign tasks compared to the other two conditions. Standard tutoring students demonstrated improved skill on equation solving over control students, but combined tutoring students' performance gains were significantly larger. Results indicate that exposure to and practice with nonstandard equations positively influence student understanding of the equal sign. © Hammill Institute on Disabilities 2013.

  17. Diffusion in random networks

    DOE PAGES

    Zhang, Duan Z.; Padrino, Juan C.

    2017-06-01

    The ensemble averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of pockets connected by tortuous channels. Inside a channel, fluid transport is assumed to be governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pocket mass density. The so-called dual-porosity model is found to be equivalent to the leading order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider the one-dimensional mass diffusion in a semi-infinite domain. Because of the time required to establish the linear concentration profile inside a channel, at early times the similarity variable is x t^(-1/4) rather than x t^(-1/2) as in the traditional theory. We found that this early-time similarity can be explained by random walk theory through the network.

  18. Random vibration analysis of train-bridge under track irregularities and traveling seismic waves using train-slab track-bridge interaction model

    NASA Astrophysics Data System (ADS)

    Zeng, Zhi-Ping; Zhao, Yan-Gang; Xu, Wen-Tao; Yu, Zhi-Wu; Chen, Ling-Kun; Lou, Ping

    2015-04-01

    The frequent use of bridges in high-speed railway lines greatly increases the probability that trains are running on bridges when earthquakes occur. This paper investigates the random vibrations of a high-speed train traversing a slab track on a continuous girder bridge subjected to track irregularities and traveling seismic waves by the pseudo-excitation method (PEM). To derive the equations of motion of the train-slab track-bridge interaction system, the multibody dynamics and finite element method models are used for the train and the track and bridge, respectively. By assuming track irregularities to be fully coherent random excitations with time lags between different wheels and seismic accelerations to be uniformly modulated, non-stationary random excitations with time lags between different foundations, the random load vectors of the equations of motion are transformed into a series of deterministic pseudo-excitations based on PEM and the wheel-rail contact relationship. A computer code is developed to obtain the time-dependent random responses of the entire system. As a case study, the random vibration characteristics of an ICE-3 high-speed train traversing a seven-span continuous girder bridge simultaneously excited by track irregularities and traveling seismic waves are analyzed. The influence of train speed and seismic wave propagation velocity on the random vibration characteristics of the bridge and train are discussed.

  19. Propagation of finite amplitude sound through turbulence: Modeling with geometrical acoustics and the parabolic approximation

    NASA Astrophysics Data System (ADS)

    Blanc-Benon, Philippe; Lipkens, Bart; Dallois, Laurent; Hamilton, Mark F.; Blackstock, David T.

    2002-01-01

    Sonic boom propagation can be affected by atmospheric turbulence. It has been shown that turbulence affects the perceived loudness of sonic booms, mainly by changing its peak pressure and rise time. The models reported here describe the nonlinear propagation of sound through turbulence. Turbulence is modeled as a set of individual realizations of a random temperature or velocity field. In the first model, linear geometrical acoustics is used to trace rays through each realization of the turbulent field. A nonlinear transport equation is then derived along each eigenray connecting the source and receiver. The transport equation is solved by a Pestorius algorithm. In the second model, the KZK equation is modified to account for the effect of a random temperature field and it is then solved numerically. Results from numerical experiments that simulate the propagation of spark-produced N waves through turbulence are presented. It is observed that turbulence decreases, on average, the peak pressure of the N waves and increases the rise time. Nonlinear distortion is less when turbulence is present than without it. The effects of random vector fields are stronger than those of random temperature fields. The location of the caustics and the deformation of the wave front are also presented. These observations confirm the results from the model experiment in which spark-produced N waves are used to simulate sonic boom propagation through a turbulent atmosphere.

  20. Propagation of finite amplitude sound through turbulence: modeling with geometrical acoustics and the parabolic approximation.

    PubMed

    Blanc-Benon, Philippe; Lipkens, Bart; Dallois, Laurent; Hamilton, Mark F; Blackstock, David T

    2002-01-01

    Sonic boom propagation can be affected by atmospheric turbulence. It has been shown that turbulence affects the perceived loudness of sonic booms, mainly by changing its peak pressure and rise time. The models reported here describe the nonlinear propagation of sound through turbulence. Turbulence is modeled as a set of individual realizations of a random temperature or velocity field. In the first model, linear geometrical acoustics is used to trace rays through each realization of the turbulent field. A nonlinear transport equation is then derived along each eigenray connecting the source and receiver. The transport equation is solved by a Pestorius algorithm. In the second model, the KZK equation is modified to account for the effect of a random temperature field and it is then solved numerically. Results from numerical experiments that simulate the propagation of spark-produced N waves through turbulence are presented. It is observed that turbulence decreases, on average, the peak pressure of the N waves and increases the rise time. Nonlinear distortion is less when turbulence is present than without it. The effects of random vector fields are stronger than those of random temperature fields. The location of the caustics and the deformation of the wave front are also presented. These observations confirm the results from the model experiment in which spark-produced N waves are used to simulate sonic boom propagation through a turbulent atmosphere.

  1. The impact of high total cholesterol and high low-density lipoprotein on avascular necrosis of the femoral head in low-energy femoral neck fractures.

    PubMed

    Zeng, Xianshang; Zhan, Ke; Zhang, Lili; Zeng, Dan; Yu, Weiguang; Zhang, Xinchao; Zhao, Mingdong; Lai, Zhicheng; Chen, Runzhen

    2017-02-17

    Avascular necrosis of the femoral head (AVNFH) typically constitutes 5 to 15% of all complications of low-energy femoral neck fractures, and due to an increasingly ageing population and a rising prevalence of femoral neck fractures, the number of patients who develop AVNFH is increasing. However, there is no consensus regarding the relationship between blood lipid abnormalities and postoperative AVNFH. The purpose of this retrospective study was to investigate the relationship between blood lipid abnormalities and AVNFH following the femoral neck fracture operation among an elderly population. A retrospective, comparative study was performed at our institution. Between June 2005 and November 2009, 653 elderly patients (653 hips) with low-energy femoral neck fractures underwent closed reduction and internal fixation with cancellous screws (Smith and Nephew, Memphis, Tennessee). Follow-up occurred at 1, 6, 12, 18, 24, 30, and 36 months after surgery. Logistic multi-factor regression analysis was used to assess the risk factors of AVNFH and to determine the effect of blood lipid levels on AVNFH development. Inclusion and exclusion criteria were predetermined to focus on isolated freshly closed femoral neck fractures in the elderly population. The primary outcome was blood lipid levels. The secondary outcome was the logistic multi-factor regression analysis. A total of 325 elderly patients with low-energy femoral neck fractures (AVNFH, n = 160; control, n = 165) were assessed. In the AVNFH group, the average TC, TG, LDL, and Apo-B values were 7.11 ± 3.16 mmol/L, 2.15 ± 0.89 mmol/L, 4.49 ± 1.38 mmol/L, and 79.69 ± 17.29 mg/dL, respectively, all of which were significantly higher than the values in the control group. Logistic multi-factor regression analysis showed that both TC and LDL were independent factors influencing postoperative AVNFH in femoral neck fractures. This evidence indicates that AVNFH was significantly associated with blood lipid abnormalities in elderly patients with low-energy femoral neck fractures. The findings of this pilot trial justify a larger study to determine whether the result is more generally applicable to a broader population.

  2. A continuous time random walk (CTRW) integro-differential equation with chemical interaction

    NASA Astrophysics Data System (ADS)

    Ben-Zvi, Rami; Nissan, Alon; Scher, Harvey; Berkowitz, Brian

    2018-01-01

    A nonlocal-in-time integro-differential equation is introduced that accounts for close coupling between transport and chemical reaction terms. The structure of the equation contains these terms in a single convolution with a memory function M(t), which includes the source of non-Fickian (anomalous) behavior, within the framework of a continuous time random walk (CTRW). The interaction is non-linear and second-order, relevant for a bimolecular reaction A + B → C. The interaction term Γ P_A(s, t) P_B(s, t) is symmetric in the concentrations of A and B (i.e., P_A and P_B); thus the source terms in the equations for A, B and C are similar, but with a change in sign for that of C. Here, the chemical rate coefficient, Γ, is constant. The fully coupled equations are solved numerically using a finite element method (FEM) with a judicious representation of M(t) that eschews the need for the entire time history, instead using only values at the former time step. To begin to validate the equations, the FEM solution is compared, in lieu of experimental data, to a particle tracking method (CTRW-PT); the results from the two approaches, particularly for the C profiles, are in agreement. The FEM solution, for a range of initial and boundary conditions, can provide a good model for reactive transport in disordered media.

  3. Random walks with random velocities.

    PubMed

    Zaburdaev, Vasily; Schmiedeberg, Michael; Stark, Holger

    2008-07-01

    We consider a random walk model that takes into account the velocity distribution of random walkers. Random motion with alternating velocities is inherent to various physical and biological systems. Moreover, the velocity distribution is often the first characteristic that is experimentally accessible. Here, we derive transport equations describing the dispersal process in the model and solve them analytically. The asymptotic properties of solutions are presented in the form of a phase diagram that shows all possible scaling regimes, including superdiffusive, ballistic, and superballistic motion. The theoretical results of this work are in excellent agreement with accompanying numerical simulations.
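
    A minimal simulation of such a velocity model (a sketch under assumed parameters, not the paper's derivation) draws a fresh velocity for each flight of fixed duration and reads off the transport regime from the scaling of the mean-squared displacement; a Gaussian velocity distribution, used below, gives ordinary diffusion, while heavy-tailed velocity distributions push the exponent toward the ballistic and superballistic regimes of the phase diagram.

      import numpy as np

      rng = np.random.default_rng(2)
      n_walkers, n_flights = 5000, 200
      tau = 1.0                                   # duration of each flight (assumed constant)

      # Velocity of each flight drawn afresh from the velocity distribution.
      v = rng.normal(0.0, 1.0, size=(n_walkers, n_flights))
      x = np.cumsum(v * tau, axis=1)              # position after each completed flight

      t = tau * np.arange(1, n_flights + 1)
      msd = np.mean(x ** 2, axis=0)               # ensemble mean-squared displacement

      # MSD ~ t^gamma: gamma = 1 is diffusive, gamma = 2 ballistic.
      gamma = np.polyfit(np.log(t[10:]), np.log(msd[10:]), 1)[0]
      print(f"MSD scaling exponent gamma = {gamma:.2f}")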

  4. Investigating Test Equating Methods in Small Samples through Various Factors

    ERIC Educational Resources Information Center

    Asiret, Semih; Sünbül, Seçil Ömür

    2016-01-01

    In this study, equating methods for random group design using small samples through factors such as sample size, difference in difficulty between forms, and guessing parameter was aimed for comparison. Moreover, which method gives better results under which conditions was also investigated. In this study, 5,000 dichotomous simulated data…

  5. Using Kernel Equating to Assess Item Order Effects on Test Scores

    ERIC Educational Resources Information Center

    Moses, Tim; Yang, Wen-Ling; Wilson, Christine

    2007-01-01

    This study explored the use of kernel equating for integrating and extending two procedures proposed for assessing item order effects in test forms that have been administered to randomly equivalent groups. When these procedures are used together, they can provide complementary information about the extent to which item order effects impact test…

  6. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    PubMed Central

    Chevalier, Michael W.; El-Samad, Hana

    2014-01-01

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled. PMID:25481130

  7. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    NASA Astrophysics Data System (ADS)

    Chevalier, Michael W.; El-Samad, Hana

    2014-12-01

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled.

  8. Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos

    PubMed Central

    Santonja, F.; Chen-Charpentier, B.

    2012-01-01

    Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models consider that the parameters are deterministic variables. But in practice, the transmission parameters present large variability and it is not possible to determine them exactly, and it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we will apply the approach to an obesity epidemic model. PMID:22927889
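
    A minimal non-intrusive version of the approach can be sketched on a toy problem (a linear decay equation with a uniformly distributed rate, not the obesity or epidemic model itself): the solution is projected onto Legendre polynomials of the random input by Gauss-Legendre quadrature, and the first two moments follow directly from the chaos coefficients. The truncation order, rate distribution and all numbers below are illustrative assumptions.

      import numpy as np
      from numpy.polynomial import legendre

      # dx/dt = -k x, x(0) = 1, with k = 1 + 0.5*xi and xi uniform on [-1, 1];
      # the "solver" is just the exact solution exp(-k t) evaluated at t = 1.
      t = 1.0
      nodes, weights = legendre.leggauss(8)              # quadrature in the random variable xi
      x_nodes = np.exp(-(1.0 + 0.5 * nodes) * t)         # one deterministic solve per node

      # Chaos coefficients c_m = (2m+1)/2 * sum_j w_j x(xi_j) P_m(xi_j).
      order = 5
      coeffs = np.array([(2 * m + 1) / 2.0
                         * np.sum(weights * x_nodes * legendre.Legendre.basis(m)(nodes))
                         for m in range(order + 1)])

      mean_pc = coeffs[0]                                # mean is the zeroth coefficient
      var_pc = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, order + 1) + 1))

      # Monte Carlo check of the two moments.
      rng = np.random.default_rng(3)
      mc = np.exp(-(1.0 + 0.5 * rng.uniform(-1, 1, 200_000)) * t)
      print(f"mean: PC {mean_pc:.4f} vs MC {mc.mean():.4f}")
      print(f"var : PC {var_pc:.5f} vs MC {mc.var():.5f}")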

  9. Artificial Intelligence Procedures for Tree Taper Estimation within a Complex Vegetation Mosaic in Brazil

    PubMed Central

    Nunes, Matheus Henrique

    2016-01-01

    Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and subsequently merchantable volume. Artificial intelligence approaches can be useful techniques in minimizing estimation errors within complex variations of vegetation. We evaluated the performance of Random Forest® regression tree and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals from the diameter prediction and provided the least precise and accurate estimations for all forest types. This study provides insights into the superiority of a neural network, which provided advantages regarding the handling of local effects. PMID:27187074

  10. Real time visualization of quantum walk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyazaki, Akihide; Hamada, Shinji; Sekino, Hideo

    2014-02-20

    The time evolution of quantum particles such as electrons is described by the time-dependent Schrödinger equation (TDSE). The TDSE can be regarded as a diffusion equation for electrons with an imaginary diffusion coefficient, and it is solved here by a quantum walk (QW), which is regarded as a quantum version of a classical random walk. The diffusion equation is solved in discretized space/time, as in the case of a classical random walk, with an additional unitary transformation of the internal degree of freedom typical of quantum particles. We call the QW for solution of the TDSE a Schrödinger walk (SW). To observe the evolution of a single quantum particle under a given potential on the attosecond scale, we attempt successive computation and visualization of the SW. Using Pure Data programming, we observe the correct behavior of the probability distribution under the given potential in real time for observers on the attosecond scale.

  11. Modeling of Electromagnetic Scattering by Discrete and Discretely Heterogeneous Random Media by Using Numerically Exact Solutions of the Maxwell Equations

    NASA Technical Reports Server (NTRS)

    Dlugach, Janna M.; Mishchenko, Michael I.

    2017-01-01

    In this paper, we discuss some aspects of numerical modeling of electromagnetic scattering by discrete random medium by using numerically exact solutions of the macroscopic Maxwell equations. Typical examples of such media are clouds of interstellar dust, clouds of interplanetary dust in the Solar system, dusty atmospheres of comets, particulate planetary rings, clouds in planetary atmospheres, aerosol particles with numerous inclusions and so on. Our study is based on the results of extensive computations of different characteristics of electromagnetic scattering obtained by using the superposition T-matrix method which represents a direct computer solver of the macroscopic Maxwell equations for an arbitrary multisphere configuration. As a result, in particular, we clarify the range of applicability of the low-density theories of radiative transfer and coherent backscattering as well as of widely used effective-medium approximations.

  12. Anomalous transport in fluid field with random waiting time depending on the preceding jump length

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Li, Guo-Hua

    2016-11-01

    Anomalous (or non-Fickian) transport behaviors of particles have been widely observed in complex porous media. To capture the energy-dependent characteristics of non-Fickian transport of a particle in flow fields, in the present paper a generalized continuous time random walk model whose waiting time probability distribution depends on the preceding jump length is introduced, and the corresponding master equation in Fourier-Laplace space for the distribution of particles is derived. As examples, two generalized advection-dispersion equations for Gaussian distribution and Lévy flight with the probability density function of waiting time being quadratically dependent on the preceding jump length are obtained by applying the derived master equation. Project supported by the Foundation for Young Key Teachers of Chengdu University of Technology, China (Grant No. KYGG201414) and the Opening Foundation of Geomathematics Key Laboratory of Sichuan Province, China (Grant No. scsxdz2013009).
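
    The coupling described above (a rest whose duration depends on the length of the jump that precedes it) is straightforward to simulate; the sketch below is a toy realization with Gaussian jumps and an exponential waiting time whose mean grows with the squared jump length, choices made purely for illustration and not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(4)

      def ctrw_position(T, rng):
          # Position at observation time T: jump, then rest for a time that
          # depends on that jump; the position is frozen during each rest.
          t, x = 0.0, 0.0
          while t <= T:
              jump = rng.normal(0.0, 1.0)                # Gaussian jump length
              x += jump                                  # instantaneous jump at time t
              t += rng.exponential(1.0 + jump ** 2)      # rest mean grows with the jump
          return x

      T = 200.0
      sample = np.array([ctrw_position(T, rng) for _ in range(2000)])
      print(f"mean squared displacement at T = {T:.0f}: {np.mean(sample ** 2):.1f}")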

  13. Artificial Intelligence Procedures for Tree Taper Estimation within a Complex Vegetation Mosaic in Brazil.

    PubMed

    Nunes, Matheus Henrique; Görgens, Eric Bastos

    2016-01-01

    Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and subsequently merchantable volume. Artificial intelligence approaches can be useful techniques in minimizing estimation errors within complex variations of vegetation. We evaluated the performance of Random Forest® regression tree and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals from the diameter prediction and provided the least precise and accurate estimations for all forest types. This study provides insights into the superiority of a neural network, which provided advantages regarding the handling of local effects.

  14. The Shape of a Ponytail and the Statistical Physics of Hair Fiber Bundles

    NASA Astrophysics Data System (ADS)

    Goldstein, Raymond E.; Warren, Patrick B.; Ball, Robin C.

    2012-02-01

    From Leonardo to the Brothers Grimm, our fascination with hair has endured in art and science. Yet, a quantitative understanding of the shapes of hair bundles has been lacking. Here we combine experiment and theory to propose an answer to the most basic question: What is the shape of a ponytail? A model for the shape of hair bundles is developed from the perspective of statistical physics, treating individual fibers as elastic filaments with random intrinsic curvatures. The combined effects of bending elasticity, gravity, and bundle compressibility are recast as a differential equation for the envelope of a bundle, in which the compressibility enters through an "equation of state." From this, we identify the balance of forces in various regions of the ponytail, extract the equation of state from analysis of ponytail shapes, and relate the observed pressure to the measured random curvatures of individual hairs.

  15. An Economical Multifactor within-Subject Design Robust against Trend and Carryover Effects.

    DTIC Science & Technology

    1985-10-17


  16. A multi-factor designation method for mapping particulate-pollution control zones in China.

    PubMed

    Qin, Y; Xie, S D

    2011-09-01

    A multi-factor designation method for mapping particulate-pollution control zones was developed by jointly considering PM(10) pollution status, PM(10) anthropogenic emissions, fine-particle pollution, long-range transport and economic situation. According to this method, China was divided into four different particulate-pollution control regions: PM Suspended Control Region, PM(10) Pollution Control Region, PM(2.5) Pollution Control Region and PM(10) and PM(2.5) Common Control Region, which accounted for 69.55%, 9.66%, 4.67% and 16.13% of China's territory, respectively. The PM(10) and PM(2.5) Common Control Region was mainly distributed in the Bohai Region, the Yangtze River Delta, the Pearl River Delta, eastern Sichuan province and Chongqing municipality, calling for immediate control of both PM(10) and PM(2.5). Cost-effective control can be achieved by concentrating efforts on the PM(10) and PM(2.5) Common Control Region, which accounts for 60.32% of national PM(10) anthropogenic emissions. Air quality in districts belonging to the PM(2.5) Pollution Control Region suggested that the Chinese national ambient air quality standard for PM(10) was not strict enough. The application to China showed that this approach is feasible for mapping pollution control regions for a country with a vast territory, complicated pollution characteristics and limited available monitoring data. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Can elevated CO2 modify regeneration from seed banks of floating freshwater marshes subjected to rising sea-level?

    USGS Publications Warehouse

    Middleton, Beth A.; McKee, Karen L.

    2012-01-01

    Higher atmospheric concentrations of CO2 can offset the negative effects of flooding or salinity on plant species, but previous studies have focused on mature, rather than regenerating vegetation. This study examined how interacting environments of CO2, water regime, and salinity affect seed germination and seedling biomass of floating freshwater marshes in the Mississippi River Delta, which are dominated by C3 grasses, sedges, and forbs. Germination density and seedling growth of the dominant species depended on multifactor interactions of CO2 (385 and 720 μl l-1) with flooding (drained, +8-cm depth, +8-cm depth-gradual) and salinity (0, 6% seawater) levels. Of the three factors tested, salinity was the most important determinant of seedling response patterns. Species richness (total = 19) was insensitive to CO2. Our findings suggest that for freshwater marsh communities, seedling response to CO2 is species-specific and secondary to salinity and flooding effects. Elevated CO2 did not ameliorate flooding or salinity stress. Consequently, climate-related changes in sea level or human-caused alterations in hydrology may override atmospheric CO2 concentrations in driving shifts in this plant community. The results of this study suggest caution in making extrapolations from species-specific responses to community-level predictions without detailed attention to the nuances of multifactor responses.

  18. An assessment of two-step linear regression and a multifactor probit analysis as alternatives to acute to chronic ratios in the estimation of chronic response from acute toxicity data to derive water quality guidelines.

    PubMed

    Slaughter, Andrew R; Palmer, Carolyn G; Muller, Wilhelmine J

    2007-04-01

    In aquatic ecotoxicology, acute to chronic ratios (ACRs) are often used to predict chronic responses from available acute data to derive water quality guidelines, despite many problems associated with this method. This paper explores the comparative protectiveness and accuracy of predicted guideline values derived from the ACR, linear regression analysis (LRA), and multifactor probit analysis (MPA) extrapolation methods applied to acute toxicity data for aquatic macroinvertebrates. Although the authors of the LRA and MPA methods advocate the use of extrapolated lethal effects in the 0.01% to 10% lethal concentration (LC0.01-LC10) range to predict safe chronic exposure levels to toxicants, the use of an extrapolated LC50 value divided by a safety factor of 5 was in addition explored here because of higher statistical confidence surrounding the LC50 value. The LRA LC50/5 method was found to compare most favorably with available experimental chronic toxicity data and was therefore most likely to be sufficiently protective, although further validation with the use of additional species is needed. Values derived by the ACR method were the least protective. It is suggested that there is an argument for the replacement of ACRs in developing water quality guidelines by the LRA LC50/5 method.

  19. Soil moisture surpasses elevated CO2 and temperature as a control on soil carbon dynamics in a multi-factor climate change experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garten Jr, Charles T; Classen, Aimee T; Norby, Richard J

    2009-01-01

    Some single-factor experiments suggest that elevated CO2 concentrations can increase soil carbon, but few experiments have examined the effects of interacting environmental factors on soil carbon dynamics. We undertook studies of soil carbon and nitrogen in a multi-factor (CO2 x temperature x soil moisture) climate change experiment on a constructed old-field ecosystem. After four growing seasons, elevated CO2 had no measurable effect on carbon and nitrogen concentrations in whole soil, particulate organic matter (POM), and mineral-associated organic matter (MOM). Analysis of stable carbon isotopes, under elevated CO2, indicated between 14 and 19% new soil carbon under two different watering treatments, with as much as 48% new carbon in POM. Despite significant belowground inputs of new organic matter, soil carbon concentrations and stocks in POM declined over four years under soil moisture conditions that corresponded to prevailing precipitation inputs (1,300 mm yr-1). Changes over time in soil carbon and nitrogen under a drought treatment (approximately 20% lower soil water content) were not statistically significant. Reduced soil moisture lowered soil CO2 efflux and slowed soil carbon cycling in the POM pool. In this experiment, soil moisture (produced by different watering treatments) was more important than elevated CO2 and temperature as a control on soil carbon dynamics.

  20. Derivation and computation of discrete-delay and continuous-delay SDEs in mathematical biology.

    PubMed

    Allen, Edward J

    2014-06-01

    Stochastic versions of several discrete-delay and continuous-delay differential equations, useful in mathematical biology, are derived from basic principles carefully taking into account the demographic, environmental, or physiological randomness in the dynamic processes. In particular, stochastic delay differential equation (SDDE) models are derived and studied for Nicholson's blowflies equation, Hutchinson's equation, an SIS epidemic model with delay, bacteria/phage dynamics, and glucose/insulin levels. Computational methods for approximating the SDDE models are described. Comparisons between computational solutions of the SDDEs and independently formulated Monte Carlo calculations support the accuracy of the derivations and of the computational methods.
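
    For a concrete picture of how such models are simulated, the sketch below applies a basic Euler-Maruyama step to a stochastic Hutchinson (delayed logistic) equation with multiplicative environmental noise; the noise form, parameters and constant pre-history are illustrative assumptions rather than the paper's derivations or its computational methods.

      import numpy as np

      rng = np.random.default_rng(5)

      r, K, tau, sigma = 1.0, 10.0, 1.0, 0.1      # growth rate, capacity, delay, noise level
      dt, T = 0.01, 20.0
      n_steps = int(T / dt)
      lag = int(tau / dt)                         # delay expressed in time steps

      N = np.empty(n_steps + 1)
      N[0] = 2.0
      history = 2.0                               # constant pre-history N(t) = 2 for t <= 0

      for i in range(n_steps):
          N_delayed = history if i < lag else N[i - lag]
          drift = r * N[i] * (1.0 - N_delayed / K)
          dW = rng.normal(0.0, np.sqrt(dt))                  # Brownian increment
          N[i + 1] = max(N[i] + drift * dt + sigma * N[i] * dW, 0.0)

      print(f"N(T) = {N[-1]:.2f}, mean over the last half: {N[n_steps // 2:].mean():.2f}")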

  1. Forced precession of the cometary nucleus with randomly placed active regions

    NASA Technical Reports Server (NTRS)

    Szutowicz, Slawomira

    1992-01-01

    The cometary nucleus is assumed to be triaxial or axisymmetric spheroid rotating about its axis of maximum moment of inertia and is forced to precess due to jets of ejected material. Randomly placed regions of exposed ice on the surface of the nucleus are assumed to produce gas and dust. The solution of the heat conduction equation for each active region is used to find the gas sublimation rate and the jet acceleration. Precession of the comet nucleus is followed numerically using a phase-averaged system of equations. The gas production curves and the variation of the spin axis during the orbital motion of the comet are presented.

  2. Variational Solutions and Random Dynamical Systems to SPDEs Perturbed by Fractional Gaussian Noise

    PubMed Central

    Zeng, Caibin; Yang, Qigui; Cao, Junfei

    2014-01-01

    This paper deals with the following type of stochastic partial differential equations (SPDEs) perturbed by an infinite dimensional fractional Brownian motion with a suitable volatility coefficient Φ: dX(t) = A(X(t))dt + Φ(t)dB^H(t), where A is a nonlinear operator satisfying some monotonicity conditions. Using the variational approach, we prove the existence and uniqueness of variational solutions to such system. Moreover, we prove that this variational solution generates a random dynamical system. The main results are applied to a general type of nonlinear SPDEs and the stochastic generalized p-Laplacian equation. PMID:24574903

  3. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method for deriving a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.

  4. Scattering by a slab containing randomly located cylinders: comparison between radiative transfer and electromagnetic simulation.

    PubMed

    Roux, L; Mareschal, P; Vukadinovic, N; Thibaud, J B; Greffet, J J

    2001-02-01

    This study is devoted to the examination of scattering of waves by a slab containing randomly located cylinders. For the first time to our knowledge, the complete transmission problem has been solved numerically. We have compared the radiative transfer theory with a numerical solution of the wave equation. We discuss the coherent effects, such as forward-scattering dip and backscattering enhancement. It is seen that the radiative transfer equation can be used with great accuracy even for optically thin systems whose geometric thickness is comparable with the wavelength. We have also shown the presence of dependent scattering.

  5. Theory and modeling of atmospheric turbulence, part 2

    NASA Technical Reports Server (NTRS)

    Chen, C. M.

    1984-01-01

    Two dimensional geostrophic turbulence driven by a random force is investigated. Based on the Liouville equation, which simulates the primitive hydrodynamical equations, a group-kinetic theory of turbulence is developed and the kinetic equation of the scaled singlet distribution is derived. The kinetic equation is transformed into an equation of spectral balance in the equilibrium and non-equilibrium states. Comparison is made between the propagators and the Green's functions in the case of the non-asymptotic quasi-linear equation to prove the equivalence of both kinds of approximations used to describe perturbed trajectories of plasma turbulence. The microdynamical state of fluid turbulence is described by a hydrodynamical system and transformed into a master equation analogous to the Vlasov equation for plasma turbulence. The spectral balance for the velocity fluctuations of individual components shows that the scaled pressure strain correlation and the cascade transfer are two transport functions that play the most important roles.

  6. From stochastic processes to numerical methods: A new scheme for solving reaction subdiffusion fractional partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angstmann, C.N.; Donnelly, I.C.; Henry, B.I., E-mail: B.Henry@unsw.edu.au

    We have introduced a new explicit numerical method, based on a discrete stochastic process, for solving a class of fractional partial differential equations that model reaction subdiffusion. The scheme is derived from the master equations for the evolution of the probability density of a sum of discrete time random walks. We show that the diffusion limit of the master equations recovers the fractional partial differential equation of interest. This limiting procedure guarantees the consistency of the numerical scheme. The positivity of the solution and stability results are simply obtained, provided that the underlying process is well posed. We also show that the method can be applied to standard reaction–diffusion equations. This work highlights the broader applicability of using discrete stochastic processes to provide numerical schemes for partial differential equations, including fractional partial differential equations.
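
    The flavour of the construction can be seen in the simplest non-fractional special case: the master equation of an unbiased nearest-neighbour walk gives the familiar explicit scheme for the ordinary diffusion equation, and the probabilistic origin supplies the positivity condition on the time step. The sketch below shows only this standard limit (grid, coefficients and boundary treatment are assumptions); the paper's scheme replaces the single-step update with a memory kernel to handle reaction subdiffusion.

      import numpy as np

      # Explicit scheme for p_t = D p_xx from the unbiased-walk master equation:
      # p_i^{n+1} = r p_{i-1}^n + (1 - 2r) p_i^n + r p_{i+1}^n, with r = D dt / dx^2.
      # r <= 1/2 keeps every coefficient a probability, hence positivity and stability.
      D, L, nx = 1.0, 10.0, 201
      dx = L / (nx - 1)
      r = 0.4                                   # within the positivity bound
      dt = r * dx ** 2 / D

      p = np.zeros(nx)
      p[nx // 2] = 1.0 / dx                     # delta-like initial pulse at the centre

      for _ in range(2000):
          p[1:-1] = r * p[:-2] + (1 - 2 * r) * p[1:-1] + r * p[2:]
          p[0] = p[-1] = 0.0                    # absorbing boundaries

      print(f"mass remaining = {dx * p.sum():.3f}, peak = {p.max():.3f}")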

  7. Symmetry breaking and uniqueness for the incompressible Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dascaliuc, Radu; Thomann, Enrique; Waymire, Edward C., E-mail: waymire@math.oregonstate.edu

    2015-07-15

    The present article establishes connections between the structure of the deterministic Navier-Stokes equations and the structure of (similarity) equations that govern self-similar solutions as expected values of certain naturally associated stochastic cascades. A principal result is that explosion criteria for the stochastic cascades involved in the probabilistic representations of solutions to the respective equations coincide. While the uniqueness problem itself remains unresolved, these connections provide interesting problems and possible methods for investigating symmetry breaking and the uniqueness problem for Navier-Stokes equations. In particular, new branching Markov chains, including a dilogarithmic branching random walk on the multiplicative group (0, ∞), naturally arise as a result of this investigation.

  8. Symmetry breaking and uniqueness for the incompressible Navier-Stokes equations.

    PubMed

    Dascaliuc, Radu; Michalowski, Nicholas; Thomann, Enrique; Waymire, Edward C

    2015-07-01

    The present article establishes connections between the structure of the deterministic Navier-Stokes equations and the structure of (similarity) equations that govern self-similar solutions as expected values of certain naturally associated stochastic cascades. A principal result is that explosion criteria for the stochastic cascades involved in the probabilistic representations of solutions to the respective equations coincide. While the uniqueness problem itself remains unresolved, these connections provide interesting problems and possible methods for investigating symmetry breaking and the uniqueness problem for Navier-Stokes equations. In particular, new branching Markov chains, including a dilogarithmic branching random walk on the multiplicative group (0, ∞), naturally arise as a result of this investigation.

  9. Introduction to Physical Intelligence

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2011-01-01

    A slight deviation from Newtonian dynamics can lead to new effects associated with the concept of physical intelligence. Non-Newtonian effects such as deviations from classical thermodynamics, as well as quantum-like properties, have been analyzed. A self-supervised (intelligent) particle that can escape from Brownian motion autonomously is introduced. Such a capability is due to a coupling of the particle governing equation with its own Liouville equation via an appropriate feedback. As a result, the governing equation is self-stabilized, and random oscillations are suppressed, while the Liouville equation takes the form of the Fokker-Planck equation with negative diffusion. Non-Newtonian properties of such a dynamical system, as well as thermodynamic implications, have been evaluated.

  10. The Andersen aerobic fitness test: New peak oxygen consumption prediction equations in 10 and 16-year olds.

    PubMed

    Aadland, E; Andersen, L B; Lerum, Ø; Resaland, G K

    2018-03-01

    Measurement of aerobic fitness by determining peak oxygen consumption (VO2peak) is often not feasible in children and adolescents, thus field tests such as the Andersen test are required in many settings, for example in most school-based studies. This study provides cross-validated prediction equations for VO2peak based on the Andersen test in 10- and 16-year-old children. We included 235 children (n = 113 10-year olds and 122 16-year olds) who performed the Andersen test and a progressive treadmill test to exhaustion to determine VO2peak. Joint and sex-specific prediction equations were derived and tested in 20 random samples. Performance in terms of systematic (bias) and random error (limits of agreement) was evaluated by means of Bland-Altman plots. Bias varied from -4.28 to 5.25 mL/kg/min across testing datasets, sex, and the 2 age groups. Sex-specific equations (mean bias -0.42 to 0.16 mL/kg/min) performed somewhat better than joint equations (-1.07 to 0.84 mL/kg/min). Limits of agreement were substantial across all datasets, sex, and both age groups, but were slightly lower in 16-year olds (5.84-13.29 mL/kg/min) compared to 10-year olds (9.60-15.15 mL/kg/min). We suggest the presented equations can be used to predict VO2peak from the Andersen test performance in children and adolescents on a group level. Although the Andersen test appears to be a good measure of aerobic fitness, researchers should interpret cross-sectional individual-level predictions of VO2peak with caution due to large random measurement errors. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
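
    The bias and limits-of-agreement evaluation mentioned above amounts to a few lines of arithmetic on the paired measured and predicted values; the sketch below uses synthetic stand-in numbers (not the study's data or its prediction equations) to show the computation.

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic stand-ins: "measured" VO2peak (mL/kg/min) and a prediction with a
      # small systematic offset plus random error (assumed values, not study data).
      measured = rng.normal(48.0, 7.0, 120)
      predicted = measured + 0.5 + rng.normal(0.0, 5.0, 120)

      diff = predicted - measured
      bias = diff.mean()                          # systematic error
      half_width = 1.96 * diff.std(ddof=1)        # half-width of the 95% limits of agreement

      print(f"bias = {bias:.2f} mL/kg/min")
      print(f"95% limits of agreement: {bias - half_width:.2f} to {bias + half_width:.2f} mL/kg/min")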

  11. Optimal Control for Stochastic Delay Evolution Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Qingxin, E-mail: mqx@hutc.zj.cn; Shen, Yang, E-mail: skyshen87@gmail.com

    2016-08-15

    In this paper, we investigate a class of infinite-dimensional optimal control problems, where the state equation is given by a stochastic delay evolution equation with random coefficients, and the corresponding adjoint equation is given by an anticipated backward stochastic evolution equation. We first prove the continuous dependence theorems for stochastic delay evolution equations and anticipated backward stochastic evolution equations, and show the existence and uniqueness of solutions to anticipated backward stochastic evolution equations. Then we establish necessary and sufficient conditions for optimality of the control problem in the form of Pontryagin’s maximum principles. To illustrate the theoretical results, we apply stochastic maximum principles to study two examples, an infinite-dimensional linear-quadratic control problem with delay and an optimal control of a Dirichlet problem for a stochastic partial differential equation with delay. Further applications of the two examples to a Cauchy problem for a controlled linear stochastic partial differential equation and an optimal harvesting problem are also considered.

  12. Influence of the random walk finite step on the first-passage probability

    NASA Astrophysics Data System (ADS)

    Klimenkova, Olga; Menshutin, Anton; Shchur, Lev

    2018-01-01

    A well-known connection between the first-passage probability of a random walk and the distribution of the electrical potential described by the Laplace equation is studied. We simulate a random walk in the plane numerically as a discrete-time process with a fixed step length. We measure the first-passage probability to touch an absorbing sphere of radius R in 2D. We find a regular deviation of the first-passage probability from the exact function, which we attribute to the finite step length of the random walk.
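
    The setup is simple enough to reproduce in a few lines; the sketch below (parameters assumed) releases fixed-step walkers from an off-centre point inside an absorbing circle and compares the distribution of first-passage angles with the harmonic-measure (Poisson-kernel) prediction obtained from the Laplace equation, a deviation that shrinks as the step length is reduced.

      import numpy as np

      rng = np.random.default_rng(7)
      R, step, n_walkers = 1.0, 0.05, 5000
      start = np.array([0.5, 0.0])                 # off-centre release point, r0 = 0.5

      hits = np.empty(n_walkers)
      for k in range(n_walkers):
          pos = start.copy()
          while np.hypot(*pos) < R:                # walk until the circle is touched
              phi = rng.uniform(0.0, 2.0 * np.pi)  # fixed-length step in a random direction
              pos += step * np.array([np.cos(phi), np.sin(phi)])
          hits[k] = np.arctan2(pos[1], pos[0])     # angle of the first-passage point

      # Harmonic measure on the circle for a start at radius r0 (Poisson kernel).
      r0 = np.hypot(*start)
      theta = np.linspace(-np.pi, np.pi, 181)
      poisson = (R**2 - r0**2) / (2 * np.pi * (R**2 - 2 * R * r0 * np.cos(theta) + r0**2))

      hist, edges = np.histogram(hits, bins=30, range=(-np.pi, np.pi), density=True)
      centres = 0.5 * (edges[:-1] + edges[1:])
      deviation = np.max(np.abs(hist - np.interp(centres, theta, poisson)))
      print(f"max deviation from the Poisson kernel: {deviation:.3f}")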

  13. Operating System For Numerically Controlled Milling Machine

    NASA Technical Reports Server (NTRS)

    Ray, R. B.

    1992-01-01

    OPMILL program is operating system for Kearney and Trecker milling machine providing fast easy way to program manufacture of machine parts with IBM-compatible personal computer. Gives machinist "equation plotter" feature, which plots equations that define movements and converts equations to milling-machine-controlling program moving cutter along defined path. System includes tool-manager software handling up to 25 tools and automatically adjusts to account for each tool. Developed on IBM PS/2 computer running DOS 3.3 with 1 MB of random-access memory.

  14. On subcritical linear birth-and-death processes in a random environment.

    PubMed

    Bacaër, Nicolas

    2017-07-01

    An explicit formula is found for the rate of extinction of subcritical linear birth-and-death processes in a random environment. The formula is illustrated by numerical computations of the eigenvalue with largest real part of the truncated matrix for the master equation. The generating function of the corresponding eigenvector satisfies a Fuchsian system of singular differential equations. Particular attention is paid to the case of two environments, which leads to Riemann's differential equation.

  15. Stationary Random Metrics on Hierarchical Graphs Via (min,+)-type Recursive Distributional Equations

    NASA Astrophysics Data System (ADS)

    Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele

    2016-07-01

    This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and any level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law m). We introduce a tool, the cut-off process, by means of which one finds that renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. Such limit law is stationary, in the sense that glueing together a certain number of copies of the random limit space, according to the combinatorics of the brick graph, the obtained random metric has the same law when rescaled by a random factor of law m. In other words, the stationary random metric is the solution of a distributional equation. When the measure m has continuous positive density on (0, ∞), the stationary law is unique up to rescaling and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when m is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.

  16. Jackknifing Techniques for Evaluation of Equating Accuracy. Research Report. ETS RR-09-39

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Lee, Yi-Hsuan; Qian, Jiahe

    2009-01-01

    Grouped jackknifing may be used to evaluate the stability of equating procedures with respect to sampling error and with respect to changes in anchor selection. Properties of grouped jackknifing are reviewed for simple-random and stratified sampling, and its use is described for comparisons of anchor sets. Application is made to examples of item…

  17. Development of Multiple Regression Equations To Predict Fourth Graders' Achievement in Reading and Selected Content Areas.

    ERIC Educational Resources Information Center

    Hafner, Lawrence E.

    A study developed a multiple regression prediction equation for each of six selected achievement variables in a popular standardized test of achievement. Subjects, 42 fourth-grade pupils randomly selected across several classes in a large elementary school in a north Florida city, were administered several standardized tests to determine predictor…

  18. Structural Equation Modeling (SEM) for Satisfaction and Dissatisfaction Ratings; Multiple Group Invariance Analysis across Scales with Different Response Format

    ERIC Educational Resources Information Center

    Mazaheri, Mehrdad; Theuns, Peter

    2009-01-01

    The current study evaluates three hypothesized models on subjective well-being, comprising life domain ratings (LDR), overall satisfaction with life (OSWL), and overall dissatisfaction with life (ODWL), using structural equation modeling (SEM). A sample of 1,310 volunteering students, randomly assigned to six conditions, rated their overall life…

  19. Section Preequating under the Equivalent Groups Design without IRT

    ERIC Educational Resources Information Center

    Guo, Hongwen; Puhan, Gautam

    2014-01-01

    In this article, we introduce a section preequating (SPE) method (linear and nonlinear) under the randomly equivalent groups design. In this equating design, sections of Test X (a future new form) and another existing Test Y (an old form already on scale) are administered. The sections of Test X are equated to Test Y, after adjusting for the…

  20. Pattern formations and optimal packing.

    PubMed

    Mityushev, Vladimir

    2016-04-01

    Patterns of different symmetries may arise after solution to reaction-diffusion equations. Hexagonal arrays, layers and their perturbations are observed in different models after numerical solution to the corresponding initial-boundary value problems. We demonstrate an intimate connection between pattern formations and optimal random packing on the plane. The main study is based on the following two points. First, the diffusive flux in reaction-diffusion systems is approximated by piecewise linear functions in the framework of structural approximations. This leads to a discrete network approximation of the considered continuous problem. Second, the discrete energy minimization yields optimal random packing of the domains (disks) in the representative cell. Therefore, the general problem of pattern formations based on the reaction-diffusion equations is reduced to the geometric problem of random packing. It is demonstrated that all random packings can be divided into classes associated with classes of isomorphic graphs obtained from the Delaunay triangulation. The unique optimal solution is constructed in each class of the random packings. If the number of disks per representative cell is finite, the number of classes of isomorphic graphs, and hence the number of optimal packings, is also finite. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Deterministic multidimensional nonuniform gap sampling.

    PubMed

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
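
    A rough single-dimension sketch of the kind of gap scheduling the abstract describes, loosely following published descriptions of Poisson-gap sampling; the sinusoidal weighting, the retry loop, and all parameter values are illustrative assumptions, not the authors' deterministic algorithm.

        import numpy as np

        def poisson_gap_1d(n_grid, n_samples, seed=0, max_tries=500):
            """Pick roughly n_samples of n_grid Nyquist points with Poisson-distributed gaps."""
            rng = np.random.default_rng(seed)
            lam = n_grid / n_samples - 1.0            # initial mean gap
            best = None
            for _ in range(max_tries):
                idx, pos = [], 0
                while pos < n_grid:
                    idx.append(pos)
                    # sinusoidal weighting: gaps grow toward the end of the grid
                    weight = np.sin(0.5 * np.pi * (pos + 0.5) / n_grid)
                    pos += 1 + rng.poisson(lam * weight)
                if best is None or abs(len(idx) - n_samples) < abs(len(best) - n_samples):
                    best = idx
                if len(idx) == n_samples:
                    break
                lam *= len(idx) / n_samples           # too many points -> larger gaps, and vice versa
            return np.array(best)

        print(poisson_gap_1d(128, 32))                # indices of sampled grid points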

  2. Multifactor Determinants of Visual Accommodation as a Critical Intervening Variable in the Perception of Size and Distance: Phase I Report

    DTIC Science & Technology

    1997-11-01

    [Indexed excerpt consists of figure captions: an expanded subset of the illustration clarifying the locus of the off-axis end point of retinal stimulation for correct accommodation; Figures 12c and 12d, expanded illustrations clarifying the same locus for myopic and hyperopic accommodation; Figure 13, simplified…]

  3. Multi-factor Effects on the Durability of Recycle Aggregate Concrete

    NASA Astrophysics Data System (ADS)

    Ma, Huan; Cui, Yu-Li; Zhu, Wen-Yu; Xie, Xian-Jie

    2016-05-01

    Recycled Aggregate Concrete (RAC) was prepared with recycled aggregate replacement ratios of 0, 30%, 70% and 100%. The durability of RAC was assessed under freeze-thaw cycling, carbonation and sulfate attack. Results show that the test sequence affects the durability of RAC: durability is poorer when the carbonation test is carried out first and the other exposures follow, and durability is better when the recycled aggregate replacement ratio is 70%.

  4. Multi-factor Analysis of Pre-control Fracture Simulations about Projectile Material

    NASA Astrophysics Data System (ADS)

    Wan, Ren-Yi; Zhou, Wei

    2016-05-01

    The study of pre-controlled fracture of projectile material helps to improve the effective fragmentation of the projectile metal and the material utilization rate. Fragment muzzle velocity and lethality are affected by the explosive charge and the mode of initiation. Finite element software is used to simulate the explosive rupture of a projectile with a pre-groove machined in the shell surface and to analyse how typical node velocities change with time, providing a reference for the design and optimization of pre-controlled fragmentation.

  5. PKPass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamson, Ryan M.

    Password management solutions exist, but few are designed for enterprise systems administrators sharing on-call rotations. Due to the Multi-Factor Level of Assurance 4 effort, DOE is now distributing PIV cards with cryptographically signed certificate and private key pairs to administrators and other security-significant users. We utilize this public key infrastructure (PKI) to encrypt passwords for other recipients in a secure way. The tool is cross-platform (it works on OS X and Linux systems) and has already been adopted internally by the NCCS systems administration staff to replace their old password book system.

  6. Scaling Laws for the Multidimensional Burgers Equation with Quadratic External Potential

    NASA Astrophysics Data System (ADS)

    Leonenko, N. N.; Ruiz-Medina, M. D.

    2006-07-01

    The reordering of the multidimensional exponential quadratic operator in coordinate-momentum space (see X. Wang, C.H. Oh and L.C. Kwek (1998). J. Phys. A.: Math. Gen. 31:4329-4336) is applied to derive an explicit formulation of the solution to the multidimensional heat equation with quadratic external potential and random initial conditions. The solution to the multidimensional Burgers equation with quadratic external potential under Gaussian strongly dependent scenarios is also obtained via the Hopf-Cole transformation. The limiting distributions of scaling solutions to the multidimensional heat and Burgers equations with quadratic external potential are then obtained under such scenarios.
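
    For orientation, the unforced Hopf-Cole transformation that the solution method relies on (sign and normalization conventions vary between authors): writing the velocity field as $\mathbf{u}(\mathbf{x},t) = -2\mu\,\nabla \log h(\mathbf{x},t)$, the Burgers equation

        $$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = \mu\,\Delta \mathbf{u}$$

    linearizes to the heat equation $\partial h/\partial t = \mu\,\Delta h$; with the quadratic external potential included, the transformed equation instead acquires a multiplicative potential term, which is why the paper first solves the heat equation with quadratic external potential and random initial conditions.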

  7. On the theory of Brownian motion with the Alder-Wainwright effect

    NASA Astrophysics Data System (ADS)

    Okabe, Yasunori

    1986-12-01

    The Stokes-Boussinesq-Langevin equation, which describes the time evolution of Brownian motion with the Alder-Wainwright effect, can be treated in the framework of the theory of KMO-Langevin equations which describe the time evolution of a real, stationary Gaussian process with T-positivity (reflection positivity) originating in axiomatic quantum field theory. After proving the fluctuation-dissipation theorems for KMO-Langevin equations, we obtain an explicit formula for the deviation from the classical Einstein relation that occurs in the Stokes-Boussinesq-Langevin equation with a white noise as its random force. We are interested in whether or not it can be measured experimentally.

  8. Generalized fractional diffusion equations for subdiffusion in arbitrarily growing domains

    NASA Astrophysics Data System (ADS)

    Angstmann, C. N.; Henry, B. I.; McGann, A. V.

    2017-10-01

    The ubiquity of subdiffusive transport in physical and biological systems has led to intensive efforts to provide robust theoretical models for this phenomenon. These models often involve fractional derivatives. The important physical extension of this work to processes occurring in growing materials has proven highly nontrivial. Here we derive evolution equations for modeling subdiffusive transport in a growing medium. The derivation is based on a continuous-time random walk. The concise formulation of these evolution equations requires the introduction of a new, comoving, fractional derivative. The implementation of the evolution equation is illustrated with a simple model of subdiffusing proteins in a growing membrane.
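
    A brief Monte Carlo sketch of the continuous-time random walk picture underlying the derivation (static domain, illustrative parameters): heavy-tailed waiting times with exponent alpha < 1 and finite-variance jumps produce a mean-square displacement that grows like t^alpha rather than t.

        import numpy as np

        rng = np.random.default_rng(1)
        alpha, n_walkers, t_max = 0.7, 2000, 1.0e3    # illustrative values

        def endpoint(t_max):
            # CTRW: power-law waiting times (infinite mean), Gaussian jumps
            t, x = 0.0, 0.0
            while True:
                t += (1.0 - rng.random()) ** (-1.0 / alpha)   # Pareto-type waiting time
                if t > t_max:
                    return x
                x += rng.normal()                              # finite-variance jump

        positions = np.array([endpoint(t_max) for _ in range(n_walkers)])
        print("MSD at t=%g: %.1f  (subdiffusive: scales like t**alpha)" % (t_max, positions.var()))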

  9. Fokker-Planck equation for the non-Markovian Brownian motion in the presence of a magnetic field

    NASA Astrophysics Data System (ADS)

    Das, Joydip; Mondal, Shrabani; Bag, Bidhan Chandra

    2017-10-01

    In the present study, we have proposed the Fokker-Planck equation in a simple way for a Langevin equation of motion having an ordinary derivative (OD), a Gaussian random force and a generalized frictional memory kernel. The equation may include or omit a conservative force field from a harmonic potential. We extend this method to a charged Brownian particle in the presence of a magnetic field. Thus, the present method is applicable to a Langevin equation of motion with OD, Gaussian colored thermal noise and any kind of linear force field, conservative or not. It is also simple to apply this method to colored Gaussian noise that is not related to the damping strength.
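
    A schematic form of the equation of motion described above (only the generic generalized Langevin equation with a Lorentz force added; the authors' precise notation may differ):

        $$m\,\dot{\mathbf{v}}(t) = -\int_0^{t}\gamma(t-t')\,\mathbf{v}(t')\,dt' - m\,\omega^{2}\,\mathbf{x}(t) + q\,\mathbf{v}(t)\times\mathbf{B} + \boldsymbol{\xi}(t),$$

    with $\gamma$ the frictional memory kernel, the harmonic term optional, and $\boldsymbol{\xi}$ a colored Gaussian force; the corresponding Fokker-Planck equation for the phase-space density is what the paper constructs.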

  10. Fokker-Planck equation for the non-Markovian Brownian motion in the presence of a magnetic field.

    PubMed

    Das, Joydip; Mondal, Shrabani; Bag, Bidhan Chandra

    2017-10-28

    In the present study, we have proposed the Fokker-Planck equation in a simple way for a Langevin equation of motion having an ordinary derivative (OD), a Gaussian random force and a generalized frictional memory kernel. The equation may include or omit a conservative force field from a harmonic potential. We extend this method to a charged Brownian particle in the presence of a magnetic field. Thus, the present method is applicable to a Langevin equation of motion with OD, Gaussian colored thermal noise and any kind of linear force field, conservative or not. It is also simple to apply this method to colored Gaussian noise that is not related to the damping strength.

  11. Telegraph noise in Markovian master equation for electron transport through molecular junctions

    NASA Astrophysics Data System (ADS)

    Kosov, Daniel S.

    2018-05-01

    We present a theoretical approach to solve the Markovian master equation for quantum transport with stochastic telegraph noise. Considering probabilities as functionals of a random telegraph process, we use Novikov's functional method to convert the stochastic master equation to a set of deterministic differential equations. The equations are then solved in the Laplace space, and the expression for the probability vector averaged over the ensemble of realisations of the stochastic process is obtained. We apply the theory to study the manifestations of telegraph noise in the transport properties of molecular junctions. We consider the quantum electron transport in a resonant-level molecule as well as polaronic regime transport in a molecular junction with electron-vibration interaction.

  12. On the existence, uniqueness, and asymptotic normality of a consistent solution of the likelihood equations for nonidentically distributed observations: Applications to missing data problems

    NASA Technical Reports Server (NTRS)

    Peters, C. (Principal Investigator)

    1980-01-01

    A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations given a sequence of independent random vectors whose distributions are not identical but have the same parameter set. In addition, it is shown that the consistent solution is a MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and the other in which the parameters in a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.

  13. On-line failure detection and damping measurement of aerospace structures by random decrement signatures

    NASA Technical Reports Server (NTRS)

    Cole, H. A., Jr.

    1973-01-01

    Random decrement signatures of structures vibrating in a random environment are studied through use of computer-generated and experimental data. Statistical properties obtained indicate that these signatures are stable in form and scale and hence should have wide application in on-line failure detection and damping measurement. On-line procedures are described and equations for estimating record-length requirements to obtain signatures of a prescribed precision are given.
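
    A small sketch of how a random decrement signature can be extracted from a response record (the simulated oscillator, trigger level, and segment length are illustrative assumptions): segments starting at each up-crossing of the trigger level are averaged, and the average approximates a free-decay curve from which damping can be estimated.

        import numpy as np

        def random_decrement(x, level, seg_len):
            """Average all segments of length seg_len starting where x up-crosses 'level'."""
            starts = np.where((x[:-1] < level) & (x[1:] >= level))[0] + 1
            starts = starts[starts + seg_len <= len(x)]
            return np.mean([x[s:s + seg_len] for s in starts], axis=0)

        # synthetic lightly damped oscillator driven by white noise (illustrative only)
        rng = np.random.default_rng(0)
        dt, n, wn, zeta = 0.01, 200000, 2.0 * np.pi, 0.02
        x, v = np.zeros(n), 0.0
        for i in range(1, n):
            a = -wn**2 * x[i - 1] - 2.0 * zeta * wn * v + 50.0 * rng.normal()
            v += a * dt
            x[i] = x[i - 1] + v * dt

        sig = random_decrement(x, level=x.std(), seg_len=500)
        print(sig[:5])   # decaying, free-response-like signature; its envelope yields damping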

  14. Using Propensity Scores in Quasi-Experimental Designs to Equate Groups

    ERIC Educational Resources Information Center

    Lane, Forrest C.; Henson, Robin K.

    2010-01-01

    Education research rarely lends itself to large scale experimental research and true randomization, leaving the researcher to quasi-experimental designs. The problem with quasi-experimental research is that underlying factors may impact group selection and lead to potentially biased results. One way to minimize the impact of non-randomization is…

  15. Turbulent fluid motion IV-averages, Reynolds decomposition, and the closure problem

    NASA Technical Reports Server (NTRS)

    Deissler, Robert G.

    1992-01-01

    Ensemble, time, and space averages as applied to turbulent quantities are discussed, and pertinent properties of the averages are obtained. Those properties, together with Reynolds decomposition, are used to derive the averaged equations of motion and the one- and two-point moment or correlation equations. The terms in the various equations are interpreted. The closure problem of the averaged equations is discussed, and possible closure schemes are considered. Those schemes usually require an input of supplemental information unless the averaged equations are closed by calculating their terms by a numerical solution of the original unaveraged equations. The law of the wall for velocities and temperatures, the velocity- and temperature-defect laws, and the logarithmic laws for velocities and temperatures are derived. Various notions of randomness and their relation to turbulence are considered in light of ergodic theory.
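
    As a reminder of the decomposition the abstract builds on, writing each velocity component as a mean plus a fluctuation, $u_i = \bar{u}_i + u_i'$ with $\overline{u_i'} = 0$, and averaging the incompressible Navier-Stokes equations gives

        $$\frac{\partial \bar{u}_i}{\partial t} + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i} + \frac{\partial}{\partial x_j}\left(\nu \frac{\partial \bar{u}_i}{\partial x_j} - \overline{u_i' u_j'}\right),$$

    in which the Reynolds stresses $-\rho\,\overline{u_i' u_j'}$ are the unclosed terms that the closure schemes discussed above must supply.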

  16. Two Models of Time Constrained Target Travel between Two Endpoints Constructed by the Application of Brownian Motion and a Random Tour.

    DTIC Science & Technology

    1983-03-01

    [Indexed excerpt consists of OCR fragments. Recoverable content: the Brownian bridge model was suggested and derived by Prof. Gaver at the Naval Postgraduate School; the random tour model is developed by deriving the mean square radial distance for a random tour with an arbitrary course-change distribution; the notion of an arbitrary course-change distribution is important because…]

  17. Analog model for quantum gravity effects: phonons in random fluids.

    PubMed

    Krein, G; Menezes, G; Svaiter, N F

    2010-09-24

    We describe an analog model for quantum gravity effects in condensed matter physics. The situation discussed is that of phonons propagating in a fluid with a random velocity wave equation. We consider that there are random fluctuations in the reciprocal of the bulk modulus of the system and study free phonons in the presence of Gaussian colored noise with zero mean. We show that, in this model, after performing the random averages over the noise function a free conventional scalar quantum field theory describing free phonons becomes a self-interacting model.

  18. RANDOM EVOLUTIONS, MARKOV CHAINS, AND SYSTEMS OF PARTIAL DIFFERENTIAL EQUATIONS

    PubMed Central

    Griego, R. J.; Hersh, R.

    1969-01-01

    Several authors have considered Markov processes defined by the motion of a particle on a fixed line with a random velocity [1, 6, 8, 10] or a random diffusivity [5, 12]. A “random evolution” is a natural but apparently new generalization of this notion. In this note we hope to show that this concept leads to simple and powerful applications of probabilistic tools to initial-value problems of both parabolic and hyperbolic type. We obtain existence theorems, representation theorems, and asymptotic formulas, both old and new. PMID:16578690

  19. Predicting Eighth Grade Students' Equation Solving Performances via Concepts of Variable and Equality

    ERIC Educational Resources Information Center

    Ertekin, Erhan

    2017-01-01

    This study focused on how two algebraic concepts, equality and variable, predicted 8th grade students' equation solving performance. A predictive correlational research design was used. A randomly selected sample of 407 eighth-grade students from the central districts of a city in the central region of Turkey participated in…

  20. Comparison of Parametric and Nonparametric Bootstrap Methods for Estimating Random Error in Equipercentile Equating

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2008-01-01

    This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…

  1. Multiple Linking in Equating and Random Scale Drift. Research Report. ETS RR-11-46

    ERIC Educational Resources Information Center

    Guo, Hongwen; Liu, Jinghua; Dorans, Neil; Feigenbaum, Miriam

    2011-01-01

    Maintaining score stability is crucial for an ongoing testing program that administers several tests per year over many years. One way to stall the drift of the score scale is to use an equating design with multiple links. In this study, we use the operational and experimental SAT® data collected from 44 administrations to investigate the effect…

  2. Sample Invariance of the Structural Equation Model and the Item Response Model: A Case Study.

    ERIC Educational Resources Information Center

    Breithaupt, Krista; Zumbo, Bruno D.

    2002-01-01

    Evaluated the sample invariance of item discrimination statistics in a case study using real data, responses of 10 random samples of 500 people to a depression scale. Results lend some support to the hypothesized superiority of a two-parameter item response model over the common form of structural equation modeling, at least when responses are…

  3. Langevin Equation on Fractal Curves

    NASA Astrophysics Data System (ADS)

    Satin, Seema; Gangal, A. D.

    2016-07-01

    We analyze the random motion of a particle on a fractal curve using a Langevin approach. This involves defining a new velocity in terms of the mass of the fractal curve, as defined in recent work. The geometry of the fractal curve plays an important role in this analysis. A Langevin equation with a particular model of noise is proposed and solved using techniques of the Fα-calculus.

  4. Probability density function evolution of power systems subject to stochastic variation of renewable energy

    NASA Astrophysics Data System (ADS)

    Wei, J. Q.; Cong, Y. C.; Xiao, M. Q.

    2018-05-01

    As renewable energies are increasingly integrated into power systems, there is increasing interest in stochastic analysis of power systems. Better techniques should be developed to account for the uncertainty caused by the penetration of renewables and consequently to analyse its impact on the stochastic stability of power systems. In this paper, Stochastic Differential Equations (SDEs) are used to represent the evolutionary behaviour of the power systems. The stationary Probability Density Function (PDF) solution to SDEs modelling power systems excited by Gaussian white noise is analysed. Subject to such random excitation, the Joint Probability Density Function (JPDF) solution for the phase angle and angular velocity is governed by the generalized Fokker-Planck-Kolmogorov (FPK) equation. To solve this equation, a numerical method is adopted. Special measures are taken such that the generalized FPK equation is satisfied in the average sense of integration with the assumed PDF. Both weak and strong intensities of the stochastic excitations are considered in a single machine infinite bus power system. The numerical analysis gives the same result as the Monte Carlo simulation. Potential studies on the stochastic behaviour of multi-machine power systems with random excitations are discussed at the end.
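
    For orientation, the (non-generalized) Fokker-Planck-Kolmogorov equation for an Ito SDE $d\mathbf{X} = \mathbf{f}(\mathbf{X})\,dt + \mathbf{g}(\mathbf{X})\,d\mathbf{W}$ reads

        $$\frac{\partial p}{\partial t} = -\sum_i \frac{\partial}{\partial x_i}\bigl(f_i\,p\bigr) + \frac{1}{2}\sum_{i,j}\frac{\partial^{2}}{\partial x_i \partial x_j}\bigl[(\mathbf{g}\mathbf{g}^{\mathsf{T}})_{ij}\,p\bigr],$$

    and a stationary JPDF such as the one studied above is obtained by setting $\partial p/\partial t = 0$; the generalized FPK equation used by the authors extends this standard form.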

  5. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chevalier, Michael W., E-mail: Michael.Chevalier@ucsf.edu; El-Samad, Hana, E-mail: Hana.El-Samad@ucsf.edu

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions, where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled.

  6. Functional response and capture timing in an individual-based model: predation by northern squawfish (Ptychocheilus oregonensis) on juvenile salmonids in the Columbia River

    USGS Publications Warehouse

    Petersen, James H.; DeAngelis, Donald L.

    1992-01-01

    The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
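
    The Type 2 and Type 3 functional responses referred to above are conventionally written, in their simplest disc-equation forms (which may differ in detail from the predation equations fitted in the paper), as

        $$f_2(N) = \frac{aN}{1 + ahN}, \qquad f_3(N) = \frac{aN^{2}}{1 + ahN^{2}},$$

    where $N$ is prey density, $a$ the attack rate and $h$ the handling time; the Type 3 form differs by its sigmoid rise at low prey density.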

  7. Population density equations for stochastic processes with memory kernels

    NASA Astrophysics Data System (ADS)

    Lai, Yi Ming; de Kamps, Marc

    2017-06-01

    We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two different disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. The method uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic process cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism by a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory, describing a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We are able to model jump responses for both models accurately to both excitatory and inhibitory input under the assumption that all inputs are generated by one renewal process.
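
    For context, the classical Montroll-Weiss equation that the generalized version extends gives the Fourier-Laplace transform of a CTRW walker's density in terms of the waiting-time density $\psi$ and the jump density $\lambda$:

        $$\hat{P}(\mathbf{k},s) = \frac{1-\hat{\psi}(s)}{s}\,\frac{\hat{P}_0(\mathbf{k})}{1-\hat{\psi}(s)\,\hat{\lambda}(\mathbf{k})},$$

    with $\hat{P}_0$ the transform of the initial condition; the generalized version used above describes a walker whose transitions are realized by a non-Markovian process.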

  8. Power of data mining methods to detect genetic associations and interactions.

    PubMed

    Molinaro, Annette M; Carriero, Nicholas; Bjornson, Robert; Hartge, Patricia; Rothman, Nathaniel; Chatterjee, Nilanjan

    2011-01-01

    Genetic association studies, thus far, have focused on the analysis of individual main effects of SNP markers. Nonetheless, there is a clear need for modeling epistasis or gene-gene interactions to better understand the biologic basis of existing associations. Tree-based methods have been widely studied as tools for building prediction models based on complex variable interactions. An understanding of the power of such methods for the discovery of genetic associations in the presence of complex interactions is of great importance. Here, we systematically evaluate the power of three leading algorithms: random forests (RF), Monte Carlo logic regression (MCLR), and multifactor dimensionality reduction (MDR). We use the algorithm-specific variable importance measures (VIMs) as statistics and employ permutation-based resampling to generate the null distribution and associated p values. The power of the three methods is assessed via simulation studies. Additionally, in a data analysis, we evaluate the associations between individual SNPs in pro-inflammatory and immunoregulatory genes and the risk of non-Hodgkin lymphoma. The power of RF is highest in all simulation models, that of MCLR is similar to that of RF in half of the models, and that of MDR is consistently the lowest. Our study indicates that the power of RF VIMs is most reliable. However, in addition to tuning parameters, the power of RF is notably influenced by the type of variable (continuous vs. categorical) and the chosen VIM. Copyright © 2011 S. Karger AG, Basel.
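
    A minimal sketch of the permutation-based significance procedure described above, using the random forest importance as the test statistic; the simulated genotype data, effect sizes, and the scikit-learn estimator are illustrative assumptions, not the authors' simulation design.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n, p = 400, 20
        X = rng.integers(0, 3, size=(n, p))                     # SNP genotypes coded 0/1/2
        logit = 0.8 * (X[:, 0] * X[:, 1]) - 1.0                 # interacting pair drives risk
        y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

        def vim(X, y):
            rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
            return rf.feature_importances_                       # algorithm-specific VIM

        obs = vim(X, y)
        # null distribution: refit after permuting the phenotype labels
        null = np.array([vim(X, rng.permutation(y)) for _ in range(50)])
        pvals = (null >= obs).mean(axis=0)                       # one-sided permutation p values
        print("smallest p values at SNPs:", np.argsort(pvals)[:3])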

  9. Probabilistic Simulation of Combined Thermo-Mechanical Cyclic Fatigue in Composites

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2011-01-01

    A methodology to compute probabilistically-combined thermo-mechanical fatigue life of polymer matrix laminated composites has been developed and is demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multifactor-interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability-integration method is used to compute probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) computed and their significance in the reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/-45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes has been studied. The results show that, at low mechanical-cyclic loads and low thermal-cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness. Whereas at high mechanical-cyclic loads and high thermal-cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.
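
    The multifactor-interaction relationship mentioned above is commonly written in a product form along the lines below; this is a generic illustration, and the particular variables, bounds and exponents used in the cited work are calibrated to data:

        $$\frac{P}{P_0} = \prod_{i=1}^{n}\left(\frac{A_{iF}-A_i}{A_{iF}-A_{i0}}\right)^{a_i},$$

    where $P$ is the current value of a material property, $P_0$ its reference value, $A_i$ the current value of the $i$-th primitive variable (temperature, stress amplitude, number of cycles, and so on), $A_{i0}$ and $A_{iF}$ its reference and final values, and $a_i$ an empirical exponent determined from experiments.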

  10. Probabilistic Simulation for Combined Cycle Fatigue in Composites

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A methodology to compute probabilistic fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress dependent multifactor interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability integration method is used to compute probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) computed and their significance in the reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/- 45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes has been studied. The results show that, at low mechanical cyclic loads and low thermal cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness. Whereas at high mechanical cyclic loads and high thermal cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.

  11. Probabilistic Simulation of Combined Thermo-Mechanical Cyclic Fatigue in Composites

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A methodology to compute probabilistically-combined thermo-mechanical fatigue life of polymer matrix laminated composites has been developed and is demonstrated. Matrix degradation effects caused by long-term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress-dependent multifactor-interaction relationship developed at NASA Glenn Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability-integration method is used to compute probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) computed and their significance in the reliability-based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/-45/90)s graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes has been studied. The results show that, at low mechanical-cyclic loads and low thermal-cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness. Whereas at high mechanical-cyclic loads and high thermal-cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.

  12. Effect of Cyclic Thermo-Mechanical Loads on Fatigue Reliability in Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Murthy, P. L. N.; Chamis, C. C.

    1996-01-01

    A methodology to compute probabilistic fatigue life of polymer matrix laminated composites has been developed and demonstrated. Matrix degradation effects caused by long term environmental exposure and mechanical/thermal cyclic loads are accounted for in the simulation process. A unified time-temperature-stress dependent multi-factor interaction relationship developed at NASA Lewis Research Center has been used to model the degradation/aging of material properties due to cyclic loads. The fast probability integration method is used to compute probabilistic distribution of response. Sensitivities of fatigue life reliability to uncertainties in the primitive random variables (e.g., constituent properties, fiber volume ratio, void volume ratio, ply thickness, etc.) computed and their significance in the reliability- based design for maximum life is discussed. The effect of variation in the thermal cyclic loads on the fatigue reliability for a (0/+/- 45/90)(sub s) graphite/epoxy laminate with a ply thickness of 0.127 mm, with respect to impending failure modes has been studied. The results show that, at low mechanical cyclic loads and low thermal cyclic amplitudes, fatigue life for 0.999 reliability is most sensitive to matrix compressive strength, matrix modulus, thermal expansion coefficient, and ply thickness. Whereas at high mechanical cyclic loads and high thermal cyclic amplitudes, fatigue life at 0.999 reliability is more sensitive to the shear strength of matrix, longitudinal fiber modulus, matrix modulus, and ply thickness.

  13. Evaluation of Equations for Predicting 24-Hour Urinary Sodium Excretion from Casual Urine Samples in Asian Adults.

    PubMed

    Whitton, Clare; Gay, Gibson Ming Wei; Lim, Raymond Boon Tar; Tan, Linda Wei Lin; Lim, Wei-Yen; van Dam, Rob M

    2016-08-01

    The collection of 24-h urine samples for the estimation of sodium intake is burdensome, and the utility of spot urine samples in Southeast Asian populations is unclear. We aimed to assess the validity of prediction equations with the use of spot urine concentrations. A sample of 144 Singapore residents of Chinese, Malay, and Indian ethnicity aged 18-79 y were recruited from the Singapore Health 2 Study conducted in 2014. Participants collected urine for 24 h in multiple small bottles on a single day. To determine the optimal collection time for a spot urine sample, a 1-mL sample was taken from a random bottle collected in the morning, afternoon, and evening. Published equations and a newly derived equation were used to predict 24-h sodium excretion from spot urine samples. The mean ± SD concentration of sodium from the 24-h urine sample was 125 ± 53.4 mmol/d, which is equivalent to 7.2 ± 3.1 g salt. Bland-Altman plots showed good agreement at the group level between estimated and actual 24-h sodium excretion, with biases for the morning period of -3.5 mmol (95% CI: -14.8, 7.8 mmol; new equation) and 1.46 mmol (95% CI: -10.0, 13.0 mmol; Intersalt equation). A larger bias of 25.7 mmol (95% CI: 12.2, 39.3 mmol) was observed for the Tanaka equation in the morning period. The prediction accuracy did not differ significantly for spot urine samples collected at different times of the day or at a random time of day (P = 0.11-0.76). This study suggests that the application of both our own newly derived equation and the Intersalt equation to spot urine concentrations may be useful in predicting group means for 24-h sodium excretion in urban Asian populations. © 2016 American Society for Nutrition.

  14. A stochastic hybrid systems based framework for modeling dependent failure processes

    PubMed Central

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313

  15. A stochastic hybrid systems based framework for modeling dependent failure processes.

    PubMed

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods.

  16. Kinetic Models for Topological Nearest-Neighbor Interactions

    NASA Astrophysics Data System (ADS)

    Blanchet, Adrien; Degond, Pierre

    2017-12-01

    We consider systems of agents interacting through topological interactions. These have been shown to play an important part in animal and human behavior. Precisely, the system consists of a finite number of particles characterized by their positions and velocities. At random times a randomly chosen particle, the follower, adopts the velocity of its closest neighbor, the leader. We study the limit of a system size going to infinity and, under the assumption of propagation of chaos, show that the limit kinetic equation is a non-standard spatial diffusion equation for the particle distribution function. We also study the case wherein the particles interact with their K closest neighbors and show that the corresponding kinetic equation is the same. Finally, we prove that these models can be seen as a singular limit of the smooth rank-based model previously studied in Blanchet and Degond (J Stat Phys 163:41-60, 2016). The proofs are based on a combinatorial interpretation of the rank as well as some concentration of measure arguments.

  17. Derivation of a Multiparameter Gamma Model for Analyzing the Residence-Time Distribution Function for Nonideal Flow Systems as an Alternative to the Advection-Dispersion Equation

    DOE PAGES

    Embry, Irucka; Roland, Victor; Agbaje, Oluropo; ...

    2013-01-01

    A new residence-time distribution (RTD) function has been developed and applied to quantitative dye studies as an alternative to the traditional advection-dispersion equation (AdDE). The new method is based on a jointly combined four-parameter gamma probability density function (PDF). The gamma residence-time distribution (RTD) function and its first and second moments are derived from the individual two-parameter gamma distributions of randomly distributed variables, tracer travel distance, and linear velocity, which are based on their relationship with time. The gamma RTD function was used on a steady-state, nonideal system modeled as a plug-flow reactor (PFR) in the laboratory to validate the effectiveness of the model. The normalized forms of the gamma RTD and the advection-dispersion equation RTD were compared with the normalized tracer RTD. The normalized gamma RTD had a lower mean-absolute deviation (MAD) (0.16) than the normalized form of the advection-dispersion equation (0.26) when compared to the normalized tracer RTD. The gamma RTD function is tied back to the actual physical site due to its randomly distributed variables. The results validate using the gamma RTD as a suitable alternative to the advection-dispersion equation for quantitative tracer studies of non-ideal flow systems.

  18. Integrodifferential formulations of the continuous-time random walk for solute transport subject to bimolecular A +B →0 reactions: From micro- to mesoscopic

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Berkowitz, Brian

    2015-03-01

    We develop continuous-time random walk (CTRW) equations governing the transport of two species that annihilate when in proximity to one another. In comparison with catalytic or spontaneous transformation reactions that have been previously considered in concert with CTRW, both species have spatially variant concentrations that require consideration. We develop two distinct formulations. The first treats transport and reaction microscopically, potentially capturing behavior at sharp fronts, but at the cost of being strongly nonlinear. The second, mesoscopic, formulation relies on a separation-of-scales technique we develop to separate microscopic-scale reaction and upscaled transport. This simplifies the governing equations and allows treatment of more general reaction dynamics, but requires stronger smoothness assumptions of the solution. The mesoscopic formulation is easily tractable using an existing solution from the literature (we also provide an alternative derivation), and the generalized master equation (GME) for particles undergoing A +B →0 reactions is presented. We show that this GME simplifies, under appropriate circumstances, to both the GME for the unreactive CTRW and to the advection-dispersion-reaction equation. An additional major contribution of this work is on the numerical side: to corroborate our development, we develop an indirect particle-tracking-partial-integro-differential-equation (PIDE) hybrid verification technique which could be applicable widely in reactive anomalous transport. Numerical simulations support the mesoscopic analysis.

  19. Derivation of the RPA (Random Phase Approximation) Equation of ATDDFT (Adiabatic Time Dependent Density Functional Ground State Response Theory) from an Excited State Variational Approach Based on the Ground State Functional.

    PubMed

    Ziegler, Tom; Krykunov, Mykhaylo; Autschbach, Jochen

    2014-09-09

    The random phase approximation (RPA) equation of adiabatic time dependent density functional ground state response theory (ATDDFT) has been used extensively in studies of excited states. It extracts information about excited states from frequency-dependent ground state response properties and thus elegantly avoids direct Kohn-Sham calculations on excited states, in accordance with the status of DFT as a ground state theory. Excitation energies can then be found as resonance poles of the frequency-dependent ground state polarizability, from the eigenvalues of the RPA equation. ATDDFT is approximate in that it makes use of a frequency independent energy kernel derived from the ground state functional. It is shown in this study that one can derive the RPA equation of ATDDFT from a purely variational approach in which stationary states above the ground state are located using our constricted variational DFT (CV-DFT) method and the ground state functional. Thus, locating stationary states above the ground state due to one-electron excitations with a ground state functional is completely equivalent to solving the RPA equation of TDDFT employing the same functional. The present study is an extension of a previous work in which we demonstrated the equivalence between ATDDFT and CV-DFT within the Tamm-Dancoff approximation.
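
    The RPA eigenvalue problem referred to above has the standard block form (shown here only for orientation; notation conventions vary):

        $$\begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{B}^{*} & \mathbf{A}^{*} \end{pmatrix} \begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix} = \omega \begin{pmatrix} \mathbf{1} & \mathbf{0} \\ \mathbf{0} & -\mathbf{1} \end{pmatrix} \begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix},$$

    where the excitation energies $\omega$ appear as the positive eigenvalues and the frequency-independent kernel enters through the $\mathbf{A}$ and $\mathbf{B}$ matrices built from the ground state functional.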

  20. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases.

    PubMed

    Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M

    2006-04-21

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN) and several non-parametric methods, which include the set association approach, combinatorial partitioning method (CPM), restricted partitioning method (RPM), multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods to approach association studies with large numbers of predictor variables. GPNN on the other hand may be a useful approach to select and model important predictors, but its performance to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset of predictors with an important contribution to disease. The combinatorial methods give more insight in combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association studies using the case-control design, the application of a combination of several methods, including the set association approach, MDR and the random forests approach, will likely be a useful strategy to find the important genes and interaction patterns involved in complex diseases.

  1. How multiple factors control evapotranspiration in North America evergreen needleleaf forests.

    PubMed

    Chen, Yueming; Xue, Yueju; Hu, Yueming

    2018-05-01

    Identifying the factors dominating ecosystem water flux is a critical step for predicting evapotranspiration (ET). Here, the fuzzy rough set with binary shuffled frog leaping (BSFL-FRSA) was used to identify both individual factors and multi-factor combinations that dominate the half-hourly ET variation at evergreen needleleaf forest (ENF) sites across three different climatic zones in North America. Among 21 factors, air temperature (TA), atmospheric CO2 concentration (CCO2), soil temperature (TS), soil water content (SWC) and net radiation (NETRAD) were evaluated as dominant single factors, contributing to the ET variation averaged over all ENF sites by 48%, 36%, 32%, 18% and 13%, respectively. The importance order varied with climatic zone, however, and TA was assessed as the most influential factor at the single-climatic-zone level, accounting for contribution rates of 54.7%, 49.9%, and 38.6% in the subarctic, warm summer continental, and Mediterranean climatic zones, respectively. Considering the impact of each multi-factor combination on ET, TA and CCO2 together made a contribution of 71% across the three climatic zones; the combination of TA, CCO2 and NETRAD was evaluated as most dominant at the Mediterranean and subarctic ENF sites, and the combination of TA, CCO2 and TS at the warm summer continental sites. Our results suggest that temperature was most critical for ET variation at the warm summer continental ENF sites. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Predicting bending stiffness of randomly oriented hybrid panels

    Treesearch

    Laura Moya; William T.Y. Tze; Jerrold E. Winandy

    2010-01-01

    This study was conducted to develop a simple model to predict the bending modulus of elasticity (MOE) of randomly oriented hybrid panels. The modeling process involved three modules: the behavior of a single layer was computed by applying micromechanics equations, layer properties were adjusted for densification effects, and the entire panel was modeled as a three-...

  3. A stochastic maximum principle for backward control systems with random default time

    NASA Astrophysics Data System (ADS)

    Shen, Yang; Kuen Siu, Tak

    2013-05-01

    This paper establishes a necessary and sufficient stochastic maximum principle for backward systems, where the state processes are governed by jump-diffusion backward stochastic differential equations with random default time. An application of the sufficient stochastic maximum principle to an optimal investment and capital injection problem in the presence of default risk is discussed.

  4. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  5. Random Walk Method for Potential Problems

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Raju, I. S.

    2002-01-01

    A local Random Walk Method (RWM) for potential problems governed by Laplace's and Poisson's equations is developed for two- and three-dimensional problems. The RWM is implemented and demonstrated in a multiprocessor parallel environment on a Beowulf cluster of computers. A speed gain of 16 is achieved as the number of processors is increased from 1 to 23.
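
    A compact sketch of the idea behind random walk methods for a Dirichlet problem, not the parallel implementation described above: the potential at an interior point is estimated by averaging the boundary values reached by many fixed-step random walks. The square-domain boundary data are an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(0)

        # Dirichlet problem on the unit square: u = 1 on the top edge, u = 0 elsewhere.
        # By symmetry the exact potential at the centre is 0.25.
        def boundary_value(x, y):
            return 1.0 if y >= 1.0 else 0.0

        def walk(x, y, h=0.02):
            # fixed-step isotropic walk until the boundary is crossed
            while 0.0 < x < 1.0 and 0.0 < y < 1.0:
                theta = rng.uniform(0.0, 2.0 * np.pi)
                x += h * np.cos(theta)
                y += h * np.sin(theta)
            return boundary_value(x, y)

        estimate = np.mean([walk(0.5, 0.5) for _ in range(2000)])
        print("estimated potential at the centre:", estimate)   # should be near 0.25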

  6. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    This paper provides the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (neither of which assumes the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH, the (average) distance between any two nearby habitable planets in the Galaxy is shown to be inversely proportional to the cube root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times larger than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations.
A new random variable Tcol, representing the time needed to colonize a new planet is introduced, which follows the lognormal distribution, Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.
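
    As a rough illustration of how the SEH is evaluated numerically, the following Python sketch draws ten uniformly distributed factors with a 10% standard deviation, forms their product, and checks the NHab^(-1/3) scaling of the distance. The ten mean values are placeholders, not Dole's (1964) numbers, and the proportionality constant in the distance relation is omitted.

      import numpy as np

      rng = np.random.default_rng(1)

      # Ten illustrative mean values for the astrobiological factors (placeholders,
      # NOT Dole's 1964 numbers); each factor is uniform with a 10% standard deviation.
      means = np.array([1e11, 0.6, 0.5, 0.7, 0.4, 0.5, 0.6, 0.3, 0.2, 0.1])
      half_width = means * 0.10 * np.sqrt(3.0)       # uniform std = half_width / sqrt(3)

      samples = 100_000
      factors = rng.uniform(means - half_width, means + half_width,
                            size=(samples, means.size))
      n_hab = factors.prod(axis=1)                   # product of 10 positive random variables

      # By the CLT applied to log(n_hab), the product is approximately log-normal.
      log_n = np.log(n_hab)
      print("mean and std of log N_hab:", log_n.mean(), log_n.std())

      # The average distance between nearby habitable planets scales as N_hab^(-1/3)
      # for planets spread through a fixed galactic volume; the proportionality
      # constant (set by that volume) is omitted here.
      d_rel = n_hab ** (-1.0 / 3.0)
      print("relative spread of D:", d_rel.std() / d_rel.mean())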

  7. Persistent random walk of cells involving anomalous effects and random death

    NASA Astrophysics Data System (ADS)

    Fedotov, Sergei; Tan, Abby; Zubarev, Andrey

    2015-04-01

    The purpose of this paper is to implement a random death process into a persistent random walk model which produces sub-ballistic superdiffusion (Lévy walk). We develop a stochastic two-velocity jump model of cell motility for which the switching rate depends upon the time which the cell has spent moving in one direction. It is assumed that the switching rate is a decreasing function of residence (running) time. This assumption leads to the power law for the velocity switching time distribution. This describes the anomalous persistence of cell motility: the longer the cell moves in one direction, the smaller the switching probability to another direction becomes. We derive master equations for the cell densities with the generalized switching terms involving the tempered fractional material derivatives. We show that the random death of cells has an important implication for the transport process through tempering of the superdiffusive process. In the long-time limit we write stationary master equations in terms of exponentially truncated fractional derivatives in which the rate of death plays the role of tempering of a Lévy jump distribution. We find the upper and lower bounds for the stationary profiles corresponding to the ballistic transport and diffusion with the death-rate-dependent diffusion coefficient. Monte Carlo simulations confirm these bounds.
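
    A minimal Python sketch of the two-velocity jump model described above, assuming a switching rate that decays with running time as mu0/(tau0 + tau) (one simple choice producing power-law run times) and a constant death rate; all parameter values are illustrative, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(2)

      def run_cells(n_cells=5000, t_max=50.0, v=1.0, mu0=2.0, tau0=1.0,
                    death_rate=0.05, dt=0.01):
          """Two-velocity jump model: the switching rate mu(tau) = mu0 / (tau0 + tau)
          decreases with the running time tau (anomalous persistence), and cells
          die at a constant rate.  Parameter values are illustrative only."""
          x = np.zeros(n_cells)                        # positions
          vel = rng.choice([-v, v], size=n_cells)      # current velocities
          tau = np.zeros(n_cells)                      # time spent in current direction
          alive = np.ones(n_cells, dtype=bool)
          t = 0.0
          while t < t_max:
              x[alive] += vel[alive] * dt
              tau[alive] += dt
              switch = alive & (rng.random(n_cells) < mu0 / (tau0 + tau) * dt)
              vel[switch] *= -1                        # reverse direction
              tau[switch] = 0.0                        # reset the running time
              alive &= rng.random(n_cells) >= death_rate * dt
              t += dt
          return x[alive]

      survivors = run_cells()
      print(len(survivors), "surviving cells; positional spread =", survivors.std())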

  8. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

    The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specularly and diffusely reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to obtain the complete average of the solution functions, which are represented by the probability density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation of the input stochastic process (the extinction function of the medium) is applied. This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to get a closed form for the solution as a function of x and L. This solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to get complete analytical averages for some interesting physical quantities, namely the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the average of the partial heat fluxes for the generalized problem with an internal radiation source is obtained and represented graphically.

  9. The Effectiveness of a Computer-Assisted Instruction Package in Supplementing Teaching of Selected Concepts in High School Chemistry: Writing Formulas and Balancing Chemical Equations.

    ERIC Educational Resources Information Center

    Wainwright, Camille L.

    Four classes of high school chemistry students (N=108) were randomly assigned to experimental and control groups to investigate the effectiveness of a computer assisted instruction (CAI) package during a unit on writing/naming of chemical formulas and balancing equations. Students in the experimental group received drill, review, and reinforcement…

  10. Using an EM Covariance Matrix to Estimate Structural Equation Models with Missing Data: Choosing an Adjusted Sample Size to Improve the Accuracy of Inferences

    ERIC Educational Resources Information Center

    Enders, Craig K.; Peugh, James L.

    2004-01-01

    Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…

  11. Persistent fluctuations in synchronization rate in globally coupled oscillators with periodic external forcing

    NASA Astrophysics Data System (ADS)

    Atsumi, Yu; Nakao, Hiroya

    2012-05-01

    A system of phase oscillators with repulsive global coupling and periodic external forcing undergoing asynchronous rotation is considered. The synchronization rate of the system can exhibit persistent fluctuations depending on parameters and initial phase distributions, and the amplitude of the fluctuations scales with the system size for uniformly random initial phase distributions. Using the Watanabe-Strogatz transformation that reduces the original system to low-dimensional macroscopic equations, we show that the fluctuations are collective dynamics of the system corresponding to low-dimensional trajectories of the reduced equations. It is argued that the amplitude of the fluctuations is determined by the inhomogeneity of the initial phase distribution, resulting in system-size scaling for the random case.

  12. Time domain simulation of the response of geometrically nonlinear panels subjected to random loading

    NASA Technical Reports Server (NTRS)

    Moyer, E. Thomas, Jr.

    1988-01-01

    The response of composite panels subjected to random pressure loads large enough to cause geometrically nonlinear responses is studied. A time domain simulation is employed to solve the equations of motion. An adaptive time stepping algorithm is employed to minimize intermittent transients. A modified algorithm for the prediction of response spectral density is presented which predicts smooth spectral peaks for discrete time histories. Results are presented for a number of input pressure levels and damping coefficients. Response distributions are calculated and compared with the analytical solution of the Fokker-Planck equations. RMS response is reported as a function of input pressure level and damping coefficient. Spectral densities are calculated for a number of examples.

  13. Analytic Interatomic Forces in the Random Phase Approximation

    NASA Astrophysics Data System (ADS)

    Ramberger, Benjamin; Schäfer, Tobias; Kresse, Georg

    2017-03-01

    We discuss that in the random phase approximation (RPA) the first derivative of the energy with respect to the Green's function is the self-energy in the GW approximation. This relationship allows us to derive compact equations for the RPA interatomic forces. We also show that position dependent overlap operators are elegantly incorporated in the present framework. The RPA force equations have been implemented in the projector augmented wave formalism, and we present illustrative applications, including ab initio molecular dynamics simulations, the calculation of phonon dispersion relations for diamond and graphite, as well as structural relaxations for water on boron nitride. The present derivation establishes a concise framework for forces within perturbative approaches and is also applicable to more involved approximations for the correlation energy.

  14. Mean-Potential Law in Evolutionary Games

    NASA Astrophysics Data System (ADS)

    Nałecz-Jawecki, Paweł; Miekisz, Jacek

    2018-01-01

    The Letter presents a novel way to connect random walks, stochastic differential equations, and evolutionary game theory. We introduce a new concept of a potential function for discrete-space stochastic systems. It is based on a correspondence between one-dimensional stochastic differential equations and random walks, which may be exact not only in the continuous limit but also in finite-state spaces. Our method is useful for computation of fixation probabilities in discrete stochastic dynamical systems with two absorbing states. We apply it to evolutionary games, formulating two simple and intuitive criteria for evolutionary stability of pure Nash equilibria in finite populations. In particular, we show that the 1/3 law of evolutionary games, introduced by Nowak et al. [Nature, 2004], follows from a more general mean-potential law.
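
    The Letter's potential construction is not reproduced here, but the following Python sketch illustrates the kind of fixation-probability computation it targets: a Moran process with a single mutant of constant fitness r, checked against the classical closed form rho = (1 - 1/r)/(1 - 1/r^N). Population size, fitness, and the number of runs are illustrative assumptions, and this constant-selection case is not the game-theoretic setting of the Letter.

      import numpy as np

      rng = np.random.default_rng(3)

      # Fixation probability of a single mutant with constant fitness r in a Moran
      # process of size N, estimated by Monte Carlo and checked against the
      # classical closed form rho = (1 - 1/r) / (1 - 1/r^N).
      N, r, runs = 20, 1.1, 10_000
      fixed = 0
      for _ in range(runs):
          i = 1                                        # current number of mutants
          while 0 < i < N:
              # an individual reproduces with probability proportional to fitness,
              # and a uniformly chosen individual dies
              if rng.random() < r * i / (r * i + (N - i)):
                  if rng.random() < (N - i) / N:       # mutant offspring replaces a resident
                      i += 1
              else:
                  if rng.random() < i / N:             # resident offspring replaces a mutant
                      i -= 1
          fixed += (i == N)

      rho_exact = (1 - 1 / r) / (1 - 1 / r**N)
      print("Monte Carlo:", fixed / runs, " exact:", rho_exact)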

  15. Model dynamics for quantum computing

    NASA Astrophysics Data System (ADS)

    Tabakin, Frank

    2017-08-01

    A model master equation suitable for quantum computing dynamics is presented. In an ideal quantum computer (QC), a system of qubits evolves in time unitarily and, by virtue of their entanglement, the qubits interfere quantum mechanically to solve otherwise intractable problems. In the real situation, a QC is subject to decoherence and attenuation effects due to interaction with an environment and with possible short-term random disturbances and gate deficiencies. The stability of a QC under such attacks is a key issue for the development of realistic devices. We assume that the influence of the environment can be incorporated by a master equation that includes unitary evolution with gates, supplemented by a Lindblad term. Lindblad operators of various types are explored; namely, steady, pulsed, gate friction, and measurement operators. In the master equation, we use the Lindblad term to describe short time intrusions by random Lindblad pulses. The phenomenological master equation is then extended to include a nonlinear Beretta term that describes the evolution of a closed system with increasing entropy. An external bath environment is stipulated by a fixed temperature in two different ways. Here we explore the case of a simple one-qubit system in preparation for generalization to multi-qubit, qutrit and hybrid qubit-qutrit systems. This model master equation can be used to test the stability of memory and the efficacy of quantum gates. The properties of such hybrid master equations are explored, with emphasis on the role of thermal equilibrium and entropy constraints. Several significant properties of time-dependent qubit evolution are revealed by this simple study.
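
    A minimal Python sketch of the kind of master equation described above, for a single qubit with a steady amplitude-damping Lindblad operator integrated by an explicit Euler step. The Hamiltonian, decay rate, and time step are illustrative assumptions; the Beretta term, gate operators, and pulsed Lindblad terms of the paper are not included.

      import numpy as np

      sz = np.array([[1, 0], [0, -1]], dtype=complex)
      L = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator |0><1|

      H = 0.5 * sz                                    # illustrative qubit Hamiltonian
      gamma = 0.1                                     # illustrative damping rate

      def lindblad_rhs(rho):
          """d(rho)/dt = -i[H, rho] + gamma (L rho L^dag - 1/2 {L^dag L, rho})."""
          comm = -1j * (H @ rho - rho @ H)
          LdL = L.conj().T @ L
          diss = gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
          return comm + diss

      rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)   # start in the excited state
      dt, steps = 0.01, 2000
      for _ in range(steps):
          rho = rho + dt * lindblad_rhs(rho)          # explicit Euler step
      print("excited-state population at t = 20:", rho[1, 1].real)   # decays toward 0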

  16. Energetic Consistency and Coupling of the Mean and Covariance Dynamics

    NASA Technical Reports Server (NTRS)

    Cohn, Stephen E.

    2008-01-01

    The dynamical state of the ocean and atmosphere is taken to be a large dimensional random vector in a range of large-scale computational applications, including data assimilation, ensemble prediction, sensitivity analysis, and predictability studies. In each of these applications, numerical evolution of the covariance matrix of the random state plays a central role, because this matrix is used to quantify uncertainty in the state of the dynamical system. Since atmospheric and ocean dynamics are nonlinear, there is no closed evolution equation for the covariance matrix, nor for the mean state. Therefore approximate evolution equations must be used. This article studies theoretical properties of the evolution equations for the mean state and covariance matrix that arise in the second-moment closure approximation (third- and higher-order moment discard). This approximation was introduced by EPSTEIN [1969] in an early effort to introduce a stochastic element into deterministic weather forecasting, and was studied further by FLEMING [1971a,b], EPSTEIN and PITCHER [1972], and PITCHER [1977], also in the context of atmospheric predictability. It has since fallen into disuse, with a simpler one being used in current large-scale applications. The theoretical results of this article make a case that this approximation should be reconsidered for use in large-scale applications, however, because the second moment closure equations possess a property of energetic consistency that the approximate equations now in common use do not possess. A number of properties of solutions of the second-moment closure equations that result from this energetic consistency will be established.

  17. Modeling tracer transport in randomly heterogeneous porous media by nonlocal moment equations: Anomalous transport

    NASA Astrophysics Data System (ADS)

    Morales-Casique, E.; Lezama-Campos, J. L.; Guadagnini, A.; Neuman, S. P.

    2013-05-01

    Modeling tracer transport in geologic porous media suffers from the imperfect characterization of the spatial distribution of hydrogeologic properties of the system and from incomplete knowledge of the processes governing transport at multiple scales. Representations of transport dynamics based on a Fickian model of the kind considered in the advection-dispersion equation (ADE) fail to capture (a) the temporal variation associated with the rate of spreading of a tracer, and (b) the distribution of early and late arrival times which are often observed in field and/or laboratory scenarios and are considered the signature of anomalous transport. Elsewhere we have presented exact stochastic moment equations to model tracer transport in randomly heterogeneous aquifers. We have also developed a closure scheme which enables one to provide numerical solutions of such moment equations at different orders of approximation. The resulting (ensemble) average and variance of concentration fields were found to display good agreement with Monte Carlo based simulation results for mildly heterogeneous (or well-conditioned strongly heterogeneous) media. Here we explore the ability of the moment equations approach to describe the distribution of early arrival times and late-time tailing effects which can be observed in Monte Carlo based breakthrough curves (BTCs) of the (ensemble) mean concentration. We show that BTCs of mean resident concentration calculated at a fixed space location through higher-order approximations of the moment equations display long-tailing features of the kind typically associated with anomalous transport behavior, which are not represented by an ADE model with a constant dispersive parameter, such as the zero-order approximation.

  18. Stability of transition waves and positive entire solutions of Fisher-KPP equations with time and space dependence

    NASA Astrophysics Data System (ADS)

    Shen, Wenxian

    2017-09-01

    This paper is concerned with the stability of transition waves and strictly positive entire solutions of random and nonlocal dispersal evolution equations of Fisher-KPP type with general time and space dependence, including time and space periodic or almost periodic dependence as special cases. We first show the existence, uniqueness, and stability of strictly positive entire solutions of such equations. Next, we show the stability of uniformly continuous transition waves connecting the unique strictly positive entire solution and the trivial solution zero and satisfying certain decay property at the end close to the trivial solution zero (if it exists). The existence of transition waves has been studied in Liang and Zhao (2010 J. Funct. Anal. 259 857-903), Nadin (2009 J. Math. Pures Appl. 92 232-62), Nolen et al (2005 Dyn. PDE 2 1-24), Nolen and Xin (2005 Discrete Contin. Dyn. Syst. 13 1217-34) and Weinberger (2002 J. Math. Biol. 45 511-48) for random dispersal Fisher-KPP equations with time and space periodic dependence, in Nadin and Rossi (2012 J. Math. Pures Appl. 98 633-53), Nadin and Rossi (2015 Anal. PDE 8 1351-77), Nadin and Rossi (2017 Arch. Ration. Mech. Anal. 223 1239-67), Shen (2010 Trans. Am. Math. Soc. 362 5125-68), Shen (2011 J. Dynam. Differ. Equ. 23 1-44), Shen (2011 J. Appl. Anal. Comput. 1 69-93), Tao et al (2014 Nonlinearity 27 2409-16) and Zlatoš (2012 J. Math. Pures Appl. 98 89-102) for random dispersal Fisher-KPP equations with quite general time and/or space dependence, and in Coville et al (2013 Ann. Inst. Henri Poincare 30 179-223), Rawal et al (2015 Discrete Contin. Dyn. Syst. 35 1609-40) and Shen and Zhang (2012 Comm. Appl. Nonlinear Anal. 19 73-101) for nonlocal dispersal Fisher-KPP equations with time and/or space periodic dependence. The stability result established in this paper implies that the transition waves obtained in many of the above mentioned papers are asymptotically stable for well-fitted perturbation. Up to the author’s knowledge, it is the first time that the stability of transition waves of Fisher-KPP equations with general time and space dependence is studied.

  19. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE PAGES

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    2017-10-26

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
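
    A minimal one-dimensional Python sketch of SPDE-based Gaussian-field sampling: a finite-difference stand-in for the mixed finite element formulation, solving (kappa^2 - d^2/dx^2) u = W with discrete white noise W. Grid size and parameters are illustrative assumptions, and the multilevel, hierarchical structure of the paper is not reproduced.

      import numpy as np

      rng = np.random.default_rng(4)

      def sample_spde_field(n=200, kappa=5.0, length=1.0):
          """Draw one sample of a Gaussian random field on [0, length] by solving
          (kappa^2 - d^2/dx^2) u = W with W discrete white noise, using a
          second-order finite-difference scheme with homogeneous Dirichlet BCs
          (a one-dimensional stand-in for the mixed finite element formulation)."""
          h = length / (n + 1)
          main = (kappa**2 + 2.0 / h**2) * np.ones(n)
          off = (-1.0 / h**2) * np.ones(n - 1)
          A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
          w = rng.standard_normal(n) / np.sqrt(h)      # discretized white-noise load
          return np.linalg.solve(A, w)

      field = sample_spde_field()
      print("sample mean and std:", field.mean(), field.std())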

  20. Entanglement dynamics in random media

    NASA Astrophysics Data System (ADS)

    Menezes, G.; Svaiter, N. F.; Zarro, C. A. D.

    2017-12-01

    We study how the entanglement dynamics between two-level atoms is impacted by random fluctuations of the light cone. In our model the two-atom system is envisaged as an open system coupled with an electromagnetic field in the vacuum state. We employ the quantum master equation in the Born-Markov approximation in order to describe the completely positive time evolution of the atomic system. We restrict our investigations to the situation in which the atoms are coupled individually to two spatially separated cavities, one of which displays the emergence of light-cone fluctuations. In such a disordered cavity, we assume that the coefficients of the Klein-Gordon equation are random functions of the spatial coordinates. The disordered medium is modeled by a centered, stationary, and Gaussian process. We demonstrate that disorder has the effect of slowing down the entanglement decay. We conjecture that in a strong-disorder environment the mean life of entangled states can be enhanced in such a way as to almost completely suppress quantum nonlocal decoherence.

  1. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.

  2. Random catalytic reaction networks

    NASA Astrophysics Data System (ADS)

    Stadler, Peter F.; Fontana, Walter; Miller, John H.

    1993-03-01

    We study networks that are a generalization of replicator (or Lotka-Volterra) equations. They model the dynamics of a population of object types whose binary interactions determine the specific type of interaction product. Such a system always reduces its dimension to a subset that contains production pathways for all of its members. The network equation can be rewritten at a level of collectives in terms of two basic interaction patterns: replicator sets and cyclic transformation pathways among sets. Although the system contains well-known cases that exhibit very complicated dynamics, the generic behavior of randomly generated systems is found (numerically) to be extremely robust: convergence to a globally stable rest point. It is easy to tailor networks that display replicator interactions where the replicators are entire self-sustaining subsystems, rather than structureless units. A numerical scan of random systems highlights the special properties of elementary replicators: they reduce the effective interconnectedness of the system, resulting in enhanced competition, and strong correlations between the concentrations.
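
    A minimal Python sketch of a random catalytic network of the kind studied above: a random reaction table assigns a product type to every ordered pair of types, and the resulting replicator-like dynamics on the simplex is integrated with an Euler step. The network size, reaction table, step size, and run length are illustrative assumptions; generically the concentrations relax toward a stable rest point, as the abstract reports.

      import numpy as np

      rng = np.random.default_rng(5)

      n = 8                                       # number of object types (illustrative)
      product = rng.integers(n, size=(n, n))      # random reaction table: (j, k) -> product[j, k]

      def rhs(x):
          """Catalytic network dynamics on the simplex:
          dx_i/dt = sum_{j,k} x_j x_k [product(j,k) = i] - x_i * Phi,
          where Phi is the total production flux (it keeps sum(x) = 1)."""
          flux = np.zeros(n)
          pair = np.outer(x, x)
          for j in range(n):
              for k in range(n):
                  flux[product[j, k]] += pair[j, k]
          return flux - x * flux.sum()

      x = np.full(n, 1.0 / n)
      dt = 0.01
      for _ in range(20_000):
          x = np.clip(x + dt * rhs(x), 0.0, None)
          x /= x.sum()                            # renormalize against round-off drift
      print("concentrations after relaxation:", np.round(x, 3))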

  3. Mean dyadic Green's function for a two layer random medium

    NASA Technical Reports Server (NTRS)

    Zuniga, M. A.

    1981-01-01

    The mean dyadic Green's function for a two-layer random medium with arbitrary three-dimensional correlation functions has been obtained with the zeroth-order solution to the Dyson equation by applying the nonlinear approximation. The propagation of the coherent wave in the random medium is similar to that in an anisotropic medium with different propagation constants for the characteristic transverse electric and transverse magnetic polarizations. In the limit of a laminar structure, two propagation constants for each polarization are found to exist.

  4. [Clinical genealogy and genetic-mathematical study of families of probands with uterine cancer in the Chernovitsy Region].

    PubMed

    Galina, K P; Peresun'ko, A P; Glushchenko, N N

    2001-01-01

    A complex clinical-genealogical and genetic-mathematical investigation of 482 patients with uterine cancer from the Chernovtsy region was carried out. It was shown that uterine cancer in this population is primarily of multifactorial origin. The genetic component of the general susceptibility to the disease was estimated at 11.40 ± 9.40 percent. The recurrence risk of the malignant tumor in progeny was estimated. The results of the investigation provide a basis for developing and implementing measures for the prevention of uterine cancer, and of the oncopathology segregating with it, in relatives of probands.

  5. Continuous Cultivation for Apparent Optimization of Defined Media for Cellulomonas sp. and Bacillus cereus

    PubMed Central

    Summers, R. J.; Boudreaux, D. P.; Srinivasan, V. R.

    1979-01-01

    Steady-state continuous culture was used to optimize lean chemically defined media for a Cellulomonas sp. and Bacillus cereus strain T. Both organisms were extremely sensitive to variations in trace-metal concentrations. However, medium optimization by this technique proved rapid, and multifactor screening was easily conducted by using a minimum of instrumentation. The optimized media supported critical dilution rates of 0.571 and 0.467 h−1 for Cellulomonas and Bacillus, respectively. These values approximated maximum growth rate values observed in batch culture. PMID:16345417

  6. Assessment Methods of Groundwater Overdraft Area and Its Application

    NASA Astrophysics Data System (ADS)

    Dong, Yanan; Xing, Liting; Zhang, Xinhui; Cao, Qianqian; Lan, Xiaoxun

    2018-05-01

    Groundwater is an important source of water, and long-term heavy demand has led to its over-exploitation. Over-exploitation causes many environmental and geological problems. This paper explores the concept of the over-exploitation area, summarizes the natural and social attributes of over-exploitation areas, and expounds the relevant evaluation methods, including single-factor evaluation, multi-factor system analysis, and numerical methods. The different methods are also compared and analyzed. Taking Northern Weifang as an example, the paper then demonstrates the practical application of the assessment methods.

  7. A Simplified Treatment of Brownian Motion and Stochastic Differential Equations Arising in Financial Mathematics

    ERIC Educational Resources Information Center

    Parlar, Mahmut

    2004-01-01

    Brownian motion is an important stochastic process used in modelling the random evolution of stock prices. In their 1973 seminal paper--which led to the awarding of the 1997 Nobel prize in Economic Sciences--Fischer Black and Myron Scholes assumed that the random stock price process is described (i.e., generated) by Brownian motion. Despite its…

  8. On the numbers of images of two stochastic gravitational lensing models

    NASA Astrophysics Data System (ADS)

    Wei, Ang

    2017-02-01

    We study two gravitational lensing models with Gaussian randomness: the continuous mass fluctuation model and the floating black hole model. The lens equations of these models are related to certain random harmonic functions. Using Rice's formula and Gaussian techniques, we obtain the expected numbers of zeros of these functions, which indicate the amounts of images in the corresponding lens systems.

  9. Random element method for numerical modeling of diffusional processes

    NASA Technical Reports Server (NTRS)

    Ghoniem, A. F.; Oppenheim, A. K.

    1982-01-01

    The random element method is a generalization of the random vortex method that was developed for the numerical modeling of momentum transport processes as expressed in terms of the Navier-Stokes equations. The method is based on the concept that random walk, as exemplified by Brownian motion, is the stochastic manifestation of diffusional processes. The algorithm based on this method is grid-free and does not require the diffusion equation to be discretized over a mesh; it is thus devoid of the numerical diffusion associated with finite difference methods. Moreover, the algorithm is self-adaptive in space and explicit in time, resulting in an improved numerical resolution of gradients as well as a simple and efficient computational procedure. The method is applied here to an assortment of problems of diffusion of momentum and energy in one dimension, as well as heat conduction in two dimensions, in order to assess its validity and accuracy. The numerical solutions obtained are found to be in good agreement with exact solutions except for a statistical error introduced by using a finite number of elements; this error can be reduced by increasing the number of elements or by using ensemble averaging over a number of solutions.
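
    A minimal Python sketch of the random-walk view of diffusion that underlies the random element method: a point release evolved by Gaussian steps of variance 2*D*dt, whose empirical variance is compared with the exact value 2*D*t. The diffusivity, time step, and element count are illustrative assumptions, and the momentum and two-dimensional heat-conduction cases of the paper are not reproduced.

      import numpy as np

      rng = np.random.default_rng(6)

      # Grid-free random-walk (Brownian-motion) solution of u_t = D u_xx for a
      # point release at x = 0; the exact solution is a Gaussian of variance 2 D t.
      D, t_final, dt = 0.5, 1.0, 0.01
      n_elements = 50_000

      x = np.zeros(n_elements)                    # all elements start at the source
      for _ in range(int(t_final / dt)):
          x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_elements)

      print("empirical variance:", x.var())       # statistical error shrinks with more elements
      print("exact variance:    ", 2.0 * D * t_final)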

  10. Subharmonic response of a single-degree-of-freedom nonlinear vibro-impact system to a narrow-band random excitation.

    PubMed

    Haiwu, Rong; Wang, Xiangdong; Xu, Wei; Fang, Tong

    2009-08-01

    The subharmonic response of a single-degree-of-freedom nonlinear vibro-impact oscillator with a one-sided barrier to narrow-band random excitation is investigated. The narrow-band random excitation used here is a filtered Gaussian white noise. The analysis is based on a special Zhuravlev transformation, which reduces the system to one without impacts, or velocity jumps, thereby permitting the application of asymptotic averaging over the "fast" variables. The averaged stochastic equations are solved exactly by the method of moments for the mean-square response amplitude for the case of a linear system with zero offset. A perturbation-based moment closure scheme is proposed and the formula for the mean-square amplitude is obtained approximately for the case of a linear system with nonzero offset. The perturbation-based moment closure scheme is used once again to obtain the algebraic equation for the mean-square amplitude of the response for the case of a nonlinear system. The effects of damping, detuning, nonlinear intensity, bandwidth, and magnitudes of random excitations are analyzed. The theoretical analyses are verified by numerical results. Theoretical analyses and numerical simulations show that the peak amplitudes may be strongly reduced at large detunings or large nonlinear intensity.

  11. Lotka-Volterra system in a random environment.

    PubMed

    Dimentberg, Mikhail F

    2002-03-01

    Classical Lotka-Volterra (LV) model for oscillatory behavior of population sizes of two interacting species (predator-prey or parasite-host pairs) is conservative. This may imply unrealistically high sensitivity of the system's behavior to environmental variations. Thus, a generalized LV model is considered with the equation for preys' reproduction containing the following additional terms: quadratic "damping" term that accounts for interspecies competition, and term with white-noise random variations of the preys' reproduction factor that simulates the environmental variations. An exact solution is obtained for the corresponding Fokker-Planck-Kolmogorov equation for stationary probability densities (PDF's) of the population sizes. It shows that both population sizes are independent gamma-distributed stationary random processes. Increasing level of the environmental variations does not lead to extinction of the populations. However it may lead to an intermittent behavior, whereby one or both population sizes experience very rare and violent short pulses or outbreaks while remaining on a very low level most of the time. This intermittency is described analytically by direct use of the solutions for the PDF's as well as by applying theory of excursions of random functions and by predicting PDF of peaks in the predators' population size.

  12. Lotka-Volterra system in a random environment

    NASA Astrophysics Data System (ADS)

    Dimentberg, Mikhail F.

    2002-03-01

    Classical Lotka-Volterra (LV) model for oscillatory behavior of population sizes of two interacting species (predator-prey or parasite-host pairs) is conservative. This may imply unrealistically high sensitivity of the system's behavior to environmental variations. Thus, a generalized LV model is considered with the equation for preys' reproduction containing the following additional terms: quadratic ``damping'' term that accounts for interspecies competition, and term with white-noise random variations of the preys' reproduction factor that simulates the environmental variations. An exact solution is obtained for the corresponding Fokker-Planck-Kolmogorov equation for stationary probability densities (PDF's) of the population sizes. It shows that both population sizes are independent γ-distributed stationary random processes. Increasing level of the environmental variations does not lead to extinction of the populations. However it may lead to an intermittent behavior, whereby one or both population sizes experience very rare and violent short pulses or outbreaks while remaining on a very low level most of the time. This intermittency is described analytically by direct use of the solutions for the PDF's as well as by applying theory of excursions of random functions and by predicting PDF of peaks in the predators' population size.
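
    A minimal Euler-Maruyama sketch in Python of the generalized Lotka-Volterra model described in the two records above, with a quadratic damping (competition) term in the prey equation and white-noise variation of the prey reproduction factor; all coefficients and the noise intensity are illustrative assumptions, not values taken from the paper.

      import numpy as np

      rng = np.random.default_rng(7)

      # Euler-Maruyama integration of a generalized Lotka-Volterra system with a
      # quadratic "damping" (competition) term in the prey equation and white-noise
      # variation of the prey reproduction factor; all coefficients are illustrative.
      a, b, c, d, e, sigma = 1.0, 1.0, 0.2, 1.0, 1.0, 0.3
      dt, steps = 1e-3, 200_000
      x, y = 1.0, 1.0                             # prey and predator sizes
      prey = np.empty(steps)
      for k in range(steps):
          dW = np.sqrt(dt) * rng.standard_normal()
          x += x * ((a - b * y - c * x) * dt + sigma * dW)
          y += y * (e * x - d) * dt
          x, y = max(x, 1e-12), max(y, 1e-12)     # keep population sizes positive
          prey[k] = x

      print("time-averaged prey population size:", prey.mean())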

  13. Multiscale functions, scale dynamics, and applications to partial differential equations

    NASA Astrophysics Data System (ADS)

    Cresson, Jacky; Pierret, Frédéric

    2016-05-01

    Modeling phenomena from experimental data always begins with a choice of hypotheses on the observed dynamics such as determinism, randomness, and differentiability. Depending on these choices, different behaviors can be observed. The natural question associated with the modeling problem is the following: "With a finite set of data concerning a phenomenon, can we recover its underlying nature?" From this problem, we introduce in this paper the definition of multi-scale functions, scale calculus, and scale dynamics based on the time scale calculus [see Bohner, M. and Peterson, A., Dynamic Equations on Time Scales: An Introduction with Applications (Springer Science & Business Media, 2001)], which is used to introduce the notion of scale equations. These definitions will be illustrated on the multi-scale Okamoto functions. Scale equations are analysed using scale regimes and the notion of an asymptotic model for a scale equation under a particular scale regime. The introduced formalism explains why a single scale equation can produce distinct continuous models even if the equation is scale invariant. Typical examples of such equations are given by the scale Euler-Lagrange equation. We illustrate our results using the scale Newton's equation, which gives rise to a non-linear diffusion equation or a non-linear Schrödinger equation as asymptotic continuous models depending on the particular fractional scale regime which is considered.

  14. Stochastic differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sobczyk, K.

    1990-01-01

    This book provides a unified treatment of both regular (or random) and Ito stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed, in particular an insight is given into both the mathematical structure and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Ito's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.

  15. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bell, John; Chorin, Alexandre J.; Crutchfield, William

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  16. The functional implications of motor, cognitive, psychiatric, and social problem-solving states in Huntington's disease.

    PubMed

    Van Liew, Charles; Gluhm, Shea; Goldstein, Jody; Cronan, Terry A; Corey-Bloom, Jody

    2013-01-01

    Huntington's disease (HD) is a genetic, neurodegenerative disorder characterized by motor, cognitive, and psychiatric dysfunction. In HD, the inability to solve problems successfully affects not only disease coping, but also interpersonal relationships, judgment, and independent living. The aim of the present study was to examine social problem-solving (SPS) in well-characterized HD and at-risk (AR) individuals and to examine its unique and conjoint effects with motor, cognitive, and psychiatric states on functional ratings. Sixty-three participants, 31 HD and 32 gene-positive AR, were included in the study. Participants completed the Social Problem-Solving Inventory-Revised: Long (SPSI-R:L), a 52-item, reliable, standardized measure of SPS. Items are aggregated under five scales (Positive, Negative, and Rational Problem-Solving; Impulsivity/Carelessness and Avoidance Styles). Participants also completed the Unified Huntington's Disease Rating Scale functional, behavioral, and cognitive assessments, as well as additional neuropsychological examinations and the Symptom Checklist-90-Revised (SCL-90R). A structural equation model was used to examine the effects of motor, cognitive, psychiatric, and SPS states on functionality. The multifactor structural model fit well descriptively. Cognitive and motor states uniquely and significantly predicted function in HD; however, neither psychiatric nor SPS states did. SPS was, however, significantly related to motor, cognitive, and psychiatric states, suggesting that it may bridge the correlative gap between psychiatric and cognitive states in HD. SPS may be worth assessing in conjunction with the standard gamut of clinical assessments in HD. Suggestions for future research and implications for patients, families, caregivers, and clinicians are discussed.

  17. [Spatial distribution of birth defects among children aged 0 to 5 years and its relationship with soil chemical elements in Chongqing].

    PubMed

    Dong, Yan; Zhong, Zhao-hui; Li, Hong; Li, Jie; Wang, Ying-xiong; Peng, Bin; Zhang, Mao-zhong; Huang, Qiao; Yan, Ju; Xu, Fei-long

    2013-10-01

    To explore the correlation between the incidence of birth defects and the contents of soil elements, so as to provide a scientific basis for screening pathogenic factors that induce birth defects and for developing related preventive and control strategies. MapInfo 7.0 software was used to draw spatial distribution maps of the incidence rates of birth defects and of the contents of 11 chemical elements in soil in the 33 studied areas. Variables on the two maps were superposed to analyze the spatial correlation. SAS 8.0 software was used for single-factor, multi-factor, and principal component analyses and for a comprehensive evaluation of the degrees of relevance. The incidence rates of birth defects shown in the spatial distribution maps were negatively correlated, to varying degrees, with anomalies of soil chemical elements including copper, chromium, iodine, selenium, and zinc, and positively correlated with the levels of lead. Results from the principal component regression equation indicated that the contents of copper (0.002), arsenic (-0.07), cadmium (0.05), chromium (-0.001), zinc (0.001), iodine (-0.03), lead (0.08), and fluorine (-0.002) might serve as important factors related to the prevalence of birth defects. Through the study of spatial distribution, we found that the incidence rates of birth defects were related to the contents of copper, chromium, iodine, selenium, zinc, and lead in soil, and that the contents of chromium, iodine, and lead might lead to the occurrence of birth defects.

  18. Does the North Staffordshire slot system control demand of orthopaedic referrals from primary care?

    PubMed Central

    Bridgman, Stephen; Li, Xuefang; Mackenzie, Gilbert; Dawes, Peter

    2005-01-01

    Background Attempts to manage general practice demand for orthopaedic outpatient consultations have been made in several areas of the NHS, with little robust evidence on whether or not they work. Aim To evaluate the effect of the North Staffordshire ‘orthopaedic slot system’ on the demand for general practice referrals to orthopaedic outpatients. Method A prospective study of 12 general practices in the slot system, 24 controls, and the 63 other general practices in North Staffordshire. Comparison periods were the baseline year (0); the first calendar year (1); and the first half of the second calendar year (2). A multifactor linear regression model was used. Results Mean referral rate decreased 22% in the slot group in period 1, and was maintained in period 2 (9.40, 7.29, 7.31 referrals per 10 000 population per month for periods 0, 1 and 2, respectively). The control and other groups showed a small decrease in period 1, but in period 2 higher referral rates were observed. The reduction in referrals of 20–40% in participating practices compared to other practices equates to 2–4 referrals per 10 000 patients per month. Conclusions Our study suggests that practices willing and able to take up an offer of a slot system for managing their orthopaedic referrals will be able to significantly reduce referral rates for their patients when compared to similar practices who do not. Further research on the generalisability, effectiveness and cost-effectiveness of such systems is warranted. PMID:16176738

  19. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies

    PubMed Central

    Rukhin, Andrew L.

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when methods variances are considered to be known an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583

  20. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    PubMed

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when methods variances are considered to be known an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  1. Investigating market efficiency through a forecasting model based on differential equations

    NASA Astrophysics Data System (ADS)

    de Resende, Charlene C.; Pereira, Adriano C. M.; Cardoso, Rodrigo T. N.; de Magalhães, A. R. Bosco

    2017-05-01

    A new differential equation based model for stock price trend forecast is proposed as a tool to investigate efficiency in an emerging market. Its predictive power showed statistically to be higher than the one of a completely random model, signaling towards the presence of arbitrage opportunities. Conditions for accuracy to be enhanced are investigated, and application of the model as part of a trading strategy is discussed.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sibiryakov, B. P., E-mail: sibiryakovbp@ipgg.sbras.ru; Novosibirsk State University, Novosibirsk, 630090

    This paper studies properties of a continuum with structure. The characteristic size of the structure governs the fact that difference relations are nonautomatically transformed into differential ones. It is impossible to consider an infinitesimal volume of a body, to which the major conservation laws could be applied, because the minimum representative volume of the body must contain at least a few elementary microstructures. The corresponding equations of motion are equations of infinite order, solutions of which include, along with usual sound waves, unusual waves with abnormally low velocities without a lower limit. It is shown that in such media weak perturbations can increase or decrease outside the limits. The number of complex roots of the corresponding dispersion equation, which can be interpreted as the number of unstable solutions, depends on the specific surface of cracks and is an almost linear dependence on a logarithmic scale, as in the seismological Gutenberg–Richter law. If the distance between one pore (crack) to another one is a random value with some distribution, we must write another dispersion equation and examine different scenarios depending on the statistical characteristics of the random distribution. In this case, there are sufficient deviations from the Gutenberg–Richter law and this theoretical result corresponds to some field and laboratory observations.

  3. Modelling Evolutionary Algorithms with Stochastic Differential Equations.

    PubMed

    Heredia, Jorge Pérez

    2017-11-20

    There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that these are especially suitable for the analysis of fixed budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm.

  4. Stochastic uncertainty analysis for solute transport in randomly heterogeneous media using a Karhunen‐Loève‐based moment equation approach

    USGS Publications Warehouse

    Liu, Gaisheng; Lu, Zhiming; Zhang, Dongxiao

    2007-01-01

    A new approach has been developed for solving solute transport problems in randomly heterogeneous media using the Karhunen‐Loève‐based moment equation (KLME) technique proposed by Zhang and Lu (2004). The KLME approach combines the Karhunen‐Loève decomposition of the underlying random conductivity field and the perturbative and polynomial expansions of dependent variables including the hydraulic head, flow velocity, dispersion coefficient, and solute concentration. The equations obtained in this approach are sequential, and their structure is formulated in the same form as the original governing equations such that any existing simulator, such as Modular Three‐Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems (MT3DMS), can be directly applied as the solver. Through a series of two‐dimensional examples, the validity of the KLME approach is evaluated against the classical Monte Carlo simulations. Results indicate that under the flow and transport conditions examined in this work, the KLME approach provides an accurate representation of the mean concentration. For the concentration variance, the accuracy of the KLME approach is good when the conductivity variance is 0.5. As the conductivity variance increases up to 1.0, the mismatch on the concentration variance becomes large, although the mean concentration can still be accurately reproduced by the KLME approach. Our results also indicate that when the conductivity variance is relatively large, neglecting the effects of the cross terms between velocity fluctuations and local dispersivities, as done in some previous studies, can produce noticeable errors, and a rigorous treatment of the dispersion terms becomes more appropriate.
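
    A minimal Python sketch of the Karhunen-Loève representation on which the KLME approach builds: the covariance matrix of a log-conductivity field on a 1D grid is eigendecomposed and one sample is formed from the leading modes. The covariance model, grid, and truncation level are illustrative assumptions, and the moment-equation machinery itself is not reproduced.

      import numpy as np

      rng = np.random.default_rng(8)

      # Karhunen-Loeve representation of a log-conductivity field on a 1D grid with
      # an exponential covariance; grid size, variance, correlation length, and the
      # number of retained modes are illustrative.
      n, var, corr_len, n_modes = 100, 1.0, 0.2, 10
      x = np.linspace(0.0, 1.0, n)
      C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

      eigval, eigvec = np.linalg.eigh(C)
      idx = np.argsort(eigval)[::-1][:n_modes]     # keep the largest eigenvalues
      lam, phi = eigval[idx], eigvec[:, idx]

      xi = rng.standard_normal(n_modes)            # independent standard normal coefficients
      log_K = phi @ (np.sqrt(lam) * xi)            # one truncated KL sample
      print("variance fraction captured by", n_modes, "modes:", lam.sum() / eigval.sum())
      print("sample std of log K:", log_K.std())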

  5. Complete Numerical Solution of the Diffusion Equation of Random Genetic Drift

    PubMed Central

    Zhao, Lei; Yue, Xingye; Waxman, David

    2013-01-01

    A numerical method is presented to solve the diffusion equation for the random genetic drift that occurs at a single unlinked locus with two alleles. The method was designed to conserve probability, and the resulting numerical solution represents a probability distribution whose total probability is unity. We describe solutions of the diffusion equation whose total probability is unity as complete. Thus the numerical method introduced in this work produces complete solutions, and such solutions have the property that whenever fixation and loss can occur, they are automatically included within the solution. This feature demonstrates that the diffusion approximation can describe not only internal allele frequencies, but also the boundary frequencies zero and one. The numerical approach presented here constitutes a single inclusive framework from which to perform calculations for random genetic drift. It has a straightforward implementation, allowing it to be applied to a wide variety of problems, including those with time-dependent parameters, such as changing population sizes. As tests and illustrations of the numerical method, it is used to determine: (i) the probability density and time-dependent probability of fixation for a neutral locus in a population of constant size; (ii) the probability of fixation in the presence of selection; and (iii) the probability of fixation in the presence of selection and demographic change, the latter in the form of a changing population size. PMID:23749318
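
    The probability-conserving finite-difference scheme of the paper is not reproduced here, but the following Python sketch checks the same quantities by direct Wright-Fisher Monte Carlo for a neutral locus: binomial resampling each generation, with fixation and loss read off at the absorbing boundaries. Population size, initial frequency, and the time horizon are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(9)

      # Wright-Fisher Monte Carlo for a neutral biallelic locus: binomial resampling
      # each generation, with fixation and loss read off at the absorbing boundaries
      # (allele frequencies 0 and 1).  Population size, initial frequency and the
      # horizon are illustrative.
      N, p0, generations, replicates = 100, 0.3, 400, 20_000
      counts = np.full(replicates, int(p0 * 2 * N))

      for _ in range(generations):
          p = counts / (2 * N)
          counts = rng.binomial(2 * N, p)          # drift of the allele frequency

      p_fix = np.mean(counts == 2 * N)
      p_loss = np.mean(counts == 0)
      print("P(fixation) ~", p_fix, " (diffusion prediction for long times: p0 =", p0, ")")
      print("P(loss)     ~", p_loss, "  fraction absorbed so far:", p_fix + p_loss)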

  6. Chapman-Enskog expansion for the Vicsek model of self-propelled particles

    NASA Astrophysics Data System (ADS)

    Ihle, Thomas

    2016-08-01

    Using the standard Vicsek model, I show how the macroscopic transport equations can be systematically derived from microscopic collision rules. The approach starts with the exact evolution equation for the N-particle probability distribution and, after making the mean-field assumption of molecular chaos, leads to a multi-particle Enskog-type equation. This equation is treated by a non-standard Chapman-Enskog expansion to extract the macroscopic behavior. The expansion includes terms up to third order in a formal expansion parameter ɛ, and involves a fast time scale. A self-consistent closure of the moment equations is presented that leads to a continuity equation for the particle density and a Navier-Stokes-like equation for the momentum density. Expressions for all transport coefficients in these macroscopic equations are given explicitly in terms of microscopic parameters of the model. The transport coefficients depend on specific angular integrals which are evaluated asymptotically in the limit of infinitely many collision partners, using an analogy to a random walk. The consistency of the Chapman-Enskog approach is checked by an independent calculation of the shear viscosity using a Green-Kubo relation.

  7. Additive noise-induced Turing transitions in spatial systems with application to neural fields and the Swift Hohenberg equation

    NASA Astrophysics Data System (ADS)

    Hutt, Axel; Longtin, Andre; Schimansky-Geier, Lutz

    2008-05-01

    This work studies the spatio-temporal dynamics of a generic integral-differential equation subject to additive random fluctuations. It introduces a combination of the stochastic center manifold approach for stochastic differential equations and the adiabatic elimination for Fokker-Planck equations, and studies analytically the systems’ stability near Turing bifurcations. In addition two types of fluctuation are studied, namely fluctuations uncorrelated in space and time, and global fluctuations, which are constant in space but uncorrelated in time. We show that the global fluctuations shift the Turing bifurcation threshold. This shift is proportional to the fluctuation variance. Applications to a neural field equation and the Swift-Hohenberg equation reveal the shift of the bifurcation to larger control parameters, which represents a stabilization of the system. All analytical results are confirmed by numerical simulations of the occurring mode equations and the full stochastic integral-differential equation. To gain some insight into experimental manifestations, the sum of uncorrelated and global additive fluctuations is studied numerically and the analytical results on global fluctuations are confirmed qualitatively.

  8. Numerical optimization using flow equations.

    PubMed

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.

  9. Numerical optimization using flow equations

    NASA Astrophysics Data System (ADS)

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.

  10. A strictly Markovian expansion for plasma turbulence theory

    NASA Technical Reports Server (NTRS)

    Jones, F. C.

    1976-01-01

    The collision operator that appears in the equation of motion for a particle distribution function that was averaged over an ensemble of random Hamiltonians is non-Markovian. It is non-Markovian in that it involves a propagated integral over the past history of the ensemble averaged distribution function. All formal expansions of this nonlinear collision operator to date preserve this non-Markovian character term by term yielding an integro-differential equation that must be converted to a diffusion equation by an additional approximation. An expansion is derived for the collision operator that is strictly Markovian to any finite order and yields a diffusion equation as the lowest nontrivial order. The validity of this expansion is seen to be the same as that of the standard quasilinear expansion.

  11. Solving large test-day models by iteration on data and preconditioned conjugate gradient.

    PubMed

    Lidauer, M; Strandén, I; Mäntysaari, E A; Pösö, J; Kettunen, A

    1999-12-01

    A preconditioned conjugate gradient method was implemented into an iteration on data program for estimation of breeding values, and its convergence characteristics were studied. An algorithm was used as a reference in which one fixed effect was solved by the Gauss-Seidel method, and other effects were solved by a second-order Jacobi method. Implementation of the preconditioned conjugate gradient required storing four vectors (size equal to the number of unknowns in the mixed model equations) in random access memory and reading the data at each round of iteration. The preconditioner comprised diagonal blocks of the coefficient matrix. Comparison of algorithms was based on solutions of mixed model equations obtained by a single-trait animal model and a single-trait, random regression test-day model. Data sets for both models used milk yield records of primiparous Finnish dairy cows. Animal model data comprised 665,629 lactation milk yields, and random regression test-day model data comprised 6,732,765 test-day milk yields. Both models included pedigree information for 1,099,622 animals. The animal model [random regression test-day model] required 122 [305] rounds of iteration to converge with the reference algorithm, but only 88 [149] were required with the preconditioned conjugate gradient. To solve the random regression test-day model with the preconditioned conjugate gradient required 237 megabytes of random access memory and took 14% of the computation time needed by the reference algorithm.
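
    As a concrete illustration of the iteration described above, here is a minimal Python sketch of a preconditioned conjugate gradient solver with a (block-)diagonal preconditioner. The mixed-model equations and the iteration-on-data bookkeeping of the study are not reproduced; the small test system and the simple Jacobi preconditioner are stand-ins.

      import numpy as np

      # Hedged sketch: preconditioned conjugate gradient (PCG) with a diagonal
      # preconditioner, the simplest block size. A and b are illustrative stand-ins,
      # not the mixed model equations of the study.

      def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv @ r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv @ r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])        # small SPD test system
      b = np.array([1.0, 2.0])
      M_inv = np.diag(1.0 / np.diag(A))             # Jacobi (diagonal) preconditioner
      print(pcg(A, b, M_inv))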

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.

    In this paper, we present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Lastly, our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.

  13. Epidemics in networks: a master equation approach

    NASA Astrophysics Data System (ADS)

    Cotacallapa, M.; Hase, M. O.

    2016-02-01

    A problem closely related to epidemiology, where a subgraph of ‘infected’ links is defined inside a larger network, is investigated. This subgraph is generated from the underlying network by a random variable, which decides whether a link is able to propagate a disease/information. The relaxation timescale of this random variable is examined in both annealed and quenched limits, and the effectiveness of propagation of disease/information is analyzed. The dynamics of the model is governed by a master equation and two types of underlying network are considered: one is scale-free and the other has exponential degree distribution. We have shown that the relaxation timescale of the contagion variable has a major influence on the topology of the subgraph of infected links, which determines the efficiency of spreading of disease/information over the network.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Chong

    We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.

  15. Sonic boom interaction with turbulence

    NASA Technical Reports Server (NTRS)

    Rusak, Zvi; Giddings, Thomas E.

    1994-01-01

    A recently developed transonic small-disturbance model is used to analyze the interactions of random disturbances with a weak shock. The model equation is an extended form of the classic small-disturbance equation for unsteady transonic aerodynamics. It shows that diffraction effects, nonlinear steepening effects, focusing and caustic effects, and randomly induced vorticity fluctuations interact simultaneously to determine the development of the shock wave in space and time and the pressure field behind it. A finite-difference algorithm to solve the mixed-type elliptic-hyperbolic flow around the shock wave is presented. Numerical calculations of shock wave interactions with various deterministic vorticity and temperature disturbances result in complicated shock wave structures and describe peaked as well as rounded pressure signatures behind the shock front, as were recorded in experiments of sonic booms propagating through atmospheric turbulence.

  16. Mean-Potential Law in Evolutionary Games.

    PubMed

    Nałęcz-Jawecki, Paweł; Miękisz, Jacek

    2018-01-12

    The Letter presents a novel way to connect random walks, stochastic differential equations, and evolutionary game theory. We introduce a new concept of a potential function for discrete-space stochastic systems. It is based on a correspondence between one-dimensional stochastic differential equations and random walks, which may be exact not only in the continuous limit but also in finite-state spaces. Our method is useful for computation of fixation probabilities in discrete stochastic dynamical systems with two absorbing states. We apply it to evolutionary games, formulating two simple and intuitive criteria for evolutionary stability of pure Nash equilibria in finite populations. In particular, we show that the 1/3 law of evolutionary games, introduced by Nowak et al. [Nature, 2004], follows from a more general mean-potential law.
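
    For context, the quantity targeted above, the fixation probability of a one-dimensional birth-death process with two absorbing states, can also be computed from the classical ratio formula. The Python sketch below uses that standard formula rather than the Letter's potential-function construction; the neutral Moran-type rates are illustrative assumptions.

      import numpy as np

      # Hedged sketch: classical fixation probability of a one-dimensional birth-death
      # process with absorbing states at 0 and N, using the standard ratio formula
      # (not the Letter's mean-potential method).

      def fixation_probability(t_plus, t_minus, i):
          # t_plus[j-1], t_minus[j-1]: probabilities of j -> j+1 and j -> j-1, j = 1..N-1
          gamma = np.asarray(t_minus, dtype=float) / np.asarray(t_plus, dtype=float)
          cumprod = np.cumprod(gamma)                  # prod_{j <= k} gamma_j
          denom = 1.0 + cumprod.sum()
          numer = 1.0 + cumprod[:i - 1].sum()          # empty sum when i = 1
          return numer / denom

      # Neutral drift in a population of N = 10: a single mutant fixes with probability 1/N.
      N = 10
      t_p = [(N - j) * j / N**2 for j in range(1, N)]
      t_m = t_p                                        # neutral: up and down rates equal
      print(fixation_probability(t_p, t_m, 1))         # ~0.1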

  17. Comparison of the prevalence and mortality risk of CKD in Australia using the CKD Epidemiology Collaboration (CKD-EPI) and Modification of Diet in Renal Disease (MDRD) Study GFR estimating equations: the AusDiab (Australian Diabetes, Obesity and Lifestyle) Study.

    PubMed

    White, Sarah L; Polkinghorne, Kevan R; Atkins, Robert C; Chadban, Steven J

    2010-04-01

    The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation is more accurate than the Modification of Diet in Renal Disease (MDRD) Study equation. We applied both equations in a cohort representative of the Australian adult population. Population-based cohort study. 11,247 randomly selected noninstitutionalized Australians aged ≥25 years who attended a physical examination during the baseline AusDiab (Australian Diabetes, Obesity and Lifestyle) Study survey. Glomerular filtration rate (GFR) was estimated using the MDRD Study and CKD-EPI equations. Kidney damage was defined as urine albumin-creatinine ratio ≥2.5 mg/mmol in men and ≥3.5 mg/mmol in women or urine protein-creatinine ratio ≥0.20 mg/mg. Chronic kidney disease (CKD) was defined as estimated GFR (eGFR) <60 mL/min/1.73 m(2) or kidney damage. Participants were classified into 3 mutually exclusive subgroups: CKD according to both equations; CKD according to the MDRD Study equation, but no CKD according to the CKD-EPI equation; and no CKD according to both equations. All-cause mortality was examined in subgroups with and without CKD. Serum creatinine and urinary albumin, protein, and creatinine were measured on a random spot morning urine sample. 266 participants identified as having CKD according to the MDRD Study equation were reclassified to no CKD according to the CKD-EPI equation (estimated prevalence, 1.9%; 95% CI, 1.4-2.6). All had an eGFR ≥45 mL/min/1.73 m(2) using the MDRD Study equation. Reclassified individuals were predominantly women with a favorable cardiovascular risk profile. The proportion of reclassified individuals with a Framingham-predicted 10-year cardiovascular risk ≥30% was 7.2%, compared with 7.9% of the group with no CKD according to both equations and 45.3% of individuals retained in stage 3a using both equations. There was no evidence of increased all-cause mortality in the reclassified group (age- and sex-adjusted hazard ratio vs no CKD, 1.01; 95% CI, 0.62-1.97). Using the MDRD Study equation, the prevalence of CKD in the Australian population aged ≥25 years was 13.4% (95% CI, 11.1-16.1). Using the CKD-EPI equation, the prevalence was 11.5% (95% CI, 9.42-14.1). A limitation is the use of single measurements of serum creatinine and urinary markers. The lower estimated prevalence of CKD using the CKD-EPI equation is caused by reclassification of low-risk individuals. Copyright 2010 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
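
    For reference, a hedged Python sketch of the two estimating equations compared above is given below, using the commonly published coefficients of the 4-variable IDMS-traceable MDRD Study equation and the 2009 CKD-EPI creatinine equation. The coefficients should be verified against the original publications before any clinical or research use, and the example inputs are illustrative only.

      # Hedged sketch of the MDRD Study and CKD-EPI (2009) GFR estimating equations,
      # with commonly published coefficients; verify against the original papers before
      # use. Serum creatinine (scr) is in mg/dL, age in years, eGFR in mL/min/1.73 m2.

      def egfr_mdrd(scr, age, female, black=False):
          gfr = 175.0 * scr**-1.154 * age**-0.203
          if female:
              gfr *= 0.742
          if black:
              gfr *= 1.212
          return gfr

      def egfr_ckd_epi(scr, age, female, black=False):
          kappa = 0.7 if female else 0.9
          alpha = -0.329 if female else -0.411
          gfr = 141.0 * min(scr / kappa, 1.0)**alpha * max(scr / kappa, 1.0)**-1.209 * 0.993**age
          if female:
              gfr *= 1.018
          if black:
              gfr *= 1.159
          return gfr

      # A low-risk profile near the stage 3a threshold can land on different sides
      # of 60 mL/min/1.73 m2 under the two equations:
      print(egfr_mdrd(1.0, 70, female=True), egfr_ckd_epi(1.0, 70, female=True))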

  18. An Alternative to the Stay/Switch Equation Assessed When Using a Changeover-Delay

    PubMed Central

    MacDonall, James S.

    2015-01-01

    An alternative to the generalized matching equation for understanding concurrent performances is the stay/switch model. For the stay/switch model, the important events are the contingencies and behaviors at each alternative. The current experiment compares the descriptions by two stay/switch equations, the original, empirically derived stay/switch equation and a more theoretically derived equation based on ratios of stay to switch responses matching ratios of stay to switch reinforcers. The present experiment compared descriptions by the original stay/switch equation when using and not using a changeover delay. It also compared descriptions by the more theoretical equation with and without a changeover delay. Finally, it compared descriptions of the concurrent performances by these two equations. Rats were trained in 15 conditions on identical concurrent random-interval schedules in each component of a multiple schedule. A COD operated in only one component. There were no consistent differences in the variance accounted for by each equation of concurrent performances whether or not a COD was used. The simpler equation found greater sensitivity to stay than to switch reinforcers. It also found a COD eliminated the influence of switch reinforcers. Because estimates of parameters were more meaningful when using the more theoretical stay/switch equation it is preferred. PMID:26299548
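
    For comparison, the baseline generalized matching equation mentioned above, log(B1/B2) = a·log(R1/R2) + log b, can be fit by ordinary least squares. The sketch below does exactly that on hypothetical condition means; the stay/switch equations themselves are not reproduced here, and all numbers are made up for illustration.

      import numpy as np

      # Hedged sketch: least-squares fit of the generalized matching equation,
      # log(B1/B2) = a*log(R1/R2) + log b, where B = responses and R = reinforcers.
      # This is the baseline model the stay/switch model is contrasted with.

      def fit_generalized_matching(B1, B2, R1, R2):
          x = np.log(np.asarray(R1, dtype=float) / np.asarray(R2, dtype=float))
          y = np.log(np.asarray(B1, dtype=float) / np.asarray(B2, dtype=float))
          a, log_b = np.polyfit(x, y, 1)              # sensitivity a and bias log b
          y_hat = a * x + log_b
          vac = 1.0 - np.var(y - y_hat) / np.var(y)   # proportion of variance accounted for
          return a, log_b, vac

      # Hypothetical condition means (responses and reinforcers per session).
      B1 = [120, 90, 60, 40, 25]; B2 = [30, 45, 60, 80, 100]
      R1 = [40, 30, 20, 12, 8];   R2 = [10, 15, 20, 25, 32]
      print(fit_generalized_matching(B1, B2, R1, R2))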

  19. An alternative to the stay/switch equation assessed when using a changeover-delay.

    PubMed

    MacDonall, James S

    2015-11-01

    An alternative to the generalized matching equation for understanding concurrent performances is the stay/switch model. For the stay/switch model, the important events are the contingencies and behaviors at each alternative. The current experiment compares the descriptions by two stay/switch equations, the original, empirically derived stay/switch equation and a more theoretically derived equation based on ratios of stay to switch responses matching ratios of stay to switch reinforcers. The present experiment compared descriptions by the original stay/switch equation when using and not using a changeover delay. It also compared descriptions by the more theoretical equation with and without a changeover delay. Finally, it compared descriptions of the concurrent performances by these two equations. Rats were trained in 15 conditions on identical concurrent random-interval schedules in each component of a multiple schedule. A COD operated in only one component. There were no consistent differences in the variance accounted for by each equation of concurrent performances whether or not a COD was used. The simpler equation found greater sensitivity to stay than to switch reinforcers. It also found a COD eliminated the influence of switch reinforcers. Because estimates of parameters were more meaningful when using the more theoretical stay/switch equation it is preferred. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2014-08-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function with exponential decay that is nowhere zero in an infinite domain, while the level-set method, which is a front-tracking technique, generates a sharp function that is non-zero only inside a compact domain. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random nature and are extremely important in wildland fire propagation. Consequently, the fire front acquires a random character, too; hence, a tracking method for random fronts is needed. In particular, the level-set contour is randomised here according to the probability density function of the interface particle displacement. When the level-set method is developed for tracking a front interface with a random motion, the resulting averaged process turns out to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key and characterising role that is typical of the level-set approach. The resulting model is suitable for simulating effects due to turbulent convection, such as fire flank and backing fire, the faster fire spread due to hot-air pre-heating and ember landing, and the fire overcoming a fire-break zone, a case not resolved by models based on the level-set method. Moreover, the proposed formulation yields a correction to the formula for the rate of spread that accounts for the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour. The presented study constitutes a proof of concept and needs to be subjected to future validation.
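
    A minimal one-dimensional sketch of the randomisation step is given below: the sharp indicator of the burned region produced by a level-set front is averaged over a Gaussian displacement probability density, yielding a smooth effective indicator of reaction-diffusion type. The grid, front position, and displacement spread are illustrative assumptions, not the paper's parameterisation.

      import numpy as np

      # Hedged sketch: smooth the sharp burned-region indicator from a level-set front
      # with the probability density of the particle displacement (Gaussian here).

      x = np.linspace(-10.0, 10.0, 401)
      sharp = (np.abs(x) <= 2.0).astype(float)              # burned region behind the front

      sigma = 1.0                                            # spread of turbulence / fire-spotting displacement
      kernel = np.exp(-0.5 * (x / sigma) ** 2)
      kernel /= kernel.sum()
      effective = np.convolve(sharp, kernel, mode="same")    # smooth effective burned fraction

      print(effective[::50])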

  1. A model of gene expression based on random dynamical systems reveals modularity properties of gene regulatory networks.

    PubMed

    Antoneli, Fernando; Ferreira, Renata C; Briones, Marcelo R S

    2016-06-01

    Here we propose a new approach to modeling gene expression based on the theory of random dynamical systems (RDS) that provides a general coupling prescription between the nodes of any given regulatory network, given that the dynamics of each node is modeled by an RDS. The main virtues of this approach are the following: (i) it provides a natural way to obtain arbitrarily large networks by coupling together simple basic pieces, thus revealing the modularity of regulatory networks; (ii) the assumptions about the stochastic processes used in the modeling are fairly general, in the sense that the only requirement is stationarity; (iii) there is a well-developed mathematical theory, a blend of smooth dynamical systems theory, ergodic theory and stochastic analysis, that allows one to extract relevant dynamical and statistical information without solving the system; (iv) one may obtain the classical rate equations from the corresponding stochastic version by averaging the dynamic random variables (small noise limit). It is important to emphasize that, unlike the deterministic case, where coupling two equations is a trivial matter, coupling two RDS is non-trivial, especially in our case, where the coupling is performed between a state variable of one gene and the switching stochastic process of another gene; hence, it is not a priori true that the resulting coupled system will satisfy the definition of a random dynamical system. We provide the necessary arguments ensuring that our coupling prescription does indeed furnish a coupled regulatory network of random dynamical systems. Finally, the fact that classical rate equations are the small noise limit of our stochastic model ensures that any validation or prediction made on the basis of the classical theory is also a validation or prediction of our model. We illustrate our framework with some simple examples of single-gene systems and network motifs. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Anomalous transport in turbulent plasmas and continuous time random walks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balescu, R.

    1995-05-01

    The possibility of modeling anomalous transport problems in a turbulent plasma by a purely stochastic process is investigated. The theory of continuous time random walks (CTRWs) is briefly reviewed. It is shown that a particular class, called the standard long-tail CTRWs, is of special interest for the description of subdiffusive transport. Its evolution is described by a non-Markovian diffusion equation that is constructed in such a way as to yield exact values for all the moments of the density profile. The concept of a CTRW model is compared to an exact solution of a simple test problem: transport of charged particles in a fluctuating magnetic field in the limit of infinite perpendicular correlation length. Although the well-known behavior of the mean square displacement proportional to t^(1/2) is easily recovered, the exact density profile cannot be modeled by a CTRW. However, the quasilinear approximation of the kinetic equation has the form of a non-Markovian diffusion equation and can thus be generated by a CTRW.
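
    To illustrate the subdiffusive behaviour mentioned above, the following sketch simulates a minimal continuous time random walk with a long-tailed waiting-time distribution and prints the mean square displacement at several observation times. It is not the magnetic-field test problem of the abstract; the tail index and jump statistics are illustrative assumptions.

      import numpy as np

      # Hedged sketch: a minimal CTRW with waiting times whose tail decays as
      # P(tau > s) ~ s**(-beta), 0 < beta < 1, which yields <x^2> growing like t**beta
      # rather than linearly in t.

      rng = np.random.default_rng(0)

      def ctrw_msd(t_obs, n_walkers=2000, beta=0.5):
          pos = np.zeros(n_walkers)
          for i in range(n_walkers):
              t, x = 0.0, 0.0
              while True:
                  t += 1.0 + rng.pareto(beta)          # heavy-tailed waiting time (min 1)
                  if t > t_obs:
                      break
                  x += rng.choice([-1.0, 1.0])         # unit jump after each waiting period
              pos[i] = x
          return np.mean(pos**2)

      for t_obs in (10.0, 100.0, 1000.0):
          print(f"t = {t_obs:6.0f}   <x^2> = {ctrw_msd(t_obs):7.2f}")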

  3. Uncertainty Quantification in Scale-Dependent Models of Flow in Porous Media: SCALE-DEPENDENT UQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, A. M.; Panzeri, M.; Tartakovsky, G. D.

    Equations governing flow and transport in heterogeneous porous media are scale-dependent. We demonstrate that it is possible to identify a support scale η*, such that the typically employed approximate formulations of Moment Equations (ME) yield accurate (statistical) moments of a target environmental state variable. Under these circumstances, the ME approach can be used as an alternative to the Monte Carlo (MC) method for Uncertainty Quantification in diverse fields of Earth and environmental sciences. MEs are directly satisfied by the leading moments of the quantities of interest and are defined on the same support scale as the governing stochastic partial differential equations (PDEs). Computable approximations of the otherwise exact MEs can be obtained through perturbation expansion of moments of the state variables in orders of the standard deviation of the random model parameters. As such, their convergence is guaranteed only for standard deviations smaller than one. We demonstrate our approach in the context of steady-state groundwater flow in a porous medium with a spatially random hydraulic conductivity.

  4. On global solutions of the random Hamilton-Jacobi equations and the KPZ problem

    NASA Astrophysics Data System (ADS)

    Bakhtin, Yuri; Khanin, Konstantin

    2018-04-01

    In this paper, we discuss possible qualitative approaches to the problem of KPZ universality. Throughout the paper, our point of view is based on the geometrical and dynamical properties of minimisers and shocks forming interlacing tree-like structures. We believe that the KPZ universality can be explained in terms of statistics of these structures evolving in time. The paper is focussed on the setting of the random Hamilton-Jacobi equations. We formulate several conjectures concerning global solutions and discuss how their properties are connected to the KPZ scalings in dimension 1  +  1. In the case of general viscous Hamilton-Jacobi equations with non-quadratic Hamiltonians, we define generalised directed polymers. We expect that their behaviour is similar to the behaviour of classical directed polymers, and present arguments in favour of this conjecture. We also define a new renormalisation transformation defined in purely geometrical terms and discuss conjectural properties of the corresponding fixed points. Most of our conjectures are widely open, and supported by only partial rigorous results for particular models.

  5. Raney Distributions and Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Liu, Dang-Zheng

    2015-03-01

    Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permit a simple parameterized form for their density. We extend this result to the Raney distribution which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the singular values squared of the matrix product formed from inverse standard Gaussian matrices, and standard Gaussian matrices, is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. Supported on , we show that it too permits a simple functional form upon the introduction of an appropriate choice of parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed, and is shown to have some universal features.
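
    As a small numerical illustration of the moment statement above, the sketch below compares the empirical moments of the squared singular values of a product of two independent Gaussian matrices against the Fuss-Catalan numbers with parameter p = 2. Note that parameter conventions for Fuss-Catalan and Raney numbers vary between references, and the matrix size here is an illustrative assumption.

      import numpy as np
      from math import comb

      # Hedged sketch: check that the limiting moments of the squared singular values of
      # a product of p = 2 independent Gaussian matrices match the Fuss-Catalan numbers
      # FC_p(n) = C(pn + n, n) / (pn + 1). Finite-size effects remain at N = 400.

      def fuss_catalan(p, n):
          return comb(p * n + n, n) // (p * n + 1)

      rng = np.random.default_rng(0)
      N, p = 400, 2
      Y = np.eye(N)
      for _ in range(p):
          Y = rng.standard_normal((N, N)) @ Y
      W = (Y @ Y.T) / N**p                             # normalized squared singular values
      eig = np.linalg.eigvalsh(W)
      for n in (1, 2, 3):
          print(n, float(np.mean(eig**n)), fuss_catalan(p, n))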

  6. Detecting Intervention Effects in a Cluster-Randomized Design Using Multilevel Structural Equation Modeling for Binary Responses

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Preacher, Kristopher J.; Bottge, Brian A.

    2015-01-01

    Multilevel modeling (MLM) is frequently used to detect group differences, such as an intervention effect in a pre-test--post-test cluster-randomized design. Group differences on the post-test scores are detected by controlling for pre-test scores as a proxy variable for unobserved factors that predict future attributes. The pre-test and post-test…

  7. Perceived Gender Presentation Among Transgender and Gender Diverse Youth: Approaches to Analysis and Associations with Bullying Victimization and Emotional Distress.

    PubMed

    Gower, Amy L; Rider, G Nicole; Coleman, Eli; Brown, Camille; McMorris, Barbara J; Eisenberg, Marla E

    2018-06-19

    As measures of birth-assigned sex, gender identity, and perceived gender presentation are increasingly included in large-scale research studies, data analysis approaches incorporating such measures are needed. Large samples capable of demonstrating variation within the transgender and gender diverse (TGD) community can inform intervention efforts to improve health equity. A population-based sample of TGD youth was used to examine associations between perceived gender presentation, bullying victimization, and emotional distress using two data analysis approaches. Secondary data analysis of the Minnesota Student Survey included 2168 9th and 11th graders who identified as "transgender, genderqueer, genderfluid, or unsure about their gender identity." Youth reported their biological sex, how others perceived their gender presentation, experiences of four forms of bullying victimization, and four measures of emotional distress. Logistic regression and multifactor analysis of variance (ANOVA) were used to compare and contrast two analysis approaches. Logistic regressions indicated that TGD youth perceived as more gender incongruent had higher odds of bullying victimization and emotional distress relative to those perceived as very congruent with their biological sex. Multifactor ANOVAs demonstrated more variable patterns and allowed for comparisons of each perceived presentation group with all other groups, reflecting nuances that exist within TGD youth. Researchers should adopt data analysis strategies that allow for comparisons of all perceived gender presentation categories rather than assigning a reference group. Those working with TGD youth should be particularly attuned to youth perceived as gender incongruent as they may be more likely to experience bullying victimization and emotional distress.

  8. Exploring the interaction among EPHX1, GSTP1, SERPINE2, and TGFB1 contributing to the quantitative traits of chronic obstructive pulmonary disease in Chinese Han population.

    PubMed

    An, Li; Lin, Yingxiang; Yang, Ting; Hua, Lin

    2016-05-18

    Currently, the majority of genetic association studies on chronic obstructive pulmonary disease (COPD) risk focused on identifying the individual effects of single nucleotide polymorphisms (SNPs) as well as their interaction effects on the disease. However, conventional genetic studies often use binary disease status as the primary phenotype, but for COPD, many quantitative traits have the potential correlation with the disease status and closely reflect pathological changes. Here, we genotyped 44 SNPs from four genes (EPHX1, GSTP1, SERPINE2, and TGFB1) in 310 patients and 203 controls which belonged to the Chinese Han population to test the two-way and three-way genetic interactions with COPD-related quantitative traits using recently developed generalized multifactor dimensionality reduction (GMDR) and quantitative multifactor dimensionality reduction (QMDR) algorithms. Based on the 310 patients and the whole samples of 513 subjects, the best gene-gene interactions models were detected for four lung-function-related quantitative traits. For the forced expiratory volume in 1 s (FEV1), the best interaction was seen from EPHX1, SERPINE2, and GSTP1. For FEV1%pre, the forced vital capacity (FVC), and FEV1/FVC, the best interactions were seen from SERPINE2 and TGFB1. The results of this study provide further evidence for the genotype combinations at risk of developing COPD in Chinese Han population and improve the understanding on the genetic etiology of COPD and COPD-related quantitative traits.

  9. Information-Theoretic Metrics for Visualizing Gene-Environment Interactions

    PubMed Central

    Chanda, Pritam; Zhang, Aidong; Brazeau, Daniel; Sucheston, Lara; Freudenheim, Jo L.; Ambrosone, Christine; Ramanathan, Murali

    2007-01-01

    The purpose of our work was to develop heuristics for visualizing and interpreting gene-environment interactions (GEIs) and to assess the dependence of candidate visualization metrics on biological and study-design factors. Two information-theoretic metrics, the k-way interaction information (KWII) and the total correlation information (TCI), were investigated. The effectiveness of the KWII and TCI to detect GEIs in a diverse range of simulated data sets and a Crohn disease data set was assessed. The sensitivity of the KWII and TCI spectra to biological and study-design variables was determined. Head-to-head comparisons with the relevance-chain, multifactor dimensionality reduction, and the pedigree disequilibrium test (PDT) methods were obtained. The KWII and TCI spectra, which are graphical summaries of the KWII and TCI for each subset of environmental and genotype variables, were found to detect each known GEI in the simulated data sets. The patterns in the KWII and TCI spectra were informative for factors such as case-control misassignment, locus heterogeneity, allele frequencies, and linkage disequilibrium. The KWII and TCI spectra were found to have excellent sensitivity for identifying the key disease-associated genetic variations in the Crohn disease data set. In head-to-head comparisons with the relevance-chain, multifactor dimensionality reduction, and PDT methods, the results from visual interpretation of the KWII and TCI spectra performed satisfactorily. The KWII and TCI are promising metrics for visualizing GEIs. They are capable of detecting interactions among numerous single-nucleotide polymorphisms and environmental variables for a diverse range of GEI models. PMID:17924337
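
    As a concrete example of one of the two metrics, the sketch below computes the total correlation (the quantity behind the TCI) for discrete genotype and environment variables from plug-in entropies. The KWII is an alternating-sum analogue whose sign convention differs between papers, so it is omitted here, and the simulated variables are illustrative only.

      import numpy as np
      from collections import Counter

      # Hedged sketch: plug-in total correlation for discrete variables,
      # TC = sum_i H(X_i) - H(X_1, ..., X_n); zero iff the variables are independent.

      def entropy(columns):
          counts = Counter(zip(*columns))
          p = np.array(list(counts.values()), dtype=float)
          p /= p.sum()
          return -np.sum(p * np.log2(p))

      def total_correlation(variables):
          return sum(entropy([v]) for v in variables) - entropy(variables)

      rng = np.random.default_rng(1)
      snp = rng.integers(0, 3, 500)                       # genotype coded 0/1/2 (simulated)
      env = rng.integers(0, 2, 500)                       # binary exposure (simulated)
      phenotype = (snp + env + rng.integers(0, 2, 500)) % 2   # depends on both, plus noise
      print(total_correlation([snp, env, phenotype]))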

  10. Random-order fractional bistable system and its stochastic resonance

    NASA Astrophysics Data System (ADS)

    Gao, Shilong; Zhang, Li; Liu, Hui; Kan, Bixia

    2017-01-01

    In this paper, the diffusive motion of Brownian particles in a viscous liquid subject to stochastic fluctuations of the external environment is modeled as a random-order fractional bistable equation, and the stochastic resonance phenomena in this system, a typical nonlinear dynamic behavior, are investigated. First, the derivation of the random-order fractional bistable system is given. In particular, the random-power-law memory is discussed in depth to obtain a physical interpretation of the random-order fractional derivative. Second, the stochastic resonance evoked by the random order and an external periodic force is studied by numerical simulation. In particular, frequency shifting of the periodic output is observed in the stochastic resonance induced by the excitation of the random order. Finally, the stochastic resonance of the system under the combined stochastic excitations of the random order and internal color noise is also investigated.

  11. Feynman-Kac formula for stochastic hybrid systems.

    PubMed

    Bressloff, Paul C

    2017-01-01

    We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.
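
    For orientation, the occupation-time example mentioned above can also be estimated by direct Monte Carlo simulation of a one-dimensional velocity jump process, as in the sketch below. This bypasses the Feynman-Kac and differential Chapman-Kolmogorov machinery of the paper; the switching rate, speed, and time horizon are illustrative assumptions.

      import numpy as np

      # Hedged sketch: Monte Carlo estimate of the time a velocity jump process
      # (speed +/- v, switching at rate lam) spends on the positive half-line up to T.

      rng = np.random.default_rng(0)

      def occupation_time(T=10.0, v=1.0, lam=1.0):
          t, x, vel, occ = 0.0, 0.0, v * rng.choice([-1.0, 1.0]), 0.0
          while t < T:
              dt = min(rng.exponential(1.0 / lam), T - t)   # time to next velocity switch
              # accumulate the time spent with x > 0 during this ballistic segment
              if vel > 0:
                  occ += max(0.0, dt - max(0.0, -x / vel)) if x < 0 else dt
              else:
                  occ += min(dt, x / (-vel)) if x > 0 else 0.0
              x += vel * dt
              t += dt
              vel = -vel
          return occ

      samples = [occupation_time() for _ in range(2000)]
      print(np.mean(samples), np.std(samples))              # mean ~ T/2 by symmetry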

  12. Microscopic Interpretation and Generalization of the Bloch-Torrey Equation for Diffusion Magnetic Resonance

    PubMed Central

    Seroussi, Inbar; Grebenkov, Denis S.; Pasternak, Ofer; Sochen, Nir

    2017-01-01

    In order to bridge microscopic molecular motion with macroscopic diffusion MR signal in complex structures, we propose a general stochastic model for molecular motion in a magnetic field. The Fokker-Planck equation of this model governs the probability density function describing the diffusion-magnetization propagator. From the propagator we derive a generalized version of the Bloch-Torrey equation and the relation to the random phase approach. This derivation does not require assumptions such as a spatially constant diffusion coefficient, or ad-hoc selection of a propagator. In particular, the boundary conditions that implicitly incorporate the microstructure into the diffusion MR signal can now be included explicitly through a spatially varying diffusion coefficient. While our generalization is reduced to the conventional Bloch-Torrey equation for piecewise constant diffusion coefficients, it also predicts scenarios in which an additional term to the equation is required to fully describe the MR signal. PMID:28242566

  13. Exact solution of the hidden Markov processes.

    PubMed

    Saakian, David B

    2017-11-01

    We write a master equation for the distributions related to hidden Markov processes (HMPs) and solve it using a functional equation. Thus the solution of HMPs is mapped exactly to the solution of the functional equation. For a general case the latter can be solved only numerically. We derive an exact expression for the entropy of HMPs. Our expression for the entropy is an alternative to the ones given before by the solution of integral equations. The exact solution is possible because actually the model can be considered as a generalized random walk on a one-dimensional strip. While we give the solution for the two second-order matrices, our solution can be easily generalized for the L values of the Markov process and M values of observables: We should be able to solve a system of L functional equations in the space of dimension M-1.

  14. Exact solution of the hidden Markov processes

    NASA Astrophysics Data System (ADS)

    Saakian, David B.

    2017-11-01

    We write a master equation for the distributions related to hidden Markov processes (HMPs) and solve it using a functional equation. Thus the solution of HMPs is mapped exactly to the solution of the functional equation. For a general case the latter can be solved only numerically. We derive an exact expression for the entropy of HMPs. Our expression for the entropy is an alternative to the ones given before by the solution of integral equations. The exact solution is possible because actually the model can be considered as a generalized random walk on a one-dimensional strip. While we give the solution for the two second-order matrices, our solution can be easily generalized for the L values of the Markov process and M values of observables: We should be able to solve a system of L functional equations in the space of dimension M-1.

  15. Exact Asymptotics of the Freezing Transition of a Logarithmically Correlated Random Energy Model

    NASA Astrophysics Data System (ADS)

    Webb, Christian

    2011-12-01

    We consider a logarithmically correlated random energy model, namely a model for directed polymers on a Cayley tree, which was introduced by Derrida and Spohn. We prove asymptotic properties of a generating function of the partition function of the model by studying a discrete-time analogue of the KPP equation, thus translating Bramson's work on the KPP equation to the discrete-time case. We also discuss connections to extreme value statistics of a branching random walk and a rescaled multiplicative cascade measure beyond the critical point.

  16. Fokker-Planck equation of the reduced Wigner function associated to an Ohmic quantum Langevin dynamics

    NASA Astrophysics Data System (ADS)

    Colmenares, Pedro J.

    2018-05-01

    This article concerns the derivation and solution of the Fokker-Planck equation associated with the momentum-integrated Wigner function of a particle subjected to a harmonic external field in contact with an ohmic thermal bath of quantum harmonic oscillators. The strategy employed is a simplified version of the phenomenological approach of Schramm, Jung, and Grabert of interpreting the operators as c numbers to derive the quantum master equation arising from a twofold transformation of the Wigner function of the entire phase space. The statistical properties of the random noise come from the integral functional theory of Grabert, Schramm, and Ingold. By means of a single Wigner transformation, a simpler equation than that mentioned above is found. The Wigner function reproduces the known results of the classical limit. This allowed us to rewrite the underdamped classical Langevin equation as a first-order stochastic differential equation with time-dependent drift and diffusion terms.

  17. Stochastic dynamics of time correlation in complex systems with discrete time

    NASA Astrophysics Data System (ADS)

    Yulmetyev, Renat; Hänggi, Peter; Gafarov, Fail

    2000-11-01

    In this paper we present a framework for the description of random processes in complex systems with discrete time. It involves the description of the kinetics of discrete processes by means of a chain of finite-difference non-Markov equations for time correlation functions (TCFs). We introduce the dynamic (time-dependent) information Shannon entropy Si(t), where i=0,1,2,3,..., as an information measure of the stochastic dynamics of time correlation (i=0) and time memory (i=1,2,3,...). The set of functions Si(t) constitutes a quantitative measure of time correlation disorder (i=0) and time memory disorder (i=1,2,3,...) in a complex system. The theory starts from a careful analysis of time correlation involving the dynamics of a set of vectors of various chaotic states. We examine in detail two stochastic processes involving the creation and annihilation of time correlation (or time memory). We analyze the vectors' dynamics using finite-difference equations for random variables and the evolution operator describing their natural motion. The existence of a TCF allows the construction of a set of projection operators through the scalar product operation. Using an infinite set of orthogonal dynamic random variables, obtained by a Gram-Schmidt orthogonalization procedure, leads to an infinite chain of finite-difference non-Markov kinetic equations for discrete TCFs and memory functions (MFs). The solution of these equations yields recurrence relations between the TCFs and MFs of senior and junior orders. This offers new opportunities for detecting the frequency power spectra of the entropy function Si(t) for time correlation (i=0) and time memory (i=1,2,3,...). The results obtained offer considerable scope for the analysis of the stochastic dynamics of discrete random processes in complex systems. Application of this technique to the analysis of the stochastic dynamics of RR intervals from human ECGs shows convincing evidence for non-Markovian phenomena associated with peculiarities in short- and long-range scaling. This method may be of use in distinguishing healthy from pathologic data sets based on differences in these non-Markovian properties.

  18. A Theoretical Understanding of Circular Polarization Memory in Random Media

    NASA Astrophysics Data System (ADS)

    Dark, Julia

    Radiative transport theory describes the propagation of light in random media that absorb, scatter, and emit radiation. To describe the propagation of light, the full polarization state is quantified using the Stokes parameters. For the sake of mathematical convenience, the polarization state of light is often neglected, leading to the scalar radiative transport equation for the intensity only. For scalar transport theory, there is a well-established body of literature on numerical and analytic approximations to the radiative transport equation. We extend the scalar theory to the vector radiative transport equation (vRTE). In particular, we are interested in the theoretical basis for a phenomenon called circular polarization memory. Circular polarization memory is the physical phenomenon whereby circular polarization retains its ellipticity and handedness when propagating in random media. This is in contrast to the propagation of linear polarization in random media, which depolarizes at a faster rate, and to specular reflection of circular polarization, whereby the circular polarization handedness flips. We investigate two limits that are of known interest in the phenomenon of circular polarization memory. The first limit we investigate is that of forward-peaked scattering, i.e. the limit where most scattering events occur in the forward or near-forward directions. The second limit we consider is that of strong scattering and weak absorption. In the forward-peaked scattering limit we approximate the vRTE by a system of partial differential equations motivated by the scalar Fokker-Planck approximation. We call the leading order approximation the vector Fokker-Planck approximation. The vector Fokker-Planck approximation predicts that strongly forward-peaked media exhibit circular polarization memory, where the strength of the effect can be calculated from the expansion of the scattering matrix in special functions. In addition, we find in this limit that total intensity, linear polarization, and circular polarization decouple. From this result we conclude that in the Fokker-Planck limit the scalar approximation is an appropriate leading order approximation. In the strongly scattering and weakly absorbing limit the vector radiative transport equation can be analyzed using boundary layer theory. In this case, the problem of light scattering in an optically thick medium is reduced to a 1D vRTE near the boundary and a 3D diffusion equation in the interior. We develop and implement a numerical solver for the boundary layer problem by using a discrete ordinate solver in the boundary layer and a spectral method to solve the diffusion approximation in the interior. We implement the method in Fortran 95 with external dependencies on BLAS, LAPACK, and FFTW. By analyzing the spectrum of the discretized vRTE in the boundary layer, we are able to predict the presence of circular polarization memory in a given medium.

  19. A new Green's function Monte Carlo algorithm for the solution of the two-dimensional nonlinear Poisson–Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during re-entry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Kausik, E-mail: kausik.chatterjee@aggiemail.usu.edu; Center for Atmospheric and Space Sciences, Utah State University, Logan, UT 84322; Roadcap, John R., E-mail: john.roadcap@us.af.mil

    The objective of this paper is the exposition of a recently-developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson–Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee has been developing a philosophically-different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.
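
    To illustrate how random walks sample a Green's function on a sub-domain, the sketch below solves the classical linear Laplace equation on a grid with the fixed-step random-walk estimator. This is a standard textbook method, not the nonlinear Poisson-Boltzmann GFMC algorithm of the paper; the grid size and boundary data are illustrative assumptions.

      import numpy as np

      # Hedged sketch: fixed-step random-walk estimate of the solution of Laplace's
      # equation on an n x n grid with Dirichlet boundary data. Each walk starts at the
      # target node and scores the boundary value where it first exits the interior.

      rng = np.random.default_rng(0)

      def laplace_walk(ix, iy, n, boundary, n_walks=2000):
          total = 0.0
          for _ in range(n_walks):
              i, j = ix, iy
              while 0 < i < n - 1 and 0 < j < n - 1:
                  step = rng.integers(4)
                  if step == 0: i += 1
                  elif step == 1: i -= 1
                  elif step == 2: j += 1
                  else: j -= 1
              total += boundary(i, j)
          return total / n_walks

      n = 21
      hot_top = lambda i, j: 1.0 if j == n - 1 else 0.0     # u = 1 on the top edge, 0 elsewhere
      print(laplace_walk(n // 2, n // 2, n, hot_top))        # ~0.25 at the centre by symmetry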

  20. A new Green's function Monte Carlo algorithm for the solution of the two-dimensional nonlinear Poisson-Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during re-entry

    NASA Astrophysics Data System (ADS)

    Chatterjee, Kausik; Roadcap, John R.; Singh, Surendra

    2014-11-01

    The objective of this paper is the exposition of a recently-developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson-Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee has been developing a philosophically-different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.

  1. Stochastic Galerkin methods for the steady-state Navier–Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sousedík, Bedřich, E-mail: sousedik@umbc.edu; Elman, Howard C., E-mail: elman@cs.umd.edu

    2016-07-01

    We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.
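
    As a small example of the input representation assumed above, the sketch below builds a generalized polynomial chaos expansion of a lognormal random coefficient in probabilists' Hermite polynomials using Gauss-Hermite quadrature. The Navier-Stokes discretization, Galerkin coupling, and preconditioner are not reproduced; the truncation order and quadrature size are illustrative assumptions.

      import math
      import numpy as np
      from numpy.polynomial.hermite_e import hermeval, hermegauss

      # Hedged sketch: gPC expansion of nu = exp(xi), xi ~ N(0, 1), in probabilists'
      # Hermite polynomials He_k; coefficients c_k = E[nu * He_k(xi)] / k!.

      order = 6
      nodes, weights = hermegauss(60)                  # quadrature against exp(-x**2 / 2)
      weights = weights / np.sqrt(2.0 * np.pi)         # normalize to the N(0, 1) measure

      coeffs = np.zeros(order + 1)
      for k in range(order + 1):
          ek = np.zeros(k + 1)
          ek[k] = 1.0
          hk = hermeval(nodes, ek)                     # He_k evaluated at the nodes
          coeffs[k] = np.sum(weights * np.exp(nodes) * hk) / math.factorial(k)

      xi = np.linspace(-2.0, 2.0, 5)
      print(np.exp(xi))                                # exact samples of the coefficient
      print(hermeval(xi, coeffs))                      # truncated gPC reconstruction (close)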

  2. A 1D radiative transfer benchmark with polarization via doubling and adding

    NASA Astrophysics Data System (ADS)

    Ganapol, B. D.

    2017-11-01

    Highly precise numerical solutions to the radiative transfer equation with polarization present a special challenge. Here, we establish a precise numerical solution to the radiative transfer equation with combined Rayleigh and isotropic scattering in a 1D-slab medium with simple polarization. The 2-Stokes vector solution for the fully discretized radiative transfer equation in space and direction derives from the method of doubling and adding, enhanced through convergence acceleration. Benchmark solutions found in the literature are updated to seven places for reflectance and transmittance, as well as for the angular flux. Finally, we conclude with the numerical solution in a partially randomly absorbing heterogeneous medium.
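
    The doubling step itself can be stated compactly: the sketch below combines the reflection and transmission operators of a symmetric, non-polarized layer with itself, accounting for repeated inter-layer reflections, until the target optical thickness is reached. The single-angle starting values are illustrative assumptions, and the polarized 2-Stokes case replaces these small matrices by the corresponding discretized angular supermatrices.

      import numpy as np

      # Hedged sketch: the doubling recursion for a homogeneous, symmetric,
      # non-polarized layer. Matrices stand for a discretized angular dependence;
      # here a single quadrature direction (1x1 matrices) is used for illustration.

      def double(R, T, n_times):
          I = np.eye(R.shape[0])
          for _ in range(n_times):
              Q = np.linalg.inv(I - R @ R)       # sums the repeated inter-layer reflections
              R = R + T @ R @ Q @ T
              T = T @ Q @ T
          return R, T

      # Start from a very thin layer where single scattering is a good approximation.
      R0 = np.array([[0.001]])
      T0 = np.array([[0.998]])
      R, T = double(R0, T0, 12)                   # layer thickness grows by a factor of 2**12
      print(R, T)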

  3. Stochastic Galerkin methods for the steady-state Navier–Stokes equations

    DOE PAGES

    Sousedík, Bedřich; Elman, Howard C.

    2016-04-12

    We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.

  4. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    NASA Astrophysics Data System (ADS)

    Gao, Peng

    2018-06-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a multiscale system of stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out and the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  5. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    NASA Astrophysics Data System (ADS)

    Gao, Peng

    2018-04-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a multiscale system of stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out and the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  6. Finite Element Analysis of the Random Response Suppression of Composite Panels at Elevated Temperatures using Shape Memory Alloy Fibers

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Zhong, Z. W.; Mei, Chuh

    1994-01-01

    A feasibility study on the use of shape memory alloys (SMA) for suppression of the random response of composite panels due to acoustic loads at elevated temperatures is presented. The constitutive relations for a composite lamina with embedded SMA fibers are developed. The finite element governing equations and the solution procedures for a composite plate subjected to combined acoustic and thermal loads are presented. Solutions include: 1) Critical buckling temperature; 2) Flat panel random response; 3) Thermal postbuckling deflection; 4) Random response of a thermally buckled panel. The preliminary results demonstrate that the SMA fibers can completely eliminate the thermal postbuckling deflection and significantly reduce the random response at elevated temperatures.

  7. Propagation of mechanical waves through a stochastic medium with spherical symmetry

    NASA Astrophysics Data System (ADS)

    Avendaño, Carlos G.; Reyes, J. Adrián

    2018-01-01

    We theoretically analyze the propagation of outgoing mechanical waves through an infinite isotropic elastic medium possessing spherical symmetry whose Lamé coefficients and density are spatial random functions characterized by well-defined statistical parameters. We derive the differential equation that governs the average displacement for a system whose properties depend on the radial coordinate. We show that such an equation is an extended version of the well-known Bessel differential equation whose perturbative additional terms contain coefficients that depend directly on the squared noise intensities and the autocorrelation lengths in an exponential decay fashion. We numerically solve the second order differential equation for several values of noise intensities and autocorrelation lengths and compare the corresponding displacement profiles with that of the exact analytic solution for the case of absent inhomogeneities.

  8. A strictly Markovian expansion for plasma turbulence theory

    NASA Technical Reports Server (NTRS)

    Jones, F. C.

    1978-01-01

    The collision operator that appears in the equation of motion for a particle distribution function that has been averaged over an ensemble of random Hamiltonians is non-Markovian. It is non-Markovian in that it involves a propagated integral over the past history of the ensemble averaged distribution function. All formal expansions of this nonlinear collision operator to date preserve this non-Markovian character term by term yielding an integro-differential equation that must be converted to a diffusion equation by an additional approximation. In this note we derive an expansion of the collision operator that is strictly Markovian to any finite order and yields a diffusion equation as the lowest non-trivial order. The validity of this expansion is seen to be the same as that of the standard quasi-linear expansion.

  9. Particle connectedness and cluster formation in sequential depositions of particles: integral-equation theory.

    PubMed

    Danwanichakul, Panu; Glandt, Eduardo D

    2004-11-15

    We applied the integral-equation theory to the connectedness problem. The method originally applied to the study of continuum percolation in various equilibrium systems was modified for our sequential quenching model, a particular limit of an irreversible adsorption. The development of the theory based on the (quenched-annealed) binary-mixture approximation includes the Ornstein-Zernike equation, the Percus-Yevick closure, and an additional term involving the three-body connectedness function. This function is simplified by introducing a Kirkwood-like superposition approximation. We studied the three-dimensional (3D) system of randomly placed spheres and 2D systems of square-well particles, both with a narrow and with a wide well. The results from our integral-equation theory are in good accordance with simulation results within a certain range of densities.

  10. Particle connectedness and cluster formation in sequential depositions of particles: Integral-equation theory

    NASA Astrophysics Data System (ADS)

    Danwanichakul, Panu; Glandt, Eduardo D.

    2004-11-01

    We applied the integral-equation theory to the connectedness problem. The method originally applied to the study of continuum percolation in various equilibrium systems was modified for our sequential quenching model, a particular limit of an irreversible adsorption. The development of the theory based on the (quenched-annealed) binary-mixture approximation includes the Ornstein-Zernike equation, the Percus-Yevick closure, and an additional term involving the three-body connectedness function. This function is simplified by introducing a Kirkwood-like superposition approximation. We studied the three-dimensional (3D) system of randomly placed spheres and 2D systems of square-well particles, both with a narrow and with a wide well. The results from our integral-equation theory are in good accordance with simulation results within a certain range of densities.

  11. Radiosonde Atmospheric Temperature Products for Assessing Climate (RATPAC): Towards a New Adjusted Radiosonde Dataset

    NASA Astrophysics Data System (ADS)

    Free, M. P.; Angell, J. K.; Durre, I.; Klein, S.; Lanzante, J.; Lawrimore, J.; Peterson, T.; Seidel, D.

    2002-05-01

    The objective of NOAA's RATPAC project is to develop climate-quality global, hemispheric and zonal upper-air temperature time series from the NCDC radiosonde database. Lanzante, Klein and Seidel (LKS) have produced an 87-station adjusted radiosonde dataset using a multifactor expert decision approach. Our goal is to extend this dataset spatially and temporally and to provide a method to update it routinely at NCDC. Since the LKS adjustment method is too labor-intensive for these purposes, we are investigating a first-difference method (Peterson et al., 1998) and an automated version of the LKS method. The first difference method (FD) can be used to combine large numbers of time series into spatial means, but also introduces a random error in the resulting large-scale averages. If the portions of the time series with suspect continuity are withheld from the calculations, it has the potential to reconstruct the real variability without the effects of the discontinuities. However, tests of FD on unadjusted radiosonde data and on reanalysis temperature data suggest that it must be used with caution when the number of stations is low and the number of data gaps is high. Because of these problems with the first difference approach, we are also considering an automated version of the LKS adjustment method using statistical change points, day-night temperature difference series, relationships between changes in adjacent atmospheric levels, and station histories to identify inhomogeneities in the temperature data.
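
    A minimal Python sketch of the first-difference (FD) combination idea mentioned above (Peterson et al., 1998), under simplifying assumptions and with synthetic station data: each station series is differenced year to year, the differences are averaged over whichever stations report both years, and the large-scale series is recovered by cumulative summation.

    ```python
    import numpy as np

    # Sketch of the first-difference (FD) method on synthetic station series with gaps.
    rng = np.random.default_rng(1)
    years = np.arange(1960, 2001)
    trend = 0.01 * (years - years[0])                      # common signal, deg C
    stations = trend + rng.normal(0, 0.3, size=(20, years.size))
    stations[rng.random(stations.shape) < 0.15] = np.nan   # random data gaps

    diffs = np.diff(stations, axis=1)                      # year-to-year differences
    mean_diff = np.nanmean(diffs, axis=0)                  # average over reporting stations
    fd_series = np.concatenate(([0.0], np.nancumsum(mean_diff)))  # integrate back

    print("FD trend estimate (deg C/yr):", np.polyfit(years, fd_series, 1)[0].round(4))
    ```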

  12. The relationship between transformational leadership and work engagement in governmental hospitals nurses: a survey study.

    PubMed

    Hayati, Davood; Charkhabi, Morteza; Naami, Abdolzahra

    2014-01-14

    The aim of this study was to determine the effects of transformational leadership and its components on work engagement among hospital nurses. Few studies have focused on the effects of transformational leadership on work engagement in nurses. A descriptive, correlational, cross-sectional design was used. A stratified random sample of 240 nurses completed self-report scales, including the Multifactor Leadership Questionnaire (MLQ) and a work engagement scale. Data were analyzed using simple and multiple correlation coefficients. Findings indicated that the effect of this type of leadership on work engagement and its facets is positive and significant. The research also suggests that transformational leaders transfer their enthusiasm and energy to their subordinates through modeling, which can increase vigor as a component of work engagement. Idealized influence can foster specific beliefs among employees toward these leaders, who can then readily transmit inspirational motivation to them. This, in turn, creates a positive vision that, together with high standards, challenges employees and establishes zeal and optimism for success at work. These results extend the leadership and work engagement literature on hospital nurses. We conclude with theoretical and practical implications and propose directions for future research.

  13. Is tenure justified? An experimental study of faculty beliefs about tenure, promotion, and academic freedom.

    PubMed

    Ceci, Stephen J; Williams, Wendy M; Mueller-Johnson, Katrin

    2006-12-01

    The behavioral sciences have come under attack for writings and speech that affront sensitivities. At such times, academic freedom and tenure are invoked to forestall efforts to censure and terminate jobs. We review the history and controversy surrounding academic freedom and tenure, and explore their meaning across different fields, at different institutions, and at different ranks. In a multifactorial experimental survey, 1,004 randomly selected faculty members from top-ranked institutions were asked how colleagues would typically respond when confronted with dilemmas concerning teaching, research, and wrong-doing. Full professors were perceived as being more likely to insist on having the academic freedom to teach unpopular courses, research controversial topics, and whistle-blow wrong-doing than were lower-ranked professors (even associate professors with tenure). Everyone thought that others were more likely to exercise academic freedom than they themselves were, and that promotion to full professor was a better predictor of who would exercise academic freedom than was the awarding of tenure. Few differences emerged related either to gender or type of institution, and behavioral scientists' beliefs were similar to those of scholars from other fields. In addition, no support was found for glib celebrations of tenure's sanctification of broadly defined academic freedoms. These findings challenge the assumption that tenure can be justified on the basis of fostering academic freedom, suggesting the need for a re-examination of the philosophical foundation and practical implications of tenure in today's academy.

  14. [Somatic complaints, emotional awareness and maladjustment in schoolchildren].

    PubMed

    Ordóñez, A; Maganto, C; González, R

    2015-05-01

    Somatic complaints are common in childhood. Research has shown their relationship with emotional awareness and maladjustment. The study had three objectives: 1) To analyze the prevalence of somatic complaints; 2) To explore the relationships between the variables evaluated: somatic complaints, differentiating emotions, verbal sharing of emotions, not hiding emotions, body awareness, attending to others' emotions, analysis of emotions, and personal, social, family, and school maladjustment; and 3) To identify predictors of somatic complaints. The study included a total of 1,134 randomly selected schoolchildren of both sexes between 10-12 years old (M=10.99; SD=0.88). The Somatic Complaint List, Emotional Awareness Questionnaire, and Self-reported Multifactor Test of Childhood Adaptation were used to gather information. The results showed that the prevalence of somatic complaints was 90.2%, with fatigue, headache and stomachache being the most frequent. Dizziness and headache were more common in girls, and the frequency of complaints decreased with age. Somatic complaints are negatively related to emotional awareness, and positively related to maladjustment. The variables that contribute the most to the prediction of somatic complaints are personal maladjustment (25.1%) and differentiating emotions (2.5%). The study shows that personal maladjustment is the best predictor of somatic complaints; the greater the child's emotional awareness and the better adapted the child, the fewer somatic complaints reported. Childhood is a stage with significant physical discomfort. Copyright © 2014 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.

  15. Averaging of random walks and shift-invariant measures on a Hilbert space

    NASA Astrophysics Data System (ADS)

    Sakbaev, V. Zh.

    2017-06-01

    We study random walks in a Hilbert space H and representations using them of solutions of the Cauchy problem for differential equations whose initial conditions are numerical functions on H. We construct a finitely additive analogue of the Lebesgue measure: a nonnegative finitely additive measure λ that is defined on a minimal subset ring of an infinite-dimensional Hilbert space H containing all infinite-dimensional rectangles with absolutely converging products of the side lengths and is invariant under shifts and rotations in H. We define the Hilbert space H of equivalence classes of complex-valued functions on H that are square integrable with respect to a shift-invariant measure λ. Using averaging of the shift operator in H over random vectors in H with a distribution given by a one-parameter semigroup (with respect to convolution) of Gaussian measures on H, we define a one-parameter semigroup of contracting self-adjoint transformations on H, whose generator is called the diffusion operator. We obtain a representation of solutions of the Cauchy problem for the Schrödinger equation whose Hamiltonian is the diffusion operator.

  16. Generalization of one-dimensional solute transport: A stochastic-convective flow conceptualization

    NASA Astrophysics Data System (ADS)

    Simmons, C. S.

    1986-04-01

    A stochastic-convective representation of one-dimensional solute transport is derived. It is shown to conceptually encompass solutions of the conventional convection-dispersion equation. This stochastic approach, however, does not rely on the assumption that dispersive flux satisfies Fick's diffusion law. Observable values of solute concentration and flux, which together satisfy a conservation equation, are expressed as expectations over a flow velocity ensemble, representing the inherent random processes that govern dispersion. Solute concentration is determined by a Lagrangian pdf for random spatial displacements, while flux is determined by an equivalent Eulerian pdf for random travel times. A condition for such equivalence is derived for steady nonuniform flow, and it is proven that both Lagrangian and Eulerian pdfs are required to account for specified initial and boundary conditions on a global scale. Furthermore, simplified modeling of transport is justified by proving that an ensemble of effectively constant velocities always exists that constitutes an equivalent representation. An example of how a two-dimensional transport problem can be reduced to a single-dimensional stochastic viewpoint is also presented to further clarify concepts.
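
    As a rough illustration of the stochastic-convective picture described above (not the paper's derivation), the Python sketch below evaluates the observable concentration as an expectation over an ensemble of effectively constant velocities; the lognormal velocity distribution and the Gaussian initial pulse are illustrative assumptions.

    ```python
    import numpy as np

    # Sketch: C(x, t) = E[ c0(x - V t) ] over a constant-velocity ensemble V.
    rng = np.random.default_rng(2)
    V = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)   # velocity ensemble
    x = np.linspace(0.0, 10.0, 401)
    t = 3.0

    def c0(y):                                             # initial Gaussian pulse
        return np.exp(-0.5 * (y / 0.25) ** 2)

    C = np.array([np.mean(c0(xi - V * t)) for xi in x])    # ensemble expectation
    print("plume peak near x =", x[np.argmax(C)],
          ", mean displacement =", round(V.mean() * t, 2))
    ```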

  17. Sample size determination for GEE analyses of stepped wedge cluster randomized trials.

    PubMed

    Li, Fan; Turner, Elizabeth L; Preisser, John S

    2018-06-19

    In stepped wedge cluster randomized trials, intact clusters of individuals switch from control to intervention from a randomly assigned period onwards. Such trials are becoming increasingly popular in health services research. When a closed cohort is recruited from each cluster for longitudinal follow-up, proper sample size calculation should account for three distinct types of intraclass correlations: the within-period, the inter-period, and the within-individual correlations. Setting the latter two correlation parameters to be equal accommodates cross-sectional designs. We propose sample size procedures for continuous and binary responses within the framework of generalized estimating equations that employ a block exchangeable within-cluster correlation structure defined from the distinct correlation types. For continuous responses, we show that the intraclass correlations affect power only through two eigenvalues of the correlation matrix. We demonstrate that analytical power agrees well with simulated power for as few as eight clusters, when data are analyzed using bias-corrected estimating equations for the correlation parameters concurrently with a bias-corrected sandwich variance estimator. © 2018, The International Biometric Society.
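
    A hedged Python sketch related to the design above: it builds the block-exchangeable within-cluster correlation matrix from the three correlation types (within-period, inter-period, within-individual) and computes a model-based GLS variance and approximate power for the treatment effect in a closed-cohort stepped wedge layout. This is a generic calculation, not the paper's procedure or formulas, and every numerical value is illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Sketch: GLS variance of the treatment effect under a block-exchangeable
    # correlation structure in a closed-cohort stepped wedge design (illustrative values).
    n_seq, clusters_per_seq, T, m = 4, 2, 5, 5               # sequences, clusters/seq, periods, cohort size
    alpha0, alpha1, alpha2, sigma2 = 0.05, 0.02, 0.20, 1.0   # within-period, inter-period, within-individual
    delta = 0.35                                             # hypothesized treatment effect

    # Correlation matrix for one cluster; observations ordered period by period.
    R = np.full((m * T, m * T), alpha1)
    for t in range(T):
        blk = slice(t * m, (t + 1) * m)
        R[blk, blk] = alpha0                                 # same period, different individuals
    for i in range(m):
        idx = np.arange(i, m * T, m)
        R[np.ix_(idx, idx)] = alpha2                         # same individual, different periods
    np.fill_diagonal(R, 1.0)
    Vinv = np.linalg.inv(sigma2 * R)

    info = np.zeros((T + 1, T + 1))                          # period effects + treatment effect
    for s in range(n_seq):
        treat = (np.arange(T) >= s + 1).astype(float)        # sequence s switches after period s
        Xc = np.zeros((m * T, T + 1))
        for t in range(T):
            Xc[t * m:(t + 1) * m, t] = 1.0                   # period indicator
            Xc[t * m:(t + 1) * m, T] = treat[t]              # treatment indicator
        info += clusters_per_seq * Xc.T @ Vinv @ Xc

    var_beta = np.linalg.inv(info)[T, T]
    power = norm.cdf(abs(delta) / np.sqrt(var_beta) - norm.ppf(0.975))
    print(f"Var(treatment effect) = {var_beta:.4f}, approximate power = {power:.3f}")
    ```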

  18. Response of space shuttle insulation panels to acoustic noise pressure

    NASA Technical Reports Server (NTRS)

    Vaicaitis, R.

    1976-01-01

    The response of reusable space shuttle insulation panels to random acoustic pressure fields is studied. The basic analytical approach in formulating the governing equations of motion uses a Rayleigh-Ritz technique. The input pressure field is modeled as a stationary Gaussian random process for which the cross-spectral density function is known empirically from experimental measurements. The response calculations are performed in both the frequency and time domains.
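
    The Python sketch below shows the single-mode frequency-domain building block behind such response calculations: the mean-square displacement of one structural mode driven by a random load with a known spectral density, sigma² = ∫ |H(ω)|² S(ω) dω. It is not the paper's multi-mode Rayleigh-Ritz model, and the modal parameters and band-limited spectrum are illustrative assumptions.

    ```python
    import numpy as np

    # Sketch: mean-square response of a single mode to a band-limited random load.
    m, zeta, fn = 1.0, 0.02, 120.0                     # modal mass, damping ratio, natural frequency (Hz)
    wn = 2 * np.pi * fn
    k, c = m * wn**2, 2 * zeta * m * wn

    w = 2 * np.pi * np.linspace(1.0, 500.0, 20000)     # rad/s
    S_F = np.where((w > 2 * np.pi * 50) & (w < 2 * np.pi * 400), 1e2, 0.0)  # flat one-sided load PSD

    H2 = 1.0 / ((k - m * w**2) ** 2 + (c * w) ** 2)    # |H(w)|^2, displacement per unit load
    mean_square = np.trapz(H2 * S_F, w)                # sigma^2 = integral of |H|^2 S dw
    print(f"RMS modal displacement ~ {np.sqrt(mean_square):.3e}")
    ```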

  19. Distributed Beamforming in a Swarm UAV Network

    DTIC Science & Technology

    2008-03-01

    ... indicates random (noncoherent) transmission. For coherent transmission, there are no phase differences, so Δ² ≅ 0 and Equation (2.32) yields ... For noncoherent transmission, Δ² → ∞ is assumed and Equation (2.32) yields P_L = N·P_t·G_t·G_b·[λ/(4πR)]² (2.34). This result occurs for N noncoherent transmitters, such as ... Figure 7 (panels: Noncoherent, Coherent): Monte Carlo simulation results with 100 trials at each value of N, RMS error = 1.0919 degrees. Figure 7 shows a ...
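
    A small Monte Carlo sketch in Python of the scaling stated in the excerpt: with N unit transmitters, the mean received power grows like N² when the phases are aligned (coherent) and like N when the phases are random (noncoherent). Trial counts and values of N are illustrative.

    ```python
    import numpy as np

    # Sketch: coherent power ~ N^2, noncoherent (random-phase) power ~ N.
    rng = np.random.default_rng(3)
    trials = 10_000
    for N in (4, 16, 64):
        phases = rng.uniform(0, 2 * np.pi, size=(trials, N))
        p_noncoherent = np.mean(np.abs(np.exp(1j * phases).sum(axis=1)) ** 2)
        p_coherent = float(N) ** 2                     # all phasors aligned
        print(f"N={N:3d}  coherent={p_coherent:7.0f} (~N^2)  noncoherent={p_noncoherent:7.1f} (~N)")
    ```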

  20. Aircraft Airframe Cost Estimation Using a Random Coefficients Model

    DTIC Science & Technology

    1979-12-01

    ... approach will also be used here. [Section 2, Model Formulation] Several different types of equations could be used for the basic form of the CER, such as linear ... Marcotte developed several CERs for fighter aircraft airframes using the log-linear model. A plot of the residuals from the CER for recurring ... of the natural logarithm. [Ordinary Least Squares] The ordinary least squares procedure starts with the equation for the general linear model. The ...
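
    A minimal Python sketch of the log-linear cost-estimating relationship (CER) fitted by ordinary least squares that the excerpt refers to, ln(cost) = b0 + b1·ln(weight). The airframe "data" are synthetic; only the functional form and the OLS fit are the point.

    ```python
    import numpy as np

    # Sketch: fit a log-linear CER by ordinary least squares on synthetic data.
    rng = np.random.default_rng(4)
    weight = rng.uniform(10_000, 60_000, size=30)          # e.g., empty weight in lb (synthetic)
    cost = np.exp(2.0 + 0.8 * np.log(weight) + rng.normal(0, 0.1, 30))

    X = np.column_stack([np.ones_like(weight), np.log(weight)])
    b, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
    print(f"fitted CER: ln(cost) = {b[0]:.3f} + {b[1]:.3f} * ln(weight)")
    ```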

  1. An exact solution of solute transport by one-dimensional random velocity fields

    USGS Publications Warehouse

    Cvetkovic, V.D.; Dagan, G.; Shapiro, A.M.

    1991-01-01

    The problem of one-dimensional transport of passive solute by a random steady velocity field is investigated. This problem is representative of solute movement in porous media, for example, in vertical flow through a horizontally stratified formation of variable porosity with a constant flux at the soil surface. Relating moments of particle travel time and displacement, exact expressions for the advection and dispersion coefficients in the Fokker-Planck equation are compared with the perturbation results for large distances. The first- and second-order approximations for the dispersion coefficient are robust for a lognormal velocity field. The mean Lagrangian velocity is the harmonic mean of the Eulerian velocity for large distances. This is an artifact of one-dimensional flow where the continuity equation provides for a divergence-free fluid flux, rather than a divergence-free fluid velocity. © 1991 Springer-Verlag.
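
    A short Python illustration of the large-distance statement above, under the simplifying assumption of a perfectly stratified 1D column with a lognormal layer-velocity field: the effective (Lagrangian) velocity, distance divided by travel time, equals the harmonic mean of the Eulerian velocities and is smaller than their arithmetic mean.

    ```python
    import numpy as np

    # Sketch: travel time through many layers shows why the harmonic mean appears.
    rng = np.random.default_rng(5)
    n_layers, dx = 10_000, 1.0
    v = rng.lognormal(mean=0.0, sigma=0.8, size=n_layers)   # Eulerian layer velocities

    travel_time = np.sum(dx / v)                  # time to cross all layers
    v_effective = n_layers * dx / travel_time     # Lagrangian mean velocity
    v_harmonic = 1.0 / np.mean(1.0 / v)
    print(f"effective = {v_effective:.4f}, harmonic mean = {v_harmonic:.4f}, "
          f"arithmetic mean = {v.mean():.4f}")
    ```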

  2. DG-IMEX Stochastic Galerkin Schemes for Linear Transport Equation with Random Inputs and Diffusive Scalings

    DOE PAGES

    Chen, Zheng; Liu, Liu; Mu, Lin

    2017-05-03

    In this paper, we consider the linear transport equation under diffusive scaling and with random inputs. The method is based on the generalized polynomial chaos approach in the stochastic Galerkin framework. Several theoretical aspects are addressed: uniform numerical stability with respect to the Knudsen number ϵ and a uniform-in-ϵ error estimate are given. For the temporal and spatial discretizations, we apply the implicit–explicit scheme under the micro–macro decomposition framework and the discontinuous Galerkin method, as proposed in Jang et al. (SIAM J Numer Anal 52:2048–2072, 2014) for the deterministic problem. Lastly, we provide a rigorous proof of the stochastic asymptotic-preserving (sAP) property. Extensive numerical experiments that validate the accuracy and the sAP property of the method are conducted.
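
    A toy Python sketch of a generalized polynomial chaos (gPC) stochastic Galerkin solve, not the paper's DG-IMEX scheme for the transport equation: the scalar problem u' = -z·u with random rate z = 1 + 0.5·ξ, ξ ~ Uniform(-1, 1), is expanded in Legendre polynomials, and the Galerkin projection yields a coupled ODE system. All parameters are illustrative.

    ```python
    import numpy as np
    from numpy.polynomial.legendre import leggauss, legval

    # Sketch: gPC stochastic Galerkin for u' = -z u, z = 1 + 0.5*xi, xi ~ U(-1, 1).
    K = 6                                              # gPC truncation order
    xq, wq = leggauss(32)                              # Gauss-Legendre nodes/weights on [-1, 1]

    def P(k, x):                                       # k-th Legendre polynomial
        return legval(x, np.eye(K + 1)[k])

    norms = np.array([wq @ P(k, xq) ** 2 for k in range(K + 1)])
    z = 1.0 + 0.5 * xq
    A = np.array([[wq @ (z * P(j, xq) * P(k, xq)) / norms[k]
                   for j in range(K + 1)] for k in range(K + 1)])

    u = np.zeros(K + 1)
    u[0] = 1.0                                         # deterministic initial condition u(0, xi) = 1
    dt, T = 1e-3, 1.0
    for _ in range(int(T / dt)):                       # forward Euler on the Galerkin system u' = -A u
        u = u - dt * A @ u

    exact_mean = 0.5 * wq @ np.exp(-(1.0 + 0.5 * xq) * T)
    print(f"gPC mean u_0(T) = {u[0]:.6f}, exact E[u(T)] = {exact_mean:.6f}")
    ```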

  3. Effect of random surface inhomogeneities on spectral properties of dielectric-disk microresonators: theory and modeling at millimeter wave range.

    PubMed

    Ganapolskii, E M; Eremenko, Z E; Tarasov, Yu V

    2009-04-01

    The influence of random axially homogeneous surface roughness on spectral properties of dielectric resonators of circular disk form is studied both theoretically and experimentally. To solve the equations governing the dynamics of electromagnetic fields, we apply the method of eigenmode separation, previously developed for inhomogeneous systems subject to an arbitrary external static potential. We prove theoretically that it is the gradient mechanism of wave-surface scattering that is chiefly responsible for nondissipative loss in the resonator. The influence of side-boundary inhomogeneities on the resonator spectrum is shown to be described in terms of effective renormalization of mode wave numbers jointly with azimuth indices in the characteristic equation. To study experimentally the effect of inhomogeneities on the resonator spectrum, the method of modeling in the millimeter wave range is applied. As a model object, we use a dielectric disk resonator (DDR) fitted with external inhomogeneities randomly arranged at its side boundary. Experimental results show good agreement with theoretical predictions as regards the predominance of the gradient scattering mechanism. It is shown theoretically and confirmed in the experiment that TM oscillations in the DDR are less affected by surface inhomogeneities than TE oscillations with the same azimuth indices. The DDR model chosen for our study, as well as the characteristic equations obtained thereupon, enables one to calculate both the eigenfrequencies and the Q factors of resonance spectral lines to fairly good accuracy. The results of the calculations agree well with the experimental data.

  4. NIMROD: a program for inference via a normal approximation of the posterior in models with random effects based on ordinary differential equations.

    PubMed

    Prague, Mélanie; Commenges, Daniel; Guedj, Jérémie; Drylewicz, Julia; Thiébaut, Rodolphe

    2013-08-01

    Models based on ordinary differential equations (ODE) are widespread tools for describing dynamical systems. In biomedical sciences, data from each subject can be sparse, making it difficult to precisely estimate individual parameters by standard non-linear regression, but information can often be gained from between-subjects variability. This makes the use of mixed-effects models to estimate population parameters natural. Although the maximum likelihood approach is a valuable option, identifiability issues favour Bayesian approaches, which can incorporate prior knowledge in a flexible way. However, the combination of difficulties coming from the ODE system and from the presence of random effects raises a major numerical challenge. Computations can be simplified by making a normal approximation of the posterior to find the maximum of the posterior distribution (MAP). Here we present the NIMROD program (normal approximation inference in models with random effects based on ordinary differential equations) devoted to MAP estimation in ODE models. We describe the specific implemented features, such as convergence criteria and an approximation of the leave-one-out cross-validation to assess the model quality of fit. First, we evaluate the properties of this algorithm in pharmacokinetic models and compare it with the FOCE and MCMC algorithms in simulations. Then, we illustrate the use of NIMROD on Amprenavir pharmacokinetics data from the PUZZLE clinical trial in HIV infected patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Effects of correcting missing daily feed intake values on the genetic parameters and estimated breeding values for feeding traits in pigs.

    PubMed

    Ito, Tetsuya; Fukawa, Kazuo; Kamikawa, Mai; Nikaidou, Satoshi; Taniguchi, Masaaki; Arakawa, Aisaku; Tanaka, Genki; Mikawa, Satoshi; Furukawa, Tsutomu; Hirose, Kensuke

    2018-01-01

    Daily feed intake (DFI) is an important consideration for improving feed efficiency, but measurements using electronic feeder systems contain many missing and incorrect values. Therefore, we evaluated three methods for correcting missing DFI data (quadratic, orthogonal polynomial, and locally weighted (Loess) regression equations) and assessed the effects of these missing values on the genetic parameters and the estimated breeding values (EBV) for feeding traits. DFI records were obtained from 1622 Duroc pigs, comprising 902 individuals without missing DFI and 720 individuals with missing DFI. Among the three equations, the Loess equation was the most suitable for correcting missing DFI values in datasets with 5-50% of values randomly deleted. Neither the variance components nor the heritability of the average DFI (ADFI) changed with the proportion of missing DFI or the Loess correction. In terms of rank correlation and information criteria, Loess correction improved the accuracy of EBV for ADFI compared to randomly deleted cases. These findings indicate that the Loess equation is useful for correcting missing DFI values for individual pigs and that the correction of missing DFI values could be effective for the estimation of breeding values and genetic improvement using EBV for feeding traits. © 2017 The Authors. Animal Science Journal published by John Wiley & Sons Australia, Ltd on behalf of Japanese Society of Animal Science.
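
    A hedged Python sketch of locally weighted (Loess) correction of missing daily feed intake values, in the spirit of the approach described above but with synthetic data: the smoother is fitted to the observed days and missing days are filled from the fitted curve. The series, gap fraction, and smoothing span are illustrative; statsmodels' lowess is used as the smoother.

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    # Sketch: impute missing daily feed intake (DFI) values with a Loess fit.
    rng = np.random.default_rng(6)
    days = np.arange(1, 91)                                    # 90-day test period
    dfi = 1.5 + 0.015 * days + rng.normal(0, 0.12, days.size)  # kg/day, synthetic
    missing = rng.choice(days.size, size=18, replace=False)    # ~20% of days missing
    observed = np.setdiff1d(np.arange(days.size), missing)

    smooth = lowess(dfi[observed], days[observed], frac=0.3)   # sorted (day, fitted) pairs
    dfi_filled = dfi.copy()
    dfi_filled[missing] = np.interp(days[missing], smooth[:, 0], smooth[:, 1])

    print("mean absolute imputation error (kg/day):",
          np.round(np.mean(np.abs(dfi_filled[missing] - dfi[missing])), 4))
    ```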

  6. Filling of a Poisson trap by a population of random intermittent searchers.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2012-03-01

    We extend the continuum theory of random intermittent search processes to the case of N independent searchers looking to deliver cargo to a single hidden target located somewhere on a semi-infinite track. Each searcher randomly switches between a stationary state and either a leftward or rightward constant velocity state. We assume that all of the particles start at one end of the track and realize sample trajectories independently generated from the same underlying stochastic process. The hidden target is treated as a partially absorbing trap in which a particle can only detect the target and deliver its cargo if it is stationary and within range of the target; the particle is removed from the system after delivering its cargo. As a further generalization of previous models, we assume that up to n successive particles can find the target and deliver its cargo. Assuming that the rate of target detection scales as 1/N, we show that there exists a well-defined mean-field limit N→∞, in which the stochastic model reduces to a deterministic system of linear reaction-hyperbolic equations for the concentrations of particles in each of the internal states. These equations decouple from the stochastic process associated with filling the target with cargo. The latter can be modeled as a Poisson process in which the time-dependent rate of filling λ(t) depends on the concentration of stationary particles within the target domain. Hence, we refer to the target as a Poisson trap. We analyze the efficiency of filling the Poisson trap with n particles in terms of the waiting time density f_n(t). The latter is determined by the integrated Poisson rate μ(t) = ∫₀ᵗ λ(s) ds, which in turn depends on the solution to the reaction-hyperbolic equations. We obtain an approximate solution for the particle concentrations by reducing the system of reaction-hyperbolic equations to a scalar advection-diffusion equation using a quasisteady-state analysis. We compare our analytical results for the mean-field model with Monte Carlo simulations for finite N. We thus determine how the mean first passage time (MFPT) for filling the target depends on N and n.
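
    A worked Python sketch of the waiting-time density quoted above for an inhomogeneous Poisson process: the n-th event has density f_n(t) = λ(t) μ(t)^(n-1) e^(-μ(t)) / (n-1)!, with μ(t) = ∫₀ᵗ λ(s) ds. The saturating rate λ(t) used here is an illustrative stand-in, not the rate obtained from the paper's reaction-hyperbolic equations.

    ```python
    import numpy as np
    from scipy.special import factorial

    # Sketch: density of the n-th filling event of a Poisson trap with rate lambda(t).
    t = np.linspace(0.0, 50.0, 2001)
    lam = 0.4 * (1.0 - np.exp(-t / 5.0))          # illustrative rate that builds up in time
    mu = np.concatenate(([0.0], np.cumsum(0.5 * (lam[1:] + lam[:-1]) * np.diff(t))))

    for n in (1, 3, 5):
        f_n = lam * mu ** (n - 1) * np.exp(-mu) / factorial(n - 1)
        mfpt = np.trapz(t * f_n, t) / np.trapz(f_n, t)   # mean first passage time to n deliveries
        print(f"n={n}: mode of f_n at t ~ {t[np.argmax(f_n)]:.1f}, MFPT ~ {mfpt:.1f}")
    ```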

  7. Comparative Clinical Study of Conventional Dental Implants and Mini Dental Implants for Mandibular Overdentures: A Randomized Clinical Trial.

    PubMed

    Aunmeungtong, Weerapan; Kumchai, Thongnard; Strietzel, Frank P; Reichart, Peter A; Khongkhunthian, Pathawee

    2017-04-01

    Dental implant-retained overdentures have been chosen as the treatment of choice for complete mandibular removable dentures. Dental implants, such as mini dental implants, and components for retaining overdentures, are commercially available. However, comparative clinical studies comparing mini dental implants and conventional dental implants using different attachments for implant-retained overdentures have not been well documented. To compare the clinical outcomes of using two mini dental implants with Equator® attachments, four mini dental implants with Equator attachments, or two conventional dental implants with ball attachments, by means of a randomized clinical trial. Sixty patients received implant-retained mandibular overdentures in the interforaminal region. The patients were divided into three groups. In Groups 1 and 2, two and four mini dental implants, respectively, were placed and immediately loaded by overdentures, using Equator® attachments. In Group 3, conventional implants were placed. After osseointegration, the implants were loaded by overdentures, using ball attachments. The study distribution was randomized and double-blinded. Outcome measures included changes in radiological peri-implant bone level from surgery to 12 months postinsertion, prosthodontic complications and patient satisfaction. The cumulative survival rate in the three clinical groups after one year was 100%. There was no significant difference (p < 0.05) in clinical results regarding the number (two or four) of mini dental implants with Equator attachments. However, there was a significant difference in marginal bone loss and patient satisfaction between those receiving mini dental implants with Equator attachments and conventional dental implants with ball attachments. The marginal bone resorption in Group 3 was significantly higher than in Groups 1 and 2 (p < 0.05); there were no significant differences between Groups 1 and 2. There was no significant difference in patient satisfaction between Groups 1 and 2, but it was significantly higher than that in Group 3 (p < 0.05). Two and four mini dental implants can be immediately used successfully for retaining lower complete dentures, as shown after a 1-year follow up. © 2016 Wiley Periodicals, Inc.

  8. [New International Classification of Chronic Pancreatitis (M-ANNHEIM multifactor classification system, 2007): principles, merits, and demerits].

    PubMed

    Tsimmerman, Ia S

    2008-01-01

    The new International Classification of Chronic Pancreatitis (designated as M-ANNHEIM) proposed by a group of German specialists in late 2007 is reviewed. All its sections are subjected to analysis (risk group categories, clinical stages and phases, variants of clinical course, diagnostic criteria for "established" and "suspected" pancreatitis, instrumental methods and functional tests used in the diagnosis, evaluation of the severity of the disease using a scoring system, stages of elimination of pain syndrome). The new classification is compared with the earlier classification proposed by the author. Its merits and demerits are discussed.

  9. [Typology and systematization of residual mental disorders in alcohol dependence].

    PubMed

    Klimenko, T V; Agafonova, S S

    2007-01-01

    The study of 85 patients with alcohol dependence referred for forensic psychiatric evaluation at the Serbsky research center of social and forensic psychiatry revealed polymorphic psychiatric and behavioral disorders (ICD-10 diagnosis F10.7, residual and late-onset psychotic disorders) after resolution of intoxication, withdrawal and post-withdrawal disorders. Taking into account the multifactor etiology of the psychiatric disorders observed after the direct effect of alcohol has ended, the possibility of including other ICD-10 items to extend their diagnosis and thus provide more accurate clinical verification of these states is discussed.

  10. Crop status evaluations and yield predictions

    NASA Technical Reports Server (NTRS)

    Haun, J. R.

    1976-01-01

    One phase of the large area crop inventory project is presented. Wheat yield models based on the input of environmental variables potentially obtainable through the use of space remote sensing were developed and demonstrated. By the use of a unique method for visually qualifying daily plant development, together with subsequent multifactor computer analyses, it was possible to develop practical models for predicting crop development and yield. Development of the wheat yield prediction models was based on the finding that morphological changes in plants can be detected and quantified on a daily basis, and that this change during a portion of the season is proportional to yield.

  11. Characteristics of group networks in the KOSPI and the KOSDAQ

    NASA Astrophysics Data System (ADS)

    Kim, Kyungsik; Ko, Jeung-Su; Yi, Myunggi

    2012-02-01

    We investigate the main feature of group networks in the KOSPI and KOSDAQ of Korean financial markets and analyze daily cross-correlations between price fluctuations for the 5-year time period from 2006 to 2010. We discuss the stabilities by undressing the market-wide effect using the Markowitz multi-factor model and the network-based approach. In particular we ascertain the explicit list of significant firms in the few largest eigenvectors from the undressed correlation matrix. Finally, we show the structure of group correlation by applying a network-based approach. In addition, the relation between market capitalizations and businesses is examined.
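
    A generic Python sketch of "undressing" the market-wide effect from a stock correlation matrix, as mentioned above, but on synthetic returns rather than KOSPI/KOSDAQ data: returns are generated from a common market factor plus group factors, and removing the largest eigenmode of the correlation matrix exposes the group structure. Factor loadings, group sizes, and noise levels are illustrative.

    ```python
    import numpy as np

    # Sketch: remove the market mode (largest eigenmode) from a correlation matrix.
    rng = np.random.default_rng(7)
    n_days, group_size, n_groups = 1250, 20, 3
    market = rng.normal(size=n_days)
    groups = rng.normal(size=(n_groups, n_days))
    returns = np.vstack([
        0.6 * market + 0.4 * groups[g] + 0.7 * rng.normal(size=n_days)
        for g in range(n_groups) for _ in range(group_size)
    ])

    C = np.corrcoef(returns)
    vals, vecs = np.linalg.eigh(C)                            # ascending eigenvalues
    C_undressed = C - vals[-1] * np.outer(vecs[:, -1], vecs[:, -1])

    within = C_undressed[:group_size, :group_size][np.triu_indices(group_size, 1)].mean()
    across = C_undressed[:group_size, group_size:2 * group_size].mean()
    print(f"largest eigenvalue = {vals[-1]:.1f} of {n_groups * group_size} stocks")
    print(f"undressed correlation: within group {within:.3f}, across groups {across:.3f}")
    ```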

  12. Logistic regression trees for initial selection of interesting loci in case-control studies

    PubMed Central

    Nickolov, Radoslav Z; Milanov, Valentin B

    2007-01-01

    Modern genetic epidemiology faces the challenge of dealing with hundreds of thousands of genetic markers. The selection of a small initial subset of interesting markers for further investigation can greatly facilitate genetic studies. In this contribution we suggest the use of a logistic regression tree algorithm known as logistic tree with unbiased selection. Using the simulated data provided for Genetic Analysis Workshop 15, we show how this algorithm, with incorporation of multifactor dimensionality reduction method, can reduce an initial large pool of markers to a small set that includes the interesting markers with high probability. PMID:18466557

  13. Building and verifying a severity prediction model of acute pancreatitis (AP) based on BISAP, MEWS and routine test indexes.

    PubMed

    Ye, Jiang-Feng; Zhao, Yu-Xin; Ju, Jian; Wang, Wei

    2017-10-01

    To discuss the value of the Bedside Index for Severity in Acute Pancreatitis (BISAP), Modified Early Warning Score (MEWS), serum Ca2+, and red cell distribution width (RDW) for predicting the severity grade of acute pancreatitis, and to develop and verify a more accurate scoring system to predict the severity of AP. In 302 patients with AP, we calculated BISAP and MEWS scores and conducted regression analyses on the relationships of BISAP scoring, RDW, MEWS, and serum Ca2+ with the severity of AP using single-factor logistic regression. The variables with statistical significance in the single-factor logistic regression were used in a multi-factor logistic regression model; forward stepwise regression was used to screen variables and build a multi-factor prediction model. A receiver operating characteristic curve (ROC curve) was constructed, and the significance of the multi- and single-factor prediction models in predicting the severity of AP was evaluated using the area under the ROC curve (AUC). The internal validity of the model was verified through bootstrapping. Among 302 patients with AP, 209 had mild acute pancreatitis (MAP) and 93 had severe acute pancreatitis (SAP). According to the single-factor logistic regression analysis, we found that BISAP, MEWS and serum Ca2+ are prediction indexes of the severity of AP (P-value<0.001), whereas RDW is not a prediction index of AP severity (P-value>0.05). The multi-factor logistic regression analysis showed that BISAP and serum Ca2+ are independent prediction indexes of AP severity (P-value<0.001), and MEWS is not an independent prediction index of AP severity (P-value>0.05); BISAP is negatively related to serum Ca2+ (r=-0.330, P-value<0.001). The constructed model is as follows: ln(p/(1-p)) = 7.306 + 1.151*BISAP - 4.516*serum Ca2+, where p is the predicted probability of SAP. The predictive ability for SAP follows the order: combined BISAP and serum Ca2+ prediction model > serum Ca2+ > BISAP. The difference in predictive ability between BISAP and serum Ca2+ is not statistically significant (P-value>0.05); however, the difference between the newly built prediction model and either BISAP or serum Ca2+ individually is statistically significant (P-value<0.01). Verification of the internal validity of the models by bootstrapping was favorable. BISAP and serum Ca2+ have high predictive value for the severity of AP. However, the model built by combining BISAP and serum Ca2+ is remarkably superior to BISAP and serum Ca2+ individually. Furthermore, this model is simple, practical and appropriate for clinical use. Copyright © 2016. Published by Elsevier Masson SAS.
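
    A small worked example in Python using the prediction equation quoted in the abstract, ln(p/(1-p)) = 7.306 + 1.151·BISAP - 4.516·(serum Ca2+), where p is the predicted probability of severe acute pancreatitis. The two example patients are hypothetical, and serum Ca2+ is assumed to be in mmol/L (the abstract does not state the units).

    ```python
    import numpy as np

    # Worked example with the published coefficients; patient values are hypothetical.
    def sap_probability(bisap, calcium_mmol_l):
        logit = 7.306 + 1.151 * bisap - 4.516 * calcium_mmol_l
        return 1.0 / (1.0 + np.exp(-logit))

    for bisap, ca in [(1, 2.45), (3, 2.00)]:
        print(f"BISAP={bisap}, Ca2+={ca:.2f} mmol/L -> P(SAP) = {sap_probability(bisap, ca):.2f}")
    ```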

  14. Optimal partitioning of random programs across two processors

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.

    1986-01-01

    The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.
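
    A quick Python check of the approximation the analysis is said to rest on: equating the expected maximum of independent random variables with their maximum expectation. For illustrative unit-mean exponential module times, the gap between E[max] and max E grows with the number of variables, which is why the approximation-free two-processor proof matters.

    ```python
    import numpy as np

    # Sketch: compare E[max X_i] with max E[X_i] for k independent exponential times.
    rng = np.random.default_rng(8)
    for k in (2, 4, 8, 16):
        samples = rng.exponential(scale=1.0, size=(100_000, k))
        e_max = samples.max(axis=1).mean()     # expected maximum (Monte Carlo)
        print(f"k={k:2d}: E[max] = {e_max:.3f}  vs  max E = 1.000")
    ```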

  15. [Realization of design regarding experimental research in the clinical real-world research].

    PubMed

    He, Q; Shi, J P

    2018-04-10

    Real-world study (RWS), which further verifies and supplements explanatory randomized controlled trials by evaluating the effectiveness of interventions in real clinical environments, has increasingly become a focus in the field of research on medical and health care services. However, some mistakenly equate real-world study with observational research and argue that intervention and randomization cannot be carried out in a real-world study. In fact, both observational and experimental designs are basic designs in real-world study; the latter usually refers to pragmatic randomized controlled trials and registry-based randomized controlled trials. Other nonrandomized controlled designs and adaptive designs can also be adopted in RWS.

  16. Optoenergy storage and random walks assisted broadband amplification in Er3+-doped (Pb,La)(Zr,Ti)O3 disordered ceramics.

    PubMed

    Xu, Long; Zhao, Hua; Xu, Caixia; Zhang, Siqi; Zou, Yingyin K; Zhang, Jingwen

    2014-02-01

    Broadband optical amplification was observed and investigated in Er3+-doped electrostrictive ceramics of lanthanum-modified lead zirconate titanate under a corona atmosphere. Ceramic structure changes caused by UV light and the electric field, together with random walks originating from the diffusive process in intrinsically disordered materials, may all contribute to the optical amplification and the associated energy storage. A discussion based on optical energy storage and diffusion equations is given to explain the findings. The experiments performed made it possible to study random walks and optical amplification in transparent ceramic materials.

  17. Review of Recent Methodological Developments in Group-Randomized Trials: Part 2-Analysis.

    PubMed

    Turner, Elizabeth L; Prague, Melanie; Gallis, John A; Li, Fan; Murray, David M

    2017-07-01

    In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We have updated that review with developments in analysis of the past 13 years, with a companion article to focus on developments in design. We discuss developments in the topics of the earlier review (e.g., methods for parallel-arm GRTs, individually randomized group-treatment trials, and missing data) and in new topics, including methods to account for multiple-level clustering and alternative estimation methods (e.g., augmented generalized estimating equations, targeted maximum likelihood, and quadratic inference functions). In addition, we describe developments in analysis of alternative group designs (including stepped-wedge GRTs, network-randomized trials, and pseudocluster randomized trials), which require clustering to be accounted for in their design and analysis.

  18. Relativistic diffusive motion in random electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Haba, Z.

    2011-08-01

    We show that the relativistic dynamics in a Gaussian random electromagnetic field can be approximated by the relativistic diffusion of Schay and Dudley. Lorentz invariant dynamics in the proper time leads to the diffusion in the proper time. The dynamics in the laboratory time gives the diffusive transport equation corresponding to the Jüttner equilibrium at the inverse temperature β⁻¹ = mc². The diffusion constant is expressed by the field strength correlation function (Kubo's formula).

  19. Analytical and Experimental Random Vibration of Nonlinear Aeroelastic Structures.

    DTIC Science & Technology

    1987-01-28

    ... first-order differential equations. In view of the system complexity, an attempt is made to close the infinite hierarchy by using a Gaussian scheme. This sc... year of this project. When the first normal mode is externally excited by a band-limited random excitation, the system mean square response is found... governed mainly by the internal detuning parameter and the system damping ratios. The results are completely different when the second normal mode is...

  20. Computational Sciences.

    DTIC Science & Technology

    1987-11-01

    [UNCLASSIFIED report documentation page; OCR fragments, no abstract recovered.] Recoverable citation: N. Medhin, M. Sambandham, and C. K. Zoltani, "Numerical Solution to a System of Random Volterra Integral Equations I: Successive Approximation Method," submitted; University of Alabama, Birmingham, AL.
