Sample records for hypothesis testing scheme

  1. On resilience studies of system detection and recovery techniques against stealthy insider attacks

    NASA Astrophysics Data System (ADS)

    Wei, Sixiao; Zhang, Hanlin; Chen, Genshe; Shen, Dan; Yu, Wei; Pham, Khanh D.; Blasch, Erik P.; Cruz, Jose B.

    2016-05-01

    With the explosive growth of network technologies, insider attacks have become a major concern to business operations that largely rely on computer networks. To better detect insider attacks that marginally manipulate network traffic over time, and to recover the system from attacks, in this paper we implement a temporal-based detection scheme using the sequential hypothesis testing technique. Two hypothetical states are considered: the null hypothesis that the collected information is from benign historical traffic and the alternative hypothesis that the network is under attack. The objective of such a detection scheme is to recognize the change within the shortest time by comparing the two defined hypotheses. In addition, once the attack is detected, a server migration-based system recovery scheme can be triggered to recover the system to the state prior to the attack. To understand mitigation of insider attacks, a multi-functional web display of the detection analysis was developed for real-time analytics. Experiments using real-world traffic traces evaluate the effectiveness of the Detection System and Recovery (DeSyAR) scheme. The evaluation data validate that the detection scheme based on sequential hypothesis testing and the server migration-based system recovery scheme perform well in detecting insider attacks and recovering the system under attack.
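
    The sequential detection step described above can be illustrated with a small Wald-style sequential probability ratio test (SPRT). This is a minimal sketch, not the paper's DeSyAR implementation; the Gaussian traffic model, the mean shift, and the error rates alpha and beta are illustrative assumptions.

      import numpy as np

      def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
          # Wald SPRT between H0: feature ~ N(mu0, sigma^2) (benign history)
          # and H1: feature ~ N(mu1, sigma^2) (traffic under attack).
          upper = np.log((1 - beta) / alpha)   # cross this -> accept H1
          lower = np.log(beta / (1 - alpha))   # cross this -> accept H0
          llr = 0.0
          for t, x in enumerate(samples, start=1):
              # incremental log-likelihood ratio for a Gaussian mean shift
              llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2
              if llr >= upper:
                  return "attack", t
              if llr <= lower:
                  return "benign", t
          return "undecided", len(samples)

      # toy run: benign feature ~ N(100, 15^2); an attack shifts the mean to 110
      rng = np.random.default_rng(0)
      print(sprt(rng.normal(110, 15, size=500), mu0=100, mu1=110, sigma=15))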

  2. Practical scheme for optimal measurement in quantum interferometric devices

    NASA Astrophysics Data System (ADS)

    Takeoka, Masahiro; Ban, Masashi; Sasaki, Masahide

    2003-06-01

    We apply a Kennedy-type detection scheme, which was originally proposed for a binary communications system, to interferometric sensing devices. We show that the minimum detectable perturbation of the proposed system reaches the ultimate precision bound which is predicted by quantum Neyman-Pearson hypothesis testing. To provide concrete examples, we apply our interferometric scheme to phase shift detection by using coherent and squeezed probe fields.

  3. Unscaled Bayes factors for multiple hypothesis testing in microarray experiments.

    PubMed

    Bertolino, Francesco; Cabras, Stefano; Castellanos, Maria Eugenia; Racugno, Walter

    2015-12-01

    Multiple hypothesis testing collects a series of techniques usually based on p-values as a summary of the available evidence from many statistical tests. In hypothesis testing, under a Bayesian perspective, the evidence for a specified hypothesis against an alternative, conditionally on data, is given by the Bayes factor. In this study, we approach multiple hypothesis testing based on both Bayes factors and p-values, regarding multiple hypothesis testing as a multiple model selection problem. To obtain the Bayes factors we assume default priors that are typically improper. In this case, the Bayes factor is usually undetermined due to the ratio of prior pseudo-constants. We show that ignoring prior pseudo-constants leads to unscaled Bayes factors, which do not invalidate the inferential procedure in multiple hypothesis testing because they are used within a comparative scheme. In fact, using partial information from the p-values, we are able to approximate the sampling null distribution of the unscaled Bayes factor and use it within Efron's multiple testing procedure. The simulation study suggests that, under a normal sampling model and even with small sample sizes, our approach yields false positive and false negative proportions lower than those of other common multiple hypothesis testing approaches based only on p-values. The proposed procedure is illustrated in two simulation studies, and the advantages of its use are shown in the analysis of two microarray experiments. © The Author(s) 2011.

  4. Statistical considerations for agroforestry studies

    Treesearch

    James A. Baldwin

    1993-01-01

    Statistical topics related to agroforestry studies are discussed, including study objectives, populations of interest, sampling schemes, sample sizes, estimation vs. hypothesis testing, and P-values. In addition, a relatively new and much improved histogram display is described.

  5. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  6. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  7. Problem Solving in Biology: A Methodology

    ERIC Educational Resources Information Center

    Wisehart, Gary; Mandell, Mark

    2008-01-01

    A methodology is described that teaches science process by combining informal logic and a heuristic for rating factual reliability. This system facilitates student hypothesis formation, testing, and evaluation of results. After problem solving with this scheme, students are asked to examine and evaluate arguments for the underlying principles of…

  8. A Scheme for Categorizing Traumatic Military Events

    ERIC Educational Resources Information Center

    Stein, Nathan R.; Mills, Mary Alice; Arditte, Kimberly; Mendoza, Crystal; Borah, Adam M.; Resick, Patricia A.; Litz, Brett T.

    2012-01-01

    A common assumption among clinicians and researchers is that war trauma primarily involves fear-based reactions to life-threatening situations. However, the authors believe that there are multiple types of trauma in the military context, each with unique perievent and postevent response patterns. To test this hypothesis, they reviewed structured…

  9. Evidence against the facilitation of the vergence command during saccade-vergence interactions.

    PubMed

    Hendel, Tal; Gur, Moshe

    2012-11-01

    Combined saccade-vergence movements result when gaze shifts are made to targets that differ both in direction and in depth from the momentary fixation point. Currently, there are two rivaling schemes to explain these eye movements. According to the first, such eye movements are due to a combination of a conjugate saccadic command and a symmetric vergence command; the two commands are not taken to be independent but instead are suggested to interact in a nonlinear manner, which leads to an intra-saccadic facilitation of the vergence command. According to the second scheme, the saccade generator is disconjugate, thus encoding vergence information in the saccadic commands themselves, and the remaining vergence requirement is provided by an asymmetric mechanism. Here, we test the scheme that suggests an intra-saccadic facilitation of the vergence command. We analyze this scheme and show that it has two fundamental properties. The first is that the vergence command is always symmetric, even during the intra-saccadic facilitation. The second is that the facilitated (and symmetric) vergence command sums linearly with the conjugate saccadic command at the final common pathway. Taking these properties together, this scheme predicts that the total magnitude of the saccadic component of combined saccade-vergence movements can be decomposed into a conjugate part and a symmetric part. When we tested this prediction in combined saccade-vergence movements of humans, we found that it was not confirmed. Thus, our results are incompatible with the facilitation of the vergence command hypothesis. Although these results do not directly verify the rivaling hypothesis, which suggests a disconjugate saccade generator, they do provide it with indirect support.

  10. Whiplash and the compensation hypothesis.

    PubMed

    Spearing, Natalie M; Connelly, Luke B

    2011-12-01

    Review article. To explain why the evidence that compensation-related factors lead to worse health outcomes is not compelling, either in general or in the specific case of whiplash. There is a common view that compensation-related factors lead to worse health outcomes ("the compensation hypothesis"), despite the presence of important and unresolved sources of bias. The empirical evidence on this question has ramifications for the design of compensation schemes. Using studies on whiplash, this article outlines the methodological problems that impede attempts to confirm or refute the compensation hypothesis. Compensation studies are prone to measurement bias, reverse causation bias, and selection bias. Errors in measurement are largely due to the latent nature of whiplash injuries and health itself, a lack of clarity over the unit of measurement (specific factors, or "compensation"), and a lack of appreciation for the heterogeneous qualities of compensation-related factors and schemes. There has been a failure to acknowledge and empirically address reverse causation bias, or the likelihood that poor health influences the decision to pursue compensation: it is unclear if compensation is a cause or a consequence of poor health, or both. Finally, unresolved selection bias (and hence, confounding) is evident in longitudinal studies and natural experiments. In both cases, between-group differences have not been addressed convincingly. The nature of the relationship between compensation-related factors and health is unclear. Current approaches to testing the compensation hypothesis are prone to several important sources of bias, which compromise the validity of their results. Methods that explicitly test the hypothesis and establish whether or not a causal relationship exists between compensation factors and prolonged whiplash symptoms are needed in future studies.

  11. Deciphering the crowd: modeling and identification of pedestrian group motion.

    PubMed

    Yücel, Zeynep; Zanlungo, Francesco; Ikeda, Tetsushi; Miyashita, Takahiro; Hagita, Norihiro

    2013-01-14

    Associating attributes to pedestrians in a crowd is relevant for various areas like surveillance, customer profiling and service providing. The attributes of interest greatly depend on the application domain and might involve such social relations as friends or family as well as the hierarchy of the group including the leader or subordinates. Nevertheless, the complex social setting inherently complicates this task. We attack this problem by exploiting the small group structures in the crowd. The relations among individuals and their peers within a social group are reliable indicators of social attributes. To that end, this paper identifies social groups based on explicit motion models integrated through a hypothesis testing scheme. We develop two models relating positional and directional relations. A pair of pedestrians is identified as belonging to the same group or not by utilizing the two models in parallel, which defines a compound hypothesis testing scheme. By testing the proposed approach on three datasets with different environmental properties and group characteristics, it is demonstrated that we achieve an identification accuracy of 87% to 99%. The contribution of this study lies in its definition of positional and directional relation models, its description of compound evaluations, and the resolution of ambiguities with our proposed uncertainty measure based on the local and global indicators of group relation.

  12. Deciphering the Crowd: Modeling and Identification of Pedestrian Group Motion

    PubMed Central

    Yücel, Zeynep; Zanlungo, Francesco; Ikeda, Tetsushi; Miyashita, Takahiro; Hagita, Norihiro

    2013-01-01

    Associating attributes to pedestrians in a crowd is relevant for various areas like surveillance, customer profiling and service providing. The attributes of interest greatly depend on the application domain and might involve such social relations as friends or family as well as the hierarchy of the group including the leader or subordinates. Nevertheless, the complex social setting inherently complicates this task. We attack this problem by exploiting the small group structures in the crowd. The relations among individuals and their peers within a social group are reliable indicators of social attributes. To that end, this paper identifies social groups based on explicit motion models integrated through a hypothesis testing scheme. We develop two models relating positional and directional relations. A pair of pedestrians is identified as belonging to the same group or not by utilizing the two models in parallel, which defines a compound hypothesis testing scheme. By testing the proposed approach on three datasets with different environmental properties and group characteristics, it is demonstrated that we achieve an identification accuracy of 87% to 99%. The contribution of this study lies in its definition of positional and directional relation models, its description of compound evaluations, and the resolution of ambiguities with our proposed uncertainty measure based on the local and global indicators of group relation. PMID:23344382
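
    A compound pairwise test of the kind described above can be sketched as two simple relation models evaluated in parallel, with a pair labelled as one group only when both accept. The positional and directional thresholds below (d_max, theta_max) are illustrative placeholders, not the calibrated models of the paper.

      import numpy as np

      def same_group(traj_a, traj_b, d_max=1.5, theta_max=np.radians(20)):
          # traj_a, traj_b: (T, 2) arrays of planar positions for two pedestrians.
          # Positional relation: small, stable inter-personal distance.
          dist = np.linalg.norm(traj_a - traj_b, axis=1)
          # Directional relation: similar headings along the trajectories.
          va, vb = np.diff(traj_a, axis=0), np.diff(traj_b, axis=0)
          cosang = np.sum(va * vb, axis=1) / (
              np.linalg.norm(va, axis=1) * np.linalg.norm(vb, axis=1) + 1e-9)
          ang = np.arccos(np.clip(cosang, -1.0, 1.0))
          positional_ok = np.median(dist) < d_max        # hypothesis 1 accepts
          directional_ok = np.median(ang) < theta_max    # hypothesis 2 accepts
          return positional_ok and directional_ok        # compound decision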

  13. Testing the social competition hypothesis of depression using a simple economic game.

    PubMed

    Kupferberg, Aleksandra; Hager, Oliver M; Fischbacher, Urs; Brändle, Laura S; Haynes, Melanie; Hasler, Gregor

    2016-03-01

    Price's social competition hypothesis interprets the depressive state as an unconscious, involuntary losing strategy, which enables individuals to yield and accept defeat in competitive situations. We investigated whether patients who suffer from major depressive disorder (MDD) would avoid competition more often than either patients suffering from borderline personality disorder (BPD) or healthy controls. In a simple paper-folding task, healthy participants and patients with MDD and BPD were matched with two opponents, one with an unknown diagnosis and one who shared their clinical diagnosis, and they had to choose either a competitive or a cooperative payment scheme for task completion. When playing against an unknown opponent, but not the opponent with the same diagnosis, the patients with depression chose the competitive payment scheme significantly less often than healthy controls and patients diagnosed with BPD. The competition avoidance against the unknown opponent is consistent with Price's social competition hypothesis. G.H. received research support, consulting fees and speaker honoraria from Lundbeck, AstraZeneca, Servier, Eli Lilly, Roche and Novartis. © The Royal College of Psychiatrists 2016. This is an open access article distributed under the terms of the Creative Commons Non-Commercial, No Derivatives (CC BY-NC-ND) licence.

  14. How Should School Districts Shape Teacher Salary Schedules? Linking School Performance to Pay Structure in Traditional Compensation Schemes

    ERIC Educational Resources Information Center

    Grissom, Jason A.; Strunk, Katharine O.

    2012-01-01

    This study examines the relative distribution of salary schedule returns to experience for beginning and veteran teachers. We argue that districts are likely to benefit from structuring salary schedules with greater experience returns early in the teaching career. To test this hypothesis, we match salary data to school-level student performance…

  15. Implementation of a model based fault detection and diagnosis for actuation faults of the Space Shuttle main engine

    NASA Technical Reports Server (NTRS)

    Duyar, A.; Guo, T.-H.; Merrill, W.; Musgrave, J.

    1992-01-01

    In a previous study, Guo, Merrill and Duyar (1990) reported a conceptual development of a fault detection and diagnosis system for actuation faults of the Space Shuttle main engine. This study, which is a continuation of the previous work, implements the developed fault detection and diagnosis scheme for the real-time actuation fault diagnosis of the Space Shuttle main engine. The scheme will be used as an integral part of an intelligent control system demonstration experiment at NASA Lewis. The diagnosis system utilizes a model-based method with real-time identification and hypothesis testing for actuation, sensor, and performance degradation faults.

  16. Examination of the gamma equilibrium point hypothesis when applied to single degree of freedom movements performed with different inertial loads.

    PubMed

    Bellomo, A; Inbar, G

    1997-01-01

    One of the theories of human motor control is the gamma Equilibrium Point Hypothesis. It is an attractive theory since it offers a simple control scheme in which the planned trajectory shifts monotonically from an initial to a final equilibrium state. The feasibility of this model was tested by reconstructing the virtual trajectory and the stiffness profiles for movements performed with different inertial loads and examining them. Three types of movements were tested: passive movements, targeted movements, and repetitive movements. Each of the movements was performed with five different inertial loads. Plausible virtual trajectories and stiffness profiles were reconstructed based on the gamma Equilibrium Point Hypothesis for the three different types of movements performed with different inertial loads. However, the simple control strategy supported by the model, where the planned trajectory shifts monotonically from an initial to a final equilibrium state, could not be supported for targeted movements performed with added inertial load. To test the feasibility of the model further, we must examine the probability that the human motor control system would choose a planned trajectory more complicated than the actual trajectory it controls.

  17. Angular spectral framework to test full corrections of paraxial solutions.

    PubMed

    Mahillo-Isla, R; González-Morales, M J

    2015-07-01

    Different correction methods for paraxial solutions have been used when such solutions extend out of the paraxial regime. Authors have chosen correction methods guided either by their experience or by some educated hypothesis pertinent to the particular problem they were tackling. This article provides a framework for classifying full-wave correction schemes, so that, for a given solution of the paraxial wave equation, the best of the available correction schemes can be selected. Some common correction methods are considered and evaluated under the proposed scope. Another notable contribution is the set of necessary conditions that two solutions of the Helmholtz equation must satisfy in order to admit a common solution of the parabolic wave equation as a paraxial approximation of both.

  18. An optical potential for the statically deformed actinide nuclei derived from a global spherical potential

    NASA Astrophysics Data System (ADS)

    Al-Rawashdeh, S. M.; Jaghoub, M. I.

    2018-04-01

    In this work we test the hypothesis that a properly deformed spherical optical potential, used within a channel-coupling scheme, provides a good description of the scattering data for neutron-induced reactions on the heavy, statically deformed actinides and other lighter deformed nuclei. To accomplish our goal, we have deformed the Koning-Delaroche spherical global potential and then used it in a channel-coupling scheme. The ground state is coupled to a sufficient number of inelastic rotational channels belonging to the ground-state band to ensure convergence. The predicted total cross sections and elastic and inelastic angular distributions are in good agreement with the experimental data. As a further test, we compare our results to those obtained by a global channel-coupled optical model whose parameters were obtained by fitting elastic and inelastic angular distributions in addition to total cross sections. Our results compare quite well with those obtained by the fitted, channel-coupled optical model. Below neutron incident energies of about 1 MeV, our results show that scattering into the rotational excited states of the ground-state band plays a significant role in the scattering process and must be explicitly accounted for using a channel-coupling scheme.

  19. Improving the space surveillance telescope's performance using multi-hypothesis testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chris Zingarelli, J.; Cain, Stephen; Pearce, Eric

    2014-05-01

    The Space Surveillance Telescope (SST) is a Defense Advanced Research Projects Agency program designed to detect objects in space like near Earth asteroids and space debris in the geosynchronous Earth orbit (GEO) belt. Binary hypothesis test (BHT) methods have historically been used to facilitate the detection of new objects in space. In this paper a multi-hypothesis detection strategy is introduced to improve the detection performance of SST. In this context, the multi-hypothesis testing (MHT) determines if an unresolvable point source is in either the center, a corner, or a side of a pixel, in contrast to BHT, which only tests whether an object is in the pixel or not. The images recorded by SST are undersampled such as to cause aliasing, which degrades the performance of traditional detection schemes. The equations for the MHT are derived in terms of signal-to-noise ratio (S/N), which is computed by subtracting the background light level around the pixel being tested and dividing by the standard deviation of the noise. A new method for determining the local noise statistics that rejects outliers is introduced in combination with the MHT. An experiment using observations of a known GEO satellite is used to demonstrate the improved detection performance of the new algorithm over algorithms previously reported in the literature. The results show a significant improvement in the probability of detection by as much as 50% over existing algorithms. In addition to detection, the S/N results prove to be linearly related to the least-squares estimates of point source irradiance, thus improving photometric accuracy.

  20. Skills for the literacy process.

    PubMed

    Côrrea, Kelli Cristina do Prado; Machado, Maria Aparecida Miranda de Paula; Hage, Simone Rocha de Vasconcellos

    2018-03-01

    The aim was to examine a set of competencies in children beginning the process of literacy and to find out whether there is a positive correlation with their level of writing. The study was conducted with 70 six-year-old students enrolled in the first year of Elementary School in municipal schools. The children were submitted to the Initial Reading and Writing Competence Assessment Battery (BACLE) and the Diagnostic Probing Protocol for classification of their level of writing. Descriptive statistical analysis and the Spearman coefficient were used for correlation between instruments. The students presented satisfactory performance in the tasks of the BACLE. Regarding the writing hypothesis, most children presented a syllabic level with sound value. A significant positive correlation was observed between body scheme/time-space orientation and language skills. The group of schoolchildren performed satisfactorily on tests that measure pre-reading and writing skills. The areas of body scheme/time-space orientation and language presented significant correlation with the level of writing hypothesis, indicating that children with higher scores in these areas present better levels of writing. Identification of the competencies necessary for learning to read and write can provide teachers and educational audiology professionals with a basis for evaluation and early intervention in specific abilities for the development of reading and writing.

  1. Dual adaptive dynamic control of mobile robots using neural networks.

    PubMed

    Bugeja, Marvin K; Fabri, Simon G; Camilleri, Liberato

    2009-02-01

    This paper proposes two novel dual adaptive neural control schemes for the dynamic control of nonholonomic mobile robots. The two schemes are developed in discrete time, and the robot's nonlinear dynamic functions are assumed to be unknown. Gaussian radial basis function and sigmoidal multilayer perceptron neural networks are used for function approximation. In each scheme, the unknown network parameters are estimated stochastically in real time, and no preliminary offline neural network training is used. In contrast to other adaptive techniques hitherto proposed in the literature on mobile robots, the dual control laws presented in this paper do not rely on the heuristic certainty equivalence property but account for the uncertainty in the estimates. This results in a major improvement in tracking performance, despite the plant uncertainty and unmodeled dynamics. Monte Carlo simulation and statistical hypothesis testing are used to illustrate the effectiveness of the two proposed stochastic controllers as applied to the trajectory-tracking problem of a differentially driven wheeled mobile robot.
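
    The Monte Carlo evaluation mentioned at the end of the abstract can be set up as a paired test over matched simulation runs. The error figures below are synthetic placeholders; the sketch only shows the statistical comparison, not the neural controllers themselves.

      import numpy as np
      from scipy.stats import ttest_rel

      rng = np.random.default_rng(2)
      err_dual = rng.normal(0.8, 0.2, size=200)   # per-run RMS tracking error, dual adaptive law
      err_ce = rng.normal(1.0, 0.2, size=200)     # per-run RMS tracking error, certainty-equivalence law
      t_stat, p_value = ttest_rel(err_dual, err_ce)   # paired t-test over matched runs
      print(t_stat, p_value)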

  2. Improving the Space Surveillance Telescope's Performance Using Multi-Hypothesis Testing

    NASA Astrophysics Data System (ADS)

    Zingarelli, J. Chris; Pearce, Eric; Lambour, Richard; Blake, Travis; Peterson, Curtis J. R.; Cain, Stephen

    2014-05-01

    The Space Surveillance Telescope (SST) is a Defense Advanced Research Projects Agency program designed to detect objects in space like near Earth asteroids and space debris in the geosynchronous Earth orbit (GEO) belt. Binary hypothesis test (BHT) methods have historically been used to facilitate the detection of new objects in space. In this paper a multi-hypothesis detection strategy is introduced to improve the detection performance of SST. In this context, the multi-hypothesis testing (MHT) determines if an unresolvable point source is in either the center, a corner, or a side of a pixel, in contrast to BHT, which only tests whether an object is in the pixel or not. The images recorded by SST are undersampled such as to cause aliasing, which degrades the performance of traditional detection schemes. The equations for the MHT are derived in terms of signal-to-noise ratio (S/N), which is computed by subtracting the background light level around the pixel being tested and dividing by the standard deviation of the noise. A new method for determining the local noise statistics that rejects outliers is introduced in combination with the MHT. An experiment using observations of a known GEO satellite is used to demonstrate the improved detection performance of the new algorithm over algorithms previously reported in the literature. The results show a significant improvement in the probability of detection by as much as 50% over existing algorithms. In addition to detection, the S/N results prove to be linearly related to the least-squares estimates of point source irradiance, thus improving photometric accuracy. The views expressed are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
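
    The S/N computation and the center/corner/side hypotheses described above can be caricatured as a matched filter evaluated against a small bank of sub-pixel PSF templates, with a robust local noise estimate. This is a toy sketch, not the SST pipeline; the template bank and the detection threshold are assumed inputs.

      import numpy as np

      def mht_detect(patch, psf_bank, threshold=5.0):
          # patch: small image cut-out centered on the pixel under test.
          # psf_bank: dict mapping a sub-pixel hypothesis ('center', 'corner',
          # 'side', ...) to an undersampled PSF template of the same shape.
          bkg = np.median(patch)                              # local background level
          noise = 1.4826 * np.median(np.abs(patch - bkg))     # robust sigma (rejects outliers)
          best_name, best_snr = None, -np.inf
          for name, psf in psf_bank.items():
              w = psf / np.sqrt(np.sum(psf**2))               # unit-energy matched filter
              snr = np.sum(w * (patch - bkg)) / noise         # S/N under this hypothesis
              if snr > best_snr:
                  best_name, best_snr = name, snr
          return (best_name, best_snr) if best_snr >= threshold else (None, best_snr)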

  3. Sensor Fault Detection and Diagnosis Simulation of a Helicopter Engine in an Intelligent Control Framework

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet

    1994-01-01

    This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real-time identification and hypothesis testing, which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique in which parameters that model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and the data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.

  4. Hypothesis testing in functional linear regression models with Neyman's truncation and wavelet thresholding for longitudinal data.

    PubMed

    Yang, Xiaowei; Nie, Kun

    2008-03-15

    Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
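
    The three-step procedure (transform, fit, test) lends itself to a compact sketch. The version below uses a Fourier transform, per-coefficient ordinary least squares for a two-group comparison, and Fan's adaptive Neyman statistic; it is a simplified illustration (real parts only, no wavelet branch, and the null distribution would in practice come from theory or permutation), not the authors' code.

      import numpy as np

      def adaptive_neyman(z):
          # Fan's adaptive Neyman statistic for standardized coefficients z
          # (approximately N(0, 1) under H0): max over m of sum_{j<=m}(z_j^2 - 1)/sqrt(2m).
          cum = np.cumsum(z**2 - 1.0)
          m = np.arange(1, len(z) + 1)
          return np.max(cum / np.sqrt(2.0 * m))

      def flrm_group_test(curves, group, n_coef=32):
          # curves: (subjects, time points) array of repeated measures; group: 0/1 indicator.
          F = np.real(np.fft.rfft(curves, axis=1)[:, :n_coef])    # step 1: transform
          X = np.column_stack([np.ones(len(group)), group])
          z = []
          for j in range(n_coef):                                 # step 2: fit per coefficient
              y = F[:, j]
              beta, *_ = np.linalg.lstsq(X, y, rcond=None)
              resid = y - X @ beta
              sigma2 = np.sum(resid**2) / (len(y) - X.shape[1])
              se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
              z.append(beta[1] / se)                              # standardized group effect
          return adaptive_neyman(np.array(z))                     # step 3: test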

  5. Motor Synergies and the Equilibrium-Point Hypothesis

    PubMed Central

    Latash, Mark L.

    2010-01-01

    The article offers a way to unite three recent developments in the field of motor control and coordination: (1) The notion of synergies is introduced based on the principle of motor abundance; (2) The uncontrolled manifold hypothesis is described as offering a computational framework to identify and quantify synergies; and (3) The equilibrium-point hypothesis is described for a single muscle, single joint, and multi-joint systems. Merging these concepts into a single coherent scheme requires focusing on control variables rather than performance variables. The principle of minimal final action is formulated as the guiding principle within the referent configuration hypothesis. Motor actions are associated with setting two types of variables by a controller, those that ultimately define average performance patterns and those that define associated synergies. Predictions of the suggested scheme are reviewed, such as the phenomenon of anticipatory synergy adjustments, quick actions without changes in synergies, atypical synergies, and changes in synergies with practice. A few models are briefly reviewed. PMID:20702893

  6. Motor synergies and the equilibrium-point hypothesis.

    PubMed

    Latash, Mark L

    2010-07-01

    The article offers a way to unite three recent developments in the field of motor control and coordination: (1) The notion of synergies is introduced based on the principle of motor abundance; (2) The uncontrolled manifold hypothesis is described as offering a computational framework to identify and quantify synergies; and (3) The equilibrium-point hypothesis is described for a single muscle, single joint, and multijoint systems. Merging these concepts into a single coherent scheme requires focusing on control variables rather than performance variables. The principle of minimal final action is formulated as the guiding principle within the referent configuration hypothesis. Motor actions are associated with setting two types of variables by a controller, those that ultimately define average performance patterns and those that define associated synergies. Predictions of the suggested scheme are reviewed, such as the phenomenon of anticipatory synergy adjustments, quick actions without changes in synergies, atypical synergies, and changes in synergies with practice. A few models are briefly reviewed.

  7. In Response: Biological arguments for selecting effect sizes in ecotoxicological testing—A governmental perspective

    USGS Publications Warehouse

    Mebane, Christopher A.

    2015-01-01

    Criticisms of the uses of the no-observed-effect concentration (NOEC) and the lowest-observed-effect concentration (LOEC) and more generally the entire null hypothesis statistical testing scheme are hardly new or unique to the field of ecotoxicology [1-4]. Among the criticisms of NOECs and LOECs is that statistically similar LOECs (in terms of p value) can represent drastically different levels of effect. For instance, my colleagues and I found that a battery of chronic toxicity tests with different species and endpoints yielded LOECs with minimum detectable differences ranging from 3% to 48% reductions from controls [5].

  8. EXPERIMENTAL STUDIES OF THE EVOLUTIONARY SIGNIFICANCE OF SEXUAL REPRODUCTION. V. A FIELD TEST OF THE SIB-COMPETITION LOTTERY HYPOTHESIS.

    PubMed

    Kelley, Steven E

    1989-08-01

    Sexually and asexually derived tillers of Anthoxanthum odoratum were planted directly in the field to test the hypothesis that competition among groups of sexual and asexual siblings favors the maintenance of sexual reproduction in populations. The results showed a substantial fitness advantage for sexual tillers. However, in contrast with the models, the advantage of sex did not increase with increasing numbers of colonists in the patch, there were multiple survivors among colonists, and an advantage was observed even for singly planted tillers. When a truncation-selection scheme was imposed ex post facto on the data, the relative performance of sexual tillers was similar to that predicted by the Bulmer (1980) model, suggesting that sib-competition models fail due to the violation of the assumption of truncation selection. The advantage of sex was not correlated with the presence of other species, total percentage cover, or species diversity, although sites where sex was favored were physically clustered. © 1989 The Society for the Study of Evolution.

  9. Design and analysis of multihypothesis motion-compensated prediction (MHMCP) codec for error-resilient visual communications

    NASA Astrophysics Data System (ADS)

    Kung, Wei-Ying; Kim, Chang-Su; Kuo, C.-C. Jay

    2004-10-01

    A multi-hypothesis motion-compensated prediction (MHMCP) scheme, which predicts a block from a weighted superposition of more than one reference block in the frame buffer, is proposed and analyzed for error-resilient visual communication in this research. By combining these reference blocks effectively, MHMCP can enhance the error resilience of compressed video as well as achieve a coding gain. In particular, we investigate the error propagation effect in the MHMCP coder and analyze the rate-distortion performance in terms of the hypothesis number and hypothesis coefficients. It is shown that MHMCP suppresses the short-term effect of error propagation more effectively than the intra refreshing scheme. Simulation results are given to confirm the analysis. Finally, several design principles for the MHMCP coder are derived based on the analytical and experimental results.
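
    In its simplest form, the multi-hypothesis prediction described above is just a weighted superposition of reference blocks; a minimal sketch follows, where the (normalized) weights stand in for the optimized hypothesis coefficients analyzed in the paper.

      import numpy as np

      def mhmcp_predict(ref_blocks, weights):
          # ref_blocks: array of shape (k, H, W) holding k motion-compensated
          # reference blocks from the frame buffer; weights: k hypothesis coefficients.
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()                         # normalize the hypothesis coefficients
          blocks = np.asarray(ref_blocks, dtype=float)
          return np.tensordot(w, blocks, axes=1)  # weighted superposition -> predicted block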

  10. Offending outcomes of a mental health youth diversion pilot scheme in England.

    PubMed

    Haines, Alina; Lane, Steven; McGuire, James; Perkins, Elizabeth; Whittington, Richard

    2015-04-01

    A youth justice diversion scheme designed to enhance health provision for young people with mental health and developmental problems as soon as they enter the youth justice system has been piloted in six areas of England. As part of a wider evaluation of the first youth justice diversion scheme outside the USA, our aim here was to examine re-offending. We sought to test the hypothesis that a specialised service for young people with mental health difficulties would be associated with reductions in re-offending. In addition, we examined factors associated with the re-offending that occurred. Two hundred and eight young offenders with access to the diversion scheme and 200 without were compared in four geographical area pairings to allow for socio-demographic contextual differences. Officially recorded re-offending was ascertained for 15-30 months after study entry. We also tested characteristics associated with re-offending among everyone entering the diversion scheme (n = 870). There was no statistically significant difference in re-offending rates between the diversion and comparison samples, but those with access to diversion had significantly longer periods of desistance from offending than those who did not. In multivariate analysis, the only significant characteristic associated with re-offending was a history of previous offending. Prevention of re-offending is only one of the potentially beneficial outcomes of diversion of young people who are vulnerable because of mental health problems, but it is an important one. The advantage of longer survival without re-offending suggests that future research should explore critical timings for these young people. The equivocal nature of the findings suggests that a randomised controlled trial would be justified. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Teaching the Conceptual Scheme "The Particle Nature of Matter" in the Elementary School.

    ERIC Educational Resources Information Center

    Pella, Milton O.; And Others

    Conclusions of an extensive project aimed to prepare lessons and associated materials related to teaching concepts included in the scheme "The Particle Nature of Matter" for grades two through six are presented. The hypothesis formulated for the project was that children in elementary schools can learn theoretical concepts related to the particle…

  12. Do outcomes differ between work and non-work-related injury in a universal injury compensation system? Findings from the New Zealand Prospective Outcomes of Injury Study

    PubMed Central

    2013-01-01

    Background Poorer recovery outcomes for workers injured in a work setting, as opposed to a non-work setting, are commonly attributed to differences in financial gain via entitlement to compensation by injury setting (i.e. workers' compensation schemes). To date, this attribution has not been tested in a situation where both work and non-work-related injuries have an equivalent entitlement to compensation. This study tests the hypothesis that there will be no differences in recovery outcomes for workers by injury setting (work and non-work) within a single universal entitlement injury compensation scheme. Methods Workforce-active participants from the Prospective Outcomes of Injury Study (POIS) cohort were followed up at 3 and 12 months following injury. Participants who were injured in the period June 2007-May 2009 were recruited from New Zealand's universal entitlement injury compensation scheme managed by the Accident Compensation Corporation (ACC). An analysis of ten vocational, disability, functional and psychological recovery outcomes was undertaken by injury setting. Modified Poisson regression analyses were undertaken to examine the relationship between injury setting and recovery outcomes. Results Of 2092 eligible participants, 741 (35%) had sustained an injury in a work setting. At 3 months, workers with work-related injuries had an elevated risk of work absence; however, this difference disappeared after controlling for confounding variables (adjusted RR 1.10, 95% CI 0.94-1.29). By 12 months, workers with work-related injuries had poorer recovery outcomes, with a higher risk of absence from work (aRR 1.37, 95% CI 1.10-1.70), mobility-related functional problems (aRR 1.35, 95% CI 1.14-1.60), disability (aRR 1.32, 95% CI 1.04-1.68) and impaired functioning related to anxiety/depression (aRR 1.21, 95% CI 1.00-1.46). Conclusion Our study, comparing recovery outcomes for workers by injury setting within a single universal entitlement injury compensation scheme, found mixed support for the hypothesis tested. After adjustment for possible covariates, recovery outcomes did not differ by injury setting at 3 months following injury; however, by 12 months vocational, disability and some functional outcomes were poorer for workers with work-related injuries. Given our findings, and other potential mechanisms for poorer outcomes for workers with work-related injuries, further research beyond differences in entitlement to compensation should be undertaken to inform future interventions. PMID:24148609
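
    The modified Poisson regression used above (a Poisson GLM for a binary outcome with robust standard errors, reported as relative risks) can be sketched with statsmodels. The data frame and column names (absent_12m, work_injury, age, sex) are assumed for illustration and are not the POIS variable names.

      import numpy as np
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      def modified_poisson_rr(df):
          # Poisson GLM for a binary outcome with robust (sandwich) errors,
          # so exp(coefficient) is interpretable as an adjusted relative risk.
          fit = smf.glm("absent_12m ~ work_injury + age + sex",
                        data=df, family=sm.families.Poisson()).fit(cov_type="HC0")
          rr = np.exp(fit.params["work_injury"])
          ci_low, ci_high = np.exp(fit.conf_int().loc["work_injury"])
          return rr, (ci_low, ci_high)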

  13. A General Relativistic Null Hypothesis Test with Event Horizon Telescope Observations of the Black Hole Shadow in Sgr A*

    NASA Astrophysics Data System (ADS)

    Psaltis, Dimitrios; Özel, Feryal; Chan, Chi-Kwan; Marrone, Daniel P.

    2015-12-01

    The half opening angle of a Kerr black hole shadow is always equal to (5 ± 0.2) GM/Dc², where M is the mass of the black hole and D is its distance from the Earth. Therefore, measuring the size of a shadow and verifying whether it is within this 4% range constitutes a null hypothesis test of general relativity. We show that the black hole in the center of the Milky Way, Sgr A*, is the optimal target for performing this test with upcoming observations using the Event Horizon Telescope (EHT). We use the results of optical/IR monitoring of stellar orbits to show that the mass-to-distance ratio for Sgr A* is already known to an accuracy of ∼4%. We investigate our prior knowledge of the properties of the scattering screen between Sgr A* and the Earth, the effects of which will need to be corrected for in order for the black hole shadow to appear sharp against the background emission. Finally, we explore an edge detection scheme for interferometric data and a pattern matching algorithm based on the Hough/Radon transform and demonstrate that the shadow of the black hole at 1.3 mm can be localized, in principle, to within ∼9%. All these results suggest that our prior knowledge of the properties of the black hole, of scattering broadening, and of the accretion flow can only limit this general relativistic null hypothesis test with EHT observations of Sgr A* to ≲10%.
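
    As a quick check of the quoted relation, the half opening angle 5 GM/(D c²) can be evaluated with commonly cited values for Sgr A* (M ≈ 4.3×10⁶ solar masses, D ≈ 8.3 kpc; illustrative inputs rather than the paper's exact numbers), giving roughly 25 microarcseconds, i.e. a shadow diameter of order 50 microarcseconds.

      G, c = 6.674e-11, 2.998e8                  # SI units
      M_sun, kpc = 1.989e30, 3.086e19            # kg, m
      M, D = 4.3e6 * M_sun, 8.3 * kpc            # assumed Sgr A* mass and distance
      half_angle = 5.0 * G * M / (D * c**2)      # radians
      print(half_angle * 206265.0 * 1e6)         # ~26 micro-arcseconds (half opening angle)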

  14. Framework for adaptive multiscale analysis of nonhomogeneous point processes.

    PubMed

    Helgason, Hannes; Bartroff, Jay; Abry, Patrice

    2011-01-01

    We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process' non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
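
    The generalized likelihood ratio statistic for a piecewise-constant rate template can be written down directly from the binned Poisson likelihood; the sketch below compares one candidate partition against a constant-rate null (the template family, dynamic-programming search and multiple-testing correction of the paper are omitted).

      import numpy as np

      def glr_piecewise_constant(counts, partition):
          # counts: event counts per time bin; partition: list of index arrays,
          # one per block of the candidate piecewise-constant rate template.
          counts = np.asarray(counts, dtype=float)
          lam0 = counts.mean()                                # H0: constant rate (MLE)
          ll0 = np.sum(counts * np.log(lam0) - lam0)          # log-likelihood under H0 (k! terms cancel)
          ll1 = 0.0
          for block in partition:
              c = counts[block]
              lam = max(c.mean(), 1e-12)                      # H1: per-block rate (MLE)
              ll1 += np.sum(c * np.log(lam) - lam)
          return 2.0 * (ll1 - ll0)                            # GLR statistic

      # toy run: the rate doubles in the second half of 40 bins
      rng = np.random.default_rng(1)
      x = np.concatenate([rng.poisson(5, 20), rng.poisson(10, 20)])
      print(glr_piecewise_constant(x, [np.arange(20), np.arange(20, 40)]))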

  15. A GENERAL RELATIVISTIC NULL HYPOTHESIS TEST WITH EVENT HORIZON TELESCOPE OBSERVATIONS OF THE BLACK HOLE SHADOW IN Sgr A*

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Psaltis, Dimitrios; Özel, Feryal; Chan, Chi-Kwan

    2015-12-01

    The half opening angle of a Kerr black hole shadow is always equal to (5 ± 0.2) GM/Dc², where M is the mass of the black hole and D is its distance from the Earth. Therefore, measuring the size of a shadow and verifying whether it is within this 4% range constitutes a null hypothesis test of general relativity. We show that the black hole in the center of the Milky Way, Sgr A*, is the optimal target for performing this test with upcoming observations using the Event Horizon Telescope (EHT). We use the results of optical/IR monitoring of stellar orbits to show that the mass-to-distance ratio for Sgr A* is already known to an accuracy of ∼4%. We investigate our prior knowledge of the properties of the scattering screen between Sgr A* and the Earth, the effects of which will need to be corrected for in order for the black hole shadow to appear sharp against the background emission. Finally, we explore an edge detection scheme for interferometric data and a pattern matching algorithm based on the Hough/Radon transform and demonstrate that the shadow of the black hole at 1.3 mm can be localized, in principle, to within ∼9%. All these results suggest that our prior knowledge of the properties of the black hole, of scattering broadening, and of the accretion flow can only limit this general relativistic null hypothesis test with EHT observations of Sgr A* to ≲10%.

  16. A non-oscillatory energy-splitting method for the computation of compressible multi-fluid flows

    NASA Astrophysics Data System (ADS)

    Lei, Xin; Li, Jiequan

    2018-04-01

    This paper proposes a new non-oscillatory energy-splitting conservative algorithm for computing multi-fluid flows in the Eulerian framework. In comparison with existing multi-fluid algorithms in the literature, it is shown that the mass fraction model with the isobaric hypothesis is a plausible choice for designing numerical methods for multi-fluid flows. We then construct a conservative Godunov-based scheme with a high-order accurate extension using the generalized Riemann problem solver, through a detailed analysis of the kinetic energy exchange when fluids are mixed under the hypothesis of isobaric equilibrium. Numerical experiments are carried out for the shock-interface interaction and shock-bubble interaction problems, which display the excellent performance of this type of scheme and demonstrate that nonphysical oscillations are suppressed around material interfaces substantially.

  17. Challenges in provider payment under the Ghana National Health Insurance Scheme: a case study of claims management in two districts.

    PubMed

    Sodzi-Tettey, S; Aikins, M; Awoonor-Williams, J K; Agyepong, I A

    2012-12-01

    In 2004, Ghana started implementing a National Health Insurance Scheme (NHIS) to remove cost as a barrier to quality healthcare. Providers were initially paid by fee-for-service. In May 2008, this changed to paying providers by a combination of Ghana Diagnostic Related Groupings (G-DRGs) for services and fee-for-service for medicines through the claims process. The study evaluated the claims management processes of two District MHIS in the Upper East Region of Ghana. A retrospective review of secondary claims data (2008) and a prospective observation of claims management (2009) were undertaken. Qualitative and quantitative approaches were used for primary data collection, using interview guides and checklists. The reimbursement rates and the value of rejected claims were calculated and compared for both districts using the z test. The null hypothesis was that no differences existed in the parameters measured. Claims processes in both districts were similar and predominantly manual. There were administrative capacity, technical, human resource and working environment challenges contributing to delays in claims submission by providers and in vetting and payment by schemes. Both schemes rejected less than 1% of all claims submitted. Significant differences were observed between the Total Reimbursement Rates (TRR) and the Total Timely Reimbursement Rates (TTRR) for both schemes. For TRR, 89% and 86% were recorded for the Kassena Nankana and Builsa Schemes respectively, while for TTRR, 45% and 28% were recorded respectively. Ghana's NHIS needs to reform its provider payment and claims submission and processing systems to ensure simpler and faster processes. Computerization and investment to improve administrative capacity for both purchasers and providers will be key in any reform.
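
    The district comparison above rests on a z test for two proportions; a minimal version is sketched below. The claim volumes passed in the example call are hypothetical, chosen only to mirror the 45% vs. 28% timely-reimbursement rates.

      from math import sqrt
      from scipy.stats import norm

      def two_proportion_z(x1, n1, x2, n2):
          # H0: the two schemes have equal rates; x successes out of n claims each.
          p1, p2 = x1 / n1, x2 / n2
          p = (x1 + x2) / (n1 + n2)                       # pooled rate under H0
          z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
          return z, 2 * norm.sf(abs(z))                   # z statistic and two-sided p-value

      print(two_proportion_z(450, 1000, 280, 1000))       # hypothetical claim volumes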

  18. Wavelets, ridgelets, and curvelets for Poisson noise removal.

    PubMed

    Zhang, Bo; Fadili, Jalal M; Starck, Jean-Luc

    2008-07-01

    In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied to a filtered discrete Poisson process, yielding a near Gaussian process with asymptotically constant variance. This new transform, which can be regarded as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme properly reconstructs the final estimate. A range of examples shows the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive relative to many existing denoising methods.
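
    The stabilize/test/invert pattern can be illustrated with the classical (unfiltered) Anscombe transform and a crude smoother in place of the wavelet, ridgelet and curvelet banks; this is only a caricature of the MS-VST machinery, with an illustrative 3-sigma significance threshold.

      import numpy as np

      def anscombe(x):
          # Classical Anscombe VST: Poisson counts -> approximately Gaussian, unit variance.
          return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

      def inverse_anscombe(y):
          # Simple algebraic inverse (the MS-VST paper uses a more careful inversion).
          return (y / 2.0) ** 2 - 3.0 / 8.0

      def denoise_poisson(counts, n_sigma=3.0):
          y = anscombe(counts)
          trend = np.convolve(y, np.ones(5) / 5.0, mode="same")  # crude stand-in for the multiscale transform
          detail = y - trend
          detail[np.abs(detail) < n_sigma] = 0.0                  # keep only significant coefficients
          return inverse_anscombe(trend + detail)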

  19. A residual based adaptive unscented Kalman filter for fault recovery in attitude determination system of microsatellites

    NASA Astrophysics Data System (ADS)

    Le, Huy Xuan; Matunaga, Saburo

    2014-12-01

    This paper presents an adaptive unscented Kalman filter (AUKF) to recover the satellite attitude in a fault detection and diagnosis (FDD) subsystem of microsatellites. The FDD subsystem includes a filter and an estimator with residual generators, hypothesis tests for fault detection and a reference logic table for fault isolation and recovery. The recovery process is based on monitoring the mean and variance of each attitude sensor's behavior from the residual vectors. Under normal operation, the residual vectors should be in the form of Gaussian white noise with zero mean and fixed variance. When the hypothesis tests for the residual vectors detect something unusual by comparing the mean and variance values with dynamic thresholds, the AUKF with a real-time updated measurement noise covariance matrix is used to recover the sensor faults. The scheme developed in this paper resolves the problem of the heavy and complex calculations during residual generation, and therefore the delay in the isolation process is reduced. Numerical simulations for TSUBAME, a demonstration microsatellite of the Tokyo Institute of Technology, are conducted and analyzed to demonstrate the working of the AUKF and the FDD subsystem.
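
    The residual-monitoring logic described above reduces, in its simplest form, to checking whether the windowed mean and variance of a residual stay inside their nominal bands. The window length and the multipliers below are placeholders for the paper's dynamic thresholds.

      import numpy as np

      def residual_fault_test(residuals, sigma0, window=50, k_mean=3.0, k_var=2.0):
          # Under nominal conditions the residual is zero-mean white noise with
          # standard deviation sigma0; flag a fault when the windowed statistics drift.
          r = np.asarray(residuals[-window:], dtype=float)
          mean_ok = abs(r.mean()) < k_mean * sigma0 / np.sqrt(window)
          var_ok = r.var() < k_var * sigma0**2
          return not (mean_ok and var_ok)      # True -> declare a sensor fault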

  20. A Rapid Approach to Modeling Species-Habitat Relationships

    NASA Technical Reports Server (NTRS)

    Carter, Geoffrey M.; Breinger, David R.; Stolen, Eric D.

    2005-01-01

    A growing number of species require conservation or management efforts. Success of these activities requires knowledge of the species' occurrence pattern. Species-habitat models developed from GIS data sources are commonly used to predict species occurrence, but commonly used data sources are often developed for purposes other than predicting species occurrence and are of inappropriate scale, and the techniques used to extract predictor variables are often time consuming, cannot be repeated easily, and thus cannot efficiently reflect changing conditions. We used digital orthophotographs and a grid cell classification scheme to develop an efficient technique to extract predictor variables. We combined our classification scheme with a priori hypothesis development using expert knowledge and a previously published habitat suitability index, and used an objective model selection procedure to choose candidate models. We were able to classify a large area (57,000 ha) in a fraction of the time that would be required to map vegetation and were able to test models at varying scales using a windowing process. Interpretation of the selected models confirmed existing knowledge of factors important to Florida scrub-jay habitat occupancy. The potential uses and advantages of using a grid cell classification scheme in conjunction with expert knowledge or a habitat suitability index (HSI) and an objective model selection procedure are discussed.

  1. Analytical performances of food microbiology laboratories - critical analysis of 7 years of proficiency testing results.

    PubMed

    Abdel Massih, M; Planchon, V; Polet, M; Dierick, K; Mahillon, J

    2016-02-01

    Based on the results of 19 food microbiology proficiency testing (PT) schemes, this study aimed to assess laboratory performance, to highlight the main sources of unsatisfactory analytical results and to suggest areas of improvement. The 2009-2015 results of the REQUASUD and IPH PT schemes, involving a total of 48 laboratories, were analysed. On average, the laboratories failed to detect or enumerate foodborne pathogens in 3·0% of the tests. Thanks to a close collaboration with the PT participants, the causes of outliers could be identified in 74% of the cases. The main causes of erroneous PT results were either pre-analytical (handling of the samples, timing of analysis), analytical (unsuitable methods, confusion of samples, errors in colony counting or confirmation) or post-analytical mistakes (calculation and encoding of results). PT schemes are a privileged observation post from which to highlight analytical problems that would otherwise remain unnoticed. In this perspective, this comprehensive study of PT results provides insight into the sources of systematic errors encountered during the analyses. The study draws the attention of laboratories to the main causes of analytical errors and, for educational purposes, suggests practical solutions to avoid them. The observations support the hypothesis that regular participation in PT, when followed by feedback and appropriate corrective actions, can play a key role in quality improvement and provide more confidence in laboratory testing results. © 2015 The Society for Applied Microbiology.

  2. A bootstrap based Neyman-Pearson test for identifying variable importance.

    PubMed

    Ditzler, Gregory; Polikar, Robi; Rosen, Gail

    2015-04-01

    Selection of the most informative features that lead to a small loss on future data is arguably one of the most important steps in classification, data analysis and model selection. Several feature selection (FS) algorithms are available; however, due to noise present in any data set, FS algorithms are typically accompanied by an appropriate cross-validation scheme. In this brief, we propose a statistical hypothesis test derived from the Neyman-Pearson lemma for determining if a feature is statistically relevant. The proposed approach can be applied as a wrapper to any FS algorithm, regardless of the FS criteria used by that algorithm, to determine whether a feature belongs in the relevant set. Perhaps more importantly, this procedure efficiently determines the number of relevant features given an initial starting point. We provide freely available software implementations of the proposed methodology.

  3. [Dilemma of the null hypothesis in experimental tests of ecological hypotheses].

    PubMed

    Li, Ji

    2016-06-01

    Experimental testing is one of the major ways of testing ecological hypotheses, though there are many arguments over the null hypothesis. Quinn and Dunham (1983) analyzed the hypothesis deduction model of Platt (1964) and concluded that there is no null hypothesis in ecology that can be strictly tested by experiments. Fisher's falsificationism and Neyman-Pearson (N-P)'s non-decisivity prevent a statistical null hypothesis from being strictly tested. Moreover, since the null hypothesis H0 (α=1, β=0) and the alternative hypothesis H1' (α'=1, β'=0) in ecological processes are different from those in classical physics, the ecological null hypothesis cannot be strictly tested experimentally either. These dilemmas of the null hypothesis could be relieved via the reduction of the P value, careful selection of the null hypothesis, non-centralization of the non-null hypothesis, and two-tailed tests. However, statistical null hypothesis significance testing (NHST) should not be equated with the logical test of causality in ecological hypotheses. Hence, findings and conclusions about methodological studies and experimental tests based on NHST are not always logically reliable.

  4. Nuclear Power Plant Thermocouple Sensor-Fault Detection and Classification Using Deep Learning and Generalized Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.

    2017-06-01

    In this paper, an online fault detection and classification method is proposed for thermocouples used in nuclear power plants. In the proposed method, fault data are detected by a classification method that separates fault data from normal data. A deep belief network (DBN), a deep learning technique, is applied to classify the fault data. The DBN has a multilayer feature extraction scheme, which is highly sensitive to small variations in the data. Since the classification method alone cannot identify which sensor is faulty, a technique is proposed to identify the faulty sensor from the fault data. Finally, a composite statistical hypothesis test, namely the generalized likelihood ratio test, is applied to compute the fault pattern of the faulty sensor signal based on the magnitude of the fault. The performance of the proposed method is validated by field data obtained from thermocouple sensors of the fast breeder test reactor.
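
    A minimal sketch of a generalized likelihood ratio test of the kind named above, here specialized to a bias (mean-shift) fault in a Gaussian sensor reading; the nominal value, noise level and injected fault are illustrative assumptions, not values from the paper.

      # Hedged sketch of a generalized likelihood ratio test (GLRT) for a bias
      # (mean-shift) fault in a Gaussian sensor signal; the noise variance and
      # window length are illustrative assumptions.
      import numpy as np
      from scipy import stats

      def glrt_bias(window, mu0, sigma):
          """Test H0: mean == mu0 against H1: mean == mu0 + b (b unknown)."""
          n = window.size
          b_hat = window.mean() - mu0                 # MLE of the fault magnitude
          lam = n * b_hat**2 / sigma**2               # 2 * log likelihood ratio
          p_value = stats.chi2.sf(lam, df=1)          # asymptotic null distribution
          return lam, b_hat, p_value

      rng = np.random.default_rng(1)
      mu0, sigma = 350.0, 0.5                         # nominal reading and noise std
      healthy = mu0 + sigma * rng.standard_normal(100)
      faulty = healthy + 1.2                          # injected bias fault

      for name, w in [("healthy", healthy), ("faulty", faulty)]:
          lam, b_hat, p = glrt_bias(w, mu0, sigma)
          print(f"{name}: GLRT={lam:.1f}, estimated bias={b_hat:.2f}, p={p:.3g}")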

  5. A new CAD approach for improving efficacy of cancer screening

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Qian, Wei; Li, Lihua; Pu, Jiantao; Kang, Yan; Lure, Fleming; Tan, Maxine; Qiu, Yuchen

    2015-03-01

    Since the performance and clinical utility of current computer-aided detection (CAD) schemes for detecting and classifying soft tissue lesions (e.g., breast masses and lung nodules) are not satisfactory, many researchers in the CAD field have called for new CAD research ideas and approaches. The purpose of presenting this opinion paper is to share our vision and stimulate more discussion in the CAD research community of how to overcome or compensate for the limitations of current lesion-detection based CAD schemes. Based on our observation that analyzing global image information plays an important role in radiologists' decision making, we hypothesized that targeted quantitative image features computed from global images could also provide highly discriminatory power, supplementary to the lesion-based information. To test our hypothesis, we recently performed a number of independent studies. Based on our published preliminary study results, we demonstrated that global mammographic image features and background parenchymal enhancement of breast MR images carried useful information to (1) predict near-term breast cancer risk based on negative screening mammograms, (2) distinguish between true- and false-positive recalls in mammography screening examinations, and (3) classify between malignant and benign breast MR examinations. The global case-based CAD scheme only reports a risk level for a case without cueing a large number of false-positive lesions. It can also be applied to guide lesion-based CAD cueing to reduce false-positives while enhancing clinically relevant true-positive cueing. However, before such a new CAD approach is clinically acceptable, more work is needed to optimize not only the scheme performance but also how to integrate it with lesion-based CAD schemes in clinical practice.

  6. Decentralising Zimbabwe’s water management: The case of Guyu-Chelesa irrigation scheme

    NASA Astrophysics Data System (ADS)

    Tambudzai, Rashirayi; Everisto, Mapedza; Gideon, Zhou

    Smallholder irrigation schemes are largely supply driven, such that they exclude the beneficiaries from management decisions and from the choice of the irrigation schemes that would best suit their local needs. It is against this background that the decentralisation framework and the Dublin Principles on Integrated Water Resource Management (IWRM) emphasise the need for a participatory approach to water management. The Zimbabwean government has gone a step further in decentralising the management of irrigation schemes, that is, promoting farmer-managed irrigation schemes so as to ensure effective management of scarce community-based land and water resources. The study set out to investigate the way in which the Guyu-Chelesa irrigation scheme is managed, with specific emphasis on the role of the Irrigation Management Committee (IMC), the level of accountability and the powers devolved to the IMC. Merrey’s 2008 critique of IWRM also informs this study, which views irrigation as going beyond infrastructure by looking at how institutions and decision-making processes play out at various levels, including at the irrigation scheme level. The study was positioned on the hypothesis that ‘decentralised or autonomous irrigation management enhances the sustainability and effectiveness of irrigation schemes’. To validate or falsify the stated hypothesis, data were gathered through desk research, in the form of reviewing articles and documents from within the scheme, and field research, in the form of questionnaire surveys, key informant interviews and field observation. The Statistical Package for Social Sciences was used to analyse data quantitatively, whilst content analysis was utilised to analyse qualitative data thematically. Comparative analysis was carried out, as the Guyu-Chelesa irrigation scheme was compared with the experiences of other smallholder irrigation schemes within Zimbabwe and the Sub-Saharan African region at large. The findings were that whilst the scheme is a model of a decentralised entity whose importance lies in improving food security and employment creation within the community, it falls short of representing a downwardly accountable decentralised irrigation scheme. The scheme is faced with various challenges, which include operation below capacity utilisation, the absence of specialised technical personnel to address infrastructural breakdowns, uneven distribution of water pressure, an incapacitated Irrigation Management Committee (IMC), the absence of a locally legitimate constitution, compromised beneficiary participation and unclear lines of communication between the various institutions involved in water management. Understanding decentralisation is important, since one of the key tenets of IWRM is stakeholder participation, which the decentralisation framework interrogates.

  7. Study designs appropriate for the workplace.

    PubMed

    Hogue, C J

    1986-01-01

    Carlo and Hearn have called for "refinement of old [epidemiologic] methods and an ongoing evaluation of where methods fit in the overall scheme as we address the multiple complexities of reproductive hazard assessment." This review is an attempt to bring together the current state-of-the-art methods for problem definition and hypothesis testing available to the occupational epidemiologist. For problem definition, meta-analysis can be utilized to narrow the field of potential causal hypotheses. Passive and active surveillance may further refine issues for analytic research. Within analytic epidemiology, several methods may be appropriate for the workplace setting. Those discussed here may be used to estimate the risk ratio in either a fixed or dynamic population.

  8. Information geometry and its application to theoretical statistics and diffusion tensor magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Wisniewski, Nicholas Andrew

    This dissertation is divided into two parts. First, we present an exact solution to a generalization of the Behrens-Fisher problem by embedding the problem in the Riemannian manifold of normal distributions. From this we construct a geometric hypothesis testing scheme. Second, we investigate the most commonly used geometric methods employed in tensor field interpolation for DT-MRI analysis and cardiac computer modeling. We computationally investigate a class of physiologically motivated orthogonal tensor invariants, both at the full tensor field scale and at the scale of a single interpolation, by performing a decimation/interpolation experiment. We show that Riemannian-based methods give the best results in preserving desirable physiological features.
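
    The dissertation's exact test is not reproduced here; as an illustrative sketch of the underlying geometry only, the closed-form Fisher-Rao geodesic distance between two univariate normal distributions can be computed directly, and a geometric test statistic can be built from such distances.

      # Illustrative sketch only: the closed-form Fisher-Rao geodesic distance
      # between two univariate normal distributions, the kind of quantity a
      # geometric hypothesis test on the manifold of normals can be built on.
      import math

      def fisher_rao_distance(mu1, sigma1, mu2, sigma2):
          """Geodesic distance between N(mu1, sigma1^2) and N(mu2, sigma2^2)."""
          num = (mu1 - mu2) ** 2 + 2.0 * (sigma1 - sigma2) ** 2
          return math.sqrt(2.0) * math.acosh(1.0 + num / (4.0 * sigma1 * sigma2))

      # Two distributions with equal means but different spreads are still far
      # apart on the manifold, unlike under a plain difference-of-means statistic.
      print(fisher_rao_distance(0.0, 1.0, 0.0, 1.0))   # 0.0
      print(fisher_rao_distance(0.0, 1.0, 0.5, 2.0))   # > 0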

  9. Evaluation of model-based versus non-parametric monaural noise-reduction approaches for hearing aids.

    PubMed

    Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker

    2012-08-01

    Single-channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the use of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms perform better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.

  10. Developmental origins of infant stress reactivity profiles: A multi-system approach.

    PubMed

    Rash, Joshua A; Thomas, Jenna C; Campbell, Tavis S; Letourneau, Nicole; Granger, Douglas A; Giesbrecht, Gerald F

    2016-07-01

    This study tested the hypothesis that maternal physiological and psychological variables during pregnancy discriminate between theoretically informed infant stress reactivity profiles. The sample comprised 254 women and their infants. Maternal mood, salivary cortisol, respiratory sinus arrhythmia (RSA), and salivary α-amylase (sAA) were assessed at 15 and 32 weeks gestational age. Infant salivary cortisol, RSA, and sAA reactivity were assessed in response to a structured laboratory frustration task at 6 months of age. Infant responses were used to classify them into stress reactivity profiles using three different classification schemes: hypothalamic-pituitary-adrenal (HPA)-axis, autonomic, and multi-system. Discriminant function analyses evaluated the prenatal variables that best discriminated infant reactivity profiles within each classification scheme. Maternal stress biomarkers, along with self-reported psychological distress during pregnancy, discriminated between infant stress reactivity profiles. These results suggest that maternal psychological and physiological states during pregnancy have broad effects on the development of the infant stress response systems. © 2016 Wiley Periodicals, Inc. Dev Psychobiol 58: 578-599, 2016.

  11. Modeling error in assessment of mammographic image features for improved computer-aided mammography training: initial experience

    NASA Astrophysics Data System (ADS)

    Mazurowski, Maciej A.; Tourassi, Georgia D.

    2011-03-01

    In this study we investigate the hypothesis that there exist patterns in the erroneous assessment of BI-RADS image features among radiology trainees when performing diagnostic interpretation of mammograms. We also investigate whether these error-making patterns can be captured by individual user models. To test our hypothesis we propose a user modeling algorithm that uses the previous readings of a trainee to identify whether certain BI-RADS feature values (e.g. the "spiculated" value for the "margin" feature) are associated with a higher than usual likelihood that the feature will be assessed incorrectly. In our experiments we used readings of 3 radiology residents and 7 breast imaging experts for 33 breast masses for the following BI-RADS features: parenchyma density, mass margin, mass shape and mass density. The expert readings were considered the gold standard. Rule-based individual user models were developed and tested using a leave-one-out cross-validation scheme. Our experimental evaluation showed that the individual user models are accurate in identifying cases for which errors are more likely to be made. The user models captured regularities in error making for all 3 residents. This finding supports our hypothesis about the existence of individual error-making patterns in the assessment of mammographic image features using the BI-RADS lexicon. Explicit user models identifying the weaknesses of each resident could be of great use when developing and adapting a personalized training plan to meet the resident's individual needs. Such an approach fits well with the framework of adaptive computer-aided educational systems in mammography that we have proposed before.

  12. Image enhancement using the hypothesis selection filter: theory and application to JPEG decoding.

    PubMed

    Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya

    2013-03-01

    We introduce the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has been selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user selected filters, each best suited for a different region of an image. We formulate our scheme in a probabilistic framework where the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image. Maximum likelihood estimates of the model parameters are determined from an offline fully unsupervised training procedure that is derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, we apply our scheme as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics. We show that our scheme results in quantitative improvements over several other state-of-the-art JPEG decoding methods.
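
    As a minimal sketch of the combination step described above (not the trained HSF itself), the outputs of several candidate filters can be mixed per pixel with weights standing in for the posterior probability that each filter performs best; the filters and the softmax weight model below are illustrative placeholders.

      # Minimal sketch of the combination step: the outputs of several candidate
      # filters are mixed per pixel with weights that stand in for the posterior
      # probability of each filter being best. The filters and the softmax weight
      # model are illustrative placeholders, not the paper's trained HSF.
      import numpy as np
      from scipy import ndimage

      def hsf_combine(image, filters, scores):
          """scores[k] is a per-pixel score for filter k; higher = more suitable."""
          outputs = np.stack([f(image) for f in filters])          # (K, H, W)
          w = np.exp(scores - scores.max(axis=0, keepdims=True))   # softmax weights
          w /= w.sum(axis=0, keepdims=True)
          return (w * outputs).sum(axis=0)

      rng = np.random.default_rng(0)
      img = rng.random((64, 64))
      filters = [lambda x: ndimage.gaussian_filter(x, 1.0),        # smooth regions
                 lambda x: ndimage.median_filter(x, 3)]            # edges / text
      # Placeholder per-pixel scores; in the HSF these would come from a learned
      # model driven by a locally computed feature vector.
      scores = rng.random((2, 64, 64))
      restored = hsf_combine(img, filters, scores)
      print(restored.shape)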

  13. Evaporation from irrigated crops: Its measurement, modeling and estimation from remotely sensed data

    NASA Astrophysics Data System (ADS)

    Garatuza-Payan, Jaime

    The research described in this dissertation is predicated on the hypothesis that remotely sensed information from climatological satellites can be used to estimate the actual evapotranspiration from agricultural crops to improve irrigation scheduling and water use efficiency. The goal of the enabling research program described here was to facilitate and demonstrate the potential use of satellite data for the rapid and routine estimation of water use by irrigated crops in the Yaqui Valley irrigation scheme, an extensive irrigated area in Sonora, Mexico. The approach taken was first, to measure and model the evapotranspiration and crop factors for wheat and cotton, the most common irrigated crops in the Yaqui Valley scheme. Second, to develop and test a high-resolution (4 km x 4 km) method for determining cloud cover and solar radiation from GOES satellite data. Then third, to demonstrate the application of satellite data to calculate the actual evaporation for sample crops in the Yaqui Valley scheme by combining estimates of potential rate with relevant crop factors and information on crop management. Results show that it is feasible to provide routine estimates of evaporation for the most common crops in the Yaqui Valley irrigation scheme from satellite data. Accordingly, a system to provide such estimates has been established and the Water Users Association, the entity responsible for water distribution in Yaqui Valley, can now use them to decide whether specific fields need irrigation. A Web site (teka-pucem.itson.mx) is also being created which will allow individual farmers to have direct access to the evaporation estimates via the Internet.

  14. Adaptive early detection ML/PDA estimator for LO targets with EO sensors

    NASA Astrophysics Data System (ADS)

    Chummun, Muhammad R.; Kirubarajan, Thiagalingam; Bar-Shalom, Yaakov

    2000-07-01

    The batch Maximum Likelihood Estimator combined with Probabilistic Data Association (ML-PDA) has been shown to be effective in acquiring low observable (LO) - low SNR - non-maneuvering targets in the presence of heavy clutter. Here, signal strength or amplitude information (AI) is used in the ML-PDA estimator in a sliding-window fashion to detect high-speed targets in heavy clutter using electro-optical (EO) sensors. The initial time and the length of the sliding window are adjusted adaptively according to the information content of the received measurements. A track validation scheme via hypothesis testing is developed to confirm the estimated track, that is, the presence of a target, in each window. The sliding-window ML-PDA approach, together with track validation, enables early detection by rejecting noninformative scans, target reacquisition in case of temporary target disappearance, and the handling of targets with speeds evolving over time. The proposed algorithm is shown to detect the target, which is hidden in as many as 600 false alarms per scan, 10 frames earlier than the Multiple Hypothesis Tracking (MHT) algorithm.

  15. Radiatively driven stratosphere-troposphere interactions near the tops of tropical cloud clusters

    NASA Technical Reports Server (NTRS)

    Churchill, Dean D.; Houze, Robert A., Jr.

    1990-01-01

    Results are presented of two numerical simulations of the mechanism involved in the dehydration of air, using the model of Churchill (1988) and Churchill and Houze (1990), which combines water and ice physics parameterizations and IR and solar-radiation parameterizations with a convective adjustment scheme in a kinematic, nondynamic framework. One simulation, a cirrus cloud simulation, was to test the Danielsen (1982) hypothesis of a dehydration mechanism for the stratosphere; the other was to simulate the mesoscale updraft in order to test an alternative mechanism for 'freeze-drying' the air. The results show that the physical processes simulated in the mesoscale updraft differ from those in the thin-cirrus simulation. In the thin-cirrus case, eddy fluxes occur in response to IR radiative destabilization and, hence, no net transfer occurs between the troposphere and stratosphere, whereas the mesoscale updraft case has net upward mass transport into the lower stratosphere.

  16. Development and application of a new comprehensive image-based classification scheme for coastal and benthic environments along the southeast Florida continental shelf

    NASA Astrophysics Data System (ADS)

    Makowski, Christopher

    The coastal (terrestrial) and benthic environments along the southeast Florida continental shelf show a unique biophysical succession of marine features from a highly urbanized, developed coastal region in the north (i.e. northern Miami-Dade County) to a protective marine sanctuary in the southeast (i.e. Florida Keys National Marine Sanctuary). However, a standard bio-geomorphological classification scheme for this area of coastal and benthic environments is lacking. The purpose of this study was to test the hypothesis and answer the research question of whether new parameters integrating geomorphological components with dominant biological covers could be developed and applied across multiple remote sensing platforms as an innovative way to identify, interpret, and classify diverse coastal and benthic environments along the southeast Florida continental shelf. An ordered, manageable hierarchical classification scheme was developed to incorporate the categories of Physiographic Realm, Morphodynamic Zone, Geoform, Landform, Dominant Surface Sediment, and Dominant Biological Cover. Six different remote sensing platforms (i.e. five multi-spectral satellite image sensors and one high-resolution aerial orthoimagery) were acquired, delineated according to the new classification scheme, and compared to determine optimal formats for classifying the study area. Cognitive digital classification at a nominal scale of 1:6000 proved to be more accurate than autoclassification programs and was therefore used to differentiate coastal marine environments based on spectral reflectance characteristics, such as color, tone, saturation, pattern, and texture of the seafloor topology. In addition, attribute tables were created in conjunction with interpretations to quantify and compare the spatial relationships between classificatory units. IKONOS-2 satellite imagery was determined to be the optimal platform for applying the hierarchical classification scheme; however, each remote sensing platform had beneficial properties depending on research goals, logistical restrictions, and financial support. This study concluded that a new comprehensive hierarchical classification scheme for identifying coastal marine environments along the southeast Florida continental shelf could be achieved by integrating geomorphological features with biological coverages. This newly developed scheme, which can be applied across multiple remote sensing platforms with GIS software, establishes an innovative classification protocol to be used in future research studies.

  17. Main dimensions of human practical directives system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewicka-Strzalecka, A.

    1992-12-31

    A hypothesis is made that due to the uncertainty and complexity of the practical inference schemes, the acting subject exerts his/her own system of beliefs about efficient ways of attaining the given goals. These beliefs are termed here: Practical Directives, and their system: Practical Attitude. An attempt was made to reconstruct such a system and its main dimensions. To this end, an instrument was constructed: the Questionnaire of Practical Directives (QPD), which is meant as an operational definition of Practical Attitude. A group of 218 subjects was tested with the aid of QPD and the factor analysis of the results revealed nine factors interpreted as main dimensions of the system of Practical Directives. 19 refs.

  18. Stages in Learning Motor Synergies: A View Based on the Equilibrium-Point Hypothesis

    PubMed Central

    Latash, Mark L.

    2009-01-01

    This review describes a novel view on stages in motor learning based on recent developments of the notion of synergies, the uncontrolled manifold hypothesis, and the equilibrium-point hypothesis (referent configuration), which allow these notions to be merged into a single scheme of motor control. The principle of abundance and the principle of minimal final action form the foundation for analyses of natural motor actions performed by redundant sets of elements. Two main stages of motor learning are introduced corresponding to (1) discovery and strengthening of motor synergies stabilizing salient performance variable(s), and (2) their weakening when other aspects of motor performance are optimized. The first stage may be viewed as consisting of two steps, the elaboration of an adequate referent configuration trajectory and the elaboration of multi-joint (multi-muscle) synergies stabilizing the referent configuration trajectory. Both steps are expected to lead to more variance in the space of elemental variables that is compatible with a desired time profile of the salient performance variable (“good variability”). Adjusting control to other aspects of performance during the second stage (for example, esthetics, energy expenditure, time, fatigue, etc.) may lead to a drop in the “good variability”. Experimental support for the suggested scheme is reviewed. PMID:20060610

  19. Stages in learning motor synergies: a view based on the equilibrium-point hypothesis.

    PubMed

    Latash, Mark L

    2010-10-01

    This review describes a novel view on stages in motor learning based on recent developments of the notion of synergies, the uncontrolled manifold hypothesis, and the equilibrium-point hypothesis (referent configuration), which allow these notions to be merged into a single scheme of motor control. The principle of abundance and the principle of minimal final action form the foundation for analyses of natural motor actions performed by redundant sets of elements. Two main stages of motor learning are introduced corresponding to (1) discovery and strengthening of motor synergies stabilizing salient performance variable(s) and (2) their weakening when other aspects of motor performance are optimized. The first stage may be viewed as consisting of two steps, the elaboration of an adequate referent configuration trajectory and the elaboration of multi-joint (multi-muscle) synergies stabilizing the referent configuration trajectory. Both steps are expected to lead to more variance in the space of elemental variables that is compatible with a desired time profile of the salient performance variable ("good variability"). Adjusting control to other aspects of performance during the second stage (for example, esthetics, energy expenditure, time, fatigue, etc.) may lead to a drop in the "good variability". Experimental support for the suggested scheme is reviewed. Copyright © 2009 Elsevier B.V. All rights reserved.

  20. Statistical evaluation of synchronous spike patterns extracted by frequent item set mining

    PubMed Central

    Torre, Emiliano; Picado-Muiño, David; Denker, Michael; Borgelt, Christian; Grün, Sonja

    2013-01-01

    We recently proposed frequent itemset mining (FIM) as a method to perform an optimized search for patterns of synchronous spikes (item sets) in massively parallel spike trains. This search outputs the occurrence count (support) of individual patterns that are not trivially explained by the counts of any superset (closed frequent item sets). The number of patterns found by FIM makes direct statistical tests infeasible due to severe multiple testing. To overcome this issue, we proposed to test the significance not of individual patterns, but instead of their signatures, defined as the pairs of pattern size z and support c. Here, we derive in detail a statistical test for the significance of the signatures under the null hypothesis of full independence (pattern spectrum filtering, PSF) by means of surrogate data. As a result, injected spike patterns that mimic assembly activity are well detected, yielding a low false negative rate. However, this approach is prone to additionally classifying patterns resulting from chance overlap of real assembly activity and background spiking as significant. These patterns represent false positives with respect to the null hypothesis of having one assembly of given signature embedded in otherwise independent spiking activity. We propose the additional method of pattern set reduction (PSR) to remove these false positives by conditional filtering. By employing stochastic simulations of parallel spike trains with correlated activity in the form of injected spike synchrony in subsets of the neurons, we demonstrate for a range of parameter settings that the analysis scheme composed of FIM, PSF and PSR allows reliable detection of active assemblies in massively parallel spike trains. PMID:24167487
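
    A heavily simplified sketch of the pattern-spectrum idea (not the published FIM/PSF/PSR pipeline): exact synchronous patterns are read off per time bin, summarized by their signature (size z, support c), and a signature is kept only if it rarely occurs in surrogate data in which spike trains are jittered independently; all sizes, rates and the injected assembly below are made up for illustration.

      # Simplified sketch of pattern spectrum filtering via surrogate data.
      import numpy as np
      from collections import Counter

      rng = np.random.default_rng(0)

      def signatures(spikes):
          """spikes: (n_neurons, n_bins) binary array -> Counter over (z, c)."""
          patterns = Counter()
          for t in range(spikes.shape[1]):
              neurons = tuple(np.flatnonzero(spikes[:, t]))
              if len(neurons) >= 2:
                  patterns[neurons] += 1
          sig = Counter()
          for neurons, c in patterns.items():
              sig[(len(neurons), c)] += 1
          return sig

      def jitter(spikes, max_shift=5):
          # Independent circular shifts destroy synchrony, roughly keep rates.
          out = np.zeros_like(spikes)
          for i in range(spikes.shape[0]):
              out[i] = np.roll(spikes[i], rng.integers(-max_shift, max_shift + 1))
          return out

      # Independent background spiking plus an injected synchronous assembly.
      spikes = (rng.random((20, 2000)) < 0.02).astype(int)
      assembly_bins = rng.choice(2000, size=30, replace=False)
      spikes[:4, assembly_bins] = 1                       # neurons 0-3 fire together

      observed = signatures(spikes)
      n_surr = 200
      surrogate_hits = Counter()
      for _ in range(n_surr):
          for sig in signatures(jitter(spikes)):
              surrogate_hits[sig] += 1

      alpha = 0.05
      significant = [s for s in observed
                     if surrogate_hits[s] / n_surr < alpha]
      print("significant signatures (z, c):", significant)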

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benetti, Micol; Alcaniz, Jailson S.; Landau, Susana J., E-mail: micolbenetti@on.br, E-mail: slandau@df.uba.ar, E-mail: alcaniz@on.br

    The hypothesis of the self-induced collapse of the inflaton wave function was proposed as responsible for the emergence of inhomogeneity and anisotropy at all scales. This proposal was studied within an almost de Sitter space-time approximation for the background, which led to a perfect scale-invariant power spectrum, and also for a quasi-de Sitter background, which allows one to distinguish departures from the standard approach due to the inclusion of the collapse hypothesis. In this work we perform a Bayesian model comparison for two different choices of the self-induced collapse in a full quasi-de Sitter expansion scenario. In particular, we analyze the possibility of detecting the imprint of these collapse schemes at low multipoles of the anisotropy temperature power spectrum of the Cosmic Microwave Background (CMB) using the most recent data provided by the Planck Collaboration. Our results show that one of the two collapse schemes analyzed provides the same Bayesian evidence as the minimal standard cosmological model ΛCDM, while the other scenario is weakly disfavoured with respect to the standard cosmology.

  2. Class relations and all-cause mortality: a test of Wright's social class scheme using the Barcelona 2000 Health Interview Survey.

    PubMed

    Muntaner, Carles; Borrell, Carme; Solà, Judit; Marí-Dell'Olmo, Marc; Chung, Haejoo; Rodríguez-Sanz, Maica; Benach, Joan; Rocha, Kátia B; Ng, Edwin

    2011-01-01

    The aim of this study is to test the effects of neo-Marxian social class and potential mediators such as labor market position, work organization, material deprivation, and health behaviors on all-cause mortality. The authors use longitudinal data from the Barcelona 2000 Health Interview Survey (N=7526), with follow-up interviews through the municipal census in 2008 (95.97% response rate). Using data on relations of property, organizational power, and education, the study groups social classes according to Wright's scheme: capitalists, petit bourgeoisie, managers, supervisors, and skilled, semi-skilled, and unskilled workers. Findings indicate that social class, measured as relations of control over productive assets, is an important predictor of mortality among working-class men but not women. Workers (hazard ratio = 1.60; 95% confidence interval, 1.10-2.35) but also managers and small employers had a higher risk of death compared with capitalists. The extensive use of conventional gradient measures of social stratification has neglected sociological measures of social class conceptualized as relations of control over productive assets. This concept is capable of explaining how social inequalities are generated. To confirm the protective effect of the capitalist class position and the "contradictory class location hypothesis," additional efforts are needed to properly measure class among low-level supervisors, capitalists, managers, and small employers.

  3. Explorations in statistics: hypothesis tests and P values.

    PubMed

    Curran-Everett, Douglas

    2009-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of Explorations in Statistics delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what we observe in the experiment to what we expect to see if the null hypothesis is true. The P value associated with the magnitude of that test statistic answers this question: if the null hypothesis is true, what proportion of possible values of the test statistic are at least as extreme as the one I got? Although statisticians continue to stress the limitations of hypothesis tests, there are two realities we must acknowledge: hypothesis tests are ingrained within science, and the simple test of a null hypothesis can be useful. As a result, it behooves us to explore the notions of hypothesis tests, test statistics, and P values.
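
    A small numeric illustration of these definitions: the test statistic compares the observed sample to the null expectation, and the P value is the proportion of null-generated statistics at least as extreme as the observed one. The sample values are invented for illustration, and the null distribution is approximated by resampling.

      # Numeric illustration of the abstract's definitions: a test statistic
      # compares what we observe to what we expect under the null, and the
      # P value is the proportion of null statistics at least as extreme.
      import numpy as np

      rng = np.random.default_rng(0)
      sample = np.array([102.1, 99.8, 104.5, 101.2, 103.3, 100.9, 105.0, 102.7])
      mu0 = 100.0                                       # null hypothesis mean

      def t_stat(x, mu):
          return (x.mean() - mu) / (x.std(ddof=1) / np.sqrt(x.size))

      t_obs = t_stat(sample, mu0)

      # Approximate the null by resampling the sample re-centred on mu0.
      centred = sample - sample.mean() + mu0
      t_null = np.array([t_stat(rng.choice(centred, size=sample.size, replace=True), mu0)
                         for _ in range(10_000)])
      p = np.mean(np.abs(t_null) >= abs(t_obs))         # two-sided P value
      print(f"t = {t_obs:.2f}, P = {p:.4f}")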

  4. Applying Geospatial Techniques to Investigate Boundary Layer Land-Atmosphere Interactions Involved in Tornadogenesis

    NASA Astrophysics Data System (ADS)

    Weigel, A. M.; Griffin, R.; Knupp, K. R.; Molthan, A.; Coleman, T.

    2017-12-01

    Northern Alabama is among the most tornado-prone regions in the United States. This region has a higher degree of spatial variability in both terrain and land cover than the more frequently studied North American Great Plains region due to its proximity to the southern Appalachian Mountains and Cumberland Plateau. More research is needed to understand North Alabama's high tornado frequency and how land surface heterogeneity influences tornadogenesis in the boundary layer. Several modeling and simulation studies stretching back to the 1970s have found that variations in the land surface induce tornadic-like flow near the surface, illustrating a need for further investigation. This presentation introduces research investigating the hypothesis that horizontal gradients in land surface roughness, normal to the direction of flow in the boundary layer, induce vertically oriented vorticity at the surface that can potentially aid in tornadogenesis. A novel approach was implemented to test this hypothesis using a GIS-based quadrant pattern analysis method. This method was developed to quantify spatial relationships and patterns between horizontal variations in land surface roughness and locations of tornadogenesis. Land surface roughness was modeled using the Noah land surface model parameterization scheme, which was applied to MODIS 500 m and Landsat 30 m data in order to compare the relationship between tornadogenesis locations and roughness gradients at different spatial scales. This analysis found a statistical relationship between tornadogenesis locations and areas of higher roughness located normal to the flow, supporting the tested hypothesis. In this presentation, the innovative use of satellite remote sensing data and GIS technologies to address interactions between the land and atmosphere will be highlighted.

  5. Leveraging cell type specific regulatory regions to detect SNPs associated with tissue factor pathway inhibitor plasma levels.

    PubMed

    Dennis, Jessica; Medina-Rivera, Alejandra; Truong, Vinh; Antounians, Lina; Zwingerman, Nora; Carrasco, Giovana; Strug, Lisa; Wells, Phil; Trégouët, David-Alexandre; Morange, Pierre-Emmanuel; Wilson, Michael D; Gagnon, France

    2017-07-01

    Tissue factor pathway inhibitor (TFPI) regulates the formation of intravascular blood clots, which manifest clinically as ischemic heart disease, ischemic stroke, and venous thromboembolism (VTE). TFPI plasma levels are heritable, but the genetics underlying TFPI plasma level variability are poorly understood. Herein we report the first genome-wide association scan (GWAS) of TFPI plasma levels, conducted in 251 individuals from five extended French-Canadian families ascertained on VTE. To improve discovery, we also applied a hypothesis-driven (HD) GWAS approach that prioritized single nucleotide polymorphisms (SNPs) in (1) hemostasis pathway genes, and (2) vascular endothelial cell (EC) regulatory regions, which are among the highest expressers of TFPI. Our GWAS identified 131 SNPs with suggestive evidence of association (P-value < 5 × 10⁻⁸), but no SNPs reached the genome-wide threshold for statistical significance. Hemostasis pathway genes were not enriched for TFPI plasma level associated SNPs (global hypothesis test P-value = 0.147), but EC regulatory regions contained more TFPI plasma level associated SNPs than expected by chance (global hypothesis test P-value = 0.046). We therefore stratified our genome-wide SNPs, prioritizing those in EC regulatory regions via stratified false discovery rate (sFDR) control, and reranked the SNPs by q-value. The minimum q-value was 0.27, and the top-ranked SNPs did not show association evidence in the MARTHA replication sample of 1,033 unrelated VTE cases. Although this study did not result in new loci for TFPI, our work lays out a strategy to utilize epigenomic data in prioritization schemes for future GWAS studies. © 2017 WILEY PERIODICALS, INC.

  6. Velocity-based planning of rapid elbow movements expands the control scheme of the equilibrium point hypothesis.

    PubMed

    Suzuki, Masataka; Yamazaki, Yoshihiko

    2005-01-01

    According to the equilibrium point hypothesis of voluntary motor control, control action of muscles is not explicitly computed, but rather arises as a consequence of interaction between moving equilibrium position, current kinematics and stiffness of the joint. This approach is attractive as it obviates the need to explicitly specify the forces controlling limb movements. However, many debatable aspects of this hypothesis remain in the manner of specification of the equilibrium point trajectory and muscle activation (or its stiffness), which elicits a restoring force toward the planned equilibrium trajectory. In this study, we expanded the framework of this hypothesis by assuming that the control system uses the velocity measure as the origin of subordinate variables scaling descending commands. The velocity command is translated into muscle control inputs by second order pattern generators, which yield reciprocal command and coactivation commands, and create alternating activation of the antagonistic muscles during movement and coactivation in the post-movement phase, respectively. The velocity command is also integrated to give a position command specifying a moving equilibrium point. This model is purely kinematics-dependent, since the descending commands needed to modulate the visco-elasticity of muscles are implicitly given by simple parametric specifications of the velocity command alone. The simulated movements of fast elbow single-joint movements corresponded well with measured data performed over a wide range of movement distances, in terms of both muscle excitations and kinematics. Our proposal on a synthesis for the equilibrium point approach and velocity command, may offer some insights into the control scheme of the single-joint arm movements.

  7. Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    PubMed

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D

    2015-05-08

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.

  8. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    PubMed Central

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.

    2015-01-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714
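
    MEMD itself is not part of the standard scientific Python stack, so the sketch below only illustrates the pixel-level fusion rule on aligned multi-scale coefficients; the memd_images function is a crude band-pass pyramid standing in for a real MEMD decomposition, and the local-energy selection rule is one plausible choice rather than the authors' exact scheme.

      # Sketch of the fusion rule only, assuming an aligned multi-scale
      # decomposition of shape (n_scales, n_images, H, W) is available.
      import numpy as np
      from scipy import ndimage

      def fuse_aligned_imfs(imfs):
          """imfs: (n_scales, n_images, H, W) aligned IMFs -> fused (H, W) image."""
          fused_scales = []
          for scale in imfs:                                  # (n_images, H, W)
              energy = np.stack([ndimage.uniform_filter(c**2, size=5) for c in scale])
              choice = energy.argmax(axis=0)                  # winning image per pixel
              fused_scales.append(np.take_along_axis(
                  scale, choice[None, :, :], axis=0)[0])
          return np.sum(fused_scales, axis=0)

      # Placeholder "decomposition": band-pass residual pyramid standing in for MEMD.
      def memd_images(images, n_scales=4):
          imfs = []
          residual = np.stack(images).astype(float)
          for s in range(n_scales):
              smooth = np.stack([ndimage.gaussian_filter(r, 2.0**s) for r in residual])
              imfs.append(residual - smooth)
              residual = smooth
          imfs.append(residual)
          return np.stack(imfs)

      rng = np.random.default_rng(0)
      a, b = rng.random((2, 64, 64))                          # stand-in multi-focus pair
      fused = fuse_aligned_imfs(memd_images([a, b]))
      print(fused.shape)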

  9. Examination of Spectral Transformations on Spectral Mixture Analysis

    NASA Astrophysics Data System (ADS)

    Deng, Y.; Wu, C.

    2018-04-01

    While many spectral transformation techniques have been applied to spectral mixture analysis (SMA), few studies have examined their necessity and applicability. This paper focused on exploring the differences between spectrally transformed schemes and the untransformed scheme, to find out which transformed schemes perform better in SMA. In particular, nine spectrally transformed schemes as well as the untransformed scheme were examined in two study areas. Each transformed scheme was tested 100 times using different endmember classes' spectra under the endmember model of vegetation-high albedo impervious surface area-low albedo impervious surface area-soil (V-ISAh-ISAl-S). Performance of each scheme was assessed based on mean absolute error (MAE). The paired-samples t-test was applied to test the significance of the difference in mean MAEs between transformed and untransformed schemes. Results demonstrated that only NSMA could exceed the untransformed scheme in all study areas. Some transformed schemes showed unstable performance, since they outperformed the untransformed scheme in one area but weakened the SMA result in another region.
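
    For concreteness, a minimal sketch of the unmixing and scoring steps (not the specific transformed schemes compared in the record): per-pixel endmember fractions are estimated by bounded least squares with a soft sum-to-one constraint, and a scheme is scored by mean absolute error against known fractions; the endmember and pixel spectra below are synthetic placeholders.

      # Minimal sketch of linear spectral mixture analysis (SMA) with MAE scoring.
      import numpy as np
      from scipy.optimize import lsq_linear

      def unmix(pixel, endmembers, weight=1e3):
          """pixel: (bands,), endmembers: (bands, m) -> fractions (m,)."""
          A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
          b = np.append(pixel, weight)                 # soft sum-to-one constraint
          res = lsq_linear(A, b, bounds=(0.0, 1.0))
          return res.x

      rng = np.random.default_rng(0)
      bands, m = 6, 4                                  # V-ISAh-ISAl-S style model
      E = rng.random((bands, m))                       # synthetic endmember spectra
      true_frac = rng.dirichlet(np.ones(m), size=100)  # 100 synthetic pixels
      pixels = true_frac @ E.T + 0.01 * rng.standard_normal((100, bands))

      est = np.array([unmix(p, E) for p in pixels])
      mae = np.abs(est - true_frac).mean()
      print(f"MAE = {mae:.4f}")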

  10. Genetic and economic evaluation of Japanese Black (Wagyu) cattle breeding schemes.

    PubMed

    Kahi, A K; Hirooka, H

    2005-09-01

    Deterministic simulation was used to evaluate 10 breeding schemes for genetic gain and profitability and in the context of maximizing returns from investment in Japanese Black cattle breeding. A breeding objective that integrated the cow-calf and feedlot segments was considered. Ten breeding schemes that differed in the records available for use as selection criteria were defined. The schemes ranged from one that used carcass traits currently available to Japanese Black cattle breeders (Scheme 1) to one that also included linear measurements and male and female reproduction traits (Scheme 10). The latter scheme represented the highest level of performance recording. In all breeding schemes, sires were chosen from the proportion selected during the first selection stage (performance testing), modeling a two-stage selection process. The effect on genetic gain and profitability of varying test capacity and number of progeny per sire and of ultrasound scanning of live animals was examined for all breeding schemes. Breeding schemes that selected young bulls during performance testing based on additional individual traits and information on carcass traits from their relatives generated additional genetic gain and profitability. Increasing test capacity resulted in an increase in genetic gain in all schemes. Profitability was optimal in Scheme 2 (a scheme similar to Scheme 1, but selection of young bulls also was based on information on carcass traits from their relatives) to 10 when 900 to 1,000 places were available for performance testing. Similarly, as the number of progeny used in the selection of sires increased, genetic gain first increased sharply and then gradually in all schemes. Profit was optimal across all breeding schemes when sires were selected based on information from 150 to 200 progeny. Additional genetic gain and profitability were generated in each breeding scheme with ultrasound scanning of live animals for carcass traits. Ultrasound scanning of live animals was more important than the addition of any other traits in the selection criteria. These results may be used to provide guidance to Japanese Black cattle breeders.

  11. The θ-γ neural code.

    PubMed

    Lisman, John E; Jensen, Ole

    2013-03-20

    Theta and gamma frequency oscillations occur in the same brain regions and interact with each other, a process called cross-frequency coupling. Here, we review evidence for the following hypothesis: that the dual oscillations form a code for representing multiple items in an ordered way. This form of coding has been most clearly demonstrated in the hippocampus, where different spatial information is represented in different gamma subcycles of a theta cycle. Other experiments have tested the functional importance of oscillations and their coupling. These involve correlation of oscillatory properties with memory states, correlation with memory performance, and effects of disrupting oscillations on memory. Recent work suggests that this coding scheme coordinates communication between brain regions and is involved in sensory as well as memory processes. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Auditory Spatial Attention Representations in the Human Cerebral Cortex

    PubMed Central

    Kong, Lingqiang; Michalka, Samantha W.; Rosen, Maya L.; Sheremata, Summer L.; Swisher, Jascha D.; Shinn-Cunningham, Barbara G.; Somers, David C.

    2014-01-01

    Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753

  13. Seeking health information on the web: positive hypothesis testing.

    PubMed

    Kayhan, Varol Onur

    2013-04-01

    The goal of this study is to investigate positive hypothesis testing among consumers of health information when they search the Web. After demonstrating the extent of positive hypothesis testing using Experiment 1, we conduct Experiment 2 to test the effectiveness of two debiasing techniques. A total of 60 undergraduate students searched a tightly controlled online database developed by the authors to test the validity of a hypothesis. The database had four abstracts that confirmed the hypothesis and three abstracts that disconfirmed it. Findings of Experiment 1 showed that majority of participants (85%) exhibited positive hypothesis testing. In Experiment 2, we found that the recommendation technique was not effective in reducing positive hypothesis testing since none of the participants assigned to this server could retrieve disconfirming evidence. Experiment 2 also showed that the incorporation technique successfully reduced positive hypothesis testing since 75% of the participants could retrieve disconfirming evidence. Positive hypothesis testing on the Web is an understudied topic. More studies are needed to validate the effectiveness of the debiasing techniques discussed in this study and develop new techniques. Search engine developers should consider developing new options for users so that both confirming and disconfirming evidence can be presented in search results as users test hypotheses using search engines. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. Modern views on the composition of anionic oxy-fluoride complexes of aluminium and their rearrangement during the electrolysis of cryolite-alumina melts

    NASA Astrophysics Data System (ADS)

    Khramov, A. P.; Shurov, N. I.

    2014-08-01

    Some consequences of the hypothesis of the absence of free F(-) ions in cryolite-alumina melts are observed. The melt at 1 < CR < 3 is assumed to consist of the complexes AlF6(3-), AlF5(2-), AlF4(-), Al2OF6(2-), and Al2O2F4(2-), and alkali metal cations. A formal-stoichiometric study of the processes occurring during electrolysis is performed on the basis of the accepted hypothesis. Judgments about some of the features of the electrode reactions and chemical reactions in the electrolyte volume are presented. The reaction schemes for the instances with and without the subsequent/preceding chemical reaction near the electrode or in the molten salt volume are given. The mass flows of various forms of ionic complexes through the electrolyte volume are given for these schemes. Definitive conclusions are not made in the study, but the range of possible variants for the electrochemical routes of the overall chemical reaction in the cell is limited.

  15. Evolution of the Z-scheme of photosynthesis: a perspective.

    PubMed

    Govindjee; Shevela, Dmitriy; Björn, Lars Olof

    2017-09-01

    The concept of the Z-scheme of oxygenic photosynthesis is in all the textbooks. However, its evolution is not. We focus here mainly on some of the history of its biophysical aspects. We have arbitrarily divided here the 1941-2016 period into three sub-periods: (a) Origin of the concept of two light reactions: first hinted at, in 1941, by James Franck and Karl Herzfeld; described and explained, in 1945, by Eugene Rabinowitch; and a clear hypothesis, given in 1956 by Rabinowitch, of the then available cytochrome experiments: one light oxidizing it and another reducing it; (b) Experimental discovery of the two light reactions and two pigment systems and the Z-scheme of photosynthesis: Robert Emerson's discovery, in 1957, of enhancement in photosynthesis when two light beams (one in the far-red region, and the other of shorter wavelengths) are given together than when given separately; and the 1960 scheme of Robin Hill & Fay Bendall; and (c) Evolution of the many versions of the Z-Scheme: Louis Duysens and Jan Amesz's 1961 experiments on oxidation and reduction of cytochrome f by two different wavelengths of light, followed by the work of many others for more than 50 years.

  16. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses the method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. However, in the present research paper, a modified Wald test statistic due to Engle, Robert [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses using an iterative NLLS estimator based on nonlinear studentized residuals has also been proposed. In this research article an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained the methods of testing nonlinear hypotheses. This paper uses asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models with a suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
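
    As one concrete instance of the general recipe discussed (not the modified statistic proposed in the record), a nonlinear regression can be fitted by NLLS and a nonlinear restriction g(theta) = 0 tested with a Wald statistic formed via the delta method; the model, data and restriction below are illustrative assumptions.

      # Sketch: NLLS fit followed by a delta-method Wald test of a nonlinear
      # restriction g(theta) = theta1 * theta2 - 1 = 0.
      import numpy as np
      from scipy import stats
      from scipy.optimize import curve_fit

      def model(x, a, b):
          return a * np.exp(b * x)

      rng = np.random.default_rng(0)
      x = np.linspace(0, 2, 120)
      y = model(x, 2.0, 0.5) + 0.1 * rng.standard_normal(x.size)

      theta, cov = curve_fit(model, x, y, p0=[1.0, 1.0])

      def g(t):                                        # nonlinear restriction g(theta)=0
          return np.array([t[0] * t[1] - 1.0])

      G = np.array([[theta[1], theta[0]]])             # Jacobian of g at theta_hat
      W = float(g(theta) @ np.linalg.inv(G @ cov @ G.T) @ g(theta))
      p_value = stats.chi2.sf(W, df=1)
      print(f"Wald statistic = {W:.2f}, p = {p_value:.3g}")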

  17. A sigmoidal model for biosorption of heavy metal cations from aqueous media.

    PubMed

    Özen, Rümeysa; Sayar, Nihat Alpagu; Durmaz-Sam, Selcen; Sayar, Ahmet Alp

    2015-07-01

    A novel multi-input single-output (MISO) black-box sigmoid model is developed to simulate the biosorption of heavy metal cations by the fission yeast from aqueous medium. Validation and verification of the model are done through statistical chi-squared hypothesis tests, and the model is evaluated by uncertainty and sensitivity analyses. The simulated results are in agreement with the data of the studied system, in which Schizosaccharomyces pombe biosorbs Ni(II) cations at various process conditions. Experimental data were obtained originally for this work using dead cells of an adapted variant of S. pombe and represented by Freundlich isotherms. A process optimization scheme is proposed using the present model to build a novel application of a cost-merit objective function, which would be useful to predict optimal operation conditions. Copyright © 2015. Published by Elsevier Inc.
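
    In the spirit of the validation described (with a single-input placeholder in place of the paper's MISO model), a sigmoidal uptake curve can be fitted to sorption data and checked with a chi-squared statistic against the residual degrees of freedom; the functional form, noise level and data below are invented for illustration.

      # Sketch: fit a placeholder sigmoidal uptake curve and run a chi-squared
      # goodness-of-fit check on the residuals.
      import numpy as np
      from scipy import stats
      from scipy.optimize import curve_fit

      def sigmoid(c, q_max, k, c_half):
          """Uptake q as a sigmoidal function of equilibrium concentration c."""
          return q_max / (1.0 + np.exp(-k * (c - c_half)))

      rng = np.random.default_rng(0)
      conc = np.linspace(1, 100, 15)                     # mg/L, synthetic
      sigma = 0.3                                        # assumed measurement std
      q_obs = sigmoid(conc, 8.0, 0.1, 40.0) + sigma * rng.standard_normal(conc.size)

      popt, _ = curve_fit(sigmoid, conc, q_obs, p0=[10.0, 0.05, 50.0])
      chi2_stat = np.sum((q_obs - sigmoid(conc, *popt)) ** 2 / sigma**2)
      dof = conc.size - len(popt)
      p_value = stats.chi2.sf(chi2_stat, dof)
      print(f"fitted q_max={popt[0]:.2f}, chi2={chi2_stat:.1f} (dof={dof}), p={p_value:.2f}")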

  18. Impact of subsidies on cancer genetic testing uptake in Singapore.

    PubMed

    Li, Shao-Tzu; Yuen, Jeanette; Zhou, Ke; Binte Ishak, Nur Diana; Chen, Yanni; Met-Domestici, Marie; Chan, Sock Hoai; Tan, Yee Pin; Allen, John Carson; Lim, Soon Thye; Soo, Khee Chee; Ngeow, Joanne

    2017-04-01

    Previous reports cite the high cost of clinical cancer genetic testing as a main barrier to patients' willingness to test. We report findings of a pilot study that evaluates how different subsidy schemes impact genetic testing uptake and the total cost of cancer management. We included all patients who attended the Cancer Genetics Service at the National Cancer Centre Singapore (January 2014-May 2016). Two subsidy schemes, the blanket scheme (100% subsidy to all eligible patients) and the varied scheme (patients received 50%-100% subsidy dependent on financial status), were compared. We estimated total spending on cancer management from the government's perspective using a decision model. 445 patients were included. Contrasting against the blanket scheme, the varied scheme observed a higher attendance of patients (34 vs 8 patients per month), of which a higher proportion underwent genetic testing (5% vs 38%), while lowering subsidy spending per person (S$1098 vs S$1161). The varied scheme may potentially save cost by reducing unnecessary cancer surveillance when the first-degree relatives' uptake rate is above 36%. Provision of subsidy leads to a considerable increase in the genetic testing uptake rate. From the government's perspective, subsidising genetic testing may potentially reduce the total cost of cancer management. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  19. Lowering plasma 1-deoxysphingolipids improves neuropathy in diabetic rats.

    PubMed

    Othman, Alaa; Bianchi, Roberto; Alecu, Irina; Wei, Yu; Porretta-Serapiglia, Carla; Lombardi, Raffaella; Chiorazzi, Alessia; Meregalli, Cristina; Oggioni, Norberto; Cavaletti, Guido; Lauria, Giuseppe; von Eckardstein, Arnold; Hornemann, Thorsten

    2015-03-01

    1-Deoxysphingolipids (1-deoxySLs) are atypical neurotoxic sphingolipids that are formed by the serine-palmitoyltransferase (SPT). Pathologically elevated 1-deoxySL concentrations cause hereditary sensory and autonomic neuropathy type 1 (HSAN1), an axonal neuropathy associated with several missense mutations in SPT. Oral L-serine supplementation suppressed the formation of 1-deoxySLs in patients with HSAN1 and preserved nerve function in an HSAN1 mouse model. Because 1-deoxySLs also are elevated in patients with type 2 diabetes mellitus, L-serine supplementation could also be a therapeutic option for diabetic neuropathy (DN). This was tested in diabetic STZ rats in a preventive and therapeutic treatment scheme. Diabetic rats showed significantly increased plasma 1-deoxySL concentrations, and L-serine supplementation lowered 1-deoxySL concentrations in both treatment schemes (P < 0.0001). L-serine had no significant effect on hyperglycemia, body weight, or food intake. Mechanical sensitivity was significantly improved in the preventive (P < 0.01) and therapeutic schemes (P < 0.001). Nerve conduction velocity (NCV) significantly improved in only the preventive group (P < 0.05). Overall NCV showed a highly significant (P = 5.2E-12) inverse correlation with plasma 1-deoxySL concentrations. In summary, our data support the hypothesis that 1-deoxySLs are involved in the pathology of DN and that an oral L-serine supplementation could be a novel therapeutic option for treating DN. © 2015 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.

  20. Applying a CAD-generated imaging marker to assess short-term breast cancer risk

    NASA Astrophysics Data System (ADS)

    Mirniaharikandehei, Seyedehnafiseh; Zarafshani, Ali; Heidari, Morteza; Wang, Yunzhi; Aghaei, Faranak; Zheng, Bin

    2018-02-01

    Although whether using computer-aided detection (CAD) helps improve radiologists' performance in reading and interpreting mammograms remains controversial due to higher false-positive detection rates, the objective of this study is to investigate and test a new hypothesis: that CAD-generated false-positives, in particular the bilateral summation of false-positives, are a potential imaging marker associated with short-term breast cancer risk. An image dataset involving negative screening mammograms acquired from 1,044 women was retrospectively assembled. Each case involves 4 images: the craniocaudal (CC) and mediolateral oblique (MLO) views of the left and right breasts. In the next subsequent mammography screening, 402 cases were positive for cancer and 642 remained negative. A CAD scheme was applied to process all "prior" negative mammograms. Several features were extracted from the CAD scheme, including detection seeds, the total number of false-positive regions, the average of detection scores, and the sum of detection scores in the CC and MLO view images. The features computed from the two bilateral images of the left and right breasts, from either the CC or MLO view, were then combined. In order to predict the likelihood of each testing case being positive in the next subsequent screening, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method. Data analysis demonstrated a maximum prediction accuracy with an area under the ROC curve of AUC=0.65+/-0.017 and a maximum adjusted odds ratio of 4.49 with a 95% confidence interval of [2.95, 6.83]. The results also illustrated an increasing trend in the adjusted odds ratio and risk prediction scores (p<0.01). Thus, the study showed that CAD-generated false-positives might provide a new quantitative imaging marker to help assess short-term breast cancer risk.
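
    As a hedged sketch of the evaluation pipeline described above (not the authors' CAD features or data), the example below trains a logistic regression risk model on synthetic stand-in features, obtains leave-one-case-out cross-validated probabilities, and reports the area under the ROC curve with scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical stand-in for bilateral CAD-derived features
# (e.g. summed false-positive counts and detection scores per case)
n_cases = 200
X = rng.normal(size=(n_cases, 4))
risk = 0.8 * X[:, 0] + 0.5 * X[:, 1]                      # assumed latent risk
y = (risk + rng.normal(size=n_cases) > 0.5).astype(int)   # 1 = cancer at next screening

# Leave-one-case-out cross-validated probabilities from a logistic regression model
loo = LeaveOneOut()
clf = LogisticRegression(max_iter=1000)
prob = cross_val_predict(clf, X, y, cv=loo, method="predict_proba")[:, 1]

print(f"LOO cross-validated AUC = {roc_auc_score(y, prob):.3f}")
```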

  1. Hypothesis testing in hydrology: Theory and practice

    NASA Astrophysics Data System (ADS)

    Kirchner, James; Pfister, Laurent

    2017-04-01

    Well-posed hypothesis tests have spurred major advances in hydrological theory. However, a random sample of recent research papers suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias - the tendency to value and trust confirmations more than refutations - among both researchers and reviewers. Hypothesis testing is not the only recipe for scientific progress, however: exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.

  2. A computationally efficient scheme for the non-linear diffusion equation

    NASA Astrophysics Data System (ADS)

    Termonia, P.; Van de Vyver, H.

    2009-04-01

    This Letter proposes a new numerical scheme for integrating the non-linear diffusion equation. It is shown that the scheme is linearly stable. Some tests are presented comparing this scheme to a popular decentered version of the linearized Crank-Nicolson scheme, showing that, although the new scheme is slightly less accurate in treating highly resolved waves, it (i) better treats highly non-linear systems, (ii) better handles short waves, (iii) for a given test bed turns out to be three to four times cheaper computationally, and (iv) is easier to implement.
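
    The Letter's scheme itself is not reproduced here; as a point of reference, the snippet below implements the simplest explicit, conservative finite-difference discretization of the non-linear diffusion equation u_t = (D(u) u_x)_x on a periodic grid, the kind of baseline against which such schemes are usually compared. The diffusivity D(u) = u^2 and all grid parameters are illustrative assumptions.

```python
import numpy as np

def step_nonlinear_diffusion(u, dx, dt, D):
    """One explicit step of u_t = (D(u) u_x)_x on a periodic grid.

    Fluxes are evaluated at cell interfaces with arithmetic averaging of D(u),
    which keeps the update conservative."""
    D_face = 0.5 * (D(u) + D(np.roll(u, -1)))            # D at i+1/2
    flux = D_face * (np.roll(u, -1) - u) / dx            # D(u) u_x at i+1/2
    return u + dt / dx * (flux - np.roll(flux, 1))

# Example: porous-medium-type diffusion with D(u) = u^2
nx, L = 200, 1.0
dx = L / nx
x = np.linspace(0.0, L, nx, endpoint=False)
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)

D = lambda u: u ** 2
dt = 0.2 * dx ** 2 / D(u).max()        # conservative explicit stability limit
for _ in range(500):
    u = step_nonlinear_diffusion(u, dx, dt, D)
print("mass conserved:", np.isclose(u.sum() * dx, 1.0 * L, rtol=1e-10))
```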

  3. Testing the null hypothesis: the forgotten legacy of Karl Popper?

    PubMed

    Wilkinson, Mick

    2013-01-01

    Testing of the null hypothesis is a fundamental aspect of the scientific method and has its basis in the falsification theory of Karl Popper. Null hypothesis testing makes use of deductive reasoning to ensure that the truth of conclusions is irrefutable. In contrast, attempting to demonstrate new facts by testing the experimental or research hypothesis makes use of inductive reasoning and is prone to the problem of the Uniformity of Nature assumption described by David Hume in the eighteenth century. Despite this issue and the well-documented solution provided by Popper's falsification theory, the majority of publications are still written in a way that suggests the research hypothesis is being tested. This is contrary to accepted scientific convention and possibly highlights a poor understanding of the application of conventional significance-based data analysis approaches. Our work should remain driven by conjecture and attempted falsification, such that it is always the null hypothesis that is tested. The write-up of our studies should make it clear that we are indeed testing the null hypothesis and conforming to the established and accepted philosophical conventions of the scientific method.

  4. A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.

    ERIC Educational Resources Information Center

    Liu, Tung; Stone, Courtenay C.

    1999-01-01

    Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…

  5. H2 as a Possible Carrier of the DIBs?

    NASA Astrophysics Data System (ADS)

    Ubachs, W.

    2014-02-01

    In the 1990s the hydrogen molecule, by far the most abundant molecular species in the interstellar medium, was proposed as a possible carrier of the diffuse interstellar bands. While some remarkable coincidences were found between the rich spectrum of inter-Rydberg transitions of this molecule and DIB features, both in frequency position and in linewidth, some open issues remained concerning a required non-linear optical pumping scheme that should explain the population of certain intermediate levels and act as a selection mechanism. Recently a similar scheme has been proposed relating the occurrence of the UV bump (the ubiquitous 2170 Å extinction feature) to the spectrum of H2, therewith reviving the H2 hypothesis.

  6. Learning-automaton-based online discovery and tracking of spatiotemporal event patterns.

    PubMed

    Yazidi, Anis; Granmo, Ole-Christoffer; Oommen, B John

    2013-06-01

    Discovering and tracking spatiotemporal patterns in noisy sequences of events are difficult tasks that have become increasingly pertinent due to recent advances in ubiquitous computing, such as community-based social networking applications. The core activities for applications of this class include the sharing and notification of events, and the importance and usefulness of these functionalities increase as event sharing expands into larger areas of one's life. Ironically, instead of being helpful, an excessive number of event notifications can quickly render the functionality of event sharing obtrusive. Indeed, any notification of events that provides redundant information to the application/user can be seen as an unnecessary distraction. In this paper, we introduce a new scheme for discovering and tracking noisy spatiotemporal event patterns, with the purpose of suppressing recurring patterns while discerning novel events. Our scheme is based on maintaining a collection of hypotheses, each one conjecturing a specific spatiotemporal event pattern. A dedicated learning automaton (LA)--the spatiotemporal pattern LA (STPLA)--is associated with each hypothesis. By processing events as they unfold, we attempt to infer the correctness of each hypothesis through a real-time guided random walk. Consequently, the scheme that we present is computationally efficient, with a minimal memory footprint. Furthermore, it is ergodic, allowing adaptation. Empirical results involving extensive simulations demonstrate the superior convergence and adaptation speed of STPLA, as well as an ability to operate successfully with noise, including both the erroneous inclusion and omission of events. An empirical comparison study was performed and confirms the superiority of our scheme compared to a similar state-of-the-art approach. In particular, the robustness of the STPLA to inclusion as well as to omission noise constitutes a unique property compared to other related approaches. In addition, the results included, which involve the so-called "presence sharing" application, are both promising and, in our opinion, impressive. It is thus our opinion that the proposed STPLA scheme is, in general, ideal for improving the usefulness of event notification and sharing systems, since it is capable of significantly, robustly, and adaptively suppressing redundant information.

  7. Numerical simulation of the effect of the radial reflector on PWR cells using the DRAGON and DONJON codes

    NASA Astrophysics Data System (ADS)

    Bejaoui, Najoua

    Pressurized water reactors (PWRs) constitute the largest fleet of nuclear reactors in operation around the world. Although these reactors have been studied extensively by designers and operators using efficient numerical methods, some calculation weaknesses, given the geometric complexity of the core, remain unresolved, such as the analysis of the neutron flux's behavior at the core-reflector interface. The standard calculation scheme is a two-step process. In the first step, a detailed calculation at the assembly level with reflective boundary conditions provides homogenized cross-sections for the assemblies, condensed to a reduced number of groups; this step is called the lattice calculation. The second step uses the homogenized properties of each assembly to calculate reactor properties at the core level; this step is called the full-core (or whole-core) calculation. This decoupling of the two calculation steps is the origin of methodological bias, particularly at the core-reflector interface: the periodicity hypothesis used to calculate cross-section libraries becomes less pertinent for assemblies that are adjacent to the reflector, which is generally represented by one of two models, an equivalent reflector or albedo matrices. The reflector helps to slow down neutrons leaving the reactor and return them to the core. This effect leads to two fission peaks in the fuel assemblies located at the core-reflector interface, the fission rate increasing due to the greater proportion of reentrant neutrons. This change in the neutron spectrum arises deep inside the fuel located on the outskirts of the core. To remedy this, we simulated a peripheral assembly reflected with a TMI-PWR reflector and developed an advanced calculation scheme that takes into account the environment of the peripheral assemblies and generates equivalent neutronic properties for the reflector. This scheme was tested on a core without control mechanisms and loaded with fresh fuel. The results of this study showed that the explicit representation of the reflector and the calculation of the peripheral assembly with our advanced scheme allow corrections to the energy spectrum at the core interface and increase the peripheral power by up to 12% compared with that of the reference scheme.

  8. 2D granular flows with the μ(I) rheology and side walls friction: A well-balanced multilayer discretization

    NASA Astrophysics Data System (ADS)

    Fernández-Nieto, E. D.; Garres-Díaz, J.; Mangeney, A.; Narbona-Reina, G.

    2018-03-01

    We present here numerical modelling of granular flows with the μ (I) rheology in confined channels. The contribution is twofold: (i) a model to approximate the Navier-Stokes equations with the μ (I) rheology through an asymptotic analysis; under the hypothesis of a one-dimensional flow, this model takes into account side walls friction; (ii) a multilayer discretization following Fernández-Nieto et al. (2016) [20]. In this new numerical scheme, we propose an appropriate treatment of the rheological terms through a hydrostatic reconstruction which allows this scheme to be well-balanced and therefore to deal with dry areas. Based on academic tests, we first evaluate the influence of the width of the channel on the normal profiles of the downslope velocity thanks to the multilayer approach that is intrinsically able to describe changes from Bagnold to S-shaped (and vice versa) velocity profiles. We also check the well-balanced property of the proposed numerical scheme. We show that approximating side walls friction using single-layer models may lead to strong errors. Secondly, we compare the numerical results with experimental data on granular collapses. We show that the proposed scheme allows us to qualitatively reproduce the deposit in the case of a rigid bed (i.e. dry area) and that the error made by replacing the dry area by a small layer of material may be large if this layer is not thin enough. The proposed model is also able to reproduce the time evolution of the free surface and of the flow/no-flow interface. In addition, it reproduces the effect of erosion for granular flows over initially static material lying on the bed. This is possible when using a variable friction coefficient μ (I) but not with a constant friction coefficient.

  9. P value and the theory of hypothesis testing: an explanation for new researchers.

    PubMed

    Biau, David Jean; Jolles, Brigitte M; Porcher, Raphaël

    2010-03-01

    In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one that is more extreme, considering the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
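
    A minimal sketch of the distinction described above, using invented data: Fisher's p value is reported as a continuous measure of evidence, while the Neyman-Pearson rule fixes alpha in advance and issues a binary reject/fail-to-reject decision.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcomes in two treatment groups
control = rng.normal(loc=50.0, scale=10.0, size=30)
treated = rng.normal(loc=56.0, scale=10.0, size=30)

# Fisher's p value: strength of evidence against H0 (no difference in means)
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Neyman-Pearson decision rule: fix alpha (Type I error rate) in advance and
# reject H0 only if the test statistic falls in the critical region
alpha = 0.05
df = len(control) + len(treated) - 2
t_crit = stats.t.ppf(1 - alpha / 2, df)
decision = "reject H0" if abs(t_stat) > t_crit else "fail to reject H0"
print(f"critical value = +/-{t_crit:.2f} -> {decision}")
```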

  10. Circulation Type Classifications and their nexus to Van Bebber's storm track Vb

    NASA Astrophysics Data System (ADS)

    Hofstätter, M.; Chimani, B.

    2012-04-01

    Circulation Type Classifications (CTCs) are tools to identify repetitive and predominantly stationary patterns of the atmospheric circulation over a certain area, with the purpose of enabling the recognition of specific characteristics in surface climate variables. Storm tracks, on the other hand, can be used to identify similar types of synoptic events from a non-stationary, kinematic perspective. Such a storm track classification for Europe was done in the late 19th century by Van Bebber (1882, 1891), of which the famous types Vb and Vc/d remain in use to the present day because of their association with major flooding events, such as the August 2002 floods in Europe. In this work a systematic tracking procedure has been developed to determine storm track types and their characteristics, especially for the Eastern Alpine Region, in the period 1961-2002, using the ERA40 and ERA-Interim reanalyses. The focus is on cyclone tracks of type V as suggested by van Bebber and congeneric types. This new catalogue is used as a reference to verify the hypothesis of a certain coherence of storm track Vb with certain circulation types (e.g. Fricke and Kaminski, 2002). Selected objective and subjective classification schemes from the COST733 action (http://cost733.met.no/, Phillip et al. 2010) are used, as well as the manual classification from ZAMG (Lauscher 1972 and 1985), in which storm track Vb has been classified explicitly on a daily basis since 1948. The latter scheme proves to be a valuable and unique data source for this issue. Results show that no fewer than 146 storm tracks are identified as Vb between 1961 and 2002, whereas only three events could be found in the literature, pointing to substantial subjectivity and preconception in the issue of Vb storm tracks. The annual number of Vb storm tracks does not show any significant trend over the last 42 years, but it varies strongly from year to year. The circulation type classification CAP27 (Cluster Analysis of Principal Components) is the best-performing fully objective scheme tested herein, showing the power to discriminate Vb events; most of the other fully objective schemes do not perform nearly as well. The largest skill is seen from the subjective/manual CTCs, which emphasize relevant synoptic phenomena rather than purely mathematical criteria in the classification. The hypothesis of Fricke and Kaminski can definitely be supported by this work: Vb storm tracks are included in one or the other stationary circulation pattern, but to what extent depends on the specific characteristics of the CTC in question.

  11. Debates—Hypothesis testing in hydrology: Theory and practice

    NASA Astrophysics Data System (ADS)

    Pfister, Laurent; Kirchner, James W.

    2017-03-01

    The basic structure of the scientific method—at least in its idealized form—is widely championed as a recipe for scientific progress, but the day-to-day practice may be different. Here, we explore the spectrum of current practice in hypothesis formulation and testing in hydrology, based on a random sample of recent research papers. This analysis suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias—the tendency to value and trust confirmations more than refutations—among both researchers and reviewers. Nonetheless, as several examples illustrate, hypothesis tests have played an essential role in spurring major advances in hydrological theory. Hypothesis testing is not the only recipe for scientific progress, however. Exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.

  12. Bayesian inference for psychology. Part II: Example applications with JASP.

    PubMed

    Wagenmakers, Eric-Jan; Love, Jonathon; Marsman, Maarten; Jamil, Tahira; Ly, Alexander; Verhagen, Josine; Selker, Ravi; Gronau, Quentin F; Dropmann, Damian; Boutin, Bruno; Meerhoff, Frans; Knight, Patrick; Raj, Akash; van Kesteren, Erik-Jan; van Doorn, Johnny; Šmíra, Martin; Epskamp, Sacha; Etz, Alexander; Matzke, Dora; de Jong, Tim; van den Bergh, Don; Sarafoglou, Alexandra; Steingroever, Helen; Derks, Koen; Rouder, Jeffrey N; Morey, Richard D

    2018-02-01

    Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
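
    JASP and the BayesFactor package compute default Bayes factors with specific prior choices; those are not reproduced here. As a loose, assumption-laden illustration of the idea of quantifying evidence, the sketch below approximates the Bayes factor for a two-group mean difference with the common BIC approximation, BF10 ~ exp((BIC0 - BIC1)/2), on synthetic data.

```python
import numpy as np

def bic_gaussian(rss, n, k):
    # BIC for a Gaussian model with k free mean parameters plus an error variance
    return n * np.log(rss / n) + (k + 1) * np.log(n)

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 40)      # group 1 (hypothetical data)
y = rng.normal(0.4, 1.0, 40)      # group 2

data = np.concatenate([x, y])
n = data.size

# H0: one common mean;  H1: separate group means
rss0 = np.sum((data - data.mean()) ** 2)
rss1 = np.sum((x - x.mean()) ** 2) + np.sum((y - y.mean()) ** 2)
bic0 = bic_gaussian(rss0, n, k=1)
bic1 = bic_gaussian(rss1, n, k=2)

# BF10 ~ exp((BIC0 - BIC1) / 2): evidence for a group difference
bf10 = np.exp((bic0 - bic1) / 2.0)
print(f"approximate BF10 = {bf10:.2f}")
```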

  13. Eye movement sequence generation in humans: Motor or goal updating?

    PubMed Central

    Quaia, Christian; Joiner, Wilsaan M.; FitzGibbon, Edmond J.; Optican, Lance M.; Smith, Maurice A.

    2011-01-01

    Saccadic eye movements are often grouped in pre-programmed sequences. The mechanism underlying the generation of each saccade in a sequence is currently poorly understood. Broadly speaking, two alternative schemes are possible: first, after each saccade the retinotopic location of the next target could be estimated, and an appropriate saccade could be generated. We call this the goal updating hypothesis. Alternatively, multiple motor plans could be pre-computed, and they could then be updated after each movement. We call this the motor updating hypothesis. We used McLaughlin’s intra-saccadic step paradigm to artificially create a condition under which these two hypotheses make discriminable predictions. We found that in human subjects, when sequences of two saccades are planned, the motor updating hypothesis predicts the landing position of the second saccade in two-saccade sequences much better than the goal updating hypothesis. This finding suggests that the human saccadic system is capable of executing sequences of saccades to multiple targets by planning multiple motor commands, which are then updated by serial subtraction of ongoing motor output. PMID:21191134

  14. Teaching Hypothesis Testing by Debunking a Demonstration of Telepathy.

    ERIC Educational Resources Information Center

    Bates, John A.

    1991-01-01

    Discusses a lesson designed to demonstrate hypothesis testing to introductory college psychology students. Explains that a psychology instructor demonstrated apparent psychic abilities to students. Reports that students attempted to explain the instructor's demonstrations through hypothesis testing and revision. Provides instructions on performing…

  15. Trends in hypothesis testing and related variables in nursing research: a retrospective exploratory study.

    PubMed

    Lash, Ayhan Aytekin; Plonczynski, Donna J; Sehdev, Amikar

    2011-01-01

    To compare the inclusion and the influences of selected variables on hypothesis testing during the 1980s and 1990s. In spite of the emphasis on conducting inquiry consistent with the tenets of logical positivism, there have been no studies investigating the frequency and patterns of hypothesis testing in nursing research. The sample was obtained from the journal Nursing Research, the research journal with the highest circulation during the period under study. All quantitative studies published during the two decades, including briefs and historical studies, were included in the analyses. A retrospective design was used to select the sample: five years each from the 1980s and 1990s were randomly selected from Nursing Research. Of the 582 studies, 517 met the inclusion criteria. Findings suggest that there was a decline in the use of hypothesis testing in the last decades of the 20th century. Further research is needed to identify the factors that influence the conduct of research with hypothesis testing. Hypothesis testing in nursing research showed a steady decline from the 1980s to the 1990s. Research purposes of explanation and prediction/control increased the likelihood of hypothesis testing. Hypothesis testing strengthens the quality of quantitative studies, increases the generality of findings, and provides dependable knowledge. This is particularly true for quantitative studies that aim to explore, explain, and predict/control phenomena and/or test theories. The findings also have implications for doctoral programmes, the research preparation of nurse-investigators, and theory testing.

  16. Optimal sample sizes for the design of reliability studies: power consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
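
    As a hedged illustration of exact power calculation for the intraclass correlation in the one-way random-effects model (following the standard F-test construction, not necessarily the paper's own algorithms), the sketch below computes the power of the one-sided test of H0: rho <= rho0 and searches for the smallest number of subjects reaching a target power. The design values are arbitrary.

```python
import numpy as np
from scipy.stats import f

def icc_power(n, k, rho0, rho1, alpha=0.05):
    """Power of the exact one-sided F test of H0: rho <= rho0 in the
    one-way random-effects model with n subjects and k ratings each."""
    df1, df2 = n - 1, n * (k - 1)
    c0 = 1.0 + k * rho0 / (1.0 - rho0)     # scaling of MSB/MSW under H0
    c1 = 1.0 + k * rho1 / (1.0 - rho1)     # scaling under the true rho1
    f_crit = f.ppf(1.0 - alpha, df1, df2)  # critical value for the scaled ratio
    # Reject when MSB/MSW > c0 * f_crit; under rho1, (MSB/MSW)/c1 ~ F(df1, df2)
    return f.sf(c0 * f_crit / c1, df1, df2)

def required_n(k, rho0, rho1, power=0.80, alpha=0.05):
    n = 4
    while icc_power(n, k, rho0, rho1, alpha) < power:
        n += 1
    return n

print(required_n(k=3, rho0=0.6, rho1=0.8))   # subjects needed for 80% power
```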

  17. Designing experiments on thermal interactions by secondary-school students in a simulated laboratory environment

    NASA Astrophysics Data System (ADS)

    Lefkos, Ioannis; Psillos, Dimitris; Hatzikraniotis, Euripides

    2011-07-01

    Background and purpose: The aim of this study was to explore the effect of investigative activities with manipulations in a virtual laboratory on students' ability to design experiments. Sample: Fourteen students in a lower secondary school in Greece attended a teaching sequence on thermal phenomena based on the use of information and communication technology, and specifically of the simulated virtual laboratory 'ThermoLab'. Design and methods: A pre-post comparison was applied. Students' design of experiments was rated in eight dimensions; namely, hypothesis forming and verification, selection of variables, initial conditions, device settings, materials and devices used, process and phenomena description. A three-level ranking scheme was employed for the evaluation of students' answers in each dimension. Results: A Wilcoxon signed-rank test revealed a statistically significant difference between the students' pre- and post-test scores. Additional analysis by comparing the pre- and post-test scores using the Hake gain showed high gains in all but one dimension, which suggests that this improvement was almost inclusive. Conclusions: We consider that our findings support the statement that there was an improvement in students' ability to design experiments.

  18. A shift from significance test to hypothesis test through power analysis in medical research.

    PubMed

    Singh, G

    2006-01-01

    Until recently, the medical research literature exhibited substantial dominance of Fisher's significance-test approach to statistical inference, which concentrates on the probability of a type I error, over the Neyman-Pearson hypothesis-test approach, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or non-significant on the basis of a P value. The Neyman-Pearson approach talks of acceptance or rejection of the null hypothesis. Based on the same underlying theory, these two approaches address the same objective but reach conclusions in their own ways. Advancements in computing techniques and the availability of statistical software have resulted in the increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance-test approach, when it incorporates power analysis, contains the essence of the hypothesis-test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis test procedure.
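
    A small sketch of the kind of routine power and sample-size calculation the abstract refers to, using statsmodels (the exact API may differ between versions); the effect size, alpha and power targets are illustrative.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group to detect a medium effect (Cohen's d = 0.5)
# with 80% power at a two-sided 5% significance level
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                    alternative="two-sided")
print(f"n per group ~ {n_per_group:.1f}")

# Achieved power for a study that actually enrolled 40 subjects per group
achieved = analysis.solve_power(effect_size=0.5, nobs1=40, alpha=0.05,
                                alternative="two-sided")
print(f"power with n=40 per group ~ {achieved:.2f}")
```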

  19. Semantically enabled and statistically supported biological hypothesis testing with tissue microarray databases

    PubMed Central

    2011-01-01

    Background: Although many biological databases are applying semantic web technologies, meaningful biological hypothesis testing cannot be easily achieved. Database-driven high-throughput genomic hypothesis testing requires both the capability of obtaining semantically relevant experimental data and the capability of performing relevant statistical testing on the retrieved data. Tissue Microarray (TMA) data are semantically rich and contain many biologically important hypotheses waiting for high-throughput conclusions. Methods: An application-specific ontology was developed for managing TMA and DNA microarray databases with semantic web technologies. Data were represented as Resource Description Framework (RDF) according to the framework of the ontology. Applications for hypothesis testing (Xperanto-RDF) for TMA data were designed and implemented by (1) formulating the syntactic and semantic structures of the hypotheses derived from TMA experiments, (2) formulating SPARQL queries to reflect the semantic structures of the hypotheses, and (3) performing statistical tests on the result sets returned by the SPARQL queries. Results: When a user designs a hypothesis in Xperanto-RDF and submits it, the hypothesis can be tested against TMA experimental data stored in Xperanto-RDF. When we evaluated four previously validated hypotheses as an illustration, all the hypotheses were supported by Xperanto-RDF. Conclusions: We demonstrated the utility of high-throughput biological hypothesis testing. We believe that preliminary investigation before performing highly controlled experiments can benefit from this approach. PMID:21342584
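
    Xperanto-RDF and its ontology are not reproduced here; purely as a toy analogue of the pipeline (RDF storage, a SPARQL query reflecting a hypothesis, then a statistical test on the result set), the sketch below uses rdflib with a made-up ex: vocabulary and a simple t test from SciPy. The vocabulary, data and choice of test are all assumptions.

```python
from rdflib import Graph
from scipy import stats

# Tiny, made-up TMA-like graph; the ex: vocabulary is hypothetical,
# not the Xperanto-RDF ontology.
ttl = """
@prefix ex: <http://example.org/tma#> .
ex:core1 ex:grade "high" ; ex:stainScore 8.1 .
ex:core2 ex:grade "high" ; ex:stainScore 7.4 .
ex:core3 ex:grade "high" ; ex:stainScore 6.9 .
ex:core4 ex:grade "low"  ; ex:stainScore 3.2 .
ex:core5 ex:grade "low"  ; ex:stainScore 4.0 .
ex:core6 ex:grade "low"  ; ex:stainScore 2.8 .
"""
g = Graph()
g.parse(data=ttl, format="turtle")

# SPARQL reflecting the semantic structure of the hypothesis:
# "stain score differs between high-grade and low-grade tumors"
query = """
PREFIX ex: <http://example.org/tma#>
SELECT ?grade ?score WHERE { ?core ex:grade ?grade ; ex:stainScore ?score . }
"""
high, low = [], []
for grade, score in g.query(query):
    (high if str(grade) == "high" else low).append(float(score))

# Statistical test on the SPARQL result set
t, p = stats.ttest_ind(high, low)
print(f"t = {t:.2f}, p = {p:.4f}")
```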

  20. Knowledge dimensions in hypothesis test problems

    NASA Astrophysics Data System (ADS)

    Krishnan, Saras; Idris, Noraini

    2012-05-01

    The reformation in statistics education over the past two decades has predominantly shifted the focus of statistical teaching and learning from procedural understanding to conceptual understanding. The emphasis of procedural understanding is on formulas and calculation procedures, whereas conceptual understanding emphasizes students knowing why they are using a particular formula or executing a specific procedure. In addition, the Revised Bloom's Taxonomy offers a two-dimensional framework to describe learning objectives, comprising the six revised cognition levels of the original Bloom's taxonomy and four knowledge dimensions. Depending on the level of complexity, the four knowledge dimensions essentially distinguish basic understanding from more connected understanding. This study identifies the factual, procedural and conceptual knowledge dimensions in hypothesis test problems. The hypothesis test, being an important tool for making inferences about a population from sample information, is taught in many introductory statistics courses. However, researchers find that students in these courses still have difficulty in understanding the underlying concepts of the hypothesis test. Past studies also show that even though students can perform the hypothesis testing procedure, they may not understand the rationale for executing these steps or know how to apply them in novel contexts. Besides knowing the procedural steps in conducting a hypothesis test, students must have fundamental statistical knowledge and a deep understanding of the underlying inferential concepts such as the sampling distribution and the central limit theorem. By identifying the knowledge dimensions of hypothesis test problems in this study, suitable instructional and assessment strategies can be developed in future to enhance students' learning of the hypothesis test as a valuable inferential tool.

  1. Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale; Selby, Neil

    2012-08-14

    Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The commonly used standard error in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error gives better agreement with the physical basis, correctly partitions the error to include model error as a component of variance, and correctly reduces the station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
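
    As a sketch of the combination step named above (generic Fisher and Tippett formulas, not the report's screening statistics), the snippet below combines hypothetical single-phenomenology p values into one multi-phenomenology p value.

```python
import numpy as np
from scipy.stats import chi2

def fisher_combined(pvals):
    # Fisher: -2 * sum(log p_i) ~ chi^2 with 2k degrees of freedom under H0
    stat = -2.0 * np.sum(np.log(pvals))
    return chi2.sf(stat, df=2 * len(pvals))

def tippett_combined(pvals):
    # Tippett: based on the minimum p; P(min p <= m) = 1 - (1 - m)^k under H0
    m = np.min(pvals)
    return 1.0 - (1.0 - m) ** len(pvals)

# Hypothetical single-phenomenology screening p values (e.g. Ms:mb and depth)
p_single = np.array([0.12, 0.31])
print(f"Fisher combined p  = {fisher_combined(p_single):.3f}")
print(f"Tippett combined p = {tippett_combined(p_single):.3f}")
```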

  2. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations: Viscous Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.

    2010-01-01

    Discretization of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and complexity are studied for four nominally second-order accurate schemes: a node-centered scheme and three cell-centered schemes - a node-averaging scheme and two schemes with nearest-neighbor and adaptive compact stencils for least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class of tests involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds number turbulent flow simulations. Tests from the first class indicate the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The tests of the second class are more discriminating. The node-centered scheme is always second order with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes may degenerate on mixed grids, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to that of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping based on a distance function commonly available in practical schemes or modifying the scheme stencil to reflect the direction of strong coupling. The major conclusion is that accuracies of the node centered and the best cell-centered schemes are comparable at equivalent number of degrees of freedom.

  3. Modeling and Identification for Vector Propulsion of an Unmanned Surface Vehicle: Three Degrees of Freedom Model and Response Model.

    PubMed

    Mu, Dongdong; Wang, Guofeng; Fan, Yunsheng; Sun, Xiaojie; Qiu, Bingbing

    2018-06-08

    This paper presents a complete scheme for research on the three-degrees-of-freedom model and the response model of the vector propulsion of an unmanned surface vehicle. The object of this paper is “Lanxin”, an unmanned surface vehicle (7.02 m × 2.6 m) equipped with a single vector propulsion device. First, the “Lanxin” unmanned surface vehicle and the related field experiments (turning test and zig-zag test) are introduced, and experimental data are collected through various sensors. Then, the thrust of the vector thruster is estimated by the empirical formula method. Third, using appropriate hypotheses and simplifications, the three-degrees-of-freedom model and the response model of the USV are derived and established, respectively. Fourth, the parameters of the models (three-degrees-of-freedom model, response model and thruster servo model) are obtained by system identification, and the simulated turning test and zig-zag test are compared with the actual data to verify the accuracy of the identification results. Finally, the main advantage of this paper is that it combines theory with practice: based on the identified response model, simulation and practical course-keeping experiments are carried out to further verify the feasibility and correctness of the modeling and identification.
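
    The paper's identified models for "Lanxin" are not reproduced here; as a minimal sketch of response-model identification, the example below simulates zig-zag-like data from a first-order Nomoto yaw-response model T*r_dot + r = K*delta and recovers K and T by ordinary least squares. The parameter values, rudder input and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate zig-zag-like data from a first-order Nomoto model: T*r_dot + r = K*delta
K_true, T_true, dt = 0.25, 4.0, 0.1
t = np.arange(0.0, 120.0, dt)
delta = np.deg2rad(10.0) * np.sign(np.sin(2 * np.pi * t / 40.0))  # rudder input
r = np.zeros_like(t)                                              # yaw rate
for i in range(len(t) - 1):
    r[i + 1] = r[i] + dt * (K_true * delta[i] - r[i]) / T_true
r_meas = r + rng.normal(scale=1e-3, size=r.shape)                 # sensor noise

# Least-squares identification: r_dot = (K/T)*delta - (1/T)*r
r_dot = np.gradient(r_meas, dt)
A = np.column_stack([delta, r_meas])
coef, *_ = np.linalg.lstsq(A, r_dot, rcond=None)
T_hat = -1.0 / coef[1]
K_hat = coef[0] * T_hat
print(f"identified K = {K_hat:.3f} (true {K_true}), T = {T_hat:.2f} (true {T_true})")
```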

  4. ON THE SUBJECT OF HYPOTHESIS TESTING

    PubMed Central

    Ugoni, Antony

    1993-01-01

    In this paper, the definition of a statistical hypothesis is discussed, along with the considerations which need to be addressed when testing a hypothesis. In particular, the p-value, significance level, and power of a test are reviewed. Finally, the often quoted confidence interval is given a brief introduction. PMID:17989768

  5. Some consequences of using the Horsfall-Barratt scale for hypothesis testing

    USDA-ARS?s Scientific Manuscript database

    Comparing treatment effects by hypothesis testing is a common practice in plant pathology. Nearest percent estimates (NPEs) of disease severity were compared to Horsfall-Barratt (H-B) scale data to explore whether there was an effect of assessment method on hypothesis testing. A simulation model ba...

  6. Hypothesis Testing in Task-Based Interaction

    ERIC Educational Resources Information Center

    Choi, Yujeong; Kilpatrick, Cynthia

    2014-01-01

    Whereas studies show that comprehensible output facilitates L2 learning, hypothesis testing has received little attention in Second Language Acquisition (SLA). Following Shehadeh (2003), we focus on hypothesis testing episodes (HTEs) in which learners initiate repair of their own speech in interaction. In the context of a one-way information gap…

  7. Classroom-Based Strategies to Incorporate Hypothesis Testing in Functional Behavior Assessments

    ERIC Educational Resources Information Center

    Lloyd, Blair P.; Weaver, Emily S.; Staubitz, Johanna L.

    2017-01-01

    When results of descriptive functional behavior assessments are unclear, hypothesis testing can help school teams understand how the classroom environment affects a student's challenging behavior. This article describes two hypothesis testing strategies that can be used in classroom settings: structural analysis and functional analysis. For each…

  8. Hypothesis Testing in the Real World

    ERIC Educational Resources Information Center

    Miller, Jeff

    2017-01-01

    Critics of null hypothesis significance testing suggest that (a) its basic logic is invalid and (b) it addresses a question that is of no interest. In contrast to (a), I argue that the underlying logic of hypothesis testing is actually extremely straightforward and compelling. To substantiate that, I present examples showing that hypothesis…

  9. Hadron physics through asymptotic SU(3) and the chiral SU(3) x SU(3) algebra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oneda, S.; Matsuda, S.; Perlmutter, A.

    From Coral Gables conference on fundamental interactions for theoretical studies; Coral Gables, Florida, USA (22 Jan 1973). See CONF-730124-. The inter-SU(3)-multiplet regularities and clues to a possible level scheme of hadrons are studied in a systematic way. The hypothesis of asymptotic SU(3) is made in the presence of GMO mass splittings with mixing, which allows information to be extracted from the chiral SU(3) x SU(3) charge algebras and from the exotic commutation relations. For the ground states the schemes obtained are compatible with those of the SU(6) x O(3) classification. Sum rules are obtained which recover most of the good results of SU(6). (LBS)

  10. A test data compression scheme based on irrational numbers stored coding.

    PubMed

    Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan

    2014-01-01

    The problem of testing has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting floating-point numbers to irrational numbers is given. Experimental results for some ISCAS 89 benchmarks show that the compression achieved by the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.

  11. Roles of Abductive Reasoning and Prior Belief in Children's Generation of Hypotheses about Pendulum Motion

    ERIC Educational Resources Information Center

    Kwon, Yong-Ju; Jeong, Jin-Su; Park, Yun-Bok

    2006-01-01

    The purpose of the present study was to test the hypothesis that student's abductive reasoning skills play an important role in the generation of hypotheses on pendulum motion tasks. To test the hypothesis, a hypothesis-generating test on pendulum motion, and a prior-belief test about pendulum motion were developed and administered to a sample of…

  12. Making Knowledge Delivery Failsafe: Adding Step Zero in Hypothesis Testing

    ERIC Educational Resources Information Center

    Pan, Xia; Zhou, Qiang

    2010-01-01

    Knowledge of statistical analysis is increasingly important for professionals in modern business. For example, hypothesis testing is one of the critical topics for quality managers and team workers in Six Sigma training programs. Delivering the knowledge of hypothesis testing effectively can be an important step for the incapable learners or…

  13. Testing of Hypothesis in Equivalence and Non Inferiority Trials-A Concept.

    PubMed

    Juneja, Atul; Aggarwal, Abha R; Adhikari, Tulsi; Pandey, Arvind

    2016-04-01

    Establishing the appropriate hypothesis is one of the important steps in carrying out statistical tests and analyses, and its understanding is important for interpreting the results of statistical analysis. The current communication attempts to convey the concept of testing of hypothesis in non-inferiority and equivalence trials, where the null hypothesis is just the reverse of what is set up for conventional superiority trials. As in superiority trials, it is the null hypothesis that the researcher seeks to reject in order to establish the claim of interest. It is important to mention that equivalence or non-inferiority cannot be proved by accepting the null hypothesis of no difference. Hence, establishing the appropriate statistical hypothesis is extremely important to arrive at a meaningful conclusion for the set objectives in research.
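
    A minimal sketch of how an equivalence null is actually tested in practice, via two one-sided t tests (TOST) implemented directly with SciPy; the data and the +/-1 equivalence margin are invented, and a real trial would prespecify the margin on clinical grounds.

```python
import numpy as np
from scipy import stats

def tost_ind(x, y, low, upp):
    """Two one-sided t tests for equivalence of two independent-sample means.

    H0: the mean difference lies outside [low, upp]; equivalence is claimed
    only if BOTH one-sided nulls are rejected (overall p = max of the two)."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
    df = nx + ny - 2
    p_lower = stats.t.sf((diff - low) / se, df)   # H0: diff <= low
    p_upper = stats.t.cdf((diff - upp) / se, df)  # H0: diff >= upp
    return max(p_lower, p_upper)

rng = np.random.default_rng(11)
new_drug = rng.normal(10.0, 2.0, 50)
reference = rng.normal(10.2, 2.0, 50)

# Hypothetical equivalence margin of +/- 1 unit
p = tost_ind(new_drug, reference, low=-1.0, upp=1.0)
print(f"TOST p = {p:.4f} -> {'equivalent' if p < 0.05 else 'not shown equivalent'}")
```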

  14. Detection, isolation and diagnosability analysis of intermittent faults in stochastic systems

    NASA Astrophysics Data System (ADS)

    Yan, Rongyi; He, Xiao; Wang, Zidong; Zhou, D. H.

    2018-02-01

    Intermittent faults (IFs) have the properties of unpredictability, non-determinacy, inconsistency and repeatability, switching systems between faulty and healthy status. In this paper, the fault detection and isolation (FDI) problem for IFs in a class of linear stochastic systems is investigated. The detection and isolation of IFs involves: (1) detecting all the appearing times and the disappearing times of an IF; (2) detecting each appearing (disappearing) time of the IF before the subsequent disappearing (appearing) time; and (3) determining where the IFs happen. Based on the outputs of the observers we designed, a novel set of residuals is constructed by using the sliding-time window technique, and two hypothesis tests are proposed to detect all the appearing times and disappearing times of IFs. The isolation problem for IFs is also considered. Furthermore, within a statistical framework, the definition of the diagnosability of IFs is proposed, and a sufficient condition for the diagnosability of IFs is put forward. Quantitative performance analysis results for the false alarm rate and the missing detection rate are discussed, and the influences of some key parameters of the proposed scheme on performance indices such as the false alarm rate and the missing detection rate are analysed rigorously. The effectiveness of the proposed scheme is illustrated via a simulation example of an unmanned helicopter longitudinal control system.
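
    The observer design and residual definitions of the paper are not reproduced here; as a toy analogue of the sliding-time-window hypothesis test, the snippet below thresholds a windowed mean of a synthetic residual carrying an intermittent bias and reports the detected switching instants. The noise model, fault magnitude and threshold are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Toy residual: zero-mean noise, with an intermittent fault adding a bias
n, sigma, bias = 2000, 1.0, 2.0
r = rng.normal(0.0, sigma, n)
active = np.zeros(n, dtype=bool)
for (a, b) in [(300, 500), (900, 1000), (1500, 1700)]:   # fault-active intervals
    active[a:b] = True
r[active] += bias

# Sliding-time-window statistic: windowed mean, tested against H0 (no fault)
W = 40                                            # window length
alpha = 1e-3
thr = norm.ppf(1 - alpha) * sigma / np.sqrt(W)    # one-sided threshold under H0
win_mean = np.convolve(r, np.ones(W) / W, mode="valid")
detected = win_mean > thr                         # per-window hypothesis test

# Appearing / disappearing times = rising / falling edges of the decision signal
edges = np.flatnonzero(np.diff(detected.astype(int)))
print("detected switching instants (window-end indices):", edges + W)
```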

  15. Assessing the Tangent Linear Behaviour of Common Tracer Transport Schemes and Their Use in a Linearised Atmospheric General Circulation Model

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Kent, James

    2015-01-01

    The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
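
    A small numerical check of the property discussed above, under simplified assumptions (1D periodic linear advection, forward Euler, constant positive velocity): a first-order upwind step is linear in the field, whereas a MUSCL step with a minmod flux limiter is not, which is exactly what makes limited schemes problematic inside a tangent-linear model.

```python
import numpy as np

def upwind_step(u, nu):
    # First-order upwind for u_t + a u_x = 0 (a > 0, periodic); a linear scheme
    return u - nu * (u - np.roll(u, 1))

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_minmod_step(u, nu):
    # Second-order upwind-biased MUSCL step with a minmod limiter; nonlinear in u
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    u_face = u + 0.5 * slope                   # reconstructed value at interface i+1/2
    return u - nu * (u_face - np.roll(u_face, 1))

nx, nu = 100, 0.5
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u1 = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)     # step profile
u2 = 0.3 * np.sin(2 * np.pi * x)

for name, step in [("upwind", upwind_step), ("MUSCL+minmod", muscl_minmod_step)]:
    lhs = step(u1 + u2, nu)
    rhs = step(u1, nu) + step(u2, nu)
    print(f"{name:13s} max |T(u1+u2) - (T(u1)+T(u2))| = {np.max(np.abs(lhs - rhs)):.2e}")
```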

  16. Approaches to informed consent for hypothesis-testing and hypothesis-generating clinical genomics research.

    PubMed

    Facio, Flavia M; Sapp, Julie C; Linn, Amy; Biesecker, Leslie G

    2012-10-10

    Massively-parallel sequencing (MPS) technologies create challenges for informed consent of research participants given the enormous scale of the data and the wide range of potential results. We propose that the consent process in these studies be based on whether they use MPS to test a hypothesis or to generate hypotheses. To demonstrate the differences in these approaches to informed consent, we describe the consent processes for two MPS studies. The purpose of our hypothesis-testing study is to elucidate the etiology of rare phenotypes using MPS. The purpose of our hypothesis-generating study is to test the feasibility of using MPS to generate clinical hypotheses, and to approach the return of results as an experimental manipulation. Issues to consider in both designs include: volume and nature of the potential results, primary versus secondary results, return of individual results, duty to warn, length of interaction, target population, and privacy and confidentiality. The categorization of MPS studies as hypothesis-testing versus hypothesis-generating can help to clarify the issue of so-called incidental or secondary results for the consent process, and aid the communication of the research goals to study participants.

  17. An Exercise for Illustrating the Logic of Hypothesis Testing

    ERIC Educational Resources Information Center

    Lawton, Leigh

    2009-01-01

    Hypothesis testing is one of the more difficult concepts for students to master in a basic, undergraduate statistics course. Students often are puzzled as to why statisticians simply don't calculate the probability that a hypothesis is true. This article presents an exercise that forces students to lay out on their own a procedure for testing a…

  18. Hypothesis Testing, "p" Values, Confidence Intervals, Measures of Effect Size, and Bayesian Methods in Light of Modern Robust Techniques

    ERIC Educational Resources Information Center

    Wilcox, Rand R.; Serang, Sarfaraz

    2017-01-01

    The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…

  19. Anticipatory postural adjustments and anticipatory synergy adjustments: Preparing to a postural perturbation with predictable and unpredictable direction

    PubMed Central

    Piscitelli, Daniele; Falaki, Ali; Solnik, Stanislaw; Latash, Mark L.

    2016-01-01

    We explored two aspects of feed-forward postural control, anticipatory postural adjustments (APAs) and anticipatory synergy adjustments (ASAs) seen prior to self-triggered unloading with known and unknown direction of the perturbation. In particular, we tested two main hypotheses predicting contrasting changes in APAs and ASAs. The first hypothesis predicted no major changes in ASAs. The second hypothesis predicted delayed APAs with predominance of co-contraction patterns when perturbation direction was unknown. Healthy subjects stood on the force plate and held a bar with two loads acting in the forward and backward directions. They pressed a trigger that released one of the loads causing a postural perturbation. In different series, the direction of the perturbation was either known (the same load released in all trials) or unknown (the subjects did not know which of the two loads would be released). Surface electromyograms were recorded and used to quantify APAs, synergies stabilizing center of pressure coordinate (within the uncontrolled manifold hypothesis), and ASA. APAs and ASAs were seen in all conditions. APAs were delayed and predominance of co-contraction patterns was seen under the conditions with unpredictable direction of perturbation. In contrast, no significant changes in synergies and ASAs were seen. Overall, these results show that feed-forward control of vertical posture has two distinct components, reflected in APAs and ASAs, which show qualitatively different adjustments with changes in predictability of the direction of perturbation. These results are interpreted within the recently proposed hierarchical scheme of the synergic control of motor tasks. The observations underscore the complexity of the feed-forward postural control, which involves separate changes in salient performance variables (such as coordinate of the center of pressure) and in their stability properties. PMID:21191134

  20. Anticipatory postural adjustments and anticipatory synergy adjustments: preparing to a postural perturbation with predictable and unpredictable direction.

    PubMed

    Piscitelli, Daniele; Falaki, Ali; Solnik, Stanislaw; Latash, Mark L

    2017-03-01

    We explored two aspects of feed-forward postural control, anticipatory postural adjustments (APAs) and anticipatory synergy adjustments (ASAs) seen prior to self-triggered unloading with known and unknown direction of the perturbation. In particular, we tested two main hypotheses predicting contrasting changes in APAs and ASAs. The first hypothesis predicted no major changes in ASAs. The second hypothesis predicted delayed APAs with predominance of co-contraction patterns when perturbation direction was unknown. Healthy subjects stood on the force plate and held a bar with two loads acting in the forward and backward directions. They pressed a trigger that released one of the loads causing a postural perturbation. In different series, the direction of the perturbation was either known (the same load released in all trials) or unknown (the subjects did not know which of the two loads would be released). Surface electromyograms were recorded and used to quantify APAs, synergies stabilizing center of pressure coordinate (within the uncontrolled manifold hypothesis), and ASA. APAs and ASAs were seen in all conditions. APAs were delayed, and predominance of co-contraction patterns was seen under the conditions with unpredictable direction of perturbation. In contrast, no significant changes in synergies and ASAs were seen. Overall, these results show that feed-forward control of vertical posture has two distinct components, reflected in APAs and ASAs, which show qualitatively different adjustments with changes in predictability of the direction of perturbation. These results are interpreted within the recently proposed hierarchical scheme of the synergic control of motor tasks. The observations underscore the complexity of the feed-forward postural control, which involves separate changes in salient performance variables (such as coordinate of the center of pressure) and in their stability properties.

  1. Hypothesis Testing Using Spatially Dependent Heavy Tailed Multisensor Data

    DTIC Science & Technology

    2014-12-01

    Extraction-damaged report fragments; the recoverable content indicates that the report addresses hypothesis testing using spatially dependent heavy-tailed multisensor data, and that surrogate data consistent with the null hypothesis of linearity are used to estimate the distribution of a test statistic that can discriminate between the null and alternative hypotheses. The remaining fragments are figure residue from a test for nonlinearity (a histogram generated from the surrogate data, with the statistic of the original time series marked by a solid line).

  2. Numerical solution of special ultra-relativistic Euler equations using central upwind scheme

    NASA Astrophysics Data System (ADS)

    Ghaffar, Tayabia; Yousaf, Muhammad; Qamar, Shamsul

    2018-06-01

    This article is concerned with the numerical approximation of one- and two-dimensional special ultra-relativistic Euler equations. The governing equations are coupled first-order nonlinear hyperbolic partial differential equations. These equations describe perfect fluid flow in terms of the particle density, the four-velocity and the pressure. A high-resolution shock-capturing central upwind scheme is employed to solve the model equations. To avoid excessive numerical diffusion, the scheme exploits specific information about local propagation speeds. By using a Runge-Kutta time-stepping method and MUSCL-type initial reconstruction, the proposed scheme attains second-order accuracy. After discussing the model equations and the numerical technique, several 1D and 2D test problems are investigated. For all the numerical test cases, our proposed scheme demonstrates very good agreement with the results obtained by well-established algorithms, even in the case of highly relativistic 2D test problems. For validation and comparison, the staggered central scheme and the kinetic flux-vector splitting (KFVS) method are also applied to the same model. The robustness and efficiency of the central upwind scheme are demonstrated by the numerical results.
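
    The solver above targets the relativistic Euler system; as a rough illustration of two of its ingredients (minmod-limited MUSCL reconstruction and a Kurganov-type central upwind flux built from local propagation speeds), the sketch below applies them to the scalar inviscid Burgers equation, with a simple forward Euler step standing in for the paper's Runge-Kutta time stepping. All names and parameters are illustrative; this is not the authors' code.

    ```python
    import numpy as np

    def minmod(a, b):
        """Minmod slope limiter used for MUSCL-type reconstruction."""
        return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def central_upwind_step(u, dx, dt):
        """One forward-Euler step for u_t + (u^2/2)_x = 0 with a central-upwind flux."""
        f = lambda q: 0.5 * q ** 2           # Burgers flux
        df = lambda q: q                     # wave speed f'(u)

        up = np.roll(u, -1)                  # u_{j+1}
        um = np.roll(u, 1)                   # u_{j-1}
        slope = minmod(u - um, up - u)       # limited (undivided) slopes

        # Reconstructed left/right states at the interface x_{j+1/2}
        uL = u + 0.5 * slope                 # from cell j
        uR = np.roll(u - 0.5 * slope, -1)    # from cell j+1

        # One-sided local speeds
        ap = np.maximum.reduce([df(uL), df(uR), np.zeros_like(u)])
        am = np.minimum.reduce([df(uL), df(uR), np.zeros_like(u)])
        denom = np.where(ap - am > 1e-14, ap - am, 1.0)

        # Central-upwind numerical flux at x_{j+1/2}
        F = (ap * f(uL) - am * f(uR)) / denom + (ap * am / denom) * (uR - uL)

        return u - dt / dx * (F - np.roll(F, 1))

    # Periodic test: a smooth profile steepening into a shock
    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u = np.sin(2 * np.pi * x)
    dx, dt = x[1] - x[0], 0.2 * (x[1] - x[0])
    for _ in range(200):
        u = central_upwind_step(u, dx, dt)
    ```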

  3. The role of responsibility and fear of guilt in hypothesis-testing.

    PubMed

    Mancini, Francesco; Gangemi, Amelia

    2006-12-01

    Recent theories argue that both perceived responsibility and fear of guilt increase obsessive-like behaviours. We propose that hypothesis-testing might account for this effect. Both perceived responsibility and fear of guilt would influence subjects' hypothesis-testing by inducing a prudential style. This style implies focusing on and confirming the worst hypothesis, and reiterating the testing process. In our experiment, we manipulated the responsibility and fear of guilt of 236 normal volunteers who executed a deductive task. The results show that perceived responsibility is the main factor that influenced individuals' hypothesis-testing. Fear of guilt, however, has a significant additive effect. Guilt-fearing participants preferred to carry on with the diagnostic process even when faced with initial favourable evidence, whereas participants in the responsibility condition only did so when confronted with unfavourable evidence. Implications for the understanding of obsessive-compulsive disorder (OCD) are discussed.

  4. Stratified exact tests for the weak causal null hypothesis in randomized trials with a binary outcome.

    PubMed

    Chiba, Yasutaka

    2017-09-01

    Fisher's exact test is commonly used to compare two groups when the outcome is binary in randomized trials. In the context of causal inference, this test explores the sharp causal null hypothesis (i.e. the causal effect of treatment is the same for all subjects), but not the weak causal null hypothesis (i.e. the causal risks are the same in the two groups). Therefore, in general, rejection of the null hypothesis by Fisher's exact test does not mean that the causal risk difference is not zero. Recently, Chiba (Journal of Biometrics and Biostatistics 2015; 6: 244) developed a new exact test for the weak causal null hypothesis when the outcome is binary in randomized trials; the new test is not based on any large sample theory and does not require any assumption. In this paper, we extend the new test; we create a version of the test applicable to a stratified analysis. The stratified exact test that we propose is general in nature and can be used in several approaches toward the estimation of treatment effects after adjusting for stratification factors. The stratified Fisher's exact test of Jung (Biometrical Journal 2014; 56: 129-140) tests the sharp causal null hypothesis. This test applies a crude estimator of the treatment effect and can be regarded as a special case of our proposed exact test. Our proposed stratified exact test can be straightforwardly extended to analysis of noninferiority trials and to construct the associated confidence interval. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
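
    The stratified exact test described above follows Chiba's specific construction, which is not available as a standard library routine. As background only, the sketch below runs Fisher's exact test, which addresses the sharp causal null, within each stratum of a hypothetical trial using scipy; the counts are made up for illustration.

    ```python
    from scipy.stats import fisher_exact

    # Hypothetical 2x2 tables per stratum: rows = treatment/control,
    # columns = event/no event (counts are illustrative only).
    strata = {
        "stratum_1": [[12, 38], [20, 30]],
        "stratum_2": [[ 8, 42], [15, 35]],
    }

    for name, table in strata.items():
        odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
        print(f"{name}: OR = {odds_ratio:.2f}, Fisher exact p = {p_value:.3f}")
    ```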

  5. Initial study of Schroedinger eigenmaps for spectral target detection

    NASA Astrophysics Data System (ADS)

    Dorado-Munoz, Leidy P.; Messinger, David W.

    2016-08-01

    Spectral target detection refers to the process of searching for a specific material with a known spectrum over a large area containing materials with different spectral signatures. Traditional target detection methods in hyperspectral imagery (HSI) require assuming that the data fit some statistical or geometric model and, based on that model, estimating parameters to define a hypothesis test in which one class (i.e., the target class) is chosen over the other (i.e., the background class). Nonlinear manifold learning methods such as Laplacian eigenmaps (LE) have extensively shown their potential use in HSI processing, specifically in classification or segmentation. Recently, Schroedinger eigenmaps (SE), which is built upon LE, has been introduced as a semisupervised classification method. In SE, the former Laplacian operator is replaced by the Schroedinger operator. The Schroedinger operator includes, by definition, a potential term V that steers the transformation in certain directions, improving the separability between classes. In this regard, we propose a methodology for target detection that is not based on the traditional schemes and that does not need the estimation of statistical or geometric parameters. This method is based on SE, where the potential term V is taken into consideration to include prior knowledge about the target class and to steer the transformation in directions where the target location in the new space is known and the separability between target and background is augmented. An initial study of how SE can be used in a target detection scheme for HSI is shown here. In-scene pixel and spectral signature detection approaches are presented. The HSI data used comprise various target panels for testing simultaneous detection of multiple objects with different complexities.

  6. Protostellar hydrodynamics: Constructing and testing a spatially and temporally second-order accurate method. 2: Cartesian coordinates

    NASA Technical Reports Server (NTRS)

    Myhill, Elizabeth A.; Boss, Alan P.

    1993-01-01

    In Boss & Myhill (1992) we described the derivation and testing of a spherical coordinate-based scheme for solving the hydrodynamic equations governing the gravitational collapse of nonisothermal, nonmagnetic, inviscid, radiative, three-dimensional protostellar clouds. Here we discuss a Cartesian coordinate-based scheme based on the same set of hydrodynamic equations. As with the spherical coordinate-based code, the Cartesian coordinate-based scheme employs explicit Eulerian methods which are both spatially and temporally second-order accurate. We begin by describing the hydrodynamic equations in Cartesian coordinates and the numerical methods used in this particular code. Following Finn & Hawley (1989), we pay special attention to the proper implementation of high-order accuracy finite difference methods. We evaluate the ability of the Cartesian scheme to handle shock propagation problems, and through convergence testing, we show that the code is indeed second-order accurate. To compare the Cartesian scheme discussed here with the spherical coordinate-based scheme discussed in Boss & Myhill (1992), the two codes are used to calculate the standard isothermal collapse test case described by Bodenheimer & Boss (1981). We find that with the improved codes, the intermediate bar-configuration found previously disappears, and the cloud fragments directly into a binary protostellar system. Finally, we present the results from both codes of a new test for nonisothermal protostellar collapse.

  7. A statistical test to show negligible trend

    Treesearch

    Philip M. Dixon; Joseph H.K. Pechmann

    2005-01-01

    The usual statistical tests of trend are inappropriate for demonstrating the absence of trend. This is because failure to reject the null hypothesis of no trend does not prove that null hypothesis. The appropriate statistical method is based on an equivalence test. The null hypothesis is that the trend is not zero, i.e., outside an a priori specified equivalence region...

  8. A scheme based on ICD-10 diagnoses and drug prescriptions to stage chronic kidney disease severity in healthcare administrative records.

    PubMed

    Friberg, Leif; Gasparini, Alessandro; Carrero, Juan Jesus

    2018-04-01

    Information about renal function is important for drug safety studies using administrative health databases. However, serum creatinine values are seldom available in these registries. Our aim was to develop and test a simple scheme for stratification of renal function without access to laboratory test results. Our scheme uses registry data about diagnoses, contacts, dialysis and drug use. We validated the scheme in the Stockholm CREAtinine Measurements (SCREAM) project using information on approximately 1.1 million individuals residing in the Stockholm County who underwent calibrated creatinine testing during 2006-11, linked with data about health care contacts and filled drug prescriptions. Estimated glomerular filtration rate (eGFR) was calculated with the CKD-EPI formula and used as the gold standard for validation of the scheme. When the scheme classified patients as having eGFR <30 mL/min/1.73 m², it was correct in 93.5% of cases. The specificity of the scheme was close to 100% in all age groups. The sensitivity was poor, ranging from 68.2% in the youngest age quartile, down to 10.7% in the oldest age quartile. Age-related decline in renal function makes a large proportion of elderly patients fall into the chronic kidney disease (CKD) range without receiving CKD diagnoses, as this often is seen as part of normal ageing. In the absence of renal function tests, our scheme may be of value for identifying patients with moderate and severe CKD on the basis of diagnostic and prescription data for use in studies of large healthcare databases.

  9. Unadjusted Bivariate Two-Group Comparisons: When Simpler is Better.

    PubMed

    Vetter, Thomas R; Mascha, Edward J

    2018-01-01

    Hypothesis testing involves posing both a null hypothesis and an alternative hypothesis. This basic statistical tutorial discusses the appropriate use, including their so-called assumptions, of the common unadjusted bivariate tests for hypothesis testing and thus comparing study sample data for a difference or association. The appropriate choice of a statistical test is predicated on the type of data being analyzed and compared. The unpaired or independent samples t test is used to test the null hypothesis that the 2 population means are equal, thereby accepting the alternative hypothesis that the 2 population means are not equal. The unpaired t test is intended for comparing independent continuous (interval or ratio) data from 2 study groups. A common mistake is to apply several unpaired t tests when comparing data from 3 or more study groups. In this situation, an analysis of variance with post hoc (posttest) intragroup comparisons should instead be applied. Another common mistake is to apply a series of unpaired t tests when comparing sequentially collected data from 2 study groups. In this situation, a repeated-measures analysis of variance, with tests for group-by-time interaction, and post hoc comparisons, as appropriate, should instead be applied in analyzing data from sequential collection points. The paired t test is used to assess the difference in the means of 2 study groups when the sample observations have been obtained in pairs, often before and after an intervention in each study subject. The Pearson chi-square test is widely used to test the null hypothesis that 2 unpaired categorical variables, each with 2 or more nominal levels (values), are independent of each other. When the null hypothesis is rejected, one concludes that there is a probable association between the 2 unpaired categorical variables. When comparing 2 groups on an ordinal or nonnormally distributed continuous outcome variable, the 2-sample t test is usually not appropriate. The Wilcoxon-Mann-Whitney test is instead preferred. When making paired comparisons on data that are ordinal, or continuous but nonnormally distributed, the Wilcoxon signed-rank test can be used. In analyzing their data, researchers should consider the continued merits of these simple yet equally valid unadjusted bivariate statistical tests. However, the appropriate use of an unadjusted bivariate test still requires a solid understanding of its utility, assumptions (requirements), and limitations. This understanding will mitigate the risk of misleading findings, interpretations, and conclusions.
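
    A minimal sketch of the test choices discussed above, using scipy.stats on synthetic data; the numbers are illustrative only.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    group_a = rng.normal(10.0, 2.0, size=40)   # continuous outcome, group A
    group_b = rng.normal(11.0, 2.0, size=40)   # continuous outcome, group B

    # Unpaired (independent-samples) t test: two independent groups, ~normal data
    t_stat, p_unpaired = stats.ttest_ind(group_a, group_b)

    # Paired t test: same subjects measured before and after an intervention
    before = rng.normal(10.0, 2.0, size=30)
    after = before + rng.normal(0.5, 1.0, size=30)
    t_pair, p_paired = stats.ttest_rel(before, after)

    # Pearson chi-square: association between two unpaired categorical variables
    table = np.array([[30, 10],
                      [22, 18]])
    chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

    # Wilcoxon-Mann-Whitney: two groups, ordinal or non-normal continuous outcome
    u_stat, p_mwu = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

    # Wilcoxon signed-rank: paired, ordinal or non-normal continuous outcome
    w_stat, p_wsr = stats.wilcoxon(before, after)
    ```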

  10. On-line determination of transient stability status using multilayer perceptron neural network

    NASA Astrophysics Data System (ADS)

    Frimpong, Emmanuel Asuming; Okyere, Philip Yaw; Asumadu, Johnson

    2018-01-01

    A scheme to predict transient stability status following a disturbance is presented. The scheme is activated upon the tripping of a line or bus and operates as follows: Two samples of frequency deviation values at all generator buses are obtained. At each generator bus, the maximum frequency deviation within the two samples is extracted. A vector is then constructed from the extracted maximum frequency deviations. The Euclidean norm of the constructed vector is calculated and then fed as input to a trained multilayer perceptron neural network which predicts the stability status of the system. The scheme was tested using data generated from the New England test system. The scheme successfully predicted the stability status of all two hundred and five disturbance test cases.
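
    A hedged sketch of the feature construction and classifier described above: the per-bus maximum frequency deviation over two samples is reduced to its Euclidean norm and fed to a multilayer perceptron. The data below are synthetic placeholders, not the New England test system, and the network size is an arbitrary choice.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)

    def feature(freq_dev_samples):
        """freq_dev_samples: array of shape (2, n_buses) holding two post-disturbance
        samples of frequency deviation at every generator bus. Returns the Euclidean
        norm of the per-bus maximum deviations, as described in the abstract."""
        max_per_bus = np.max(np.abs(freq_dev_samples), axis=0)
        return np.linalg.norm(max_per_bus)

    # Synthetic training set: stable cases have small deviations, unstable ones large.
    n_buses = 10
    stable = [feature(rng.normal(0.0, 0.02, size=(2, n_buses))) for _ in range(200)]
    unstable = [feature(rng.normal(0.0, 0.20, size=(2, n_buses))) for _ in range(200)]
    X = np.array(stable + unstable).reshape(-1, 1)
    y = np.array([0] * 200 + [1] * 200)       # 0 = stable, 1 = unstable

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

    new_case = np.array([[feature(rng.normal(0.0, 0.15, size=(2, n_buses)))]])
    print("predicted unstable" if clf.predict(new_case)[0] == 1 else "predicted stable")
    ```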

  11. A spectral radius scaling semi-implicit iterative time stepping method for reactive flow simulations with detailed chemistry

    NASA Astrophysics Data System (ADS)

    Xie, Qing; Xiao, Zhixiang; Ren, Zhuyin

    2018-09-01

    A spectral radius scaling semi-implicit time stepping scheme has been developed for simulating unsteady compressible reactive flows with detailed chemistry, in which the spectral radius in the LUSGS scheme has been augmented to account for viscous/diffusive and reactive terms and a scalar matrix is proposed to approximate the chemical Jacobian using the minimum species destruction timescale. The performance of the semi-implicit scheme, together with a third-order explicit Runge-Kutta scheme and a Strang splitting scheme, have been investigated in auto-ignition and laminar premixed and nonpremixed flames of three representative fuels, e.g., hydrogen, methane, and n-heptane. Results show that the minimum species destruction time scale can well represent the smallest chemical time scale in reactive flows and the proposed scheme can significantly increase the allowable time steps in simulations. The scheme is stable when the time step is as large as 10 μs, which is about three to five orders of magnitude larger than the smallest time scales in various tests considered. For the test flames considered, the semi-implicit scheme achieves second order of accuracy in time. Moreover, the errors in quantities of interest are smaller than those from the Strang splitting scheme indicating the accuracy gain when the reaction and transport terms are solved coupled. Results also show that the relative efficiency of different schemes depends on fuel mechanisms and test flames. When the minimum time scale in reactive flows is governed by transport processes instead of chemical reactions, the proposed semi-implicit scheme is more efficient than the splitting scheme. Otherwise, the relative efficiency depends on the cost in sub-iterations for convergence within each time step and in the integration for chemistry substep. Then, the capability of the compressible reacting flow solver and the proposed semi-implicit scheme is demonstrated for capturing the hydrogen detonation waves. Finally, the performance of the proposed method is demonstrated in a two-dimensional hydrogen/air diffusion flame.

  12. Longitudinal Dimensionality of Adolescent Psychopathology: Testing the Differentiation Hypothesis

    ERIC Educational Resources Information Center

    Sterba, Sonya K.; Copeland, William; Egger, Helen L.; Costello, E. Jane; Erkanli, Alaattin; Angold, Adrian

    2010-01-01

    Background: The differentiation hypothesis posits that the underlying liability distribution for psychopathology is of low dimensionality in young children, inflating diagnostic comorbidity rates, but increases in dimensionality with age as latent syndromes become less correlated. This hypothesis has not been adequately tested with longitudinal…

  13. A large scale test of the gaming-enhancement hypothesis.

    PubMed

    Przybylski, Andrew K; Wang, John C

    2016-01-01

    A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support of the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.
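
    The study above relied on Bayesian hypothesis testing to weigh the null against the gaming-enhancement hypothesis. As a rough illustration of the idea, and not the authors' analysis, the sketch below uses the BIC approximation to the Bayes factor to compare a regression model with and without a gaming predictor on synthetic data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    gaming_hours = rng.uniform(0, 20, size=n)
    reasoning = 100 + 0.0 * gaming_hours + rng.normal(0, 15, size=n)  # true effect = 0

    def bic_linear(y, X):
        """BIC of an ordinary least-squares fit (up to an additive constant)."""
        X = np.column_stack([np.ones(len(y)), X]) if X is not None else np.ones((len(y), 1))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        k = X.shape[1]
        return len(y) * np.log(rss / len(y)) + k * np.log(len(y))

    bic_null = bic_linear(reasoning, None)           # intercept-only model (H0)
    bic_alt = bic_linear(reasoning, gaming_hours)    # model with gaming predictor (H1)

    # BF01 > 1 favours the null; the BIC approximation assumes unit-information priors.
    bf01 = np.exp((bic_alt - bic_null) / 2.0)
    print(f"approximate BF01 = {bf01:.2f}")
    ```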

  14. Null but not void: considerations for hypothesis testing.

    PubMed

    Shaw, Pamela A; Proschan, Michael A

    2013-01-30

    Standard statistical theory teaches us that once the null and alternative hypotheses have been defined for a parameter, the choice of the statistical test is clear. Standard theory does not teach us how to choose the null or alternative hypothesis appropriate to the scientific question of interest. Neither does it tell us that in some cases, depending on which alternatives are realistic, we may want to define our null hypothesis differently. Problems in statistical practice are frequently not as pristinely summarized as the classic theory in our textbooks. In this article, we present examples in statistical hypothesis testing in which seemingly simple choices are in fact rich with nuance that, when given full consideration, make the choice of the right hypothesis test much less straightforward. Published 2012. This article is a US Government work and is in the public domain in the USA.

  15. Effect of climate-related mass extinctions on escalation in molluscs

    NASA Astrophysics Data System (ADS)

    Hansen, Thor A.; Kelley, Patricia H.; Melland, Vicky D.; Graham, Scott E.

    1999-12-01

    We test the hypothesis that escalated species (e.g., those with antipredatory adaptations such as heavy armor) are more vulnerable to extinctions caused by changes in climate. If this hypothesis is valid, recovery faunas after climate-related extinctions should include significantly fewer species with escalated shell characteristics, and escalated species should undergo greater rates of extinction than nonescalated species. This hypothesis is tested for the Cretaceous-Paleocene, Eocene-Oligocene, middle Miocene, and Pliocene-Pleistocene mass extinctions. Gastropod and bivalve molluscs from the U.S. coastal plain were evaluated for 10 shell characters that confer resistance to predators. Of 40 tests, one supported the hypothesis; highly ornamented gastropods underwent greater levels of Pliocene-Pleistocene extinction than did nonescalated species. All remaining tests were nonsignificant. The hypothesis that escalated species are more vulnerable to climate-related mass extinctions is not supported.

  16. Study on test of coal co-firing for 600MW ultra supercritical boiler with four walls tangential burning

    NASA Astrophysics Data System (ADS)

    Ying, Wu; Yong-lu, Zhong; Guo-mingi, Yin

    2018-06-01

    From nine coals commonly used in a power plant in Jiangxi, two kinds of coal were selected for the co-firing test on the basis of proximate analysis, ultimate analysis and thermogravimetric analysis. During the co-firing test, two load points were selected and three coal mixtures were prepared. Moreover, under each coal blending scheme, the optimal oxygen content was determined by an oxygen variation test. Finally, by measuring the boiler efficiency and the coal consumption of power supply under the different co-firing schemes, the recommended co-firing scheme was obtained.

  17. On Restructurable Control System Theory

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1983-01-01

    The state of stochastic system and control theory as it impacts restructurable control issues is addressed. The multivariable characteristics of the control problem are addressed. The failure detection/identification problem is discussed as a multi-hypothesis testing problem. Control strategy reconfiguration, static multivariable controls, static failure hypothesis testing, dynamic multivariable controls, fault-tolerant control theory, dynamic hypothesis testing, generalized likelihood ratio (GLR) methods, and adaptive control are discussed.

  18. Perspectives on the Use of Null Hypothesis Statistical Testing. Part III: the Various Nuts and Bolts of Statistical and Hypothesis Testing

    ERIC Educational Resources Information Center

    Marmolejo-Ramos, Fernando; Cousineau, Denis

    2017-01-01

    The number of articles showing dissatisfaction with the null hypothesis statistical testing (NHST) framework has been progressively increasing over the years. Alternatives to NHST have been proposed and the Bayesian approach seems to have achieved the highest amount of visibility. In this last part of the special issue, a few alternative…

  19. Testing Eurasian wild boar piglets for serum antibodies against Mycobacterium bovis.

    PubMed

    Che' Amat, A; González-Barrio, D; Ortiz, J A; Díez-Delgado, I; Boadella, M; Barasona, J A; Bezos, J; Romero, B; Armenteros, J A; Lyashchenko, K P; Venteo, A; Rueda, P; Gortázar, C

    2015-09-01

    Animal tuberculosis (TB), caused by infection with Mycobacterium bovis and closely related members of the M. tuberculosis complex (MTC), is often reported in the Eurasian wild boar (Sus scrofa). Tests detecting antibodies against MTC antigens are valuable tools for TB monitoring and control in suids. However, only limited knowledge exists on serology test performance in 2-6 month-old piglets. In this age-class, recent infections might cause lower antibody levels and lower test sensitivity. We examined 126 wild boar piglets from a TB-endemic site using 6 antibody detection tests in order to assess test performance. Bacterial culture (n=53) yielded an M. bovis infection prevalence of 33.9%, while serum antibody prevalence estimated by different tests ranged from 19% to 38%, reaching sensitivities between 15.4% and 46.2% for plate ELISAs and between 61.5% and 69.2% for rapid immunochromatographic tests based on dual path platform (DPP) technology. The Cohen kappa coefficient of agreement between DPP WTB (Wildlife TB) assay and culture results was moderate (0.45) and all other serological tests showed poor to fair agreement. This survey revealed the ability of several tests for detecting serum antibodies against the MTC antigens in 2-6 month-old naturally infected wild boar piglets. The best performance was demonstrated for DPP tests. The results confirmed our initial hypothesis of a lower sensitivity of serology for detecting M. bovis-infected piglets, as compared to older wild boar. Certain tests, notably the rapid animal-side tests, can contribute to TB control strategies by enabling the setup of test and cull schemes or improving pre-movement testing. However, sub-optimal test performance in piglets as compared to that in older wild boar should be taken into account. Copyright © 2015 Elsevier B.V. All rights reserved.
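
    A small sketch, on made-up counts rather than the study's data, of the quantities reported above: sensitivity and specificity of a serological test against culture as the reference, and Cohen's kappa for test-culture agreement.

    ```python
    import numpy as np

    # Hypothetical 2x2 agreement table: rows = culture (reference), cols = serology.
    #                          serology+  serology-
    culture_pos = np.array([9, 4])     # illustrative counts only
    culture_neg = np.array([3, 37])

    tp, fn = culture_pos
    fp, tn = culture_neg
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)

    # Cohen's kappa from observed vs chance-expected agreement
    n = tp + fn + fp + tn
    po = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} kappa={kappa:.2f}")
    ```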

  20. Transonic small disturbances equation applied to the solution of two-dimensional nonsteady flows

    NASA Technical Reports Server (NTRS)

    Couston, M.; Angelini, J. J.; Mulak, P.

    1980-01-01

    Transonic nonsteady flows are of large practical interest: aeroelastic instability prediction, control configured vehicle techniques and rotary wings in forward flight are some examples justifying the effort undertaken to improve knowledge of these problems. The numerical solution of these problems under the potential flow hypothesis is described. The use of an alternating direction implicit scheme allows efficient resolution of the two-dimensional transonic small perturbations equation.

  1. A pilot evaluation of two G-seat cueing schemes

    NASA Technical Reports Server (NTRS)

    Showalter, T. W.

    1978-01-01

    A comparison was made of two contrasting G-seat cueing schemes. The G-seat, an aircraft simulation subsystem, creates aircraft acceleration cues via seat contour changes. Of the two cueing schemes tested, one was designed to create skin pressure cues and the other was designed to create body position cues. Each cueing scheme was tested and evaluated subjectively by five pilots regarding its ability to cue the appropriate accelerations in each of four simple maneuvers: a pullout, a pushover, an S-turn maneuver, and a thrusting maneuver. A divergence of pilot opinion occurred, revealing that the perception and acceptance of G-seat stimuli is a highly individualistic phenomenon. The creation of one acceptable G-seat cueing scheme was, therefore, deemed to be quite difficult.

  2. Development of a solution adaptive unstructured scheme for quasi-3D inviscid flows through advanced turbomachinery cascades

    NASA Technical Reports Server (NTRS)

    Usab, William J., Jr.; Jiang, Yi-Tsann

    1991-01-01

    The objective of the present research is to develop a general solution adaptive scheme for the accurate prediction of inviscid quasi-three-dimensional flow in advanced compressor and turbine designs. The adaptive solution scheme combines an explicit finite-volume time-marching scheme for unstructured triangular meshes and an advancing front triangular mesh scheme with a remeshing procedure for adapting the mesh as the solution evolves. The unstructured flow solver has been tested on a series of two-dimensional airfoil configurations including a three-element analytic test case presented here. Mesh adapted quasi-three-dimensional Euler solutions are presented for three spanwise stations of the NASA rotor 67 transonic fan. Computed solutions are compared with available experimental data.

  3. Revised standards for statistical evidence.

    PubMed

    Johnson, Valen E

    2013-11-26

    Recent advances in Bayesian hypothesis testing have led to the development of uniformly most powerful Bayesian tests, which represent an objective, default class of Bayesian hypothesis tests that have the same rejection regions as classical significance tests. Based on the correspondence between these two classes of tests, it is possible to equate the size of classical hypothesis tests with evidence thresholds in Bayesian tests, and to equate P values with Bayes factors. An examination of these connections suggests that recent concerns over the lack of reproducibility of scientific studies can be attributed largely to the conduct of significance tests at unjustifiably high levels of significance. To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25-50:1, and to 100-200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.

  4. Spray algorithm without interface construction

    NASA Astrophysics Data System (ADS)

    Al-Kadhem Majhool, Ahmed Abed; Watkins, A. P.

    2012-05-01

    This research aims to create a new and robust family of convective schemes to capture the interface between the dispersed and carrier phases in a spray without the need to build up the interface boundary. The Weighted Average Flux (WAF) scheme is selected because it is a random-flux scheme that is second-order accurate in space and time. The convective flux at each cell face uses the WAF scheme blended with the Switching Technique for Advection and Capturing of Surfaces (STACS) scheme for high-resolution flux limiting. In the next step, the high-resolution scheme is blended with the WAF scheme to provide sharpness and boundedness of the interface by using a switching strategy. In this work, the Eulerian-Eulerian framework for non-reactive turbulent sprays is formulated in terms of moments of the drop size distribution, as presented by Beck and Watkins [1]. The computational spray model avoids the need to segregate the local droplet number distribution into parcels of identical droplets. The proposed scheme is tested on capturing the spray edges in modelling hollow cone sprays without the need to reconstruct the two-phase interface. A simple comparison between the TVD and WAF schemes, using the same flux limiter, is made for a convected hollow cone spray; the results show that the WAF scheme gives better predictions than the TVD scheme. The only way to check the accuracy of the presented models is by evaluating the spray sheet thickness.

  5. The Gumbel hypothesis test for left censored observations using regional earthquake records as an example

    NASA Astrophysics Data System (ADS)

    Thompson, E. M.; Hewlett, J. B.; Baise, L. G.; Vogel, R. M.

    2011-01-01

    Annual maximum (AM) time series are incomplete (i.e., censored) when no events are included above the assumed censoring threshold (i.e., magnitude of completeness). We introduce a distributional hypothesis test for left-censored Gumbel observations based on the probability plot correlation coefficient (PPCC). Critical values of the PPCC hypothesis test statistic are computed from Monte-Carlo simulations and are a function of sample size, censoring level, and significance level. When applied to a global catalog of earthquake observations, the left-censored Gumbel PPCC tests are unable to reject the Gumbel hypothesis for 45 of 46 seismic regions. We apply four different field significance tests for combining individual tests into a collective hypothesis test. None of the field significance tests are able to reject the global hypothesis that AM earthquake magnitudes arise from a Gumbel distribution. Because the field significance levels are not conclusive, we also compute the likelihood that these field significance tests are unable to reject the Gumbel model when the samples arise from a more complex distributional alternative. A power study documents that the censored Gumbel PPCC test is unable to reject some important and viable Generalized Extreme Value (GEV) alternatives. Thus, we cannot rule out the possibility that the global AM earthquake time series could arise from a GEV distribution with a finite upper bound, also known as a reverse Weibull distribution. Our power study also indicates that the binomial and uniform field significance tests are substantially more powerful than the more commonly used Bonferroni and false discovery rate multiple comparison procedures.
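
    A hedged sketch of the PPCC goodness-of-fit idea for the Gumbel distribution in the complete-data case, with Monte Carlo critical values as described above; the left-censoring extension and the field significance tests are not shown, the Gringorten plotting positions are an assumption, and the magnitudes are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def gumbel_ppcc(sample):
        """Correlation between ordered observations and Gumbel plotting-position quantiles."""
        x = np.sort(sample)
        n = len(x)
        p = (np.arange(1, n + 1) - 0.44) / (n + 0.12)   # Gringorten plotting positions
        q = stats.gumbel_r.ppf(p)                       # standard Gumbel quantiles
        return np.corrcoef(x, q)[0, 1]

    def ppcc_critical_value(n, alpha=0.05, nsim=5000, seed=0):
        """Monte Carlo critical value: reject the Gumbel hypothesis if PPCC falls below it."""
        rng = np.random.default_rng(seed)
        sims = np.array([gumbel_ppcc(rng.gumbel(size=n)) for _ in range(nsim)])
        return np.quantile(sims, alpha)

    am_magnitudes = np.array([6.1, 6.4, 5.9, 7.2, 6.8, 6.0, 6.5, 7.0, 6.3, 6.7])  # illustrative
    r = gumbel_ppcc(am_magnitudes)
    print(r, "reject Gumbel" if r < ppcc_critical_value(len(am_magnitudes)) else "cannot reject")
    ```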

  6. Biostatistics Series Module 2: Overview of Hypothesis Testing.

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Hypothesis testing (or statistical inference) is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P is < 0.05 the null hypothesis is rejected and sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote oneself to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric) and the number of groups or data sets being compared (e.g., two or more than two) at a time. The same research question may be explored by more than one type of hypothesis test. While this may be of utility in highlighting different aspects of the problem, merely reapplying different tests to the same issue in the hope of finding a P < 0.05 is a wrong use of statistics. Finally, it is becoming the norm that an estimate of the size of any effect, expressed with its 95% confidence interval, is required for meaningful interpretation of results. A large study is likely to have a small (and therefore "statistically significant") P value, but a "real" estimate of the effect would be provided by the 95% confidence interval. If the intervals overlap between two interventions, then the difference between them is not so clear-cut even if P < 0.05. The two approaches are now considered complementary to one another.
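
    A minimal sketch of the complementary reporting described above, a p-value together with a 95% confidence interval for the difference in means, for a two-group comparison on synthetic data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    treated = rng.normal(52.0, 10.0, size=60)
    control = rng.normal(48.0, 10.0, size=60)

    t_stat, p_value = stats.ttest_ind(treated, control)

    # 95% CI for the difference in means (pooled-variance form, matching the t test)
    n1, n2 = len(treated), len(control)
    diff = treated.mean() - control.mean()
    sp2 = ((n1 - 1) * treated.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
    ci = (diff - t_crit * se, diff + t_crit * se)
    print(f"p = {p_value:.3f}, difference = {diff:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
    ```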

  7. Biostatistics Series Module 2: Overview of Hypothesis Testing

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Hypothesis testing (or statistical inference) is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P is < 0.05 the null hypothesis is rejected and sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote oneself to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric) and the number of groups or data sets being compared (e.g., two or more than two) at a time. The same research question may be explored by more than one type of hypothesis test. While this may be of utility in highlighting different aspects of the problem, merely reapplying different tests to the same issue in the hope of finding a P < 0.05 is a wrong use of statistics. Finally, it is becoming the norm that an estimate of the size of any effect, expressed with its 95% confidence interval, is required for meaningful interpretation of results. A large study is likely to have a small (and therefore “statistically significant”) P value, but a “real” estimate of the effect would be provided by the 95% confidence interval. If the intervals overlap between two interventions, then the difference between them is not so clear-cut even if P < 0.05. The two approaches are now considered complementary to one another. PMID:27057011

  8. Statistical Validation of Surrogate Endpoints: Another Look at the Prentice Criterion and Other Criteria.

    PubMed

    Saraf, Sanatan; Mathew, Thomas; Roy, Anindya

    2015-01-01

    For the statistical validation of surrogate endpoints, an alternative formulation is proposed for testing Prentice's fourth criterion, under a bivariate normal model. In such a setup, the criterion involves inference concerning an appropriate regression parameter, and the criterion holds if the regression parameter is zero. Testing such a null hypothesis has been criticized in the literature since it can only be used to reject a poor surrogate, and not to validate a good surrogate. In order to circumvent this, an equivalence hypothesis is formulated for the regression parameter, namely the hypothesis that the parameter is equivalent to zero. Such an equivalence hypothesis is formulated as an alternative hypothesis, so that the surrogate endpoint is statistically validated when the null hypothesis is rejected. Confidence intervals for the regression parameter and tests for the equivalence hypothesis are proposed using bootstrap methods and small sample asymptotics, and their performances are numerically evaluated and recommendations are made. The choice of the equivalence margin is a regulatory issue that needs to be addressed. The proposed equivalence testing formulation is also adopted for other parameters that have been proposed in the literature on surrogate endpoint validation, namely, the relative effect and proportion explained.
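
    A hedged sketch of one way to operationalize the equivalence formulation above: a percentile-bootstrap 90% confidence interval for a regression slope, declared equivalent to zero when the interval lies within a prespecified margin (the two one-sided tests logic). This is an illustration on synthetic data, not the authors' small-sample asymptotics, and the margin is arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 80
    x = rng.normal(size=n)                        # predictor (e.g. the surrogate-adjusted term)
    y = 0.02 * x + rng.normal(scale=0.5, size=n)  # outcome; true regression parameter near 0

    def ols_slope(x, y):
        """Ordinary least-squares slope of y on x."""
        xc = x - x.mean()
        return np.sum(xc * (y - y.mean())) / np.sum(xc ** 2)

    margin = 0.2          # prespecified equivalence margin (a regulatory choice)
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, size=n)
        boot.append(ols_slope(x[idx], y[idx]))
    lo, hi = np.quantile(boot, [0.05, 0.95])   # 90% percentile CI <-> 5% TOST-style test

    if -margin < lo and hi < margin:
        print(f"slope equivalent to zero within +/-{margin} (90% CI: {lo:.3f}, {hi:.3f})")
    else:
        print(f"equivalence not demonstrated (90% CI: {lo:.3f}, {hi:.3f})")
    ```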

  9. Test of association: which one is the most appropriate for my study?

    PubMed

    Gonzalez-Chica, David Alejandro; Bastos, João Luiz; Duquia, Rodrigo Pereira; Bonamigo, Renan Rangel; Martínez-Mesa, Jeovany

    2015-01-01

    Hypothesis tests are statistical tools widely used for assessing whether or not there is an association between two or more variables. These tests provide a p-value, the probability of obtaining a result at least as extreme as the observed one if the null hypothesis were true, which is used to reject or fail to reject the null study hypothesis. Our aim is to provide a practical guide to help researchers carefully select the most appropriate procedure to answer their research question. We discuss the logic of hypothesis testing and present the prerequisites of each procedure based on practical examples.

  10. Improving the Crossing-SIBTEST Statistic for Detecting Non-uniform DIF.

    PubMed

    Chalmers, R Philip

    2018-06-01

    This paper demonstrates that, after applying a simple modification to Li and Stout's (Psychometrika 61(4):647-677, 1996) CSIBTEST statistic, an improved variant of the statistic could be realized. It is shown that this modified version of CSIBTEST has a more direct association with the SIBTEST statistic presented by Shealy and Stout (Psychometrika 58(2):159-194, 1993). In particular, the asymptotic sampling distributions and general interpretation of the effect size estimates are the same for SIBTEST and the new CSIBTEST. Given the more natural connection to SIBTEST, it is shown that Li and Stout's hypothesis testing approach is insufficient for CSIBTEST; thus, an improved hypothesis testing procedure is required. Based on the presented arguments, a new chi-squared-based hypothesis testing approach is proposed for the modified CSIBTEST statistic. Positive results from a modest Monte Carlo simulation study strongly suggest the original CSIBTEST procedure and randomization hypothesis testing approach should be replaced by the modified statistic and hypothesis testing method.

  11. Psychosocial mediators of change in physical activity in the Welsh national exercise referral scheme: secondary analysis of a randomised controlled trial.

    PubMed

    Littlecott, Hannah J; Moore, Graham F; Moore, Laurence; Murphy, Simon

    2014-08-27

    While an increasing number of randomised controlled trials report impacts of exercise referral schemes (ERS) on physical activity, few have investigated the mechanisms through which increases in physical activity are produced. This study examines whether a National Exercise Referral Scheme (NERS) in Wales is associated with improvements in autonomous motivation, self-efficacy and social support, and whether change in physical activity is mediated by change in these psychosocial processes. A pragmatic randomised controlled trial of NERS was conducted across 12 local health boards (LHBs) in Wales. Questionnaires measured demographic data and physical activity at baseline. Participants (N = 2160) with depression, anxiety or CHD risk factors were referred by health professionals and randomly assigned to control or intervention. At six months, psychological process measures were collected by questionnaire. At 12 months, physical activity was assessed by 7-day Physical Activity Recall (PAR) telephone interview. Regressions tested intervention effects on psychosocial variables and on physical activity, before and after adjusting for mediators and sociodemographic patterning. Significant intervention effects were found for autonomous motivation and social support for exercise at 6 months. No intervention effect was observed for self-efficacy. The data are consistent with a hypothesis of partial mediation of the intervention effect by autonomous motivation. Analysis of moderators showed significant improvements in relative autonomy in all subgroups. The greatest improvements in autonomous motivation were observed among patients who were least active at baseline. The present study offered key insights into psychosocial processes of change in an exercise referral scheme, with effects on physical activity mediated by autonomous motivation. Findings support the use of self-determination theory as a framework for ERS. Further research is required to explain socio-demographic patterning in responses to ERS, with changes in motivation occurring among all sub-groups of participants, though not always leading to higher adherence or behavioural change. This highlights the importance of socio-ecological approaches to developing and evaluating behaviour change interventions, which consider factors beyond the individual, including conditions in which improved motivation does or does not produce behavioural change. ISRCTN47680448.

  12. ORILAM, a three-moment lognormal aerosol scheme for mesoscale atmospheric model: Online coupling into the Meso-NH-C model and validation on the Escompte campaign

    NASA Astrophysics Data System (ADS)

    Tulet, Pierre; Crassier, Vincent; Cousin, Frederic; Suhre, Karsten; Rosset, Robert

    2005-09-01

    Classical aerosol schemes use either a sectional (bin) or a lognormal approach. Both approaches have particular capabilities and interests: the sectional approach is able to describe any kind of distribution, whereas the lognormal one assumes the form of the distribution and needs fewer explicit variables. For this last reason we developed a three-moment lognormal aerosol scheme named ORILAM, to be coupled into three-dimensional mesoscale or chemical transport (CTM) models. This paper presents the concepts and hypotheses behind a range of aerosol processes such as nucleation, coagulation, condensation, sedimentation, and dry deposition. One particular interest of ORILAM is that it keeps the aerosol composition and distribution explicit (the mass of each constituent, the mean radius, and the standard deviation of the distribution are explicit variables) through the prediction of three moments (m0, m3, and m6). The new model was evaluated by comparing simulations to measurements from the Escompte campaign and to a previously published aerosol model. The numerical cost of the lognormal approach is lower than that of a two-bin sectional scheme.
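
    A sketch of the moment relations for a lognormal size distribution that a three-moment (m0, m3, m6) scheme builds on: the forward formulas and the inversion back to total number, median radius and geometric standard deviation. The notation and values are illustrative and do not reproduce ORILAM's code.

    ```python
    import numpy as np

    def lognormal_moment(k, n_tot, r_g, sigma_g):
        """k-th radius moment of a lognormal number distribution:
        M_k = N * r_g**k * exp(k**2 * ln(sigma_g)**2 / 2)."""
        return n_tot * r_g ** k * np.exp(0.5 * k ** 2 * np.log(sigma_g) ** 2)

    def invert_moments(m0, m3, m6):
        """Recover (N, r_g, sigma_g) from the moments m0, m3, m6."""
        ln2_sigma = np.log(m6 * m0 / m3 ** 2) / 9.0
        sigma_g = np.exp(np.sqrt(ln2_sigma))
        r_g = (m3 / m0 * np.exp(-4.5 * ln2_sigma)) ** (1.0 / 3.0)
        return m0, r_g, sigma_g

    # Round trip: aerosol mode with N = 1e9 m^-3, median radius 0.05 um, sigma_g = 1.8
    n_tot, r_g, sigma_g = 1.0e9, 0.05e-6, 1.8
    m0, m3, m6 = (lognormal_moment(k, n_tot, r_g, sigma_g) for k in (0, 3, 6))
    print(invert_moments(m0, m3, m6))   # recovers (1e9, 5e-8, 1.8)
    ```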

  13. RAS screening in colorectal cancer: a comprehensive analysis of the results from the UK NEQAS colorectal cancer external quality assurance schemes (2009-2016).

    PubMed

    Richman, Susan D; Fairley, Jennifer; Butler, Rachel; Deans, Zandra C

    2017-12-01

    Evidence strongly indicates that extended RAS testing should be undertaken in mCRC patients, prior to prescribing anti-EGFR therapies. With more laboratories implementing testing, the requirement for External Quality Assurance schemes increases, thus ensuring high standards of molecular analysis. Data was analysed from 15 United Kingdom National External Quality Assessment Service (UK NEQAS) for Molecular Genetics Colorectal cancer external quality assurance (EQA) schemes, delivered between 2009 and 2016. Laboratories were provided annually with nine colorectal tumour samples for genotyping. Information on methodology and extent of testing coverage was requested, and scores given for genotyping, interpretation and clerical accuracy. There has been a sixfold increase in laboratory participation (18 in 2009 to 108 in 2016). For RAS genotyping, fewer laboratories now use Roche cobas®, pyrosequencing and Sanger sequencing, with more moving to next generation sequencing (NGS). NGS is the most commonly employed technology for BRAF and PIK3CA mutation screening. KRAS genotyping errors were seen in ≤10% laboratories, until the 2014-2015 scheme, when there was an increase to 16.7%, corresponding to a large increase in scheme participants. NRAS genotyping errors peaked at 25.6% in the first 2015-2016 scheme but subsequently dropped to below 5%. Interpretation and clerical accuracy scores have been consistently good throughout. Within this EQA scheme, we have observed that the quality of molecular analysis for colorectal cancer has continued to improve, despite changes in the required targets, the volume of testing and the technologies employed. It is reassuring to know that laboratories clearly recognise the importance of participating in EQA schemes.

  14. Genetic progress in multistage dairy cattle breeding schemes using genetic markers.

    PubMed

    Schrooten, C; Bovenhuis, H; van Arendonk, J A M; Bijma, P

    2005-04-01

    The aim of this paper was to explore general characteristics of multistage breeding schemes and to evaluate multistage dairy cattle breeding schemes that use information on quantitative trait loci (QTL). Evaluation was either for additional genetic response or for reduction in number of progeny-tested bulls while maintaining the same response. The reduction in response in multistage breeding schemes relative to comparable single-stage breeding schemes (i.e., with the same overall selection intensity and the same amount of information in the final stage of selection) depended on the overall selection intensity, the selection intensity in the various stages of the breeding scheme, and the ratio of the accuracies of selection in the various stages of the breeding scheme. When overall selection intensity was constant, reduction in response increased with increasing selection intensity in the first stage. The decrease in response was highest in schemes with lower overall selection intensity. Reduction in response was limited in schemes with low to average emphasis on first-stage selection, especially if the accuracy of selection in the first stage was relatively high compared with the accuracy in the final stage. Closed nucleus breeding schemes in dairy cattle that use information on QTL were evaluated by deterministic simulation. In the base scheme, the selection index consisted of pedigree information and own performance (dams), or pedigree information and performance of 100 daughters (sires). In alternative breeding schemes, information on a QTL was accounted for by simulating an additional index trait. The fraction of the variance explained by the QTL determined the correlation between the additional index trait and the breeding goal trait. Response in progeny test schemes relative to a base breeding scheme without QTL information ranged from +4.5% (QTL explaining 5% of the additive genetic variance) to +21.2% (QTL explaining 50% of the additive genetic variance). A QTL explaining 5% of the additive genetic variance allowed a 35% reduction in the number of progeny tested bulls, while maintaining genetic response at the level of the base scheme. Genetic progress was up to 31.3% higher for schemes with increased embryo production and selection of embryos based on QTL information. The challenge for breeding organizations is to find the optimum breeding program with regard to additional genetic progress and additional (or reduced) cost.
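
    A minimal sketch of the textbook single-stage response-to-selection calculation (response = intensity x accuracy x additive genetic standard deviation) underlying comparisons like those above; the deterministic multistage simulations of the paper are not reproduced, and the numbers are illustrative.

    ```python
    from scipy.stats import norm

    def selection_intensity(p):
        """Standardized selection intensity for truncation selection of a proportion p."""
        z = norm.ppf(1.0 - p)            # truncation point on the standard normal
        return norm.pdf(z) / p           # i = phi(z) / p

    def response(p, accuracy, sigma_a=1.0):
        """Expected genetic response per round of selection: R = i * r * sigma_A."""
        return selection_intensity(p) * accuracy * sigma_a

    # Example: selecting the top 5% on an index with accuracy 0.6, versus a
    # (hypothetical) QTL-augmented index that raises the accuracy to 0.7.
    print(response(0.05, 0.6), response(0.05, 0.7))
    ```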

  15. Can computer-aided diagnosis (CAD) help radiologists find mammographically missed screening cancers?

    NASA Astrophysics Data System (ADS)

    Nishikawa, Robert M.; Giger, Maryellen L.; Schmidt, Robert A.; Papaioannou, John

    2001-06-01

    We present data from a pilot observer study whose goal is to design a study to test the hypothesis that computer-aided diagnosis (CAD) can improve radiologists' performance in reading screening mammograms. In a prospective evaluation of our computer detection schemes, we have analyzed over 12,000 clinical exams. Retrospective review of the negative screening mammograms for all cancer cases found an indication of the cancer in 23 of these negative cases. The computer found 54% of these in our prospective testing. We added to these cases normal exams to create a dataset of 75 cases. Four radiologists experienced in mammography read the cases and gave their BI-RADS assessment and their confidence that the patient should be called back for diagnostic mammography. They did so once reading the films only and a second time reading with the computer aid. Three radiologists had no change in area under the ROC curve (mean Az of 0.73) and one improved from 0.73 to 0.78, but this difference failed to reach statistical significance (p = 0.23). These data are being used to plan a larger, more powerful study.
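
    A small sketch of computing an area under the ROC curve from reader confidence scores with scikit-learn; note that the study's Az values come from fitted ROC models, whereas roc_auc_score gives the empirical (trapezoidal) area, and the scores below are synthetic.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(5)
    # 1 = missed-cancer case, 0 = normal exam (23 + 52 = 75 cases, as in the abstract);
    # the confidence scores themselves are made up for illustration.
    truth = np.array([1] * 23 + [0] * 52)
    confidence_unaided = np.concatenate([rng.normal(60, 20, 23), rng.normal(45, 20, 52)])
    confidence_with_cad = np.concatenate([rng.normal(65, 20, 23), rng.normal(45, 20, 52)])

    print(roc_auc_score(truth, confidence_unaided), roc_auc_score(truth, confidence_with_cad))
    ```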

  16. A cancelable biometric scheme based on multi-lead ECGs.

    PubMed

    Peng-Tzu Chen; Shun-Chi Wu; Jui-Hsuan Hsieh

    2017-07-01

    Biometric technologies offer great advantages over other recognition methods, but there are concerns that they may compromise the privacy of individuals. In this paper, an electrocardiogram (ECG)-based cancelable biometric scheme is proposed to relieve such concerns. In this scheme, distinct biometric templates for a given beat bundle are constructed via "subspace collapsing." To determine the identity of any unknown beat bundle, the multiple signal classification (MUSIC) algorithm, incorporating a "suppression and poll" strategy, is adopted. Unlike the existing cancelable biometric schemes, knowledge of the distortion transform is not required for recognition. Experiments with real ECGs from 285 subjects are presented to illustrate the efficacy of the proposed scheme. The best recognition rate of 97.58% was achieved under the test condition N_train = 10 and N_test = 10.

  17. CAVIAR: CLASSIFICATION VIA AGGREGATED REGRESSION AND ITS APPLICATION IN CLASSIFYING OASIS BRAIN DATABASE

    PubMed Central

    Chen, Ting; Rangarajan, Anand; Vemuri, Baba C.

    2010-01-01

    This paper presents a novel classification via aggregated regression algorithm – dubbed CAVIAR – and its application to the OASIS MRI brain image database. The CAVIAR algorithm simultaneously combines a set of weak learners based on the assumption that the weight combination for the final strong hypothesis in CAVIAR depends on both the weak learners and the training data. A regularization scheme using the nearest neighbor method is imposed in the testing stage to avoid overfitting. A closed form solution to the cost function is derived for this algorithm. We use a novel feature – the histogram of the deformation field between the MRI brain scan and the atlas which captures the structural changes in the scan with respect to the atlas brain – and this allows us to automatically discriminate between various classes within OASIS [1] using CAVIAR. We empirically show that CAVIAR significantly increases the performance of the weak classifiers by showcasing the performance of our technique on OASIS. PMID:21151847

  18. CAVIAR: CLASSIFICATION VIA AGGREGATED REGRESSION AND ITS APPLICATION IN CLASSIFYING OASIS BRAIN DATABASE.

    PubMed

    Chen, Ting; Rangarajan, Anand; Vemuri, Baba C

    2010-04-14

    This paper presents a novel classification via aggregated regression algorithm - dubbed CAVIAR - and its application to the OASIS MRI brain image database. The CAVIAR algorithm simultaneously combines a set of weak learners based on the assumption that the weight combination for the final strong hypothesis in CAVIAR depends on both the weak learners and the training data. A regularization scheme using the nearest neighbor method is imposed in the testing stage to avoid overfitting. A closed form solution to the cost function is derived for this algorithm. We use a novel feature - the histogram of the deformation field between the MRI brain scan and the atlas which captures the structural changes in the scan with respect to the atlas brain - and this allows us to automatically discriminate between various classes within OASIS [1] using CAVIAR. We empirically show that CAVIAR significantly increases the performance of the weak classifiers by showcasing the performance of our technique on OASIS.

  19. The Importance of Teaching Power in Statistical Hypothesis Testing

    ERIC Educational Resources Information Center

    Olinsky, Alan; Schumacher, Phyllis; Quinn, John

    2012-01-01

    In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value…
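
    A worked example of the power calculation being taught here: the sketch below computes the power of a two-sided one-sample z-test for an illustrative effect size and sample size (the numbers are not from the paper).

```python
from scipy.stats import norm

def z_test_power(effect_size, n, alpha=0.05):
    """Power of a two-sided one-sample z-test.
    effect_size = (mu_true - mu_0) / sigma, n = sample size."""
    z_crit = norm.ppf(1 - alpha / 2)
    shift = effect_size * n ** 0.5
    # Probability that the test statistic falls outside +/- z_crit under the alternative
    return norm.cdf(-z_crit - shift) + 1 - norm.cdf(z_crit - shift)

# Illustrative values: detecting a 0.5-SD difference with n = 30 (power is about 0.78)
print(f"power = {z_test_power(0.5, 30):.3f}")
```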

  20. The Relation between Parental Values and Parenting Behavior: A Test of the Kohn Hypothesis.

    ERIC Educational Resources Information Center

    Luster, Tom; And Others

    1989-01-01

    Used data on 65 mother-infant dyads to test Kohn's hypothesis concerning the relation between values and parenting behavior. Findings support Kohn's hypothesis that parents who value self-direction would emphasize supportive function of parenting and parents who value conformity would emphasize their obligations to impose restraints. (Author/NB)

  1. Cognitive Biases in the Interpretation of Autonomic Arousal: A Test of the Construal Bias Hypothesis

    ERIC Educational Resources Information Center

    Ciani, Keith D.; Easter, Matthew A.; Summers, Jessica J.; Posada, Maria L.

    2009-01-01

    According to Bandura's construal bias hypothesis, derived from social cognitive theory, persons with the same heightened state of autonomic arousal may experience either pleasant or deleterious emotions depending on the strength of perceived self-efficacy. The current study tested this hypothesis by proposing that college students' preexisting…

  2. Is Conscious Stimulus Identification Dependent on Knowledge of the Perceptual Modality? Testing the “Source Misidentification Hypothesis”

    PubMed Central

    Overgaard, Morten; Lindeløv, Jonas; Svejstrup, Stinna; Døssing, Marianne; Hvid, Tanja; Kauffmann, Oliver; Mouridsen, Kim

    2013-01-01

    This paper reports an experiment intended to test a particular hypothesis derived from blindsight research, which we name the “source misidentification hypothesis.” According to this hypothesis, a subject may be correct about a stimulus without being correct about how she had access to this knowledge (whether the stimulus was visual, auditory, or something else). We test this hypothesis in healthy subjects, asking them to report whether a masked stimulus was presented auditorily or visually, what the stimulus was, and how clearly they experienced the stimulus using the Perceptual Awareness Scale (PAS). We suggest that knowledge about perceptual modality may be a necessary precondition in order to issue correct reports of which stimulus was presented. Furthermore, we find that PAS ratings correlate with correctness, and that subjects are at chance level when reporting no conscious experience of the stimulus. To demonstrate that particular levels of reporting accuracy are obtained, we employ a statistical strategy, which operationally tests the hypothesis of non-equality, such that the usual rejection of the null-hypothesis admits the conclusion of equivalence. PMID:23508677

  3. A large scale test of the gaming-enhancement hypothesis

    PubMed Central

    Wang, John C.

    2016-01-01

    A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people’s gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support for the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work. PMID:27896035

  4. Error determination of a successive correction type objective analysis scheme. [for surface meteorological data

    NASA Technical Reports Server (NTRS)

    Smith, D. R.; Leslie, F. W.

    1984-01-01

    The Purdue Regional Objective Analysis of the Mesoscale (PROAM) is a successive correction type scheme for the analysis of surface meteorological data. The scheme is subjected to a series of experiments to evaluate its performance under a variety of analysis conditions. The tests include use of a known analytic temperature distribution to quantify error bounds for the scheme. Similar experiments were conducted using actual atmospheric data. Results indicate that the multiple-pass technique increases the accuracy of the analysis. Furthermore, the tests suggest appropriate values for the analysis parameters in resolving disturbances for the data set used in this investigation.
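
    The successive correction idea can be sketched in a few lines. The code below is a generic Cressman-type single pass, not the PROAM implementation; the weight function, influence radius, and interpolation to observation points are assumptions made for illustration. A multiple-pass analysis would repeat the call with a decreasing radius.

```python
import numpy as np

def successive_correction_pass(grid_x, grid_y, background, obs_x, obs_y, obs_val, radius):
    """One Cressman-type correction pass: nudge the gridded field toward
    observation increments using distance-dependent weights within 'radius'."""
    # Background value at each observation (nearest grid point, for brevity)
    ix = np.abs(grid_x[None, :] - obs_x[:, None]).argmin(axis=1)
    iy = np.abs(grid_y[None, :] - obs_y[:, None]).argmin(axis=1)
    increments = obs_val - background[iy, ix]

    gx, gy = np.meshgrid(grid_x, grid_y)
    num = np.zeros_like(background)
    den = np.zeros_like(background)
    for k in range(obs_val.size):
        r2 = (gx - obs_x[k]) ** 2 + (gy - obs_y[k]) ** 2
        w = np.where(r2 < radius ** 2, (radius ** 2 - r2) / (radius ** 2 + r2), 0.0)
        num += w * increments[k]
        den += w
    correction = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return background + correction

# Illustrative use: a flat 10 x 10 temperature field nudged by three observations;
# a multiple-pass analysis would repeat the call with a shrinking radius.
x = y = np.linspace(0.0, 9.0, 10)
field = np.full((10, 10), 20.0)
analysis = successive_correction_pass(
    x, y, field,
    np.array([2.0, 5.0, 7.5]), np.array([3.0, 5.0, 8.0]),
    np.array([22.0, 19.0, 21.0]), radius=4.0)
print(analysis[3, 2])  # grid point nearest the first observation moves toward 22
```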

  5. Validation of a selective ensemble-based classification scheme for myoelectric control using a three-dimensional Fitts' Law test.

    PubMed

    Scheme, Erik J; Englehart, Kevin B

    2013-07-01

    When controlling a powered upper limb prosthesis, it is important not only to know how to move the device, but also when not to move. A novel approach to pattern recognition control, using a selective multiclass one-versus-one classification scheme, has been shown to be capable of rejecting unintended motions. This method was shown to outperform other popular classification schemes when presented with muscle contractions that did not correspond to desired actions. In this work, a 3-D Fitts' Law test is proposed as a suitable alternative to using virtual limb environments for evaluating real-time myoelectric control performance. The test is used to compare the selective approach to a state-of-the-art linear discriminant analysis classification-based scheme. The framework is shown to obey Fitts' Law for both control schemes, producing linear regression fittings with high coefficients of determination (R^2 > 0.936). Additional performance metrics focused on quality of control are discussed and incorporated in the evaluation. Using this framework, the selective classification-based scheme is shown to produce significantly higher efficiency and completion rates, and significantly lower overshoot and stopping distances, with no significant difference in throughput.
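
    The Fitts' Law fit behind the reported coefficients of determination is an ordinary linear regression of movement time on the index of difficulty. The sketch below uses invented timing data; the throughput shown (reciprocal of the slope) is one common definition, not necessarily the metric used in the paper.

```python
import numpy as np

# Hypothetical (index of difficulty, movement time) pairs, not the study's data
index_of_difficulty = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])   # bits
movement_time = np.array([0.62, 0.75, 0.91, 1.02, 1.18, 1.30, 1.45])  # seconds

# Fit MT = a + b * ID and compute the coefficient of determination R^2
b, a = np.polyfit(index_of_difficulty, movement_time, 1)
predicted = a + b * index_of_difficulty
ss_res = np.sum((movement_time - predicted) ** 2)
ss_tot = np.sum((movement_time - movement_time.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"MT = {a:.3f} + {b:.3f} * ID, R^2 = {r_squared:.3f}")
print(f"slope-based throughput = {1.0 / b:.2f} bits/s")  # one common definition
```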

  6. LIKELIHOOD RATIO TESTS OF HYPOTHESES ON MULTIVARIATE POPULATIONS, VOLUME II, TEST OF HYPOTHESIS--STATISTICAL MODELS FOR THE EVALUATION AND INTERPRETATION OF EDUCATIONAL CRITERIA. PART 4.

    ERIC Educational Resources Information Center

    SAW, J.G.

    This paper deals with some tests of hypothesis frequently encountered in the analysis of multivariate data. The type of hypothesis considered is that which the statistician can answer in the negative or affirmative. The Doolittle method makes it possible to evaluate the determinant of a matrix of high order, to solve a matrix equation, or to…

  7. Simplified two-dimensional microwave imaging scheme using metamaterial-loaded Vivaldi antenna

    NASA Astrophysics Data System (ADS)

    Johari, Esha; Akhter, Zubair; Bhaskar, Manoj; Akhtar, M. Jaleel

    2017-03-01

    In this paper, a highly efficient, low-cost scheme for two-dimensional microwave imaging is proposed. To this end, the AZIM (anisotropic zero index metamaterial) cell-loaded Vivaldi antenna is designed and tested as effective electromagnetic radiation beam source required in the microwave imaging scheme. The designed antenna is first individually tested in the anechoic chamber, and its directivity along with the radiation pattern is obtained. The measurement setup for the imaging here involves a vector network analyzer, the AZIM cell-loaded ultra-wideband Vivaldi antenna, and other associated microwave components. The potential of the designed antenna for the microwave imaging is tested by first obtaining the two-dimensional reflectivity images of metallic samples of different shapes placed in front of the antenna, using the proposed scheme. In the next step, these sets of samples are hidden behind wooden blocks of different thicknesses and the reflectivity image of the test media is reconstructed by using the proposed scheme. Finally, the reflectivity images of various dielectric samples (Teflon, Plexiglas, permanent magnet moving coil) along with the copper sheet placed on a piece of cardboard are reconstructed by using the proposed setup. The images obtained for each case are plotted and compared with the actual objects, and a close match is observed which shows the applicability of the proposed scheme for through-wall imaging and the detection of concealed objects.

  8. Perfect Detection of Spikes in the Linear Sub-threshold Dynamics of Point Neurons

    PubMed Central

    Krishnan, Jeyashree; Porta Mana, PierGianLuca; Helias, Moritz; Diesmann, Markus; Di Napoli, Edoardo

    2018-01-01

    Spiking neuronal networks are usually simulated with one of three main schemes: the classical time-driven and event-driven schemes, and the more recent hybrid scheme. All three schemes evolve the state of a neuron through a series of checkpoints: equally spaced in the first scheme and determined neuron-wise by spike events in the latter two. The time-driven and the hybrid scheme determine whether the membrane potential of a neuron crosses a threshold at the end of the time interval between consecutive checkpoints. Threshold crossing can, however, occur within the interval even if this test is negative. Spikes can therefore be missed. The present work offers an alternative geometric point of view on neuronal dynamics, and derives, implements, and benchmarks a method for perfect retrospective spike detection. This method can be applied to neuron models with affine or linear subthreshold dynamics. The idea behind the method is to propagate the threshold with a time-inverted dynamics, testing whether the threshold crosses the neuron state to be evolved, rather than vice versa. Algebraically this translates into a set of inequalities necessary and sufficient for threshold crossing. This test is slower than the imperfect one, but can be optimized in several ways. Comparison confirms earlier results that the imperfect tests rarely miss spikes (less than a fraction 1/10^8 of missed spikes) in biologically relevant settings. PMID:29379430
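
    The gap between the imperfect end-of-interval test and a within-interval check can be reproduced for a neuron with linear subthreshold dynamics, where the state between checkpoints is known exactly. The sketch below is a simplified stand-in for the paper's inequality-based test: it propagates the state with a matrix exponential and densely samples the trajectory inside the interval instead of evaluating the authors' closed-form conditions. Parameter values are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Linear subthreshold dynamics: membrane potential V driven by a decaying
# synaptic current I.  State x = [V, I], dx/dt = A @ x (illustrative units).
tau_m, tau_s, theta = 10.0, 2.0, 1.0   # membrane and synaptic time constants, threshold
A = np.array([[-1.0 / tau_m, 1.0 / tau_m],
              [0.0, -1.0 / tau_s]])
x0 = np.array([0.5, 5.5])              # state at the last checkpoint
h = 10.0                               # checkpoint spacing

# Imperfect test: propagate to the next checkpoint and test only the endpoint.
V_end = (expm(A * h) @ x0)[0]
print(f"endpoint test: V(h) = {V_end:.3f}, spike detected: {V_end >= theta}")

# Stand-in for the retrospective test: sample the trajectory inside the interval.
ts = np.linspace(0.0, h, 1001)
V_inside = np.array([(expm(A * t) @ x0)[0] for t in ts])
print(f"within-interval max V = {V_inside.max():.3f}, "
      f"spike detected: {V_inside.max() >= theta}")
# With these parameters V is below threshold at both checkpoints but crosses in between.
```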

  9. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.

    PubMed

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
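
    As a minimal illustration of the estimation-with-uncertainty perspective discussed here, the sketch below contrasts a frequentist confidence interval with a Bayesian credible interval for a proportion, using a conjugate Beta(1, 1) prior. The data and prior are invented for illustration and are not tied to the article.

```python
from scipy import stats

successes, n = 14, 20                       # invented data
p_hat = successes / n

# Frequentist: normal-approximation 95% confidence interval for the proportion
se = (p_hat * (1 - p_hat) / n) ** 0.5
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian: Beta(1, 1) prior gives a Beta(1 + successes, 1 + failures) posterior
posterior = stats.beta(1 + successes, 1 + n - successes)
cri = posterior.ppf([0.025, 0.975])

print(f"95% confidence interval: ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"95% credible interval:   ({cri[0]:.3f}, {cri[1]:.3f})")
print(f"posterior P(p > 0.5) = {1 - posterior.cdf(0.5):.3f}")
```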

  10. Motor control theories and their applications.

    PubMed

    Latash, Mark L; Levin, Mindy F; Scholz, John P; Schöner, Gregor

    2010-01-01

    We describe several influential hypotheses in the field of motor control including the equilibrium-point (referent configuration) hypothesis, the uncontrolled manifold hypothesis, and the idea of synergies based on the principle of motor abundance. The equilibrium-point hypothesis is based on the idea of control with thresholds for activation of neuronal pools; it provides a framework for analysis of both voluntary and involuntary movements. In particular, control of a single muscle can be adequately described with changes in the threshold of motor unit recruitment during slow muscle stretch (threshold of the tonic stretch reflex). Unlike the ideas of internal models, the equilibrium-point hypothesis does not assume neural computations of mechanical variables. The uncontrolled manifold hypothesis is based on the dynamic system approach to movements; it offers a toolbox to analyze synergic changes within redundant sets of elements related to stabilization of potentially important performance variables. The referent configuration hypothesis and the principle of abundance can be naturally combined into a single coherent scheme of control of multi-element systems. A body of experimental data on healthy persons and patients with movement disorders is reviewed in support of the mentioned hypotheses. In particular, movement disorders associated with spasticity are considered as consequences of an impaired ability to shift threshold of the tonic stretch reflex within the whole normal range. Technical details and applications of the mentioned hypotheses to studies of motor learning are described. We view the mentioned hypotheses as the most promising ones in the field of motor control, based on a solid physical and neurophysiological foundation.

  11. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations. Part 1; Viscous Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.

    2009-01-01

    Discretization of the viscous terms in current finite-volume unstructured-grid schemes is compared using node-centered and cell-centered approaches in two dimensions. Accuracy and efficiency are studied for six nominally second-order accurate schemes: a node-centered scheme, cell-centered node-averaging schemes with and without clipping, and cell-centered schemes with unweighted, weighted, and approximately mapped least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class of tests involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds number turbulent flow simulations. Results from the first class indicate the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The second class of tests is more discriminating. The node-centered scheme is always second order with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes are less accurate, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to the complexity of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping of the surface anisotropy or modifying the scheme stencil to reflect the direction of strong coupling.

  12. A test of multiple hypotheses for the function of call sharing in female budgerigars, Melopsittacus undulatus

    PubMed Central

    Young, Anna M.; Cordier, Breanne; Mundry, Roger; Wright, Timothy F.

    2014-01-01

    In many social species, group members share acoustically similar calls. Functional hypotheses have been proposed for call sharing, but previous studies have been limited by an inability to distinguish among these hypotheses. We examined the function of vocal sharing in female budgerigars with a two-part experimental design that allowed us to distinguish between two functional hypotheses. The social association hypothesis proposes that shared calls help animals mediate affiliative and aggressive interactions, while the password hypothesis proposes that shared calls allow animals to distinguish group identity and exclude nonmembers. We also tested the labeling hypothesis, a mechanistic explanation which proposes that shared calls are used to address specific individuals within the sender–receiver relationship. We tested the social association hypothesis by creating four-member flocks of unfamiliar female budgerigars (Melopsittacus undulatus) and then monitoring the birds’ calls, social behaviors, and stress levels via fecal glucocorticoid metabolites. We tested the password hypothesis by moving immigrants into established social groups. To test the labeling hypothesis, we conducted additional recording sessions in which individuals were paired with different group members. The social association hypothesis was supported by the development of multiple shared call types in each cage and a correlation between the number of shared call types and the number of aggressive interactions between pairs of birds. We also found support for calls serving as a labeling mechanism using discriminant function analysis with a permutation procedure. Our results did not support the password hypothesis, as there was no difference in stress or directed behaviors between immigrant and control birds. PMID:24860236

  13. Multifunctional and Context-Dependent Control of Vocal Acoustics by Individual Muscles

    PubMed Central

    Srivastava, Kyle H.; Elemans, Coen P.H.

    2015-01-01

    The relationship between muscle activity and behavioral output determines how the brain controls and modifies complex skills. In vocal control, ensembles of muscles are used to precisely tune single acoustic parameters such as fundamental frequency and sound amplitude. If individual vocal muscles were dedicated to the control of single parameters, then the brain could control each parameter independently by modulating the appropriate muscle or muscles. Alternatively, if each muscle influenced multiple parameters, a more complex control strategy would be required to selectively modulate a single parameter. Additionally, it is unknown whether the function of single muscles is fixed or varies across different vocal gestures. A fixed relationship would allow the brain to use the same changes in muscle activation to, for example, increase the fundamental frequency of different vocal gestures, whereas a context-dependent scheme would require the brain to calculate different motor modifications in each case. We tested the hypothesis that single muscles control multiple acoustic parameters and that the function of single muscles varies across gestures using three complementary approaches. First, we recorded electromyographic data from vocal muscles in singing Bengalese finches. Second, we electrically perturbed the activity of single muscles during song. Third, we developed an ex vivo technique to analyze the biomechanical and acoustic consequences of single-muscle perturbations. We found that single muscles drive changes in multiple parameters and that the function of single muscles differs across vocal gestures, suggesting that the brain uses a complex, gesture-dependent control scheme to regulate vocal output. PMID:26490859

  14. LevelScheme: A level scheme drawing and scientific figure preparation system for Mathematica

    NASA Astrophysics Data System (ADS)

    Caprio, M. A.

    2005-09-01

    LevelScheme is a scientific figure preparation system for Mathematica. The main emphasis is upon the construction of level schemes, or level energy diagrams, as used in nuclear, atomic, molecular, and hadronic physics. LevelScheme also provides a general infrastructure for the preparation of publication-quality figures, including support for multipanel and inset plotting, customizable tick mark generation, and various drawing and labeling tasks. Coupled with Mathematica's plotting functions and powerful programming language, LevelScheme provides a flexible system for the creation of figures combining diagrams, mathematical plots, and data plots. Program summary: Title of program: LevelScheme. Catalogue identifier: ADVZ. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVZ. Operating systems: any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux. Programming language used: Mathematica 4. Number of bytes in distributed program, including test and documentation: 3 051 807. Distribution format: tar.gz. Nature of problem: creation of level scheme diagrams; creation of publication-quality multipart figures incorporating diagrams and plots. Method of solution: a set of Mathematica packages has been developed, providing a library of level scheme drawing objects, tools for figure construction and labeling, and control code for producing the graphics.

  15. Finite volume treatment of dispersion-relation-preserving and optimized prefactored compact schemes for wave propagation

    NASA Astrophysics Data System (ADS)

    Popescu, Mihaela; Shyy, Wei; Garbey, Marc

    2005-12-01

    In developing suitable numerical techniques for computational aero-acoustics, the dispersion-relation-preserving (DRP) scheme by Tam and co-workers and the optimized prefactored compact (OPC) scheme by Ashcroft and Zhang have shown desirable properties of reducing both dissipative and dispersive errors. These schemes, originally based on the finite difference, attempt to optimize the coefficients for better resolution of short waves with respect to the computational grid while maintaining pre-determined formal orders of accuracy. In the present study, finite volume formulations of both schemes are presented to better handle the nonlinearity and complex geometry encountered in many engineering applications. Linear and nonlinear wave equations, with and without viscous dissipation, have been adopted as the test problems. Highlighting the principal characteristics of the schemes and utilizing linear and nonlinear wave equations with different wavelengths as the test cases, the performance of these approaches is documented. For the linear wave equation, there is no major difference between the DRP and OPC schemes. For the nonlinear wave equations, the finite volume version of both DRP and OPC schemes offers substantially better solutions in regions of high gradient or discontinuity.

  16. A class of the van Leer-type transport schemes and its application to the moisture transport in a general circulation model

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; Chao, Winston C.; Sud, Y. C.; Walker, G. K.

    1994-01-01

    A generalized form of the second-order van Leer transport scheme is derived. Several constraints to the implied subgrid linear distribution are discussed. A very simple positive-definite scheme can be derived directly from the generalized form. A monotonic version of the scheme is applied to the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) for the moisture transport calculations, replacing the original fourth-order center-differencing scheme. Comparisons with the original scheme are made in idealized tests as well as in a summer climate simulation using the full GLA GCM. A distinct advantage of the monotonic transport scheme is its ability to transport sharp gradients without producing spurious oscillations and unphysical negative mixing ratio. Within the context of low-resolution climate simulations, the aforementioned characteristics are demonstrated to be very beneficial in regions where cumulus convection is active. The model-produced precipitation pattern using the new transport scheme is more coherently organized both in time and in space, and correlates better with observations. The side effect of the filling algorithm used in conjunction with the original scheme is also discussed, in the context of idealized tests. The major weakness of the proposed transport scheme with a local monotonic constraint is its substantial implicit diffusion at low resolution. Alternative constraints are discussed to counter this problem.
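
    A one-dimensional sketch of a van Leer-type, monotonic finite-volume advection step (a simplification of the transport scheme described above, for constant positive velocity on a periodic domain) illustrates how a sharp gradient is transported without spurious oscillations or negative mixing ratios. The limiter and test profile are illustrative choices, not the GLA GCM implementation.

```python
import numpy as np

def van_leer_step(q, courant):
    """One step of monotonic van Leer advection in 1-D (constant velocity > 0,
    periodic domain); 'courant' = u * dt / dx must lie in [0, 1]."""
    qp = np.roll(q, -1) - q               # forward differences
    qm = q - np.roll(q, 1)                # backward differences
    prod = qp * qm
    denom = np.where(prod > 0.0, qp + qm, 1.0)
    slope = np.where(prod > 0.0, 2.0 * prod / denom, 0.0)    # van Leer limiter
    q_face = q + 0.5 * (1.0 - courant) * slope               # upwind face value
    return q - courant * (q_face - np.roll(q_face, 1))       # flux-form update

# Advect a step-like mixing-ratio profile once around a periodic domain
nx, courant = 100, 0.5
q = np.where((np.arange(nx) > 30) & (np.arange(nx) < 50), 1.0, 0.0)
for _ in range(int(nx / courant)):
    q = van_leer_step(q, courant)
print(f"min = {q.min():.4f}, max = {q.max():.4f}")  # stays within [0, 1]
```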

  17. Phase II design with sequential testing of hypotheses within each stage.

    PubMed

    Poulopoulou, Stavroula; Karlis, Dimitris; Yiannoutsos, Constantin T; Dafni, Urania

    2014-01-01

    The main goal of a Phase II clinical trial is to decide whether a particular therapeutic regimen is effective enough to warrant further study. The hypothesis tested by Fleming's Phase II design (Fleming, 1982) is H0: p ≤ p0 versus HA: p ≥ p1, with level α and with power 1 − β at p = p1, where p0 is chosen to represent the response probability achievable with standard treatment and p1 is chosen such that the difference p1 − p0 represents a targeted improvement with the new treatment. This hypothesis creates a misinterpretation, mainly among clinicians, that rejection of the null hypothesis is tantamount to accepting the alternative, and vice versa. As mentioned by Storer (1992), this introduces ambiguity in the evaluation of type I and II errors and the choice of the appropriate decision at the end of the study. Instead of testing this hypothesis, an alternative class of designs is proposed in which two hypotheses are tested sequentially. The hypothesis p ≤ p0 versus p > p0 is tested first. If this null hypothesis is rejected, the hypothesis p ≤ p1 versus p > p1 is tested next, in order to examine whether the therapy is effective enough to consider further testing in a Phase III study. For the derivation of the proposed design the exact binomial distribution is used to calculate the decision cut-points. The optimal design parameters are chosen so as to minimize the average sample number (ASN) under specific upper bounds for the error levels. The optimal values for the design were found using a simulated annealing method.
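
    The exact-binomial calculation behind such cut-points is straightforward to reproduce. The sketch below uses illustrative values of p0, p1, and the sample size for a single test, not the paper's optimized sequential design parameters.

```python
from scipy.stats import binom

def exact_binomial_cutpoint(n, p0, alpha=0.05):
    """Smallest responder count r with P(X >= r | p0) <= alpha."""
    for r in range(n + 1):
        if binom.sf(r - 1, n, p0) <= alpha:   # sf(r - 1) = P(X >= r)
            return r
    return n + 1  # no rejection possible at this sample size

# Illustrative single test: p0 = 0.20 (standard), p1 = 0.40 (target), n = 40 patients
n, p0, p1 = 40, 0.20, 0.40
r = exact_binomial_cutpoint(n, p0)
type_i = binom.sf(r - 1, n, p0)   # exact size of the test
power = binom.sf(r - 1, n, p1)    # power at the targeted improvement
print(f"reject p <= {p0} if responders >= {r}")
print(f"exact type I error = {type_i:.3f}, power at p = {p1}: {power:.3f}")
```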

  18. Theoretical characterization of photoinduced electron transfer in rigidly linked donor-acceptor molecules: the fragment charge difference and the generalized Mulliken-Hush schemes

    NASA Astrophysics Data System (ADS)

    Lee, Sheng-Jui; Chen, Hung-Cheng; You, Zhi-Qiang; Liu, Kuan-Lin; Chow, Tahsin J.; Chen, I.-Chia; Hsu, Chao-Ping

    2010-10-01

    We calculate the electron transfer (ET) rates for a series of heptacyclo[6.6.0.0^{2,6}.0^{3,13}.0^{4,11}.0^{5,9}.0^{10,14}]tetradecane (HCTD) linked donor-acceptor molecules. The electronic coupling factor was calculated by the fragment charge difference (FCD) [19] and the generalized Mulliken-Hush (GMH) schemes [20]. We found that the FCD is less prone to problems commonly seen in the GMH scheme, especially when the coupling values are small. For a 3-state case where the charge transfer (CT) state is coupled with two different locally excited (LE) states, we tested with the 3-state approach for the GMH scheme [30], and found that it works well with the FCD scheme. A simplified direct diagonalization based on Rust's 3-state scheme was also proposed and tested. This simplified scheme does not require a manual assignment of the states, and it yields coupling values that are largely similar to those from the full Rust approach. The overall electron transfer rates were also calculated.

  19. An Extension of RSS-based Model Comparison Tests for Weighted Least Squares

    DTIC Science & Technology

    2012-08-22

    use the model comparison test statistic to analyze the null hypothesis. Under the null hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS^H) = 10.3040 × 10^6. Under the alternative hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS) = 8.8394 × 10^6. Thus the model

  20. Multi-scale Eulerian model within the new National Environmental Modeling System

    NASA Astrophysics Data System (ADS)

    Janjic, Zavisa; Janjic, Tijana; Vasic, Ratko

    2010-05-01

    The unified Non-hydrostatic Multi-scale Model on the Arakawa B grid (NMMB) is being developed at NCEP within the National Environmental Modeling System (NEMS). The finite-volume horizontal differencing employed in the model preserves important properties of differential operators and conserves a variety of basic and derived dynamical and quadratic quantities. Among these, conservation of energy and enstrophy improves the accuracy of nonlinear dynamics of the model. Within further model development, advection schemes of fourth order of formal accuracy have been developed. It is argued that higher order advection schemes should not be used in the thermodynamic equation in order to preserve consistency with the second order scheme used for computation of the pressure gradient force. Thus, the fourth order scheme is applied only to momentum advection. Three sophisticated second order schemes were considered for upgrade. Two of them, proposed in Janjic(1984), conserve energy and enstrophy, but with enstrophy calculated differently. One of them conserves enstrophy as computed by the most accurate second order Laplacian operating on stream function. The other scheme conserves enstrophy as computed from the B grid velocity. The third scheme (Arakawa 1972) is arithmetic mean of the former two. It does not conserve enstrophy strictly, but it conserves other quadratic quantities that control the nonlinear energy cascade. Linearization of all three schemes leads to the same second order linear advection scheme. The second order term of the truncation error of the linear advection scheme has a special form so that it can be eliminated by simply preconditioning the advected quantity. Tests with linear advection of a cone confirm the advantage of the fourth order scheme. However, if a localized, large amplitude and high wave-number pattern is present in initial conditions, the clear advantage of the fourth order scheme disappears. In real data runs, problems with noisy data may appear due to mountains. Thus, accuracy and formal accuracy may not be synonymous. The nonlinear fourth order schemes are quadratic conservative and reduce to the Arakawa Jacobian in case of non-divergent flow. In case of general flow the conservation properties of the new momentum advection schemes impose stricter constraint on the nonlinear cascade than the original second order schemes. However, for non-divergent flow, the conservation properties of the fourth order schemes cannot be proven in the same way as those of the original second order schemes. Therefore, nonlinear tests were carried out in order to check how well the fourth order schemes control the nonlinear energy cascade. In the tests nonlinear shallow water equations are solved in a rotating rectangular domain (Janjic, 1984). The domain is covered with only 17 x 17 grid points. A diagnostic quantity is used to monitor qualitative changes in the spectrum over 116 days of simulated time. All schemes maintained meaningful solutions throughout the test. Among the second order schemes, the best result was obtained with the scheme that conserved enstrophy as computed by the second order Laplacian of the stream function. It was closely followed by the Arakawa (1972) scheme, while the remaining scheme was distant third. The fourth order schemes ranked in the same order, and were competitive throughout the experiments with their second order counterparts in preventing accumulation of energy at small scales. 
Finally, the impact of the fourth order momentum advection on global medium range forecasts was examined. The 500 mb anomaly correlation coefficient is used as a measure of success of the forecasts. References: Arakawa, A., 1972: Design of the UCLA general circulation model. Tech. Report No. 7, Department of Meteorology, University of California, Los Angeles, 116 pp. Janjic, Z. I., 1984: Non-linear advection schemes and energy cascade on semi-staggered grids. Monthly Weather Review, 112, 1234-1245.

  1. Performance of Clinical Laboratories in South African Parasitology Proficiency Testing Surveys between 2004 and 2010

    PubMed Central

    Dini, Leigh; Frean, John

    2012-01-01

    Performance in proficiency testing (PT) schemes is an objective measure of a laboratory's best performance. We examined the performance of participants in two parasitology PT schemes in South Africa from 2004 through 2010. The average rates of acceptable scores over the period were 58% and 66% for the stool and blood parasite schemes, respectively. In our setting, participation in PT alone is insufficient to improve performance; a policy that provides additional resources and training seems necessary. PMID:22814470

  2. Hypothesis testing of scientific Monte Carlo calculations.

    PubMed

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
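
    A minimal example of the kind of automated check advocated here: estimate a quantity whose exact value is known and test the estimate against that value with a z-test. The integrand, sample size, and tolerance below are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1234)

def mc_estimate(n_samples):
    """Monte Carlo estimate of E[X^2] for X ~ Uniform(0, 1); the exact value is 1/3."""
    samples = rng.random(n_samples) ** 2
    return samples.mean(), samples.std(ddof=1) / np.sqrt(n_samples)

estimate, std_err = mc_estimate(100_000)
exact = 1.0 / 3.0

# Two-sided z-test of the null hypothesis that the estimator targets the exact value
z = (estimate - exact) / std_err
p_value = 2 * norm.sf(abs(z))
print(f"estimate = {estimate:.5f} +/- {std_err:.5f}, z = {z:.2f}, p = {p_value:.3f}")
print("consistent with exact value" if p_value > 0.01 else "flagged: investigate")
```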

  3. Hypothesis testing of scientific Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.

  4. Sex ratios in the two Germanies: a test of the economic stress hypothesis.

    PubMed

    Catalano, Ralph A

    2003-09-01

    Literature describing temporal variation in the secondary sex ratio among humans reports an association between population stressors and declines in the odds of male birth. Explanations of this phenomenon draw on reports that stressed females spontaneously abort male more than female fetuses, and that stressed males exhibit reduced sperm motility. This work has led to the argument that population stress induced by a declining economy reduces the human sex ratio. No direct test of this hypothesis appears in the literature. Here, a test is offered based on a comparison of the sex ratio in East and West Germany for the years 1946 to 1999. The theory suggests that the East German sex ratio should be lower in 1991, when East Germany's economy collapsed, than expected from its own history and from the sex ratio in West Germany. The hypothesis is tested using time-series modelling methods. The data support the hypothesis. The sex ratio in East Germany was at its lowest in 1991. This first direct test supports the hypothesis that economic decline reduces the human sex ratio.

  5. Understanding suicide terrorism: premature dismissal of the religious-belief hypothesis.

    PubMed

    Liddle, James R; Machluf, Karin; Shackelford, Todd K

    2010-07-06

    We comment on work by Ginges, Hansen, and Norenzayan (2009), in which they compare two hypotheses for predicting individual support for suicide terrorism: the religious-belief hypothesis and the coalitional-commitment hypothesis. Although we appreciate the evidence provided in support of the coalitional-commitment hypothesis, we argue that their method of testing the religious-belief hypothesis is conceptually flawed, thus calling into question their conclusion that the religious-belief hypothesis has been disconfirmed. In addition to critiquing the methodology implemented by Ginges et al., we provide suggestions on how the religious-belief hypothesis may be properly tested. It is possible that the premature and unwarranted conclusions reached by Ginges et al. may deter researchers from examining the effect of specific religious beliefs on support for terrorism, and we hope that our comments can mitigate this possibility.

  6. Inferring electric fields and currents from ground magnetometer data - A test with theoretically derived inputs

    NASA Technical Reports Server (NTRS)

    Wolf, R. A.; Kamide, Y.

    1983-01-01

    Advanced techniques considered by Kamide et al. (1981) seem to have the potential for providing observation-based high time resolution pictures of the global ionospheric current and electric field patterns for interesting events. However, a reliance on the proposed magnetogram-inversion schemes for the deduction of global ionospheric current and electric field patterns requires proof that reliable results are obtained. 'Theoretical' tests of the accuracy of the magnetogram inversion schemes have, therefore, been considered. The present investigation is concerned with a test, involving the developed KRM algorithm and the Rice Convection Model (RCM). The test was successful in the sense that there was overall agreement between electric fields and currents calculated by the RCM and KRM schemes.

  7. Testing hypotheses and the advancement of science: recent attempts to falsify the equilibrium point hypothesis.

    PubMed

    Feldman, Anatol G; Latash, Mark L

    2005-02-01

    Criticisms of the equilibrium point (EP) hypothesis have recently appeared that are based on misunderstandings of some of its central notions. Starting from such interpretations of the hypothesis, incorrect predictions are made and tested. When the incorrect predictions prove false, the hypothesis is claimed to be falsified. In particular, the hypothesis has been rejected based on the wrong assumptions that it conflicts with empirically defined joint stiffness values or that it is incompatible with violations of equifinality under certain velocity-dependent perturbations. Typically, such attempts use notions describing the control of movements of artificial systems in place of physiologically relevant ones. While appreciating constructive criticisms of the EP hypothesis, we feel that incorrect interpretations have to be clarified by reiterating what the EP hypothesis does and does not predict. We conclude that the recent claims of falsifying the EP hypothesis and the calls for its replacement by the EMG-force control hypothesis are unsubstantiated. The EP hypothesis goes far beyond the EMG-force control view. In particular, the former offers a resolution for the famous posture-movement paradox while the latter fails to resolve it.

  8. Enhancement of the Open National Combustion Code (OpenNCC) and Initial Simulation of Energy Efficient Engine Combustor

    NASA Technical Reports Server (NTRS)

    Miki, Kenji; Moder, Jeff; Liou, Meng-Sing

    2016-01-01

    In this paper, we present the recent enhancement of the Open National Combustion Code (OpenNCC) and apply the OpenNCC to model a realistic combustor configuration (Energy Efficient Engine (E3)). First, we perform a series of validation tests for the newly-implemented advection upstream splitting method (AUSM) and the extended version of the AUSM-family schemes (AUSM+-up). Compared with the analytical/experimental data of the validation tests, we achieved good agreement. In the steady-state E3 cold flow results using the Reynolds-averaged Navier-Stokes (RANS) equations, we find a noticeable difference in the flow fields calculated by the two different numerical schemes, the standard Jameson-Schmidt-Turkel (JST) scheme and the AUSM scheme. The main differences are that the AUSM scheme is less numerically dissipative and it predicts much stronger reverse flow in the recirculation zone. This study indicates that the two schemes could show different flame-holding predictions and overall flame structures.

  9. A modified F/A-18A sporting a distinctive red, white and blue paint scheme is the test aircraft for

    NASA Technical Reports Server (NTRS)

    2001-01-01

    A modified F/A-18A sporting a distinctive red, white and blue paint scheme is the test aircraft for the Active Aeroelastic Wing (AAW) project at NASA's Dryden Flight Research Center, Edwards, California.

  10. This modified F/A-18A with its distinctive red, white and blue paint scheme is the test aircraft for

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This modified F/A-18A with its distinctive red, white and blue paint scheme is the test aircraft for the Active Aeroelastic Wing (AAW) project at NASA's Dryden Flight Research Center, Edwards, California.

  11. Action perception as hypothesis testing.

    PubMed

    Donnarumma, Francesco; Costantini, Marcello; Ambrosini, Ettore; Friston, Karl; Pezzulo, Giovanni

    2017-04-01

    We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions - and underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing.

  12. Secure and Efficient Key Coordination Algorithm for Line Topology Network Maintenance for Use in Maritime Wireless Sensor Networks.

    PubMed

    Elgenaidi, Walid; Newe, Thomas; O'Connell, Eoin; Toal, Daniel; Dooly, Gerard

    2016-12-21

    There has been a significant increase in the proliferation and implementation of Wireless Sensor Networks (WSNs) in different disciplines, including the monitoring of maritime environments, healthcare systems, and industrial sectors. It has now become critical to address the security issues of data communication while considering sensor node constraints. There are many proposed schemes, including the scheme being proposed in this paper, to ensure that there is a high level of security in WSNs. This paper presents a symmetric security scheme for a maritime coastal environment monitoring WSN. The scheme provides security for travelling packets via individually encrypted links between authenticated neighbors, thus avoiding a reiteration of a global rekeying process. Furthermore, this scheme proposes a dynamic update key based on a trusted node configuration, called a leader node, which works as a trusted third party. The technique has been implemented in real time on a Waspmote test bed sensor platform and the results from both field testing and indoor bench testing environments are discussed in this paper.
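
    The abstract does not spell out message formats or key sizes, so the sketch below is only a generic illustration of two of the ideas mentioned: deriving a distinct symmetric key per neighbor link, and rolling a key forward with a one-way function so that refreshing a link key does not force a global rekeying round. All function names and parameters are assumptions, not the authors' protocol.

```python
import hashlib
import hmac
import os

def derive_link_key(network_key: bytes, node_a: bytes, node_b: bytes) -> bytes:
    """Per-link key derived from a shared network key and the two node IDs
    (IDs are sorted so both ends of the link derive the same key)."""
    link_id = b"|".join(sorted([node_a, node_b]))
    return hmac.new(network_key, b"link-key|" + link_id, hashlib.sha256).digest()

def update_key(current_key: bytes, epoch: int) -> bytes:
    """Roll a key forward with a one-way function: old keys cannot be recovered
    from new ones, and no network-wide rekeying round is needed."""
    return hashlib.sha256(current_key + epoch.to_bytes(4, "big")).digest()

# Illustrative use: two neighbors derive a shared link key and then refresh it
network_key = os.urandom(32)                 # stand-in for a pre-loaded secret
k0 = derive_link_key(network_key, b"node-07", b"node-12")
k1 = update_key(k0, epoch=1)
print(k0.hex()[:16], "->", k1.hex()[:16])
```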

  13. Secure and Efficient Key Coordination Algorithm for Line Topology Network Maintenance for Use in Maritime Wireless Sensor Networks

    PubMed Central

    Elgenaidi, Walid; Newe, Thomas; O’Connell, Eoin; Toal, Daniel; Dooly, Gerard

    2016-01-01

    There has been a significant increase in the proliferation and implementation of Wireless Sensor Networks (WSNs) in different disciplines, including the monitoring of maritime environments, healthcare systems, and industrial sectors. It has now become critical to address the security issues of data communication while considering sensor node constraints. There are many proposed schemes, including the scheme being proposed in this paper, to ensure that there is a high level of security in WSNs. This paper presents a symmetric security scheme for a maritime coastal environment monitoring WSN. The scheme provides security for travelling packets via individually encrypted links between authenticated neighbors, thus avoiding a reiteration of a global rekeying process. Furthermore, this scheme proposes a dynamic update key based on a trusted node configuration, called a leader node, which works as a trusted third party. The technique has been implemented in real time on a Waspmote test bed sensor platform and the results from both field testing and indoor bench testing environments are discussed in this paper. PMID:28009834

  14. A Fluorescence Correlation Spectroscopy Study of the Cryoprotective Mechanism of Glucose on Hemocyanin

    NASA Astrophysics Data System (ADS)

    Hauger, Eric J.

    Cryopreservation is the method of preserving biomaterials by cooling and storing them at very low temperatures. In order to prevent the damaging effects of cooling, cryoprotectants are used to inhibit ice formation. Common cryoprotectants used today include ethylene glycol, propylene glycol, dimethyl sulfoxide, glycerol, and sugars. However, the mechanism responsible for the effectiveness of these cryoprotectants is poorly understood on the molecular level. The water replacement model predicts that water molecules around the surfaces of proteins are replaced with sugar molecules, forming a protective layer against the denaturing ice formation. Under this scheme, one would expect an increase in the hydrodynamic radius with increasing sugar concentration. In order to test this hypothesis, two-photon fluorescence correlation spectroscopy (FCS) was used to measure the hydrodynamic radius of hemocyanin (Hc), an oxygen-carrying protein found in arthropods, in glucose solutions up to 20 wt%. FCS found that the hydrodynamic radius was invariant with increasing glucose concentration. Dynamic light scattering (DLS) results verified the hydrodynamic radius of hemocyanin in the absence of glucose. Although this invariant trend seems to indicate that the water replacement hypothesis is invalid, the expected glucose layer around the Hc is smaller than the error in the hydrodynamic radius measurements for FCS. The expected change in the hydrodynamic radius with an additional layer of glucose is 1 nm; however, the FCS standard error is ±3.61 nm. Therefore, the water replacement model cannot be confirmed nor refuted as a possible explanation for the cryoprotective effects of glucose on Hc.

  15. Confidence intervals for single-case effect size measures based on randomization test inversion.

    PubMed

    Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick

    2017-02-01

    In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100(1 − α)% two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
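
    The paper supplies its own R code, which is not reproduced here; the sketch below is only a generic illustration of the test-inversion idea for a simple two-group comparison (much simpler than the single-case designs treated in the paper): shift one group by a candidate value of θ, run a randomization test of no difference, and keep every candidate that is not rejected at level α. The data, search grid, and permutation count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomization_p_value(x, y, n_perm=2000):
    """Two-sided randomization test p-value for a difference in means."""
    pooled = np.concatenate([x, y])
    observed = x.mean() - y.mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[: x.size].mean() - perm[x.size :].mean()
        count += abs(diff) >= abs(observed)
    return (count + 1) / (n_perm + 1)

def rti_confidence_interval(x, y, alpha=0.05):
    """CI for the mean difference by test inversion: keep every candidate shift
    theta that the randomization test does not reject at level alpha."""
    observed = x.mean() - y.mean()
    grid = np.linspace(observed - 10.0, observed + 10.0, 81)
    kept = [t for t in grid if randomization_p_value(x - t, y) > alpha]
    return min(kept), max(kept)

# Invented scores from two treatments
treatment_a = np.array([12.0, 15.0, 14.0, 16.0, 13.0, 17.0])
treatment_b = np.array([9.0, 11.0, 10.0, 12.0, 8.0, 11.0])
print("95% CI for the mean difference:",
      rti_confidence_interval(treatment_a, treatment_b))
```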

  16. Picture-Perfect Is Not Perfect for Metamemory: Testing the Perceptual Fluency Hypothesis with Degraded Images

    ERIC Educational Resources Information Center

    Besken, Miri

    2016-01-01

    The perceptual fluency hypothesis claims that items that are easy to perceive at encoding induce an illusion that they will be easier to remember, despite the finding that perception does not generally affect recall. The current set of studies tested the predictions of the perceptual fluency hypothesis with a picture generation manipulation.…

  17. Adolescents' Body Image Trajectories: A Further Test of the Self-Equilibrium Hypothesis

    ERIC Educational Resources Information Center

    Morin, Alexandre J. S.; Maïano, Christophe; Scalas, L. Francesca; Janosz, Michel; Litalien, David

    2017-01-01

    The self-equilibrium hypothesis underlines the importance of having a strong core self, which is defined as a high and developmentally stable self-concept. This study tested this hypothesis in relation to body image (BI) trajectories in a sample of 1,006 adolescents (M[subscript age] = 12.6, including 541 males and 465 females) across a 4-year…

  18. Using the Coefficient of Confidence to Make the Philosophical Switch from a Posteriori to a Priori Inferential Statistics

    ERIC Educational Resources Information Center

    Trafimow, David

    2017-01-01

    There has been much controversy over the null hypothesis significance testing procedure, with much of the criticism centered on the problem of inverse inference. Specifically, p gives the probability of the finding (or one more extreme) given the null hypothesis, whereas the null hypothesis significance testing procedure involves drawing a…

  19. Does Merit-Based Aid Improve College Affordability? Testing the Bennett Hypothesis in the Era of Merit-Based Aid

    ERIC Educational Resources Information Center

    Lee, Jungmin

    2016-01-01

    This study tested the Bennett hypothesis by examining whether four-year colleges changed listed tuition and fees, the amount of institutional grants per student, and room and board charges after their states implemented statewide merit-based aid programs. According to the Bennett hypothesis, increases in government financial aid make it easier for…

  20. Human female orgasm as evolved signal: a test of two hypotheses.

    PubMed

    Ellsworth, Ryan M; Bailey, Drew H

    2013-11-01

    We present the results of a study designed to empirically test predictions derived from two hypotheses regarding human female orgasm behavior as an evolved communicative trait or signal. One hypothesis tested was the female fidelity hypothesis, which posits that human female orgasm signals a woman's sexual satisfaction and therefore her likelihood of future fidelity to a partner. The other was the sire choice hypothesis, which posits that women's orgasm behavior signals increased chances of fertilization. To test the two hypotheses of human female orgasm, we administered a questionnaire to 138 females and 121 males who reported that they were currently in a romantic relationship. Key predictions of the female fidelity hypothesis were not supported. In particular, orgasm was not associated with female sexual fidelity, nor was orgasm associated with male perceptions of partner sexual fidelity. However, faked orgasm was associated with female sexual infidelity and lower male relationship satisfaction. Overall, results were in greater support of the sire choice signaling hypothesis than the female fidelity hypothesis. Results also suggest that male satisfaction with, investment in, and sexual fidelity to a mate are benefits that favored the selection of orgasmic signaling in ancestral females.

  1. Sex-Biased Parental Investment among Contemporary Chinese Peasants: Testing the Trivers-Willard Hypothesis.

    PubMed

    Luo, Liqun; Zhao, Wei; Weng, Tangmei

    2016-01-01

    The Trivers-Willard hypothesis predicts that high-status parents will bias their investment to sons, whereas low-status parents will bias their investment to daughters. Among humans, tests of this hypothesis have yielded mixed results. This study tests the hypothesis using data collected among contemporary peasants in Central South China. We use current family status (rated by our informants) and father's former class identity (assigned by the Chinese Communist Party in the early 1950s) as measures of parental status, and proportion of sons in offspring and offspring's years of education as measures of parental investment. Results show that (i) those families with a higher former class identity such as landlord and rich peasant tend to have a higher socioeconomic status currently, (ii) high-status parents are more likely to have sons than daughters among their biological offspring, and (iii) in higher-status families, the years of education obtained by sons exceed that obtained by daughters to a larger extent than in lower-status families. Thus, the first assumption and the two predictions of the hypothesis are supported by this study. This article contributes a contemporary Chinese case to the testing of the Trivers-Willard hypothesis.

  2. Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.

    PubMed

    Ji, Ming; Xiong, Chengjie; Grundman, Michael

    2003-10-01

    In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistics is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on Akaike's Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative by implementing our hypothesis testing method to analyze Mini-Mental Status Exam (MMSE) scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our result shows that despite a large amount of missing data, accelerated decline did occur for the MMSE among AD patients. Our finding supports the clinical belief of the existence of a change point during cognitive decline among AD patients and suggests the use of change point models for the longitudinal modeling of cognitive decline in AD research.
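
    A simplified sketch of the testing idea for a single series (the paper uses a random-slope, random-intercept longitudinal model, which is not reproduced here): fit a constant-slope line and a continuous bilinear model with an unknown change point, take the reduction in residual sum of squares as the test statistic, and simulate its null distribution by parametric bootstrap. All data and settings are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def linear_sse(t, y):
    """Sum of squared errors for a single straight-line fit."""
    return np.sum((y - np.polyval(np.polyfit(t, y, 1), t)) ** 2)

def bilinear_sse(t, y):
    """Best SSE over candidate change points for a continuous bilinear fit."""
    best = np.inf
    for k in range(2, t.size - 2):               # candidate change-point locations
        hinge = np.clip(t - t[k], 0.0, None)     # extra slope after the change point
        X = np.column_stack([np.ones_like(t), t, hinge])
        resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        best = min(best, np.sum(resid ** 2))
    return best

def changepoint_test(t, y, n_boot=500):
    """Test statistic: SSE reduction of the bilinear model; bootstrap p-value."""
    stat = linear_sse(t, y) - bilinear_sse(t, y)
    slope, intercept = np.polyfit(t, y, 1)
    sigma = np.sqrt(linear_sse(t, y) / (t.size - 2))
    null_stats = []
    for _ in range(n_boot):                      # simulate under a constant rate of decline
        y_sim = intercept + slope * t + rng.normal(0.0, sigma, t.size)
        null_stats.append(linear_sse(t, y_sim) - bilinear_sse(t, y_sim))
    p = (np.sum(np.array(null_stats) >= stat) + 1) / (n_boot + 1)
    return stat, p

# Invented scores with accelerated decline after t = 5
t = np.arange(10, dtype=float)
y = 28.0 - 0.5 * t - 1.5 * np.clip(t - 5.0, 0.0, None) + rng.normal(0.0, 0.8, t.size)
print("statistic, bootstrap p-value:", changepoint_test(t, y))
```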

  3. PSK Shift Timing Information Detection Using Image Processing and a Matched Filter

    DTIC Science & Technology

    2009-09-01

    phase shifts are enhanced. Develop, design, and test the resulting phase shift identification scheme. Develop, design, and test an optional analysis window overlapping technique to improve phase ... The resulting phase shift identification algorithm is investigated for SNR levels in the range -2 dB to 12 dB. Detection performances are derived ...

  4. Detection of Undocumented Changepoints Using Multiple Test Statistics and Composite Reference Series.

    NASA Astrophysics Data System (ADS)

    Menne, Matthew J.; Williams, Claude N., Jr.

    2005-10-01

    An evaluation of three hypothesis test statistics that are commonly used in the detection of undocumented changepoints is described. The goal of the evaluation was to determine whether the use of multiple tests could improve undocumented, artificial changepoint detection skill in climate series. The use of successive hypothesis testing is compared to optimal approaches, both of which are designed for situations in which multiple undocumented changepoints may be present. In addition, the importance of the form of the composite climate reference series is evaluated, particularly with regard to the impact of undocumented changepoints in the various component series that are used to calculate the composite. In a comparison of single test changepoint detection skill, the composite reference series formulation is shown to be less important than the choice of the hypothesis test statistic, provided that the composite is calculated from the serially complete and homogeneous component series. However, each of the evaluated composite series is not equally susceptible to the presence of changepoints in its components, which may be erroneously attributed to the target series. Moreover, a reference formulation that is based on the averaging of the first-difference component series is susceptible to random walks when the composition of the component series changes through time (e.g., values are missing), and its use is, therefore, not recommended. When more than one test is required to reject the null hypothesis of no changepoint, the number of detected changepoints is reduced proportionately less than the number of false alarms in a wide variety of Monte Carlo simulations. Consequently, a consensus of hypothesis tests appears to improve undocumented changepoint detection skill, especially when reference series homogeneity is violated. A consensus of successive hypothesis tests using a semihierarchic splitting algorithm also compares favorably to optimal solutions, even when changepoints are not hierarchic.

  5. FIELD TESTS OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED WATERSHED CLASSIFICATION SCHEMES IN THE GREAT LAKES BASIN

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands...

  6. FIELD TESTS OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED WATERSHED CLASSIFICATION SCHEMES IN THE GREAT LAKES BASIN

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands ...

  7. Reproductive technologies combine well with genomic selection in dairy breeding programs.

    PubMed

    Thomasen, J R; Willam, A; Egger-Danner, C; Sørensen, A C

    2016-02-01

    The objective of the present study was to examine whether genomic selection of females interacts with the use of reproductive technologies (RT) to increase annual monetary genetic gain (AMGG). This was tested using a factorial design with 3 factors: genomic selection of females (0 or 2,000 genotyped heifers per year), RT (0 or 50 donors selected at 14 mo of age for producing 10 offspring), and 2 reliabilities of genomic prediction. In addition, different strategies for use of RT and how strategies interact with the reliability of genomic prediction were investigated using stochastic simulation by varying (1) number of donors (25, 50, 100, 200), (2) number of calves born per donor (10 or 20), (3) age of donor (2 or 14 mo), and (4) number of sires (25, 50, 100, 200). In total, 72 different breeding schemes were investigated. The profitability of the different breeding strategies was evaluated by deterministic simulation by varying the costs of a born calf with reproductive technologies at levels of €500, €1,000, and €1,500. The results confirm our hypothesis that combining genomic selection of females with use of RT increases AMGG more than in a reference scheme without genomic selection in females. When the reliability of genomic prediction is high, the effect on rate of inbreeding (ΔF) is small. The study also demonstrates favorable interaction effects between the components of the breeder's equation (selection intensity, selection accuracy, generation interval) for the bull dam donor path, leading to higher AMGG. Increasing the donor program and number of born calves to achieve higher AMGG is associated with the undesirable effect of increased ΔF. This can be alleviated, however, by increasing the number of sires without markedly compromising AMGG. For the major part of the investigated donor schemes, the investment in RT is profitable in dairy cattle populations, even at high levels of costs for RT. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  8. The impact of stakeholder values and power relations on community-based health insurance coverage: qualitative evidence from three Senegalese case studies.

    PubMed

    Mladovsky, Philipa; Ndiaye, Pascal; Ndiaye, Alfred; Criel, Bart

    2015-07-01

    Continued low rates of enrolment in community-based health insurance (CBHI) suggest that strategies proposed for scaling up are unsuccessfully implemented or inadequately address underlying limitations of CBHI. One reason may be a lack of incorporation of social and political context into CBHI policy. In this study, the hypothesis is proposed that values and power relations inherent in social networks of CBHI stakeholders can explain levels of CBHI coverage. To test this, three Senegalese CBHI schemes were studied as case studies. Transcripts of interviews with 64 CBHI stakeholders were analysed using inductive coding. The five most important themes pertaining to social values and power relations were: voluntarism, trust, solidarity, political engagement and social movements. Analysis of these themes raises a number of policy and implementation challenges for expanding CBHI coverage. First is the need to subsidize salaries for CBHI scheme staff. Second is the need to develop more sustainable internal and external governance structures through CBHI federations. Third is ensuring that CBHI resonates with local values concerning four dimensions of solidarity (health risk, vertical equity, scale and source). Government subsidies are one of several potential strategies to achieve this. Fourth is the need for increased transparency in national policy. Fifth is the need for CBHI scheme leaders to increase their negotiating power vis-à-vis health service providers who control the resources needed for expanding CBHI coverage, through federations and a social movement dynamic. Systematically addressing all these challenges would represent a fundamental reform of the current CBHI model promoted in Senegal and in Africa more widely; this raises issues of feasibility in practice. From a theoretical perspective, the results suggest that studying values and power relations among stakeholders in multiple case studies is a useful complement to traditional health systems analysis. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2014; all rights reserved.

  9. Data Mining for Financial Applications

    NASA Astrophysics Data System (ADS)

    Kovalerchuk, Boris; Vityaev, Evgenii

    This chapter describes Data Mining in finance by discussing financial tasks, specifics of methodologies and techniques in this Data Mining area. It includes time dependence, data selection, forecast horizon, measures of success, quality of patterns, hypothesis evaluation, problem ID, method profile, attribute-based and relational methodologies. The second part of the chapter discusses Data Mining models and practice in finance. It covers use of neural networks in portfolio management, design of interpretable trading rules and discovering money laundering schemes using decision rules and relational Data Mining methodology.

  10. A simple, physically-based method for evaluating the economic costs of geo-engineering schemes

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.

    2009-04-01

    The consumption of primary energy (e.g., coal, oil, uranium) by the global economy is done in expectation of a return on investment. For geo-engineering schemes, however, the relationship between the primary energy consumption required and the economic return is, at first glance, quite different. The energy costs of a given scheme represent a removal of economically productive available energy to do work in the normal global economy. What are the economic implications of the energy consumption associated with geo-engineering techniques? I will present a simple thermodynamic argument that, in general, real (inflation-adjusted) economic value has a fixed relationship to the rate of global primary energy consumption. This hypothesis will be shown to be supported by 36 years of available energy statistics and a two-millennium period of statistics for global economic production. What is found from this analysis is that the value in any given inflation-adjusted 1990 dollar is sustained by a constant 9.7 +/- 0.3 milliwatts of global primary energy consumption. Thus, insofar as geo-engineering is concerned, any scheme that requires some nominal fraction of continuous global primary energy output necessitates a corresponding inflationary loss of real global economic value. For example, if 1% of global energy output is required, at today's consumption rates of 15 TW this corresponds to an inflationary loss of 15 trillion 1990 dollars of real value. The loss will be less, however, if the geo-engineering scheme also enables a demonstrable enhancement to global economic production capacity through climate modification.
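
    The back-of-envelope figure quoted above can be checked directly. A minimal sketch, using only the constant stated in the abstract (9.7 mW of primary energy consumption per inflation-adjusted 1990 dollar) and today's roughly 15 TW of global consumption:

```python
# Check of the worked example above: if a geo-engineering scheme consumes 1% of a
# 15 TW global primary energy output, how much real (1990-dollar) value does that
# consumption tie up, given the quoted 9.7 mW per 1990 dollar?

ENERGY_PER_DOLLAR = 9.7e-3   # watts of primary energy per 1990 dollar (quoted constant)
GLOBAL_POWER = 15e12         # global primary energy consumption in watts (~15 TW)
FRACTION = 0.01              # fraction of global output diverted to the scheme

scheme_power = FRACTION * GLOBAL_POWER           # 0.15 TW
value_lost = scheme_power / ENERGY_PER_DOLLAR    # 1990 dollars of real value

print(f"Power diverted: {scheme_power / 1e12:.2f} TW")
print(f"Real value affected: {value_lost / 1e12:.1f} trillion 1990 dollars")
# Roughly 15 trillion 1990 dollars, consistent with the figure in the abstract.
```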

  11. Bayesian Methods for Determining the Importance of Effects

    USDA-ARS?s Scientific Manuscript database

    Criticisms have plagued the frequentist null-hypothesis significance testing (NHST) procedure since the day it was created from the Fisher Significance Test and Hypothesis Test of Jerzy Neyman and Egon Pearson. Alternatives to NHST exist in frequentist statistics, but competing methods are also avai...

  12. Immersion freezing of supermicron mineral dust particles: freezing results, testing different schemes for describing ice nucleation, and ice nucleation active site densities.

    PubMed

    Wheeler, M J; Mason, R H; Steunenberg, K; Wagstaff, M; Chou, C; Bertram, A K

    2015-05-14

    Ice nucleation on mineral dust particles is known to be an important process in the atmosphere. To accurately implement ice nucleation on mineral dust particles in atmospheric simulations, a suitable theory or scheme is desirable to describe laboratory freezing data in atmospheric models. In the following, we investigated ice nucleation by supermicron mineral dust particles [kaolinite and Arizona Test Dust (ATD)] in the immersion mode. The median freezing temperature for ATD was measured to be approximately -30 °C compared with approximately -36 °C for kaolinite. The freezing results were then used to test four different schemes previously used to describe ice nucleation in atmospheric models. In terms of ability to fit the data (quantified by calculating the reduced chi-squared values), the following order was found for ATD (from best to worst): active site, pdf-α, deterministic, single-α. For kaolinite, the following order was found (from best to worst): active site, deterministic, pdf-α, single-α. The variation in the predicted median freezing temperature per decade change in the cooling rate for each of the schemes was also compared with experimental results from other studies. The deterministic model predicts the median freezing temperature to be independent of cooling rate, while experimental results show a weak dependence on cooling rate. The single-α, pdf-α, and active site schemes all agree with the experimental results within roughly a factor of 2. On the basis of our results and previous results where different schemes were tested, the active site scheme is recommended for describing the freezing of ATD and kaolinite particles. We also used our ice nucleation results to determine the ice nucleation active site (INAS) density for the supermicron dust particles tested. Using the data, we show that the INAS densities of supermicron kaolinite and ATD particles studied here are smaller than the INAS densities of submicron kaolinite and ATD particles previously reported in the literature.
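
    The ice nucleation active site (INAS) density mentioned above is conventionally obtained from the frozen fraction of droplets in a cooling experiment under a singular (time-independent) description. A minimal sketch with hypothetical droplet data and an assumed per-droplet dust surface area; this illustrates the standard conversion, not the paper's measurements:

```python
import numpy as np

# Singular description: f_ice(T) = 1 - exp(-n_s(T) * A), where A is the mineral
# dust surface area per droplet, so n_s(T) = -ln(1 - f_ice(T)) / A.
# All numbers below are hypothetical placeholders.

A = 1.0e-7                                              # cm^2 of dust surface per droplet (assumed)
temps = np.array([-26.0, -28.0, -30.0, -32.0, -34.0])   # temperature, deg C
n_total = 100                                           # droplets observed
n_frozen = np.array([5, 18, 50, 82, 97])                # droplets frozen at each temperature

f_ice = n_frozen / n_total
n_s = -np.log(1.0 - f_ice) / A                          # INAS density, active sites per cm^2

for T, f, ns in zip(temps, f_ice, n_s):
    print(f"T = {T:5.1f} C   f_ice = {f:4.2f}   n_s = {ns:9.3e} cm^-2")
```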

  13. Testing for purchasing power parity in the long-run for ASEAN-5

    NASA Astrophysics Data System (ADS)

    Choji, Niri Martha; Sek, Siok Kun

    2017-04-01

    For more than a decade, there has been substantial interest in testing the validity of the purchasing power parity (PPP) hypothesis empirically. This paper tests for long-run relative purchasing power parity for a group of ASEAN-5 countries over the period 1996-2016 using monthly data. For this purpose, we used the Pedroni co-integration method to test the long-run hypothesis of purchasing power parity. We first tested for the stationarity of the variables and found that the variables are non-stationary at levels but stationary at first difference. Results of the Pedroni test rejected the null hypothesis of no co-integration, meaning that we have enough evidence to support PPP in the long run for the ASEAN-5 countries over the period 1996-2016. In other words, the rejection of the null hypothesis implies a long-run relation between nominal exchange rates and relative prices.
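
    The stationarity pre-test described above (non-stationary in levels, stationary in first differences) can be reproduced series by series with an augmented Dickey-Fuller test before any panel co-integration test is attempted. A minimal sketch using statsmodels; the exchange rate series is simulated as a random walk purely for illustration:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
# Placeholder for one country's log real exchange rate (monthly, 1996-2016):
# simulated here as a random walk, so it should behave as an I(1) series.
reer = np.cumsum(rng.normal(size=252))

for label, series in [("levels", reer), ("first differences", np.diff(reer))]:
    stat, pvalue = adfuller(series)[:2]
    verdict = "stationary" if pvalue < 0.05 else "unit root not rejected"
    print(f"ADF on {label:17s}: stat = {stat:6.2f}, p = {pvalue:.3f} -> {verdict}")
# Typical outcome: unit root not rejected in levels, rejected after differencing,
# which is the precondition for then testing co-integration (e.g., with the Pedroni test).
```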

  14. UNIFORMLY MOST POWERFUL BAYESIAN TESTS

    PubMed Central

    Johnson, Valen E.

    2014-01-01

    Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
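
    For the simplest one-parameter case, the construction described above can be worked out directly from its definition. A minimal sketch for a one-sided test of a normal mean with known variance, derived here from the definition in the abstract rather than taken from the paper: the point alternative is chosen to maximize the probability that the Bayes factor exceeds a threshold gamma, and the implied p-value at the rejection boundary gives the approximate calibration between p-values and Bayes factors.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# One-sided test of a normal mean (sigma known): H0: mu = 0 vs H1: mu = mu1 > 0.
# For a point alternative, BF10 = exp(n * (mu1 * xbar - mu1**2 / 2) / sigma**2), so
# "BF10 > gamma" is equivalent to "xbar > t(mu1)" for a data threshold t(mu1).
# The UMPBT(gamma) alternative is the mu1 that makes this threshold smallest,
# i.e. maximizes P(BF10 > gamma) whatever the true mean is.

sigma, n, gamma = 1.0, 20, 10.0

def xbar_threshold(mu1):
    return sigma**2 * np.log(gamma) / (n * mu1) + mu1 / 2.0

res = minimize_scalar(xbar_threshold, bounds=(1e-6, 10.0), method="bounded")
mu1_numeric = res.x
mu1_closed = sigma * np.sqrt(2.0 * np.log(gamma) / n)   # closed form implied by the minimization

t = xbar_threshold(mu1_numeric)
p_at_boundary = norm.sf(t * np.sqrt(n) / sigma)         # one-sided p-value when BF10 = gamma

print(f"UMPBT alternative (numeric): {mu1_numeric:.4f}")
print(f"UMPBT alternative (closed) : {mu1_closed:.4f}")
print(f"BF10 > {gamma:g} corresponds to a one-sided p-value of about {p_at_boundary:.4f}")
```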

  15. [Experimental testing of Pflüger's reflex hypothesis of menstruation in late 19th century].

    PubMed

    Simmer, H H

    1980-07-01

    Pflüger's hypothesis of a nerve reflex as the cause of menstruation, published in 1865 and accepted by many, nonetheless did not lead to experimental investigations for 25 years. According to this hypothesis, the nerve reflex starts in the ovary by an increase of the intraovarian pressure by the growing follicles. In 1884 Adolph Kehrer proposed a program to test the nerve reflex, but only in 1890 did Cohnstein artificially increase the intraovarian pressure in women by bimanual compression from the outside and the vagina. His results were not convincing. Six years later, Strassmann injected fluids into ovaries of animals and obtained changes in the uterus resembling those of oestrus. His results seemed to verify a prognosis derived from Pflüger's hypothesis. Thus, after a long interval, that hypothesis had become a paradigm. Though reasons can be given for the delay, it is little understood why experimental testing started so late.

  16. Numerical experiments on the accuracy of ENO and modified ENO schemes

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1990-01-01

    Further numerical experiments are made assessing an accuracy degeneracy phenomenon. A modified essentially non-oscillatory (ENO) scheme is proposed, which recovers the correct order of accuracy for all the test problems with smooth initial conditions and gives comparable results with the original ENO schemes for discontinuous problems.

  17. Perception as a closed-loop convergence process.

    PubMed

    Ahissar, Ehud; Assa, Eldad

    2016-05-09

    Perception of external objects involves sensory acquisition via the relevant sensory organs. A widely-accepted assumption is that the sensory organ is the first station in a serial chain of processing circuits leading to an internal circuit in which a percept emerges. This open-loop scheme, in which the interaction between the sensory organ and the environment is not affected by its concurrent downstream neuronal processing, is strongly challenged by behavioral and anatomical data. We present here a hypothesis in which the perception of external objects is a closed-loop dynamical process encompassing loops that integrate the organism and its environment and converging towards organism-environment steady-states. We discuss the consistency of closed-loop perception (CLP) with empirical data and show that it can be synthesized in a robotic setup. Testable predictions are proposed for empirical distinction between open and closed loop schemes of perception.

  18. Perception as a closed-loop convergence process

    PubMed Central

    Ahissar, Ehud; Assa, Eldad

    2016-01-01

    Perception of external objects involves sensory acquisition via the relevant sensory organs. A widely-accepted assumption is that the sensory organ is the first station in a serial chain of processing circuits leading to an internal circuit in which a percept emerges. This open-loop scheme, in which the interaction between the sensory organ and the environment is not affected by its concurrent downstream neuronal processing, is strongly challenged by behavioral and anatomical data. We present here a hypothesis in which the perception of external objects is a closed-loop dynamical process encompassing loops that integrate the organism and its environment and converging towards organism-environment steady-states. We discuss the consistency of closed-loop perception (CLP) with empirical data and show that it can be synthesized in a robotic setup. Testable predictions are proposed for empirical distinction between open and closed loop schemes of perception. DOI: http://dx.doi.org/10.7554/eLife.12830.001 PMID:27159238

  19. When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment.

    PubMed

    Szucs, Denes; Ioannidis, John P A

    2017-01-01

    Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak we should not rely on all or nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST they should justify its use, and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and optimally raw data published. The current statistics lite educational approach for students that has sustained the widespread, spurious use of NHST should be phased out.

  20. When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment

    PubMed Central

    Szucs, Denes; Ioannidis, John P. A.

    2017-01-01

    Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak we should not rely on all or nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST they should justify its use, and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and optimally raw data published. The current statistics lite educational approach for students that has sustained the widespread, spurious use of NHST should be phased out. PMID:28824397

  1. Testing fundamental ecological concepts with a Pythium-Prunus pathosystem

    USDA-ARS?s Scientific Manuscript database

    The study of plant-pathogen interactions has enabled tests of basic ecological concepts on plant community assembly (Janzen-Connell Hypothesis) and plant invasion (Enemy Release Hypothesis). We used a field experiment to (#1) test whether Pythium effects depended on host (seedling) density and/or d...

  2. A checklist to facilitate objective hypothesis testing in social psychology research.

    PubMed

    Washburn, Anthony N; Morgan, G Scott; Skitka, Linda J

    2015-01-01

    Social psychology is not a very politically diverse area of inquiry, something that could negatively affect the objectivity of social psychological theory and research, as Duarte et al. argue in the target article. This commentary offers a number of checks to help researchers uncover possible biases and identify when they are engaging in hypothesis confirmation and advocacy instead of hypothesis testing.

  3. Testing the stress-gradient hypothesis during the restoration of tropical degraded land using the shrub Rhodomyrtus tomentosa as a nurse plant

    Treesearch

    Nan Liu; Hai Ren; Sufen Yuan; Qinfeng Guo; Long Yang

    2013-01-01

    The relative importance of facilitation and competition between pairwise plants across abiotic stress gradients as predicted by the stress-gradient hypothesis has been confirmed in arid and temperate ecosystems, but the hypothesis has rarely been tested in tropical systems, particularly across nutrient gradients. The current research examines the interactions between a...

  4. Phase II Clinical Trials: D-methionine to Reduce Noise-Induced Hearing Loss

    DTIC Science & Technology

    2012-03-01

    loss (NIHL) and tinnitus in our troops. Hypotheses: Primary Hypothesis: Administration of oral D-methionine prior to and during weapons...reduce or prevent noise-induced tinnitus. Primary outcome to test the primary hypothesis: Pure tone air-conduction thresholds. Primary outcome to...test the secondary hypothesis: Tinnitus questionnaires. Specific Aims: 1. To determine whether administering oral D-methionine (D-met) can

  5. An omnibus test for the global null hypothesis.

    PubMed

    Futschik, Andreas; Taus, Thomas; Zehetmayer, Sonja

    2018-01-01

    Global hypothesis tests are a useful tool in the context of clinical trials, genetic studies, or meta-analyses, when researchers are not interested in testing individual hypotheses, but in testing whether none of the hypotheses is false. There are several possibilities for testing the global null hypothesis when the individual null hypotheses are independent. If it is assumed that many of the individual null hypotheses are false, combination tests have been recommended to maximize power. If, however, it is assumed that only one or a few null hypotheses are false, global tests based on individual test statistics are more powerful (e.g. Bonferroni or Simes test). However, usually there is no a priori knowledge on the number of false individual null hypotheses. We therefore propose an omnibus test based on cumulative sums of the transformed p-values. We show that this test yields an impressive overall performance. The proposed method is implemented in an R-package called omnibus.
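
    The flavor of a cumulative-sum omnibus statistic can be sketched in a few lines. This is a simplified illustration calibrated by Monte Carlo simulation, not the exact statistic of the paper or of its R package omnibus; the standardization by the square root of k is a design choice made here for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def omnibus_stat(pvals):
    """Max over k of the cumulative sum of the k largest -log-transformed p-values
    (i.e., the k smallest p-values), standardized by sqrt(k)."""
    s = np.sort(-np.log(pvals))[::-1]       # largest transformed p-values first
    cums = np.cumsum(s)
    k = np.arange(1, len(pvals) + 1)
    return np.max(cums / np.sqrt(k))

def global_test(pvals, n_sim=20_000):
    """Monte Carlo p-value for the global null that all individual nulls are true."""
    m = len(pvals)
    obs = omnibus_stat(pvals)
    null = np.array([omnibus_stat(rng.uniform(size=m)) for _ in range(n_sim)])
    return np.mean(null >= obs)

# Example: 20 hypotheses, one strong signal among otherwise null p-values.
pvals = np.concatenate([[1e-4], rng.uniform(size=19)])
print(f"global p-value = {global_test(pvals):.4f}")
```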

  6. Method for digital measurement of phase-frequency characteristics for a fixed-length ultrasonic spectrometer

    NASA Astrophysics Data System (ADS)

    Astashev, M. E.; Belosludtsev, K. N.; Kharakoz, D. P.

    2014-05-01

    One of the most accurate methods for measuring the compressibility of liquids is resonance measurement of sound velocity in a fixed-length interferometer. This method combines high sensitivity, accuracy, and small sample volume of the test liquid. The measuring principle is to study the resonance properties of a composite resonator that contains a test liquid sample. Earlier, the phase-locked loop (PLL) scheme was used for this. In this paper, we propose an alternative measurement scheme based on digital analysis of harmonic signals, describe the implementation of this scheme using commercially available data acquisition modules, and give examples of test measurements with accuracy evaluations of the results.

  7. Implicit and semi-implicit schemes in the Versatile Advection Code: numerical tests

    NASA Astrophysics Data System (ADS)

    Toth, G.; Keppens, R.; Botchev, M. A.

    1998-04-01

    We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock capturing methods to solve systems of conservation laws with optional source terms. The main advantage of implicit solution strategies over explicit time integration is that the restrictive constraint on the allowed time step can be (partially) eliminated, thus the computational cost is reduced. The test problems cover one and two dimensional, steady state and time accurate computations, and the solutions contain discontinuities. For each test, we confront explicit with implicit solution strategies.

  8. Explorations in Statistics: Hypothesis Tests and P Values

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…

  9. Planned Hypothesis Tests Are Not Necessarily Exempt from Multiplicity Adjustment

    ERIC Educational Resources Information Center

    Frane, Andrew V.

    2015-01-01

    Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are…

  10. Rugby versus Soccer in South Africa: Content Familiarity Contributes to Cross-Cultural Differences in Cognitive Test Scores

    ERIC Educational Resources Information Center

    Malda, Maike; van de Vijver, Fons J. R.; Temane, Q. Michael

    2010-01-01

    In this study, cross-cultural differences in cognitive test scores are hypothesized to depend on a test's cultural complexity (Cultural Complexity Hypothesis: CCH), here conceptualized as its content familiarity, rather than on its cognitive complexity (Spearman's Hypothesis: SH). The content familiarity of tests assessing short-term memory,…

  11. Synthetic turbulence

    NASA Astrophysics Data System (ADS)

    Juneja, A.; Lathrop, D. P.; Sreenivasan, K. R.; Stolovitzky, G.

    1994-06-01

    A family of schemes is outlined for constructing stochastic fields that are close to turbulence. The fields generated from the more sophisticated versions of these schemes differ little in terms of one-point and two-point statistics from velocity fluctuations in high-Reynolds-number turbulence; we shall designate such fields as synthetic turbulence. All schemes, implemented here in one dimension, consist of the following three ingredients, but differ in various details. First, a simple multiplicative procedure is utilized for generating an intermittent signal which has the same properties as those of the turbulent energy dissipation rate ɛ. Second, the properties of the intermittent signal averaged over an interval of size r are related to those of longitudinal velocity increments Δu(r), evaluated over the same distance r, through a stochastic variable V introduced in the spirit of Kolmogorov's refined similarity hypothesis. The third and final step, which partially resembles a well-known procedure for constructing fractional Brownian motion, consists of suitably combining velocity increments to construct an artificial velocity signal. Various properties of the synthetic turbulence are obtained both analytically and numerically, and found to be in good agreement with measurements made in the atmospheric surface layer. A brief review of some previous models is provided.
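
    The first ingredient, the multiplicative procedure that produces an intermittent dissipation-like signal, can be sketched with a simple one-dimensional binomial cascade. This is a generic illustration of a multiplicative cascade, not the specific generator used in the paper, and the remaining two ingredients (relating locally averaged dissipation to velocity increments and assembling the increments into a velocity signal) are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def binomial_cascade(n_levels=14, p=0.7):
    """Multiplicative binomial cascade on 2**n_levels points: at each level every
    parent interval splits in two, and its two halves receive multipliers 2p and
    2(1-p) in random order, yielding an intermittent positive field reminiscent of
    the turbulent energy dissipation rate."""
    eps = np.ones(1)
    for _ in range(n_levels):
        eps = np.repeat(eps, 2)
        flips = rng.integers(0, 2, size=eps.size // 2)          # left/right assignment per pair
        mult = np.where(np.repeat(flips, 2) == np.tile([0, 1], eps.size // 2),
                        2.0 * p, 2.0 * (1.0 - p))
        eps *= mult
    return eps

eps = binomial_cascade()
r = 64
eps_r = eps.reshape(-1, r).mean(axis=1)          # dissipation averaged over intervals of size r
print(f"mean(eps) = {eps.mean():.3f}")
print(f"intermittency (std/mean): {eps.std() / eps.mean():.2f} at grid scale, "
      f"{eps_r.std() / eps_r.mean():.2f} after averaging over {r} points")
```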

  12. A new approach to the convective parameterization of the regional atmospheric model BRAMS

    NASA Astrophysics Data System (ADS)

    Dos Santos, A. F.; Freitas, S. R.; de Campos Velho, H. F.; Luz, E. F.; Gan, M. A.; de Mattos, J. Z.; Grell, G. A.

    2013-05-01

    A simulation of the summer conditions of January 2010 was performed using the atmospheric model Brazilian developments on the Regional Atmospheric Modeling System (BRAMS). The convective parameterization scheme of Grell and Dévényi was used to represent clouds and their interaction with the large-scale environment. As a result, the precipitation forecasts can be combined in several ways, generating a numerical representation of precipitation and atmospheric heating and moistening rates. The purpose of this study was to generate a set of weights to compute the best combination of the closure hypotheses of the convective scheme. This is an inverse problem of parameter estimation, and it is solved as an optimization problem. To minimize the difference between observed data and forecasted precipitation, the objective function was computed as the quadratic difference between the five simulated precipitation fields and the observations. The precipitation field estimated by the Tropical Rainfall Measuring Mission satellite was used as observed data. Weights were obtained using the firefly algorithm, and the mass fluxes of each closure of the convective scheme were weighted, generating a new set of mass fluxes. The results indicated better skill of the model with the new methodology compared with the old ensemble-mean calculation.
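
    The core of the inverse problem is the fit of closure weights: find non-negative weights for the individual closures' precipitation fields that minimize the quadratic misfit to the observed field. A minimal sketch using a non-negative least-squares fit in place of the firefly algorithm; all arrays are synthetic placeholders, not BRAMS or TRMM data.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Five precipitation fields, one per closure of the convective scheme (flattened
# model grids), plus one "observed" field. Synthetic placeholders:
n_points = 1000
closures = rng.gamma(2.0, 2.0, size=(5, n_points))           # P_k(x), k = 1..5
true_w = np.array([0.4, 0.1, 0.3, 0.15, 0.05])
observed = true_w @ closures + rng.normal(0.0, 0.5, n_points)

# Non-negative least squares: minimize || sum_k w_k P_k - P_obs ||^2 with w_k >= 0,
# then normalize the weights to sum to one (a design choice for this sketch).
weights, _ = nnls(closures.T, observed)
weights /= weights.sum()
print("estimated closure weights:", np.round(weights, 3))
```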

  13. Prospective Tests of Southern California Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.; Kagan, Y. Y.; Helmstetter, A.; Wiemer, S.; Field, N.

    2004-12-01

    We are testing earthquake forecast models prospectively using likelihood ratios. Several investigators have developed such models as part of the Southern California Earthquake Center's project called Regional Earthquake Likelihood Models (RELM). Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. Here we describe the testing procedure and present preliminary results. Forecasts are expressed as the yearly rate of earthquakes within pre-specified bins of longitude, latitude, magnitude, and focal mechanism parameters. We test models against each other in pairs, which requires that both forecasts in a pair be defined over the same set of bins. For this reason we specify a standard "menu" of bins and ground rules to guide forecasters in using common descriptions. One menu category includes five-year forecasts of magnitude 5.0 and larger. Contributors will be requested to submit forecasts in the form of a vector of yearly earthquake rates on a 0.1 degree grid at the beginning of the test. Focal mechanism forecasts, when available, are also archived and used in the tests. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.1 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. Tests are based on the log likelihood scores derived from the probability that future earthquakes would occur where they do if a given forecast were true [Kagan and Jackson, J. Geophys. Res., 100, 3943-3959, 1995]. For each pair of forecasts, we compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability that the second would be wrongly rejected in favor of the first. Computing alpha and beta requires knowing the theoretical distribution of likelihood scores under each hypothesis, which we estimate by simulations. In this scheme, each forecast is given equal status; there is no "null hypothesis" which would be accepted by default. Forecasts and test results will be archived and posted on the RELM web site. Major problems under discussion include how to treat aftershocks, which clearly violate the variable-rate Poissonian hypotheses that we employ, and how to deal with the temporal variations in catalog completeness that follow large earthquakes.
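
    The likelihood scoring underlying these pairwise comparisons can be sketched directly: under a forecast, the earthquake count in each space-magnitude bin is treated as Poisson with the forecast rate, and a forecast's score is the joint log likelihood of the observed counts. A minimal sketch with hypothetical rates and counts; the simulation step that yields alpha and beta is only indicated in the comments.

```python
import numpy as np
from scipy.special import gammaln

def poisson_log_likelihood(rates, counts):
    """Joint log likelihood of observed bin counts under independent Poisson rates,
    as used in likelihood-based forecast testing."""
    rates = np.asarray(rates, dtype=float)
    counts = np.asarray(counts, dtype=float)
    return np.sum(counts * np.log(rates) - rates - gammaln(counts + 1.0))

# Hypothetical yearly rates for two competing forecasts over the same bins,
# and the counts actually observed during the test period.
forecast_a = np.array([0.10, 0.02, 0.30, 0.05, 0.01])
forecast_b = np.array([0.05, 0.05, 0.20, 0.10, 0.02])
observed   = np.array([0,    0,    1,    0,    0   ])

score_a = poisson_log_likelihood(forecast_a, observed)
score_b = poisson_log_likelihood(forecast_b, observed)
print(f"log-likelihood A = {score_a:.3f}, B = {score_b:.3f}, A - B = {score_a - score_b:.3f}")
# In the full procedure, the distribution of this score difference is obtained by
# simulating synthetic catalogs under each forecast, giving the error rates alpha and beta.
```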

  14. Is it better to select or to receive? Learning via active and passive hypothesis testing.

    PubMed

    Markant, Douglas B; Gureckis, Todd M

    2014-02-01

    People can test hypotheses through either selection or reception. In a selection task, the learner actively chooses observations to test his or her beliefs, whereas in reception tasks data are passively encountered. People routinely use both forms of testing in everyday life, but the critical psychological differences between selection and reception learning remain poorly understood. One hypothesis is that selection learning improves learning performance by enhancing generic cognitive processes related to motivation, attention, and engagement. Alternatively, we suggest that differences between these 2 learning modes derive from a hypothesis-dependent sampling bias that is introduced when a person collects data to test his or her own individual hypothesis. Drawing on influential models of sequential hypothesis-testing behavior, we show that such a bias (a) can lead to the collection of data that facilitates learning compared with reception learning and (b) can be more effective than observing the selections of another person. We then report a novel experiment based on a popular category learning paradigm that compares reception and selection learning. We additionally compare selection learners to a set of "yoked" participants who viewed the exact same sequence of observations under reception conditions. The results revealed systematic differences in performance that depended on the learner's role in collecting information and the abstract structure of the problem.

  15. Total Variation Diminishing (TVD) schemes of uniform accuracy

    NASA Technical Reports Server (NTRS)

    Hartwich, Peter-M.; Hsu, Chung-Hao; Liu, C. H.

    1988-01-01

    Explicit second-order accurate finite-difference schemes for the approximation of hyperbolic conservation laws are presented. These schemes are nonlinear even for the constant coefficient case. They are based on first-order upwind schemes. Their accuracy is enhanced by locally replacing the first-order one-sided differences with either second-order one-sided differences or central differences or a blend thereof. The appropriate local difference stencils are selected such that they give TVD schemes of uniform second-order accuracy in the scalar, or linear systems, case. Like conventional TVD schemes, the new schemes avoid a Gibbs phenomenon at discontinuities of the solution, but they do not switch back to first-order accuracy, in the sense of truncation error, at extrema of the solution. The performance of the new schemes is demonstrated in several numerical tests.
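
    The blending of first-order one-sided differences with higher-order differences is what a slope limiter accomplishes. A minimal sketch of a second-order TVD update for linear advection with a minmod limiter follows; this is a generic MUSCL-type scheme for illustration, not the specific uniformly second-order schemes of the paper.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope when signs agree, zero otherwise."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_step(u, c):
    """One step of a MUSCL-type TVD scheme for u_t + a u_x = 0 (a > 0), CFL number c,
    periodic boundaries: upwind fluxes built from limited linear reconstructions."""
    du_minus = u - np.roll(u, 1)            # backward differences
    du_plus = np.roll(u, -1) - u            # forward differences
    slope = minmod(du_minus, du_plus)       # limited cell slope
    u_face = u + 0.5 * (1.0 - c) * slope    # upwind-side interface state at i+1/2
    flux = c * u_face
    return u - (flux - np.roll(flux, 1))

# Advect a square pulse: the solution stays monotone (no Gibbs oscillations)
# while remaining sharper than a first-order upwind result.
n, c = 200, 0.5
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)
for _ in range(int(n / c)):                 # one full period around the periodic domain
    u = tvd_step(u, c)
print(f"min = {u.min():.3f}, max = {u.max():.3f}  (bounded within [0, 1])")
```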

  16. Implementation analysis of RC5 algorithm on Preneel-Govaerts-Vandewalle (PGV) hashing schemes using length extension attack

    NASA Astrophysics Data System (ADS)

    Siswantyo, Sepha; Susanti, Bety Hayat

    2016-02-01

    Preneel-Govaerts-Vandewalle (PGV) schemes consist of 64 possible single-block-length schemes that can be used to build a hash function based on block ciphers. Of those 64 schemes, Preneel claimed that 4 schemes are secure. In this paper, we apply a length extension attack to those 4 secure PGV schemes, instantiated with the RC5 algorithm as the underlying block cipher, to test their collision resistance property. The attack results show that collisions occurred on those 4 secure PGV schemes. Based on the analysis, we indicate that the Feistel structure and the data-dependent rotation operation in the RC5 algorithm, the XOR operations in the scheme, and the selection of the additional message block value all contribute to the occurrence of collisions.

  17. Four-level conservative finite-difference schemes for Boussinesq paradigm equation

    NASA Astrophysics Data System (ADS)

    Kolkovska, N.

    2013-10-01

    In this paper a two-parametric family of four level conservative finite difference schemes is constructed for the multidimensional Boussinesq paradigm equation. The schemes are explicit in the sense that no inner iterations are needed for evaluation of the numerical solution. The preservation of the discrete energy with this method is proved. The schemes have been numerically tested on one soliton propagation model and two solitons interaction model. The numerical experiments demonstrate that the proposed family of schemes has second order of convergence in space and time steps in the discrete maximal norm.

  18. Testing for purchasing power parity in 21 African countries using several unit root tests

    NASA Astrophysics Data System (ADS)

    Choji, Niri Martha; Sek, Siok Kun

    2017-04-01

    Purchasing power parity is used as a basis for international income and expenditure comparison through the exchange rate theory. However, empirical studies show disagreement on the validity of PPP. In this paper, we test the validity of PPP using a panel data approach. We apply seven different panel unit root tests to test the validity of the purchasing power parity (PPP) hypothesis based on quarterly data on the real effective exchange rate for 21 African countries for the period 1971:Q1-2012:Q4. All the results of the seven tests rejected the hypothesis of stationarity, meaning that absolute PPP does not hold in those African countries. This result confirmed the claim from previous studies that standard panel unit root tests fail to support the PPP hypothesis.

  19. Does Testing Increase Spontaneous Mediation in Learning Semantically Related Paired Associates?

    ERIC Educational Resources Information Center

    Cho, Kit W.; Neely, James H.; Brennan, Michael K.; Vitrano, Deana; Crocco, Stephanie

    2017-01-01

    Carpenter (2011) argued that the testing effect she observed for semantically related but associatively unrelated paired associates supports the mediator effectiveness hypothesis. This hypothesis asserts that after the cue-target pair "mother-child" is learned, relative to restudying mother-child, a review test in which…

  20. An alternate lining scheme for solar ponds - Results of a liner test rig

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raman, P.; Kishore, V.V.N.

    1990-01-01

    Solar pond lining schemes consisting of combinations of clays and Low Density Polyethylene (LDPE) films have been experimentally evaluated by means of a Solar Pond Liner Test Rig. Results indicate that LDPE film sandwiched between two layers of clay can be effectively used for lining solar ponds.

  1. Dispersion-relation-preserving finite difference schemes for computational acoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Webb, Jay C.

    1993-01-01

    Time-marching dispersion-relation-preserving (DRP) schemes can be constructed by optimizing the finite difference approximations of the space and time derivatives in wave number and frequency space. A set of radiation and outflow boundary conditions compatible with the DRP schemes is constructed, and a sequence of numerical simulations is conducted to test the effectiveness of the DRP schemes and the radiation and outflow boundary conditions. Close agreement with the exact solutions is obtained.
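
    The optimization idea can be seen through the modified wavenumber of a symmetric stencil: a central difference approximates the derivative exactly only for long waves, and DRP-type schemes trade formal order of accuracy for a wider band of well-resolved wavenumbers. A minimal sketch comparing modified wavenumbers of standard central differences (not the actual optimized coefficients of the paper):

```python
import numpy as np

# Modified wavenumber of an antisymmetric central stencil:
#   du/dx |_i  ~  (1/dx) * sum_j a_j * (u_{i+j} - u_{i-j}),
# which, acting on exp(i k x), gives k_eff * dx = 2 * sum_j a_j * sin(j * k * dx).
stencils = {
    "2nd-order central": [0.5],
    "4th-order central": [2.0 / 3.0, -1.0 / 12.0],
    "6th-order central": [3.0 / 4.0, -3.0 / 20.0, 1.0 / 60.0],
}

kdx = np.linspace(0.0, np.pi, 200)
for name, coeffs in stencils.items():
    k_eff = 2.0 * sum(a * np.sin((j + 1) * kdx) for j, a in enumerate(coeffs))
    # largest k*dx still resolved to within 1% relative error
    ok = kdx[1:][np.abs(k_eff[1:] - kdx[1:]) / kdx[1:] < 0.01]
    print(f"{name}: resolved up to k*dx ~ {ok.max():.2f} (of pi = {np.pi:.2f})")
# A DRP scheme instead chooses the a_j to minimize the integrated error
# |k_eff*dx - k*dx| over a target wavenumber band, subject to low-order accuracy constraints.
```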

  2. The importance of lake-specific characteristics for water quality across the continental United States.

    PubMed

    Read, Emily K; Patil, Vijay P; Oliver, Samantha K; Hetherington, Amy L; Brentrup, Jennifer A; Zwart, Jacob A; Winters, Kirsten M; Corman, Jessica R; Nodine, Emily R; Woolway, R Iestyn; Dugan, Hilary A; Jaimes, Aline; Santoso, Arianto B; Hong, Grace S; Winslow, Luke A; Hanson, Paul C; Weathers, Kathleen C

    2015-06-01

    Lake water quality is affected by local and regional drivers, including lake physical characteristics, hydrology, landscape position, land cover, land use, geology, and climate. Here, we demonstrate the utility of hypothesis testing within the landscape limnology framework using a random forest algorithm on a national-scale, spatially explicit data set, the United States Environmental Protection Agency's 2007 National Lakes Assessment. For 1026 lakes, we tested the relative importance of water quality drivers across spatial scales, the importance of hydrologic connectivity in mediating water quality drivers, and how the importance of both spatial scale and connectivity differ across response variables for five important in-lake water quality metrics (total phosphorus, total nitrogen, dissolved organic carbon, turbidity, and conductivity). By modeling the effect of water quality predictors at different spatial scales, we found that lake-specific characteristics (e.g., depth, sediment area-to-volume ratio) were important for explaining water quality (54-60% variance explained), and that regionalization schemes were much less effective than lake specific metrics (28-39% variance explained). Basin-scale land use and land cover explained between 45-62% of variance, and forest cover and agricultural land uses were among the most important basin-scale predictors. Water quality drivers did not operate independently; in some cases, hydrologic connectivity (the presence of upstream surface water features) mediated the effect of regional-scale drivers. For example, for water quality in lakes with upstream lakes, regional classification schemes were much less effective predictors than lake-specific variables, in contrast to lakes with no upstream lakes or with no surface inflows. At the scale of the continental United States, conductivity was explained by drivers operating at larger spatial scales than for other water quality responses. The current regulatory practice of using regionalization schemes to guide water quality criteria could be improved by consideration of lake-specific characteristics, which were the most important predictors of water quality at the scale of the continental United States. The spatial extent and high quality of contextual data available for this analysis makes this work an unprecedented application of landscape limnology theory to water quality data. Further, the demonstrated importance of lake morphology over other controls on water quality is relevant to both aquatic scientists and managers.
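
    A random-forest workflow of the kind described above is straightforward to sketch with scikit-learn. The data frame columns below are hypothetical stand-ins for National Lakes Assessment predictors, and the response is synthetic; the sketch only illustrates the fit, cross-validation, and variable-importance steps.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 500
# Hypothetical lake-specific, basin-scale, and regional predictors.
df = pd.DataFrame({
    "max_depth_m":     rng.gamma(2.0, 5.0, n),
    "sa_to_vol_ratio": rng.gamma(2.0, 0.2, n),
    "pct_agriculture": rng.uniform(0.0, 100.0, n),
    "pct_forest":      rng.uniform(0.0, 100.0, n),
    "ecoregion_id":    rng.integers(0, 9, n),
})
# Synthetic water-quality response (e.g., a stand-in for log total phosphorus).
log_tp = (1.5 - 0.02 * df["max_depth_m"] + 0.01 * df["pct_agriculture"]
          + rng.normal(0.0, 0.3, n))

rf = RandomForestRegressor(n_estimators=500, random_state=0)
r2 = cross_val_score(rf, df, log_tp, cv=5, scoring="r2").mean()
rf.fit(df, log_tp)

print(f"cross-validated R^2 ~ {r2:.2f}")
for name, imp in sorted(zip(df.columns, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"  {name:16s} importance = {imp:.3f}")
```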

  3. [A test of the focusing hypothesis for category judgment: an explanation using the mental-box model].

    PubMed

    Hatori, Tsuyoshi; Takemura, Kazuhisa; Fujii, Satoshi; Ideno, Takashi

    2011-06-01

    This paper presents a new model of category judgment. The model hypothesizes that, when more attention is focused on a category, the psychological range of the category gets narrower (category-focusing hypothesis). We explain this hypothesis by using the metaphor of a "mental-box" model: the more attention that is focused on a mental box (i.e., a category set), the smaller the size of the box becomes (i.e., a cardinal number of the category set). The hypothesis was tested in an experiment (N = 40), where the focus of attention on prescribed verbal categories was manipulated. The obtained data gave support to the hypothesis: category-focusing effects were found in three experimental tasks (regarding the category of "food", "height", and "income"). The validity of the hypothesis was discussed based on the results.

  4. Enhancing Vocabulary Acquisition through Reading: A Hierarchy of Text-Related Exercise Types.

    ERIC Educational Resources Information Center

    Wesche, M.; Paribakht, T. Sima

    This paper describes a classification scheme developed to examine the effects of extensive reading on primary and second language vocabulary acquisition and reports on an experiment undertaken to test the model scheme. The classification scheme represents a hypothesized hierarchy of the degree and type of mental processing required by various…

  5. A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks

    NASA Technical Reports Server (NTRS)

    Cui, Zhenqian

    1999-01-01

    In this thesis, we analyze various factors that affect quality of service (QoS) communication in high-speed, packet-switching sub-networks. We hypothesize that sub-network-wide bandwidth reservation and guaranteed CPU processing power at endpoint systems for handling data traffic are indispensable to achieving hard end-to-end quality of service. Different bandwidth reservation strategies, traffic characterization schemes, and scheduling algorithms affect the network resources and CPU usage as well as the extent to which QoS can be achieved. In order to analyze those factors, we design and implement a communication layer. Our experimental analysis supports our research hypothesis. The Resource ReSerVation Protocol (RSVP) is designed to realize resource reservation. Our analysis of RSVP shows that using RSVP alone is insufficient to provide hard end-to-end quality of service in a high-speed sub-network. Analysis of the IEEE 802.1p protocol also supports the research hypothesis.

  6. Formation stability analysis of unmanned multi-vehicles under interconnection topologies

    NASA Astrophysics Data System (ADS)

    Yang, Aolei; Naeem, Wasif; Fei, Minrui

    2015-04-01

    In this paper, the overall formation stability of an unmanned multi-vehicle system is mathematically presented under interconnection topologies. A novel definition of formation error is first given and followed by the proposed formation stability hypothesis. Based on this hypothesis, a unique extension-decomposition-aggregation scheme is then employed to support the stability analysis for the overall multi-vehicle formation under a mesh topology. It is proved that the overall formation control system consisting of N nonlinear vehicles is not only asymptotically stable, but also exponentially stable in the sense of Lyapunov within a neighbourhood of the desired formation. This technique is demonstrated for a mesh topology but is equally applicable for other topologies. A simulation study of the formation manoeuvre of multiple Aerosonde UAVs (unmanned aerial vehicles), in 3-D space, is finally carried out verifying the achieved formation stability result.

  7. Role of small oligomers on the amyloidogenic aggregation free-energy landscape.

    PubMed

    He, Xianglan; Giurleo, Jason T; Talaga, David S

    2010-01-08

    We combine atomic-force-microscopy particle-size-distribution measurements with earlier measurements on 1-anilino-8-naphthalene sulfonate, thioflavin T, and dynamic light scattering to develop a quantitative kinetic model for the aggregation of beta-lactoglobulin into amyloid. We directly compare our simulations to the population distributions provided by dynamic light scattering and atomic force microscopy. We combine species in the simulation according to structural type for comparison with fluorescence fingerprint results. The kinetic model of amyloidogenesis leads to an aggregation free-energy landscape. We define the roles of and propose a classification scheme for different oligomeric species based on their location in the aggregation free-energy landscape. We relate the different types of oligomers to the amyloid cascade hypothesis and the toxic oligomer hypothesis for amyloid-related diseases. We discuss existing kinetic mechanisms in terms of the different types of oligomers. We provide a possible resolution to the toxic oligomer-amyloid coincidence.

  8. Using local multiplicity to improve effect estimation from a hypothesis-generating pharmacogenetics study.

    PubMed

    Zou, W; Ouyang, H

    2016-02-01

    We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximum likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
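
    The shrinkage step at the heart of this kind of adjustment can be illustrated with a simple normal-normal hierarchical model: individual MLE effect estimates in a region are pulled toward the regional mean by an amount that depends on their standard errors and the estimated between-variant variance. A minimal empirical-Bayes sketch, not the authors' exact MEA procedure; the data are hypothetical.

```python
import numpy as np

def shrink_to_region(estimates, std_errors):
    """Empirical-Bayes shrinkage of per-variant MLE effect estimates toward the
    regional mean under a normal-normal hierarchical model."""
    estimates = np.asarray(estimates, dtype=float)
    se2 = np.asarray(std_errors, dtype=float) ** 2
    regional_mean = np.mean(estimates)
    # crude method-of-moments estimate of the between-variant variance tau^2
    tau2 = max(np.var(estimates, ddof=1) - np.mean(se2), 0.0)
    weight = tau2 / (tau2 + se2)            # 0 = shrink fully to region, 1 = keep the MLE
    return weight * estimates + (1.0 - weight) * regional_mean

# Hypothetical region: mostly null variants plus one extreme MLE from a small sample.
mle = np.array([0.05, -0.10, 0.02, 0.90, -0.04])
se = np.array([0.10,  0.12, 0.09, 0.45,  0.11])
print(np.round(shrink_to_region(mle, se), 3))
# The extreme, noisy estimate (0.90) is pulled strongly toward the regional mean,
# which is the behavior that reduces MSE on null effects after selection.
```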

  9. Review on how proficiency testing needs in Brazil are supplied by accredited providers by Cgcre

    NASA Astrophysics Data System (ADS)

    Moura, M. H.; Borges, R. M. H.

    2015-01-01

    Proficiency testing schemes are an important tool for quality assurance in measurement as well as for the harmonization of multilateral recognition arrangements for accreditation. The General Coordination for Accreditation (Cgcre) of INMETRO developed a new program to accredit proficiency testing providers in accordance with the International Standard ISO/IEC 17043. This work presents a review of the needs for proficiency testing schemes in Brazil and assesses how these needs are met by accredited providers.

  10. Semi-implicit iterative methods for low Mach number turbulent reacting flows: Operator splitting versus approximate factorization

    NASA Astrophysics Data System (ADS)

    MacArt, Jonathan F.; Mueller, Michael E.

    2016-12-01

    Two formally second-order accurate, semi-implicit, iterative methods for the solution of scalar transport-reaction equations are developed for Direct Numerical Simulation (DNS) of low Mach number turbulent reacting flows. The first is a monolithic scheme based on a linearly implicit midpoint method utilizing an approximately factorized exact Jacobian of the transport and reaction operators. The second is an operator splitting scheme based on the Strang splitting approach. The accuracy properties of these schemes, as well as their stability, cost, and the effect of chemical mechanism size on relative performance, are assessed in two one-dimensional test configurations comprising an unsteady premixed flame and an unsteady nonpremixed ignition, which have substantially different Damköhler numbers and relative stiffness of transport to chemistry. All schemes demonstrate their formal order of accuracy in the fully-coupled convergence tests. Compared to a (non-)factorized scheme with a diagonal approximation to the chemical Jacobian, the monolithic, factorized scheme using the exact chemical Jacobian is shown to be both more stable and more economical. This is due to an improved convergence rate of the iterative procedure, and the difference between the two schemes in convergence rate grows as the time step increases. The stability properties of the Strang splitting scheme are demonstrated to outpace those of Lie splitting and monolithic schemes in simulations at high Damköhler number; however, in this regime, the monolithic scheme using the approximately factorized exact Jacobian is found to be the most economical at practical CFL numbers. The performance of the schemes is further evaluated in a simulation of a three-dimensional, spatially evolving, turbulent nonpremixed planar jet flame.
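
    The splitting structure compared above can be illustrated on a toy linear system with non-commuting operators, where the exact solution is known and the formal orders (first order for Lie splitting, second order for Strang splitting) show up directly in a convergence test. A minimal sketch, not the paper's semi-implicit schemes:

```python
import numpy as np
from scipy.linalg import expm

# Toy linear "transport + reaction" system du/dt = (A + B) u with non-commuting
# 2x2 matrices, so the splitting errors do not vanish.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])     # skew part (stands in for transport)
B = np.array([[-1.0, 0.0], [0.0, -2.0]])    # diagonal decay (stands in for reaction)
u0 = np.array([1.0, 0.5])
t_end = 1.0
exact = expm((A + B) * t_end) @ u0

def lie(dt):
    u, n = u0.copy(), int(round(t_end / dt))
    eA, eB = expm(A * dt), expm(B * dt)
    for _ in range(n):
        u = eB @ (eA @ u)                    # full A step, then full B step
    return u

def strang(dt):
    u, n = u0.copy(), int(round(t_end / dt))
    eA2, eB = expm(A * dt / 2), expm(B * dt)
    for _ in range(n):
        u = eA2 @ (eB @ (eA2 @ u))           # half A, full B, half A
    return u

for dt in (0.1, 0.05, 0.025):
    print(f"dt = {dt:6.3f}  Lie error = {np.linalg.norm(lie(dt) - exact):.2e}  "
          f"Strang error = {np.linalg.norm(strang(dt) - exact):.2e}")
# Halving dt roughly halves the Lie error (first order) and quarters the Strang
# error (second order).
```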

  11. External quality assurance of HER2 fluorescence in situ hybridisation testing: results of a UK NEQAS pilot scheme.

    PubMed

    Bartlett, John M S; Ibrahim, Merdol; Jasani, Bharat; Morgan, John M; Ellis, Ian; Kay, Elaine; Magee, Hilary; Barnett, Sarah; Miller, Keith

    2007-07-01

    Trastuzumab provides clinical benefit for advanced and early breast cancer patients whose tumours over-express or have gene amplification of the HER2 oncogene. The UK National External Quality Assessment Scheme (NEQAS) for immunohistochemical testing was established to assess and improve the quality of HER2 immunohistochemical testing. However, until recently, no provision was available for HER2 fluorescence in situ hybridisation (FISH) testing. A pilot scheme was set up to review the performance of FISH testing in clinical diagnostic laboratories. FISH was performed in 6 reference and 31 participating laboratories using a cell line panel with known HER2 status. Using results from reference laboratories as a criterion for acceptable performance, 60% of all results returned by participants were appropriate and 78% either appropriate or acceptable. However, 22.4% of results returned were deemed inappropriate, including 13 cases (4.2%) where a misdiagnosis would have been made had these been clinical specimens. The results of three consecutive runs show that both reference laboratories and a proportion (about 25%) of routine clinical diagnostic centres can consistently achieve acceptable quality control of HER2 testing. Data from a significant proportion of participating laboratories show that further steps are required, including those taken via review of performance under schemes such as NEQAS, to improve quality of HER2 testing by FISH in the "real world".

  12. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    PubMed

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

    Storage is important for flood mitigation and non-point source pollution control. However, finding a cost-effective design scheme for storage tanks is complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains the preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible, and the optimization is fast based on the preliminary scheme. The optimized scheme is better than the preliminary scheme for reducing runoff and pollutant loads under a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Fisher information framework for time series modeling

    NASA Astrophysics Data System (ADS)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The probability density function of the coefficients of the working hypothesis is inferred, and a constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is established, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least square constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis, solely in terms of the observed data. Prediction cases are presented for time series obtained from (i) the Mackey-Glass delay-differential equation, (ii) one ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) one ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the Physionet online repository. These examples demonstrate the efficiency of the prediction model. Numerical examples for exemplary cases are provided.

  14. Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    A new computational analysis tool, the downscaling test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using downscaling tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to any order scheme.

  15. Debates—Hypothesis testing in hydrology: Introduction

    NASA Astrophysics Data System (ADS)

    Blöschl, Günter

    2017-03-01

    This paper introduces the papers in the "Debates—Hypothesis testing in hydrology" series. The four articles in the series discuss whether and how the process of testing hypotheses leads to progress in hydrology. Repeated experiments with controlled boundary conditions are rarely feasible in hydrology. Research is therefore not easily aligned with the classical scientific method of testing hypotheses. Hypotheses in hydrology are often enshrined in computer models which are tested against observed data. Testability may be limited due to model complexity and data uncertainty. All four articles suggest that hypothesis testing has contributed to progress in hydrology and is needed in the future. However, the procedure is usually not as systematic as the philosophy of science suggests. A greater emphasis on a creative reasoning process on the basis of clues and explorative analyses is therefore needed.

  16. Reasoning Maps: A Generally Applicable Method for Characterizing Hypothesis-Testing Behaviour. Research Report

    ERIC Educational Resources Information Center

    White, Brian

    2004-01-01

    This paper presents a generally applicable method for characterizing subjects' hypothesis-testing behaviour based on a synthesis that extends on previous work. Beginning with a transcript of subjects' speech and videotape of their actions, a Reasoning Map is created that depicts the flow of their hypotheses, tests, predictions, results, and…

  17. Why Is Test-Restudy Practice Beneficial for Memory? An Evaluation of the Mediator Shift Hypothesis

    ERIC Educational Resources Information Center

    Pyc, Mary A.; Rawson, Katherine A.

    2012-01-01

    Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness…

  18. Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation

    ERIC Educational Resources Information Center

    Ross, Steven J.; Mackey, Beth

    2015-01-01

    This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…

  19. Distrust and the positive test heuristic: dispositional and situated social distrust improves performance on the Wason rule discovery task.

    PubMed

    Mayo, Ruth; Alfasi, Dana; Schwarz, Norbert

    2014-06-01

    Feelings of distrust alert people not to take information at face value, which may influence their reasoning strategy. Using the Wason (1960) rule identification task, we tested whether chronic and temporary distrust increase the use of negative hypothesis testing strategies suited to falsify one's own initial hunch. In Study 1, participants who were low in dispositional trust were more likely to engage in negative hypothesis testing than participants high in dispositional trust. In Study 2, trust and distrust were induced through an alleged person-memory task. Paralleling the effects of chronic distrust, participants exposed to a single distrust-eliciting face were 3 times as likely to engage in negative hypothesis testing as participants exposed to a trust-eliciting face. In both studies, distrust increased negative hypothesis testing, which was associated with better performance on the Wason task. In contrast, participants' initial rule generation was not consistently affected by distrust. These findings provide first evidence that distrust can influence which reasoning strategy people adopt. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  20. Automated detection and quantification of residual brain tumor using an interactive computer-aided detection scheme

    NASA Astrophysics Data System (ADS)

    Gaffney, Kevin P.; Aghaei, Faranak; Battiste, James; Zheng, Bin

    2017-03-01

    Detection of residual brain tumor is important to evaluate the efficacy of brain cancer surgery, determine the optimal strategy of further radiation therapy if needed, and assess the ultimate prognosis of patients. Brain MR is a commonly used imaging modality for this task. In order to distinguish between residual tumor and surgery-induced scar tissue, two sets of MRI scans are conducted pre- and post-gadolinium contrast injection. Residual tumors are enhanced only in the post-contrast injection images. However, subjectively reading and quantifying these brain MR images makes it difficult to detect real residual tumor regions and to measure total residual tumor volume. In order to help solve this clinical difficulty, we developed and tested a new interactive computer-aided detection scheme, which consists of three consecutive image processing steps: 1) segmentation of the intracranial region, 2) image registration and subtraction, and 3) tumor segmentation and refinement. The scheme also includes a specially designed and implemented graphical user interface (GUI) platform. When using this scheme, two sets of pre- and post-contrast injection images are first automatically processed to detect and quantify residual tumor volume. Then, a user can visually examine segmentation results and conveniently guide the scheme to correct any detection or segmentation errors if needed. The scheme has been repeatedly tested using five cases. Due to the observed high performance and robustness of the testing results, the scheme is currently ready for conducting clinical studies and helping clinicians investigate the association between this quantitative image marker and outcome of patients.
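
    The registration-and-subtraction step lends itself to a compact illustration. The sketch below is a hypothetical simplification, not the authors' pipeline: it assumes the pre- and post-contrast volumes are already co-registered NumPy arrays and flags voxels whose enhancement exceeds an adaptive threshold.

        # Hypothetical subtraction-and-threshold sketch of the CAD idea.
        import numpy as np

        def residual_tumor_mask(pre, post, brain_mask, k=3.0):
            """Flag voxels whose post-contrast enhancement exceeds k standard
            deviations of the enhancement seen over the intracranial region."""
            diff = post.astype(float) - pre.astype(float)   # enhancement map
            normal = diff[brain_mask]                       # intracranial voxels
            thresh = normal.mean() + k * normal.std()       # adaptive threshold
            return (diff > thresh) & brain_mask             # candidate residual tumor

        def tumor_volume_ml(mask, voxel_volume_mm3):
            return mask.sum() * voxel_volume_mm3 / 1000.0   # mm^3 -> mL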

  1. The terminator "toy" chemistry test: A simple tool to assess errors in transport schemes

    DOE PAGES

    Lauritzen, P. H.; Conley, A. J.; Lamarque, J. -F.; ...

    2015-05-04

    This test extends the evaluation of transport schemes from prescribed advection of inert scalars to reactive species. The test consists of transporting two interacting chemical species in the Nair and Lauritzen 2-D idealized flow field. The sources and sinks for these two species are given by a simple, but non-linear, "toy" chemistry that represents combination (X + X → X2) and dissociation (X2 → X + X). This chemistry mimics photolysis-driven conditions near the solar terminator, where strong gradients in the spatial distribution of the species develop near its edge. Despite the large spatial variations in each species, the weighted sum XT = X + 2X2 should always be preserved at spatial scales at which molecular diffusion is excluded. The terminator test demonstrates how well the advection–transport scheme preserves linear correlations. Chemistry–transport (physics–dynamics) coupling can also be studied with this test. Examples of the consequences of this test are shown for illustration.
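
    The conservation property is easy to verify for the chemistry alone. The sketch below is illustrative (rate constants and time step are arbitrary, not the paper's values): because the combination and dissociation terms cancel in X + 2X2, any drift in XT observed in a full model run must come from the transport scheme or the coupling, not from the chemistry.

        # Toy terminator chemistry: combination X + X -> X2, dissociation X2 -> X + X.
        def toy_chemistry_step(X, X2, k1=1.0, k2=0.5, dt=1e-3):
            dX  = 2.0 * k2 * X2 - 2.0 * k1 * X * X
            dX2 = k1 * X * X - k2 * X2
            return X + dt * dX, X2 + dt * dX2

        X, X2 = 0.8, 0.1
        XT0 = X + 2.0 * X2                      # the linear invariant XT = X + 2*X2
        for _ in range(10000):
            X, X2 = toy_chemistry_step(X, X2)
        print(abs((X + 2.0 * X2) - XT0))        # ~0 up to round-off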

  2. An investigation of error characteristics and coding performance

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1993-01-01

    The first year's effort on NASA Grant NAG5-2006 was an investigation to characterize typical errors resulting from the EOS downlink. The analysis methods developed for this effort were used on test data from a March 1992 White Sands Terminal Test. The effectiveness of a concatenated coding scheme of a Reed-Solomon outer code and a convolutional inner code versus a Reed-Solomon-only code scheme has been investigated, as well as the effectiveness of a Periodic Convolutional Interleaver in dispersing errors of certain types. The work effort consisted of development of software that allows simulation studies with the appropriate coding schemes plus either simulated data with errors or actual data with errors. The software program is entitled Communication Link Error Analysis (CLEAN) and models downlink errors, forward error correcting schemes, and interleavers.
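
    The role of an interleaver in such a scheme can be illustrated with a simple block interleaver. The report studies a Periodic Convolutional Interleaver; this sketch is only meant to show how interleaving turns a channel burst into isolated symbol errors for the decoder, and is not the CLEAN software.

        # Toy block interleaver: write row by row, read column by column.
        def interleave(symbols, rows, cols):
            assert len(symbols) == rows * cols
            return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

        def deinterleave(symbols, rows, cols):
            assert len(symbols) == rows * cols
            return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

        data = list(range(20))              # 20 symbols arranged as a 4 x 5 block
        tx = interleave(data, 4, 5)
        tx[3:7] = ['E'] * 4                 # a 4-symbol burst error on the channel
        rx = deinterleave(tx, 4, 5)
        print(rx)                           # the burst is dispersed across the block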

  3. An Improved Flame Test for Qualitative Analysis Using a Multichannel UV-Visible Spectrophotometer

    ERIC Educational Resources Information Center

    Blitz, Jonathan P.; Sheeran, Daniel J.; Becker, Thomas L.

    2006-01-01

    Qualitative analysis schemes are used in undergraduate laboratory settings as a way to introduce equilibrium concepts and logical thinking. The main component of all qualitative analysis schemes is a flame test, as the color of light emitted from certain elements is distinctive and a flame photometer or spectrophotometer in each laboratory is…

  4. In Defense of the Play-Creativity Hypothesis

    ERIC Educational Resources Information Center

    Silverman, Irwin W.

    2016-01-01

    The hypothesis that pretend play facilitates the creative thought process in children has received a great deal of attention. In a literature review, Lillard et al. (2013, p. 8) concluded that the evidence for this hypothesis was "not convincing." This article focuses on experimental and training studies that have tested this hypothesis.…

  5. The frequentist implications of optional stopping on Bayesian hypothesis tests.

    PubMed

    Sanborn, Adam N; Hills, Thomas T

    2014-04-01

    Null hypothesis significance testing (NHST) is the most commonly used statistical methodology in psychology. The probability of achieving a value as extreme or more extreme than the statistic obtained from the data is evaluated, and if it is low enough, the null hypothesis is rejected. However, because common experimental practice often clashes with the assumptions underlying NHST, these calculated probabilities are often incorrect. Most commonly, experimenters use tests that assume that sample sizes are fixed in advance of data collection but then use the data to determine when to stop; in the limit, experimenters can use data monitoring to guarantee that the null hypothesis will be rejected. Bayesian hypothesis testing (BHT) provides a solution to these ills because the stopping rule used is irrelevant to the calculation of a Bayes factor. In addition, there are strong mathematical guarantees on the frequentist properties of BHT that are comforting for researchers concerned that stopping rules could influence the Bayes factors produced. Here, we show that these guaranteed bounds have limited scope and often do not apply in psychological research. Specifically, we quantitatively demonstrate the impact of optional stopping on the resulting Bayes factors in two common situations: (1) when the truth is a combination of the hypotheses, such as in a heterogeneous population, and (2) when a hypothesis is composite (taking multiple parameter values), such as the alternative hypothesis in a t-test. We found that, for these situations, while the Bayesian interpretation remains correct regardless of the stopping rule used, the choice of stopping rule can, in some situations, greatly increase the chance of experimenters finding evidence in the direction they desire. We suggest ways to control these frequentist implications of stopping rules on BHT.
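
    A small simulation makes the optional-stopping effect concrete. The sketch below is our illustration, not the authors' code: coin-flip data with a true null (p = 0.5), H1 giving p a uniform prior, and a stopping rule that quits as soon as BF10 exceeds 3. Under the uniform prior the marginal likelihood of k successes in n trials is 1/(n + 1), so the Bayes factor has a closed form.

        import math, random

        def bf10(k, n):
            # P(k | H1) = 1/(n+1) under a uniform prior; P(k | H0) = C(n,k) * 0.5**n
            return (1.0 / (n + 1)) / (math.comb(n, k) * 0.5 ** n)

        random.seed(1)
        hits = 0
        trials = 2000
        for _ in range(trials):                 # experiments where H0 is true
            k = n = 0
            while n < 200:
                n += 1
                k += random.random() < 0.5
                if bf10(k, n) > 3:              # optional stopping on "evidence"
                    hits += 1
                    break
        print(hits / trials)                    # fraction of null experiments reaching BF10 > 3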

  6. Self-Monitoring Symptoms in Glaucoma: A Feasibility Study of a Web-Based Diary Tool

    PubMed Central

    McDonald, Leanne; Glen, Fiona C.; Taylor, Deanna J.

    2017-01-01

    Purpose. Glaucoma patients annually spend only a few hours in an eye clinic but spend more than 5000 waking hours engaged in everything else. We propose that patients could self-monitor changes in visual symptoms, providing valuable between-clinic information; we test the hypothesis that this is feasible using a web-based diary tool. Methods. Ten glaucoma patients with a range of visual field loss took part in an eight-week pilot study. After completing a series of baseline tests, volunteers were prompted to monitor symptoms every three days and complete a diary about their vision during daily life using a bespoke web-based diary tool. Response to an end-of-study questionnaire about the usefulness of the exercise was a main outcome measure. Results. Eight of the 10 patients rated the monitoring scheme to be "valuable" or "very valuable." The completion rate for items was excellent (96%). Themes from a qualitative synthesis of the diary entries related to behavioural aspects of glaucoma. One patient concluded that a constant focus on monitoring symptoms led to negative feelings. Conclusions. A web-based diary tool for monitoring self-reported glaucoma symptoms is practically feasible. The tool must be carefully designed to ensure participants are benefitting and that it is not increasing anxiety. PMID:28546876

  7. Ecology and geography of avian influenza (HPAI H5N1) transmission in the Middle East and northeastern Africa

    PubMed Central

    Williams, Richard AJ; Peterson, A Townsend

    2009-01-01

    Background The emerging highly pathogenic avian influenza strain H5N1 ("HPAI-H5N1") has spread broadly in the past decade, and is now the focus of considerable concern. We tested the hypothesis that spatial distributions of HPAI-H5N1 cases are related consistently and predictably to coarse-scale environmental features in the Middle East and northeastern Africa. We used ecological niche models to relate virus occurrences to 8 km resolution digital data layers summarizing parameters of monthly surface reflectance and landform. Predictive challenges included a variety of spatial stratification schemes in which models were challenged to predict case distributions in broadly unsampled areas. Results In almost all tests, HPAI-H5N1 cases were indeed occurring under predictable sets of environmental conditions, generally predicted absent from areas with low NDVI values and minimal seasonal variation, and present in areas with a broad range of and appreciable seasonal variation in NDVI values. Although we documented significant predictive ability of our models, even between our study region and West Africa, case occurrences in the Arabian Peninsula appear to follow a distinct environmental regime. Conclusion Overall, we documented a variable environmental "fingerprint" for areas suitable for HPAI-H5N1 transmission. PMID:19619336

  8. Correntropy-based partial directed coherence for testing multivariate Granger causality in nonlinear processes

    NASA Astrophysics Data System (ADS)

    Kannan, Rohit; Tangirala, Arun K.

    2014-06-01

    Identification of directional influences in multivariate systems is of prime importance in several applications of engineering and sciences such as plant topology reconstruction, fault detection and diagnosis, and neurosciences. A spectrum of related directionality measures, ranging from linear measures such as partial directed coherence (PDC) to nonlinear measures such as transfer entropy, has emerged over the past two decades. The PDC-based technique is simple and effective, but being a linear directionality measure has limited applicability. On the other hand, transfer entropy, despite being a robust nonlinear measure, is computationally intensive and practically implementable only for bivariate processes. The objective of this work is to develop a nonlinear directionality measure, termed KPDC, that possesses the simplicity of PDC but is still applicable to nonlinear processes. The technique is founded on a nonlinear measure called correntropy, a recently proposed generalized correlation measure. The proposed method is equivalent to constructing PDC in a kernel space where the PDC is estimated using a vector autoregressive model built on correntropy. A consistent estimator of the KPDC is developed and important theoretical results are established. A permutation scheme combined with the sequential Bonferroni procedure is proposed for testing hypotheses on the absence of causality. It is demonstrated through several case studies that the proposed methodology effectively detects Granger causality in nonlinear processes.
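
    Correntropy itself is simple to compute. The sketch below only illustrates the centered correntropy between two series with a Gaussian kernel (kernel width and data are our choices); it is not the full KPDC estimator, which additionally fits a vector autoregressive model in the induced kernel space.

        import numpy as np

        def gaussian_kernel(u, sigma):
            return np.exp(-u ** 2 / (2.0 * sigma ** 2))

        def centered_correntropy(x, y, sigma=1.0):
            x, y = np.asarray(x, float), np.asarray(y, float)
            v = gaussian_kernel(x - y, sigma).mean()                 # E[k(X - Y)]
            cross = gaussian_kernel(x[:, None] - y[None, :], sigma)  # k(x_i - y_j), all pairs
            return v - cross.mean()                                  # subtract product-of-marginals term

        rng = np.random.default_rng(0)
        x = rng.standard_normal(500)
        print(centered_correntropy(x, x + 0.1 * rng.standard_normal(500)))  # strongly related
        print(centered_correntropy(x, rng.standard_normal(500)))            # near zero if independent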

  9. Improved numerical methods for turbulent viscous flows aerothermal modeling program, phase 2

    NASA Technical Reports Server (NTRS)

    Karki, K. C.; Patankar, S. V.; Runchal, A. K.; Mongia, H. C.

    1988-01-01

    The details of a study to develop accurate and efficient numerical schemes to predict complex flows are described. In this program, several discretization schemes were evaluated using simple test cases. This assessment led to the selection of three schemes for an in-depth evaluation based on two-dimensional flows. The scheme with the superior overall performance was incorporated in a computer program for three-dimensional flows. To improve the computational efficiency, the selected discretization scheme was combined with a direct solution approach in which the fluid flow equations are solved simultaneously rather than sequentially.

  10. Examination of Secondary School Seventh Graders' Proof Skills and Proof Schemes

    ERIC Educational Resources Information Center

    Sen, Ceylan; Guler, Gursel

    2015-01-01

    The aim of this study is to examine current proof making skills of secondary school seventh graders using proof schemes. Data of the study were collected in two phases. Initially, Proof Schemes Test, which was developed by the researchers, was administrated to 250 seventh grade students from eight secondary schools, which were chosen randomly. The…

  11. Metabolic Plasticity in Cancer Cells: Reconnecting Mitochondrial Function to Cancer Control

    PubMed Central

    Ramanujan, V. Krishnan

    2015-01-01

    Anomalous increase in glycolytic activity defines one of the key metabolic alterations in cancer cells. A realization of this feature has led to critical advancements in cancer detection techniques such as positron emission tomography (PET) as well as a number of therapeutic avenues targeting the key glycolytic steps within a cancer cell. A normal healthy cell’s survival relies on a sensitive balance between the primordial glycolysis and a more regulated mitochondrial bioenergetics. The salient difference between these two bioenergetics pathways is that oxygen availability is an obligatory requirement for mitochondrial pathway while glycolysis can function without oxygen. Early observations that some cancer cells up-regulate glycolytic activity even in the presence of oxygen (aerobic glycolysis) led to a hypothesis that such an altered cancer cell metabolism stems from inherent mitochondrial dysfunction. While a general validity of this hypothesis is still being debated, a number of recent research efforts have yielded clarity on the physiological origins of this aerobic glycolysis phenotype in cancer cells. Building on these recent studies, we present a generalized scheme of cancer cell metabolism and propose a novel hypothesis that might rationalize new avenues of cancer intervention. PMID:26457230

  12. Combined experimental and numerical kinetic characterization of NR vulcanized with sulphur, N terbutyl, 2 benzothiazylsulphenamide and N,N diphenyl guanidine

    NASA Astrophysics Data System (ADS)

    Milani, G.; Hanel, T.; Donetti, R.; Milani, F.

    2016-06-01

    The paper presents the final results of a comprehensive experimental and numerical analysis aimed at deeply investigating the behavior of Natural Rubber (NR) vulcanized with sulphur in the presence of different accelerators during standard rheometer tests. NR in the presence of sulphur and two different accelerators (DPG and TBBS) in various concentrations is investigated, changing the curing temperature in the range 150-180°C and obtaining rheometer curves with a step of 10°C. Sulphur-TBBS concentrations considered are 1-1, 1-3, 3-3 and 3-1, with DPG at 1-4 phr respectively. A total of 48 experimental rheometer curves is thus obtained. To fit experimental data, the general reaction scheme proposed by Han and co-workers for vulcanized sulphur NR is re-adapted and suitably modified taking into account the single contributions of the different accelerators. Chain reactions initiated by the formation of macro-compounds responsible for the formation of the unmatured crosslinked polymer are accounted for. In the presence of two accelerators, reactions are assumed to proceed in parallel, making the practically effective hypothesis that there is no interaction between the two accelerators. From the simplified kinetic scheme adopted, a closed form solution is found for the crosslink density, with the only limitation that the induction period is excluded from computations. For each experimented case on the same blend, reaction kinetic constants provided by the model are utilized to deduce their trend in the Arrhenius space, also outside the temperature range inspected. Rather close linearity is found in the majority of the cases. A comparative analysis is carefully conducted among the constants at the different concentrations of S, TBBS and DPG investigated, allowing a prediction of curing behavior at any vulcanization temperature and with concentrations not experimentally tested, without the need for additional costly experimentation.
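
    The Arrhenius-space step can be sketched briefly. The numbers below are placeholders, not the paper's fitted constants: given kinetic constants k(T) identified at several curing temperatures, a linear fit of ln k against 1/T yields the activation energy and lets the model interpolate to untested temperatures.

        import numpy as np

        T = np.array([150.0, 160.0, 170.0, 180.0]) + 273.15   # curing temperatures, K
        k = np.array([0.012, 0.021, 0.035, 0.058])            # illustrative kinetic constants, 1/s

        slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)  # ln k = ln A - Ea / (R T)
        Ea = -slope * 8.314                                    # activation energy, J/mol

        def k_at(T_celsius):
            return np.exp(intercept + slope / (T_celsius + 273.15))

        print(Ea / 1000.0, k_at(165.0))                        # Ea in kJ/mol, k at 165 °C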

  13. Combined experimental and numerical kinetic characterization of NR vulcanized with sulphur, N terbutyl, 2 benzothiazylsulphenamide and N,N diphenyl guanidine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milani, G., E-mail: gabriele.milani@polimi.it; Hanel, T.; Donetti, R.

    2016-06-08

    The paper presents the final results of a comprehensive experimental and numerical analysis aimed at deeply investigating the behavior of Natural Rubber (NR) vulcanized with sulphur in the presence of different accelerators during standard rheometer tests. NR in the presence of sulphur and two different accelerators (DPG and TBBS) in various concentrations is investigated, changing the curing temperature in the range 150-180°C and obtaining rheometer curves with a step of 10°C. Sulphur-TBBS concentrations considered are 1-1, 1-3, 3-3 and 3-1, with DPG at 1-4 phr respectively. A total of 48 experimental rheometer curves is thus obtained. To fit experimental data, the general reaction scheme proposed by Han and co-workers for vulcanized sulphur NR is re-adapted and suitably modified taking into account the single contributions of the different accelerators. Chain reactions initiated by the formation of macro-compounds responsible for the formation of the unmatured crosslinked polymer are accounted for. In the presence of two accelerators, reactions are assumed to proceed in parallel, making the practically effective hypothesis that there is no interaction between the two accelerators. From the simplified kinetic scheme adopted, a closed form solution is found for the crosslink density, with the only limitation that the induction period is excluded from computations. For each experimented case on the same blend, reaction kinetic constants provided by the model are utilized to deduce their trend in the Arrhenius space, also outside the temperature range inspected. Rather close linearity is found in the majority of the cases. A comparative analysis is carefully conducted among the constants at the different concentrations of S, TBBS and DPG investigated, allowing a prediction of curing behavior at any vulcanization temperature and with concentrations not experimentally tested, without the need for additional costly experimentation.

  14. TRANSGENIC MOUSE MODELS AND PARTICULATE MATTER (PM)

    EPA Science Inventory

    The hypothesis to be tested is that metal catalyzed oxidative stress can contribute to the biological effects of particulate matter. We acquired several transgenic mouse strains to test this hypothesis. Breeding of the mice was accomplished by Duke University. Particles employed ...

  15. Hypothesis Testing Using the Films of the Three Stooges

    ERIC Educational Resources Information Center

    Gardner, Robert; Davidson, Robert

    2010-01-01

    The use of The Three Stooges' films as a source of data in an introductory statistics class is described. The Stooges' films are separated into three populations. Using these populations, students may conduct hypothesis tests with data they collect.

  16. Hybridisation is associated with increased fecundity and size in invasive taxa: meta-analytic support for the hybridisation-invasion hypothesis

    PubMed Central

    Hovick, Stephen M; Whitney, Kenneth D

    2014-01-01

    The hypothesis that interspecific hybridisation promotes invasiveness has received much recent attention, but tests of the hypothesis can suffer from important limitations. Here, we provide the first systematic review of studies experimentally testing the hybridisation-invasion (H-I) hypothesis in plants, animals and fungi. We identified 72 hybrid systems for which hybridisation has been putatively associated with invasiveness, weediness or range expansion. Within this group, 15 systems (comprising 34 studies) experimentally tested performance of hybrids vs. their parental species and met our other criteria. Both phylogenetic and non-phylogenetic meta-analyses demonstrated that wild hybrids were significantly more fecund and larger than their parental taxa, but did not differ in survival. Resynthesised hybrids (which typically represent earlier generations than do wild hybrids) did not consistently differ from parental species in fecundity, survival or size. Using meta-regression, we found that fecundity increased (but survival decreased) with generation in resynthesised hybrids, suggesting that natural selection can play an important role in shaping hybrid performance – and thus invasiveness – over time. We conclude that the available evidence supports the H-I hypothesis, with the caveat that our results are clearly driven by tests in plants, which are more numerous than tests in animals and fungi. PMID:25234578

  17. The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.

    PubMed

    Lash, Timothy L

    2017-09-15

    In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Incentive payments are not related to expected health gain in the pay for performance scheme for UK primary care: cross-sectional analysis

    PubMed Central

    2012-01-01

    Background The General Medical Services primary care contract for the United Kingdom financially rewards performance in 19 clinical areas, through the Quality and Outcomes Framework. Little is known about how best to determine the size of financial incentives in pay-for-performance schemes. Our aim was to test the hypothesis that performance indicators with larger population health benefits receive larger financial incentives. Methods We performed cross-sectional analyses to quantify associations between the size of financial incentives and expected health gain in the 2004 and 2006 versions of the Quality and Outcomes Framework. We used non-parametric two-sided Spearman rank correlation tests. Health gain was measured in expected lives saved in one year and in quality adjusted life years. For each quality indicator in an average-sized general practice we tested for associations, first, between the marginal increase in payment and the health gain resulting from a one percentage point improvement in performance and, second, between total payment and the health gain at the performance threshold for maximum payment. Results Evidence for lives saved or quality adjusted life years gained was found for 28 indicators accounting for 41% of the total incentive payments. No statistically significant associations were found between the expected health gain and incentive gained from a marginal 1% increase in performance in either the 2004 or 2006 version of the Quality and Outcomes Framework. In addition, no associations were found between the size of financial payment for achievement of an indicator and the expected health gain at the performance threshold for maximum payment, measured in lives saved or quality adjusted life years. Conclusions In this subgroup of indicators the financial incentives were not aligned to maximise health gain. This disconnection between incentive and expected health gain risks supporting clinical activities that are only marginally effective, at the expense of more effective activities receiving lower incentives. When designing pay-for-performance programmes, decisions about the size of the financial incentive attached to an indicator should be informed by information on the health gain to be expected from that indicator. PMID:22507660
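
    The core analysis is a rank correlation between incentive size and expected health gain across indicators. A minimal sketch follows; the numbers are placeholders, not the study's data, and the study additionally repeated the test for marginal payments and for both framework versions.

        from scipy.stats import spearmanr

        payment_per_indicator = [1050, 660, 330, 980, 420, 770, 550, 890]   # GBP, illustrative
        expected_health_gain  = [0.8, 1.5, 0.2, 0.4, 2.1, 0.9, 0.3, 1.1]    # QALYs, illustrative

        rho, p_value = spearmanr(payment_per_indicator, expected_health_gain)
        print(rho, p_value)   # a p-value well above 0.05 would indicate no detectable association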

  19. Rank score and permutation testing alternatives for regression quantile estimates

    USGS Publications Warehouse

    Cade, B.S.; Richards, J.D.; Mielke, P.W.

    2006-01-01

    Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic distributed as a χ2 random variable with q degrees of freedom (where q parameters are constrained by H0) and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvements in Type I errors over the T-test for models with > 2 parameters, smaller n, and more extreme quantiles but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors, but the F-test provided some power at smaller n and extreme quantiles when the T-test had no power because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles. Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.

  20. The Impact of Economic Factors and Acquisition Reforms on the Cost of Defense Weapon Systems

    DTIC Science & Technology

    2006-03-01

    To test for homoskedasticity, the Breusch-Pagan test is employed. The null hypothesis of the Breusch-Pagan test is constant variance. Using the Breusch-Pagan results reported for the percentage cost-overrun models (Table 19 of the report), the prob > chi2 value is greater than α = .05; therefore we fail to reject the null hypothesis of constant variance.
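
    For readers unfamiliar with the procedure, a hedged sketch of a Breusch-Pagan check with statsmodels follows; the data here are synthetic and homoskedastic by construction, not the report's cost-overrun data.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.diagnostic import het_breuschpagan

        rng = np.random.default_rng(0)
        X = sm.add_constant(rng.normal(size=(200, 2)))
        y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=200)   # constant-variance errors

        resid = sm.OLS(y, X).fit().resid
        lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
        print(lm_pvalue)   # a value above 0.05 means we fail to reject constant variance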

  1. PBT assessment under REACH: Screening for low aquatic bioaccumulation with QSAR classifications based on physicochemical properties to replace BCF in vivo testing on fish.

    PubMed

    Nendza, Monika; Kühne, Ralph; Lombardo, Anna; Strempel, Sebastian; Schüürmann, Gerrit

    2018-03-01

    Aquatic bioconcentration factors (BCFs) are critical in PBT (persistent, bioaccumulative, toxic) and risk assessment of chemicals. High costs and use of more than 100 fish per standard BCF study (OECD 305) call for alternative methods to replace as much in vivo testing as possible. The BCF waiving scheme is a screening tool combining QSAR classifications based on physicochemical properties related to the distribution (hydrophobicity, ionisation), persistence (biodegradability, hydrolysis), solubility and volatility (Henry's law constant) of substances in water bodies and aquatic biota to predict substances with low aquatic bioaccumulation (nonB, BCF<2000). The BCF waiving scheme was developed with a dataset of reliable BCFs for 998 compounds and externally validated with another 181 substances. It performs with 100% sensitivity (no false negatives), >50% efficacy (waiving potential), and complies with the OECD principles for valid QSARs. The chemical applicability domain of the BCF waiving scheme is given by the structures of the training set, with some compound classes explicitly excluded like organometallics, poly- and perfluorinated compounds, aromatic triphenylphosphates, surfactants. The prediction confidence of the BCF waiving scheme is based on applicability domain compliance, consensus modelling, and the structural similarity with known nonB and B/vB substances. Compounds classified as nonB by the BCF waiving scheme are candidates for waiving of BCF in vivo testing on fish due to low concern with regard to the B criterion. The BCF waiving scheme supports the 3Rs with a possible reduction of >50% of BCF in vivo testing on fish. If the target chemical is outside the applicability domain of the BCF waiving scheme or not classified as nonB, further assessments with in silico, in vitro or in vivo methods are necessary to either confirm or reject bioaccumulative behaviour. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. A simple algorithm to improve the performance of the WENO scheme on non-uniform grids

    NASA Astrophysics Data System (ADS)

    Huang, Wen-Feng; Ren, Yu-Xin; Jiang, Xiong

    2018-02-01

    This paper presents a simple approach for improving the performance of the weighted essentially non-oscillatory (WENO) finite volume scheme on non-uniform grids. This technique relies on the reformulation of the fifth-order WENO-JS (WENO scheme presented by Jiang and Shu in J. Comput. Phys. 126:202-228, 1995) scheme designed on uniform grids in terms of one cell-averaged value and its left and/or right interfacial values of the dependent variable. The effect of grid non-uniformity is taken into consideration by a proper interpolation of the interfacial values. On non-uniform grids, the proposed scheme is much more accurate than the original WENO-JS scheme, which was designed for uniform grids. When the grid is uniform, the resulting scheme reduces to the original WENO-JS scheme. In the meantime, the proposed scheme is computationally much more efficient than the fifth-order WENO scheme designed specifically for the non-uniform grids. A number of numerical test cases are simulated to verify the performance of the present scheme.
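
    For reference, the baseline uniform-grid WENO-JS reconstruction that the paper reformulates can be written compactly. The sketch below reconstructs the left state at an interface from five cell averages using the standard smoothness indicators and linear weights; the non-uniform-grid interpolation of interfacial values proposed in the paper is not reproduced here.

        # Fifth-order WENO-JS left-biased reconstruction at the interface i+1/2
        # from cell averages v0..v4 centered on cell i (= v2).
        def weno5_left(v0, v1, v2, v3, v4, eps=1e-6):
            # candidate third-order stencil reconstructions
            p0 = (2*v0 - 7*v1 + 11*v2) / 6.0
            p1 = (-v1 + 5*v2 + 2*v3) / 6.0
            p2 = (2*v2 + 5*v3 - v4) / 6.0
            # smoothness indicators
            b0 = 13/12*(v0 - 2*v1 + v2)**2 + 1/4*(v0 - 4*v1 + 3*v2)**2
            b1 = 13/12*(v1 - 2*v2 + v3)**2 + 1/4*(v1 - v3)**2
            b2 = 13/12*(v2 - 2*v3 + v4)**2 + 1/4*(3*v2 - 4*v3 + v4)**2
            # nonlinear weights from the linear weights 1/10, 6/10, 3/10
            a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
            s = a0 + a1 + a2
            return (a0*p0 + a1*p1 + a2*p2) / s

        print(weno5_left(1.0, 1.0, 1.0, 1.0, 1.0))   # smooth constant data: returns 1.0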

  3. An Inferential Confidence Interval Method of Establishing Statistical Equivalence that Corrects Tryon's (2001) Reduction Factor

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2008-01-01

    Evidence of group matching frequently takes the form of a nonsignificant test of statistical difference. Theoretical hypotheses of no difference are also tested in this way. These practices are flawed in that null hypothesis statistical testing provides evidence against the null hypothesis and failing to reject H[subscript 0] is not evidence…

  4. Effects of Item Exposure for Conventional Examinations in a Continuous Testing Environment.

    ERIC Educational Resources Information Center

    Hertz, Norman R.; Chinn, Roberta N.

    This study explored the effect of item exposure on two conventional examinations administered as computer-based tests. A principal hypothesis was that item exposure would have little or no effect on average difficulty of the items over the course of an administrative cycle. This hypothesis was tested by exploring conventional item statistics and…

  5. Directional and Non-directional Hypothesis Testing: A Survey of SIG Members, Journals, and Textbooks.

    ERIC Educational Resources Information Center

    McNeil, Keith

    The use of directional and nondirectional hypothesis testing was examined from the perspectives of textbooks, journal articles, and members of editorial boards. Three widely used statistical texts were reviewed in terms of how directional and nondirectional tests of significance were presented. Texts reviewed were written by: (1) D. E. Hinkle, W.…

  6. Development and feasibility testing of the Pediatric Emergency Discharge Interaction Coding Scheme.

    PubMed

    Curran, Janet A; Taylor, Alexandra; Chorney, Jill; Porter, Stephen; Murphy, Andrea; MacPhee, Shannon; Bishop, Andrea; Haworth, Rebecca

    2017-08-01

    Discharge communication is an important aspect of high-quality emergency care. This study addresses the gap in knowledge on how to describe discharge communication in a paediatric emergency department (ED). The objective of this feasibility study was to develop and test a coding scheme to characterize discharge communication between health-care providers (HCPs) and caregivers who visit the ED with their children. The Pediatric Emergency Discharge Interaction Coding Scheme (PEDICS) and coding manual were developed following a review of the literature and an iterative refinement process involving HCP observations, inter-rater assessments and team consensus. The coding scheme was pilot-tested through observations of HCPs across a range of shifts in one urban paediatric ED. Overall, 329 patient observations were carried out across 50 observational shifts. Inter-rater reliability was evaluated in 16% of the observations. The final version of the PEDICS contained 41 communication elements. Kappa scores were greater than .60 for the majority of communication elements. The most frequently observed communication elements were under the Introduction node and the least frequently observed were under the Social Concerns node. HCPs initiated the majority of the communication. The Pediatric Emergency Discharge Interaction Coding Scheme addresses an important gap in the discharge communication literature. The tool is useful for mapping patterns of discharge communication between HCPs and caregivers. Results from our pilot test identified deficits in specific areas of discharge communication that could impact adherence to discharge instructions. The PEDICS would benefit from further testing with a different sample of HCPs. © 2017 The Authors. Health Expectations Published by John Wiley & Sons Ltd.
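
    The inter-rater check reported above (kappa greater than .60 for most elements) can be reproduced element by element with a standard kappa computation. The codes below are made up for illustration; they are not the study's observations.

        from sklearn.metrics import cohen_kappa_score

        # presence/absence codes for one communication element from two observers
        rater_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
        rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
        print(cohen_kappa_score(rater_a, rater_b))   # values above .60 would count as acceptable here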

  7. The Feminization of School Hypothesis Called into Question among Junior and High School Students

    ERIC Educational Resources Information Center

    Verniers, Catherine; Martinot, Delphine; Dompnier, Benoît

    2016-01-01

    Background: The feminization of school hypothesis suggests that boys underachieve in school compared to girls because school rewards feminine characteristics that are at odds with boys' masculine features. Aims: The feminization of school hypothesis lacks empirical evidence. The aim of this study was to test this hypothesis by examining the extent…

  8. Supporting shared hypothesis testing in the biomedical domain.

    PubMed

    Agibetov, Asan; Jiménez-Ruiz, Ernesto; Ondrésik, Marta; Solimando, Alessandro; Banerjee, Imon; Guerrini, Giovanna; Catalano, Chiara E; Oliveira, Joaquim M; Patanè, Giuseppe; Reis, Rui L; Spagnuolo, Michela

    2018-02-08

    The pathogenesis of inflammatory diseases can be tracked by studying the causality relationships among the factors contributing to their development. We could, for instance, hypothesize on the connections of the pathogenesis outcomes to the observed conditions. To prove such causal hypotheses, we would need a full understanding of the causal relationships, and we would have to provide all the necessary evidence to support our claims. In practice, however, we might not possess all the background knowledge on the causality relationships, and we might be unable to collect all the evidence to prove our hypotheses. In this work we propose a methodology for translating biological knowledge on causality relationships of biological processes and their effects on conditions into a computational framework for hypothesis testing. The methodology consists of two main points: hypothesis graph construction from the formalization of the background knowledge on causality relationships, and confidence measurement in a causality hypothesis as a normalized weighted path computation in the hypothesis graph. In this framework, we can simulate the collection of evidence and assess confidence in a causality hypothesis by measuring it in proportion to the amount of available knowledge and collected evidence. We evaluate our methodology on a hypothesis graph that represents both contributing factors which may cause cartilage degradation and the factors which might be caused by cartilage degradation during osteoarthritis. Hypothesis graph construction has proven to be robust to the addition of potentially contradictory information on simultaneously positive and negative effects. The obtained confidence measures for the specific causality hypotheses have been validated by our domain experts and correspond closely to their subjective assessments of confidence in the investigated hypotheses. Overall, our methodology for a shared hypothesis testing framework exhibits important properties that researchers will find useful in literature review for their experimental studies, in planning and prioritizing evidence collection procedures, and in testing their hypotheses with different depths of knowledge on causal dependencies of biological processes and their effects on the observed conditions.
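
    The "confidence as a normalized weighted path" idea can be pictured with a toy graph. The edges, weights and the product-of-evidence scoring below are our simplification for illustration, not the authors' exact formulation.

        import networkx as nx

        g = nx.DiGraph()
        # edge weight = normalized strength of evidence for that causal link (0..1)
        g.add_edge("inflammation", "cartilage_degradation", weight=0.7)
        g.add_edge("cartilage_degradation", "joint_pain", weight=0.4)
        g.add_edge("inflammation", "joint_pain", weight=0.2)

        def hypothesis_confidence(graph, cause, effect):
            """Score a causal hypothesis as the best product of edge evidence
            along any directed path from cause to effect."""
            best = 0.0
            for path in nx.all_simple_paths(graph, cause, effect):
                score = 1.0
                for u, v in zip(path, path[1:]):
                    score *= graph[u][v]["weight"]
                best = max(best, score)
            return best

        print(hypothesis_confidence(g, "inflammation", "joint_pain"))   # 0.7 * 0.4 = 0.28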

  9. Pension reforms in Hong Kong: using residual and collaborative strategies to deal with the government's financial responsibility in providing retirement protection.

    PubMed

    Yu, Sam Wai-Kam

    2008-01-01

    In 2000, the Hong Kong government introduced the first compulsory retirement saving scheme intended to protect the entire workforce, the Mandatory Provident Fund (MPF). Prior to the introduction of this scheme, the government's main measure for giving financial protection to retirees was the Comprehensive Social Security Assistance (CSSA) scheme, which is a noncontributory, means-tested financial assistance scheme. This paper studies the government's attempt to introduce the MPF on top of the CSSA scheme as a means to illustrate how governments might address their financial responsibilities in providing pension schemes by adopting both the residual strategy-centered reform approach and the collaborative strategy-centered reform approach. The former approach is concerned with developing noncontributory schemes using residual strategies, and the latter is concerned with developing contributory schemes using collaborative strategies. The paper shows the difficulties involved in carrying out these two reform approaches simultaneously.

  10. A Continuing Search for a Near-Perfect Numerical Flux Scheme. Part 1; [AUSM+

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    1994-01-01

    While enjoying demonstrated improvement in accuracy, efficiency, and robustness over existing schemes, the Advection Upstream Splitting Scheme (AUSM) was found to have some deficiencies in extreme cases. This recent progress towards improving the AUSM while retaining its advantageous features is described. The new scheme, termed AUSM+, features: unification of velocity and Mach number splitting; exact capture of a single stationary shock; and improvement in accuracy. A general construction of the AUSM+ scheme is laid out, and then the focus is on the analysis of the scheme and its mathematical properties, heretofore unreported. Monotonicity and positivity are proved, and a CFL-like condition is given for first- and second-order schemes and for generalized curvilinear co-ordinates. Finally, results of numerical tests on many problems are given to confirm the capability and improvements on a variety of problems, including those failed by prominent schemes.

  11. The limits to pride: A test of the pro-anorexia hypothesis.

    PubMed

    Cornelius, Talea; Blanton, Hart

    2016-01-01

    Many social psychological models propose that positive self-conceptions promote self-esteem. An extreme version of this hypothesis is advanced in "pro-anorexia" communities: identifying with anorexia, in conjunction with disordered eating, can lead to higher self-esteem. The current study empirically tested this hypothesis. Results challenge the pro-anorexia hypothesis. Although those with higher levels of pro-anorexia identification trended towards higher self-esteem with increased disordered eating, this did not overcome the strong negative main effect of pro-anorexia identification. These data suggest a more effective strategy for promoting self-esteem is to encourage rejection of disordered eating and an anorexic identity.

  12. Does the Slow-Growth, High-Mortality Hypothesis Apply Below Ground?

    PubMed

    Hourston, James E; Bennett, Alison E; Johnson, Scott N; Gange, Alan C

    2016-01-01

    Belowground tri-trophic study systems present a challenging environment in which to study plant-herbivore-natural enemy interactions. For this reason, belowground examples are rarely available for testing general ecological theories. To redress this imbalance, we present, for the first time, data on a belowground tri-trophic system to test the slow-growth, high-mortality hypothesis. We investigated whether the differing performance of entomopathogenic nematodes (EPNs) in controlling the common pest black vine weevil Otiorhynchus sulcatus could be linked to differently resistant cultivars of the red raspberry Rubus idaeus. The O. sulcatus larvae recovered from R. idaeus plants showed significantly slower growth and higher mortality on the Glen Rosa cultivar, relative to the more commercially favored Glen Ample cultivar, creating a convenient system for testing this hypothesis. Heterorhabditis megidis was found to be less effective at controlling O. sulcatus than Steinernema kraussei, but conformed to the hypothesis. However, S. kraussei maintained high levels of O. sulcatus mortality regardless of how larval growth was influenced by R. idaeus cultivar. We link this to direct effects that S. kraussei had on reducing O. sulcatus larval mass, indicating potential sub-lethal effects of S. kraussei, which the slow-growth, high-mortality hypothesis does not account for. Possible origins of these sub-lethal effects of EPN infection and how they may impact on a hypothesis designed and tested with aboveground predator and parasitoid systems are discussed.

  13. Impact of WRF model PBL schemes on air quality simulations over Catalonia, Spain.

    PubMed

    Banks, R F; Baldasano, J M

    2016-12-01

    Here we analyze the impact of four planetary boundary-layer (PBL) parametrization schemes from the Weather Research and Forecasting (WRF) numerical weather prediction model on simulations of meteorological variables and predicted pollutant concentrations from an air quality forecast system (AQFS). The current setup of the Spanish operational AQFS, CALIOPE, is composed of the WRF-ARW V3.5.1 meteorological model tied to the Yonsei University (YSU) PBL scheme, the HERMES v2 emissions model, the CMAQ V5.0.2 chemical transport model, and dust outputs from BSC-DREAM8bv2. We test the performance of the YSU scheme against the Asymmetric Convective Model Version 2 (ACM2), Mellor-Yamada-Janjic (MYJ), and Bougeault-Lacarrère (BouLac) schemes. The one-day diagnostic case study is selected to represent the most frequent synoptic condition in the northeast Iberian Peninsula during spring 2015: regional recirculations. It is shown that the ACM2 PBL scheme performs well with daytime PBL height, as validated against estimates retrieved using a micro-pulse lidar system (mean bias = -0.11 km). In turn, the BouLac scheme showed WRF-simulated air and dew point temperatures closer to METAR surface meteorological observations. Results are more ambiguous when simulated pollutant concentrations from CMAQ are validated against urban, suburban, and rural background network stations. The ACM2 scheme showed the lowest mean bias (-0.96 μg m-3) with respect to surface ozone at urban stations, while the YSU scheme performed best with simulated nitrogen dioxide (-6.48 μg m-3). The poorest results were with simulated particulate matter, with similar results found with all schemes tested. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  14. Health seeking behavior in karnataka: does micro-health insurance matter?

    PubMed

    Savitha, S; Kiran, Kb

    2013-10-01

    Health seeking behaviour in the event of illness is influenced by the availability of good health care facilities and health care financing mechanisms. Micro health insurance not only promotes formal health care utilization at private providers but also reduces the cost of care by providing the insurance coverage. This paper explores the impact of Sampoorna Suraksha Programme, a micro health insurance scheme on the health seeking behaviour of households during illness in Karnataka, India. The study was conducted in three randomly selected districts in Karnataka, India in the first half of the year 2011. The hypothesis was tested using binary logistic regression analysis on the data collected from randomly selected 1146 households consisting of 4961 individuals. Insured individuals were seeking care at private hospitals than public hospitals due to the reduction in financial barrier. Moreover, equity in health seeking behaviour among insured individuals was observed. Our finding does represent a desirable result for health policy makers and micro finance institutions to advocate for the inclusion of health insurance in their portfolio, at least from the HSB perspective.

  15. Synthetic circuit designs for earth terraformation.

    PubMed

    Solé, Ricard V; Montañez, Raúl; Duran-Nebreda, Salva

    2015-07-18

    Mounting evidence indicates that our planet might experience runaway effects associated to rising temperatures and ecosystem overexploitation, leading to catastrophic shifts on short time scales. Remediation scenarios capable of counterbalancing these effects involve geoengineering, sustainable practices and carbon sequestration, among others. None of these scenarios seems powerful enough to achieve the desired restoration of safe boundaries. We hypothesize that synthetic organisms with the appropriate engineering design could be used to safely prevent declines in some stressed ecosystems and help improving carbon sequestration. Such schemes would include engineering mutualistic dependencies preventing undesired evolutionary processes. We hypothesize that some particular design principles introduce unescapable constraints to the engineered organisms that act as effective firewalls. Testing this designed organisms can be achieved by using controlled bioreactor models, with single and heterogeneous populations, and accurate computational models including different scales (from genetic constructs and metabolic pathways to population dynamics). Our hypothesis heads towards a future anthropogenic action that should effectively act as Terraforming processes. It also implies a major challenge in the existing biosafety policies, since we suggest release of modified organisms as potentially necessary strategy for success.

  16. Many-Body Localization and Quantum Nonergodicity in a Model with a Single-Particle Mobility Edge.

    PubMed

    Li, Xiaopeng; Ganeshan, Sriram; Pixley, J H; Das Sarma, S

    2015-10-30

    We investigate many-body localization in the presence of a single-particle mobility edge. By considering an interacting deterministic model with an incommensurate potential in one dimension we find that the single-particle mobility edge in the noninteracting system leads to a many-body mobility edge in the corresponding interacting system for certain parameter regimes. Using exact diagonalization, we probe the mobility edge via energy resolved entanglement entropy (EE) and study the energy resolved applicability (or failure) of the eigenstate thermalization hypothesis (ETH). Our numerical results indicate that the transition separating area and volume law scaling of the EE does not coincide with the nonthermal to thermal transition. Consequently, there exists an extended nonergodic phase for an intermediate energy window where the many-body eigenstates violate the ETH while manifesting volume law EE scaling. We also establish that the model possesses an infinite temperature many-body localization transition despite the existence of a single-particle mobility edge. We propose a practical scheme to test our predictions in atomic optical lattice experiments which can directly probe the effects of the mobility edge.

  17. European consensus conference for external quality assessment in molecular pathology.

    PubMed

    van Krieken, J H; Siebers, A G; Normanno, N

    2013-08-01

    Molecular testing of tumor samples to guide treatment decisions is of increasing importance. Several drugs have been approved for treatment of molecularly defined subgroups of patients, and the number of agents requiring companion diagnostics for their prescription is expected to rapidly increase. The results of such testing directly influence the management of individual patients, with both false-negative and false-positive results being harmful for patients. In this respect, external quality assurance (EQA) programs are essential to guarantee optimal quality of testing. There are several EQA schemes available in Europe, but they vary in scope, size and execution. During a conference held in early 2012, medical oncologists, pathologists, geneticists, molecular biologists, EQA providers and representatives from pharmaceutical industries developed a guideline to harmonize the standards applied by EQA schemes in molecular pathology. The guideline comprises recommendations on the organization of an EQA scheme, defining the criteria for reference laboratories, requirements for EQA test samples and the number of samples that are needed for an EQA scheme. Furthermore, a scoring system is proposed and consequences of poor performance are formulated. Lastly, the contents of an EQA report, communication of the EQA results, EQA databases and participant manual are given.

  18. Extension of a System Level Tool for Component Level Analysis

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul

    2002-01-01

    This paper presents an extension of a numerical algorithm for a network flow analysis code to perform multi-dimensional flow calculation. The one-dimensional momentum equation in the network flow analysis code has been extended to include momentum transport due to shear stress and the transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow and shear driven flow in a rectangular cavity) are presented as benchmarks for the verification of the numerical scheme.
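
    As a side illustration of the closure mentioned above, the following sketch evaluates a Prandtl mixing-length eddy viscosity for an assumed near-wall velocity profile; the profile and constants are illustrative, not taken from the paper:

      import numpy as np

      kappa = 0.41                                  # von Karman constant
      y = np.linspace(1e-4, 0.05, 200)              # wall-normal distance (m)
      u = 5.0 * (y / y[-1]) ** (1.0 / 7.0)          # toy 1/7th-power velocity profile (m/s)
      dudy = np.gradient(u, y)
      l_m = kappa * y                               # mixing length l_m = kappa * y near the wall
      nu_t = l_m ** 2 * np.abs(dudy)                # eddy viscosity nu_t = l_m^2 |du/dy|
      print(f"max eddy viscosity: {nu_t.max():.2e} m^2/s")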

  19. Extension of a System Level Tool for Component Level Analysis

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul; McConnaughey, Paul K. (Technical Monitor)

    2001-01-01

    This paper presents an extension of a numerical algorithm for a network flow analysis code to perform multi-dimensional flow calculation. The one-dimensional momentum equation in the network flow analysis code has been extended to include momentum transport due to shear stress and the transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow, and shear driven flow in a rectangular cavity) are presented as benchmarks for the verification of the numerical scheme.

  20. A laboratory study of magnesium-tetrabenz-porphyrin - Lack of agreement with diffuse interstellar bands

    NASA Technical Reports Server (NTRS)

    Donn, B.; Khanna, R. K.

    1980-01-01

    The visible and infrared spectra and thermal behavior of the bis-pyridal-magnesium-tetrabenz-porphyrin molecule proposed as the carrier of the diffuse interstellar bands were measured. Of the six band coincidences reported by Johnson (1977), only one, 4430 A, occurs in these experiments. This coincidence requires a special environment that is unlikely to occur in interstellar space, and the infrared spectrum does not support Johnson's vibrational scheme. These spectroscopic and thermal measurements contradict the hypothesis that this molecule causes the diffuse bands.

  1. Conditional equivalence testing: An alternative remedy for publication bias

    PubMed Central

    Gustafson, Paul

    2018-01-01

    We introduce a publication policy that incorporates “conditional equivalence testing” (CET), a two-stage testing scheme in which standard NHST is followed conditionally by testing for equivalence. The idea of CET is carefully considered as it has the potential to address recent concerns about reproducibility and the limited publication of null results. In this paper we detail the implementation of CET, investigate similarities with a Bayesian testing scheme, and outline the basis for how a scientific journal could proceed to reduce publication bias while remaining relevant. PMID:29652891
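
    A minimal sketch of a two-stage CET-style decision flow, assuming a Welch t-test for the NHST stage and a TOST equivalence test with an illustrative margin delta; the actual CET procedure in the paper may differ in its test statistics and thresholds:

      import numpy as np
      from scipy import stats

      def cet(x, y, delta=0.5, alpha=0.05):
          # Stage 1: ordinary NHST (Welch two-sample t-test).
          t_stat, p_nhst = stats.ttest_ind(x, y, equal_var=False)
          if p_nhst < alpha:
              return {"stage": 1, "decision": "difference", "p": p_nhst}
          # Stage 2 (conditional): TOST equivalence test against the margin +/- delta.
          diff = np.mean(x) - np.mean(y)
          se = np.sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))
          df = min(len(x), len(y)) - 1                     # conservative df approximation
          p_lower = stats.t.sf((diff + delta) / se, df)    # H0: diff <= -delta
          p_upper = stats.t.cdf((diff - delta) / se, df)   # H0: diff >= +delta
          p_tost = max(p_lower, p_upper)
          decision = "equivalence" if p_tost < alpha else "inconclusive"
          return {"stage": 2, "decision": decision, "p": p_tost}

      rng = np.random.default_rng(0)
      print(cet(rng.normal(0.0, 1.0, 80), rng.normal(0.05, 1.0, 80)))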

  2. Comparison of wavelet based denoising schemes for gear condition monitoring: An Artificial Neural Network based Approach

    NASA Astrophysics Data System (ADS)

    Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva

    2018-02-01

    Vibration analysis has been extensively used in the recent past for gear fault diagnosis. The extracted vibration signals are usually contaminated with noise, which may lead to wrong interpretation of results. Denoising the extracted vibration signals aids fault diagnosis by giving meaningful results. The Wavelet Transform (WT) increases the signal-to-noise ratio (SNR), reduces the root mean square error (RMSE), and is effective in denoising gear vibration signals. The extracted signals have to be denoised by selecting a proper denoising scheme in order to prevent the loss of signal information along with the noise. This work demonstrates the effectiveness of Principal Component Analysis (PCA) for denoising gear vibration signals. In this regard, three selected wavelet based denoising schemes, namely PCA, Empirical Mode Decomposition (EMD), and NeighCoeff (NC), have been compared with Adaptive Threshold (AT), an extensively used wavelet based denoising scheme for gear vibration signals. The vibration signals acquired from a customized gear test rig were denoised by the above-mentioned four denoising schemes. The fault identification capability as well as the SNR, kurtosis, and RMSE of the four denoising schemes have been compared. Features extracted from the denoised signals have been used to train and test artificial neural network (ANN) models. The performances of the four denoising schemes have been evaluated based on the performance of the ANN models. The best denoising scheme has been identified based on the classification accuracy results. PCA proved to be the most effective denoising scheme in all these respects.
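
    For orientation, the sketch below shows a generic wavelet-threshold denoising pass and the SNR/RMSE metrics used to compare schemes; it is universal soft thresholding on a synthetic signal, not the PCA, EMD, NeighCoeff, or AT variants evaluated in the paper:

      import numpy as np
      import pywt

      def denoise(signal, wavelet="db4", level=4):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from the finest scale
          thr = sigma * np.sqrt(2.0 * np.log(len(signal)))        # universal threshold
          coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[: len(signal)]

      def snr_db(clean, estimate):
          return 10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - estimate) ** 2))

      t = np.linspace(0.0, 1.0, 2048)
      clean = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)   # toy "gear mesh" signal
      noisy = clean + 0.4 * np.random.default_rng(1).standard_normal(t.size)
      den = denoise(noisy)
      rmse = np.sqrt(np.mean((clean - den) ** 2))
      print(f"SNR before {snr_db(clean, noisy):.1f} dB, after {snr_db(clean, den):.1f} dB, RMSE {rmse:.3f}")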

  3. A critique of statistical hypothesis testing in clinical research

    PubMed Central

    Raha, Somik

    2011-01-01

    Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are that of the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability on an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of Aspirin on heart attacks in a sample population of doctors. Because a major reason for the prevalence of RCTs in academia is legislation requiring them, the ethics of legislating the use of statistical methods for clinical research is also examined. PMID:22022152

  4. Thumbs down: a molecular-morphogenetic approach to avian digit homology.

    PubMed

    Capek, Daniel; Metscher, Brian D; Müller, Gerd B

    2014-01-01

    Avian forelimb digit homology remains one of the standard themes in comparative biology and EvoDevo research. In order to resolve the apparent contradictions between embryological and paleontological evidence, a variety of hypotheses have been presented in recent years. The proposals range from excluding birds from the dinosaur clade, to assignments of homology by different criteria, or even assuming a hexadactyl tetrapod limb ground state. At present two approaches prevail: the frame shift hypothesis and the pyramid reduction hypothesis. While the former postulates a homeotic shift of digit identities, the latter argues for a gradual bilateral reduction of phalanges and digits. Here we present a new model that integrates elements from both hypotheses with the existing experimental and fossil evidence. We start from the main feature common to both earlier concepts, the initiating ontogenetic event: reduction and loss of the anterior-most digit. It is proposed that a concerted mechanism of molecular regulation and developmental mechanics is capable of shifting the boundaries of hoxD expression in embryonic forelimb buds as well as changing the digit phenotypes. Based on a distinction between positional (topological) and compositional (phenotypic) homology criteria, we argue that the identity of the avian digits is II, III, IV, despite a partially altered phenotype. Finally, we introduce an alternative digit reduction scheme that reconciles the current fossil evidence with the presented molecular-morphogenetic model. Our approach identifies specific experiments that allow one to test whether gene expression can be shifted and digit phenotypes can be altered by induced digit loss or digit gain. © 2013 Wiley Periodicals, Inc.

  5. An improved snow scheme for the ECMWF land surface model: Description and offline validation

    Treesearch

    Emanuel Dutra; Gianpaolo Balsamo; Pedro Viterbo; Pedro M. A. Miranda; Anton Beljaars; Christoph Schar; Kelly Elder

    2010-01-01

    A new snow scheme for the European Centre for Medium-Range Weather Forecasts (ECMWF) land surface model has been tested and validated. The scheme includes a new parameterization of snow density, incorporating a liquid water reservoir, and revised formulations for the subgrid snow cover fraction and snow albedo. Offline validation (covering a wide range of spatial and...

  6. Comparative Study on High-Order Positivity-preserving WENO Schemes

    NASA Technical Reports Server (NTRS)

    Kotov, Dmitry V.; Yee, Helen M.; Sjogreen, Bjorn Axel

    2013-01-01

    The goal of this study is to compare the results obtained by non-positivity-preserving methods with the recently developed positivity-preserving schemes for representative test cases. In particular the more difficult 3D Noh and Sedov problems are considered. These test cases are chosen because of the negative pressure/density most often exhibited by standard high-order shock-capturing schemes. The simulation of a hypersonic nonequilibrium viscous shock tube that is related to the NASA Electric Arc Shock Tube (EAST) is also included. EAST is a high-temperature and high Mach number viscous nonequilibrium flow consisting of 13 species. In addition, as most common shock-capturing schemes have been developed for problems without source terms, when applied to problems with nonlinear and/or stiff source terms these methods can result in spurious solutions, even when solving a conservative system of equations with a conservative scheme. This kind of behavior can be observed even for a scalar case (LeVeque & Yee 1990) as well as for the case consisting of two species and one reaction (Wang et al. 2012). For further information concerning this issue see (LeVeque & Yee 1990; Griffiths et al. 1992; Lafon & Yee 1996; Yee et al. 2012). This EAST example indicated that standard high-order shock-capturing methods exhibit instability of density/pressure in addition to grid-dependent discontinuity locations with insufficient grid points. The evaluation of these test cases is based on the stability of the numerical schemes together with the accuracy of the obtained solutions.

  7. On testing two major cumulus parameterization schemes using the CSU Regional Atmospheric Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C.Y.J.; Bossert, J.E.; Winterkamp, J.

    1993-10-01

    One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is twofold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against the results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system. We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection in an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved by the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes in reproducing the growth and life cycle of a convective system can then be evaluated. These 2-D tests will form the basis for further 3-D realistic simulations which have the model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) used in Kao and Ogura (1987) and the Kuo scheme (Kuo, 1974) used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).

  8. A standard test case suite for two-dimensional linear transport on the sphere: results from a collection of state-of-the-art schemes

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.

    2013-09-01

    Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on. The latter is inevitably computing platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data-sets are made available to facilitate the process of model evaluation and scheme intercomparison.
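
    As an illustration of the conventional error norms mentioned above, the following sketch computes normalized l1, l2, and linf errors of a numerical tracer field against a reference field; the cell-area weighting and the toy fields are illustrative:

      import numpy as np

      def error_norms(q, qt, w):
          """Normalized l1, l2 and linf errors of field q against reference qt, with cell weights (areas) w."""
          l1 = np.sum(w * np.abs(q - qt)) / np.sum(w * np.abs(qt))
          l2 = np.sqrt(np.sum(w * (q - qt) ** 2) / np.sum(w * qt ** 2))
          linf = np.max(np.abs(q - qt)) / np.max(np.abs(qt))
          return l1, l2, linf

      # toy check: a slightly perturbed tracer field on uniform cells
      x = np.linspace(0.0, 2.0 * np.pi, 1000)
      qt = 1.0 + 0.1 * np.sin(x)
      q = qt + 1e-3 * np.cos(x)
      print(error_norms(q, qt, np.full(x.size, 1.0)))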

  9. A standard test case suite for two-dimensional linear transport on the sphere: results from a collection of state-of-the-art schemes

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.

    2014-01-01

    Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on. The latter is inevitably computing platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.

  10. Evaluation of effectiveness of wavelet based denoising schemes using ANN and SVM for bearing condition classification.

    PubMed

    Vijay, G S; Kumar, H S; Srinivasa Pai, P; Sriram, N S; Rao, Raj B K N

    2012-01-01

    Wavelet based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet based denoising schemes have been evaluated based on the performance of the Artificial Neural Network (ANN) and the Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating a defective bearing vibration signal with added Gaussian noise was subjected to these denoising schemes. The best scheme based on the SNR and the RMSE was identified. In the second part, the vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). Extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal.
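
    A minimal sketch of feature ranking with Fisher's Criterion in its common two-class form, on synthetic features and labels; the exact scoring and selection rule used in the paper may differ:

      import numpy as np

      def fisher_score(X, y):
          # Two-class Fisher score: (mean separation)^2 / (sum of within-class variances).
          X0, X1 = X[y == 0], X[y == 1]
          num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
          den = X0.var(axis=0, ddof=1) + X1.var(axis=0, ddof=1)
          return num / den                       # larger = more class-separating

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 10))
      y = rng.integers(0, 2, size=200)
      X[y == 1, 3] += 2.0                        # make feature 3 informative
      scores = fisher_score(X, y)
      top = np.argsort(scores)[::-1][:4]         # keep the 4 most sensitive features
      print("selected feature indices:", top)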

  11. Robust inference from multiple test statistics via permutations: a better alternative to the single test statistic approach for randomized trials.

    PubMed

    Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie

    2013-01-01

    Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is small. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
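
    A minimal sketch of the minimum p-value idea with a permutation reference distribution, assuming two illustrative candidate statistics (a Welch t-test and a Mann-Whitney test) rather than the statistics used in the paper:

      import numpy as np
      from scipy import stats

      def perm_min_p(x, y, n_perm=2000, seed=0):
          rng = np.random.default_rng(seed)
          pooled = np.concatenate([x, y])
          n = len(x)

          def min_p(a, b):
              p1 = stats.ttest_ind(a, b, equal_var=False).pvalue
              p2 = stats.mannwhitneyu(a, b, alternative="two-sided").pvalue
              return min(p1, p2)

          observed = min_p(x, y)
          null = np.empty(n_perm)
          for i in range(n_perm):
              perm = rng.permutation(pooled)        # relabel under the null of indistinguishable groups
              null[i] = min_p(perm[:n], perm[n:])
          # Permutation p-value of the min-p statistic itself.
          return (1 + np.sum(null <= observed)) / (n_perm + 1)

      rng = np.random.default_rng(1)
      print(perm_min_p(rng.normal(0.0, 1.0, 40), rng.normal(0.6, 1.0, 40)))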

  12. External quality assurance of HER2 fluorescence in situ hybridisation testing: results of a UK NEQAS pilot scheme

    PubMed Central

    Bartlett, John M S; Ibrahim, Merdol; Jasani, Bharat; Morgan, John M; Ellis, Ian; Kay, Elaine; Magee, Hilary; Barnett, Sarah; Miller, Keith

    2007-01-01

    Background and Aims Trastuzumab provides clinical benefit for advanced and early breast cancer patients whose tumours over-express or have gene amplification of the HER2 oncogene. The UK National External Quality Assessment Scheme (NEQAS) for immunohistochemical testing was established to assess and improve the quality of HER2 immunohistochemical testing. However, until recently, no provision was available for HER2 fluorescence in situ hybridisation (FISH) testing. A pilot scheme was set up to review the performance of FISH testing in clinical diagnostic laboratories. Methods FISH was performed in 6 reference and 31 participating laboratories using a cell line panel with known HER2 status. Results Using results from reference laboratories as a criterion for acceptable performance, 60% of all results returned by participants were appropriate and 78% either appropriate or acceptable. However, 22.4% of results returned were deemed inappropriate, including 13 cases (4.2%) where a misdiagnosis would have been made had these been clinical specimens. Conclusions The results of three consecutive runs show that both reference laboratories and a proportion (about 25%) of routine clinical diagnostic centres can consistently achieve acceptable quality control of HER2 testing. Data from a significant proportion of participating laboratories show that further steps are required, including those taken via review of performance under schemes such as NEQAS, to improve quality of HER2 testing by FISH in the “real world”. PMID:16963466

  13. Adaptive control schemes for improving dynamic performance of efficiency-optimized induction motor drives.

    PubMed

    Kumar, Navneet; Raj Chelliah, Thanga; Srivastava, S P

    2015-07-01

    Model Based Control (MBC) is one of the energy optimal controllers used in vector-controlled Induction Motor (IM) drives for controlling the excitation of the motor in accordance with torque and speed. MBC offers energy conservation especially at part-load operation, but it creates ripples in torque and speed during load transition, leading to poor dynamic performance of the drive. This study investigates the opportunity for improving the dynamic performance of a three-phase IM operating with MBC and proposes three control schemes: (i) MBC with a low pass filter, (ii) torque producing current (iqs) injection in the output of the speed controller, and (iii) a Variable Structure Speed Controller (VSSC). The operation of MBC before and after load transition is also analyzed. The dynamic performance of a 1-hp, three-phase squirrel-cage IM with a mine-hoist load diagram is tested. Test results are provided for the conventional field-oriented (constant flux) control and MBC (adjustable excitation) with the proposed schemes. The effectiveness of the proposed schemes is also illustrated for parametric variations. The test results and subsequent analysis confirm that the motor dynamics improve significantly with all three proposed schemes in terms of overshoot/undershoot peak amplitude of torque and DC link power, in addition to energy saving during load transitions. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Computer-Aided Diagnostic (CAD) Scheme by Use of Contralateral Subtraction Technique

    NASA Astrophysics Data System (ADS)

    Nagashima, Hiroyuki; Harakawa, Tetsumi

    We developed a computer-aided diagnostic (CAD) scheme for detection of subtle image findings of acute cerebral infarction in brain computed tomography (CT) by using a contralateral subtraction technique. In our computerized scheme, the lateral inclination of the image was first corrected automatically by rotating and shifting. The contralateral subtraction image was then derived by subtracting the reversed image from the original image. Initial candidates for acute cerebral infarctions were identified using multiple-thresholding and image filtering techniques. As the first step for removing false positive candidates, fourteen image features were extracted for each of the initial candidates. Halfway candidates were detected by applying a rule-based test with these image features. In the second step, five image features were extracted from the overlap between halfway candidates in the slice of interest and the upper/lower slice images. Finally, acute cerebral infarction candidates were detected by applying a rule-based test with these five image features. The sensitivity in the detection for 74 training cases was 97.4% with 3.7 false positives per image. The performance of the CAD scheme on 44 test cases was comparable to that on the training cases. Our CAD scheme using the contralateral subtraction technique can reveal suspected image findings of acute cerebral infarctions in CT images.
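
    The core contralateral-subtraction step can be sketched as below: mirror the midline-aligned slice left-right, subtract, and threshold the difference to obtain initial candidates; the data and threshold are synthetic and illustrative, and the rotation/shift correction and rule-based tests are omitted:

      import numpy as np

      def contralateral_candidates(slice_hu, threshold=8.0):
          # slice_hu: 2-D CT slice, already rotated/shifted so the midline is vertical and centred.
          mirrored = np.fliplr(slice_hu)                    # contralateral (left-right reversed) image
          diff = slice_hu.astype(float) - mirrored.astype(float)
          candidates = diff < -threshold                    # hypodense regions relative to the opposite side
          return diff, candidates

      slice_hu = np.random.default_rng(0).normal(30.0, 2.0, size=(256, 256))
      slice_hu[120:140, 60:90] -= 12.0                      # synthetic hypodense (infarct-like) patch
      diff, cand = contralateral_candidates(slice_hu)
      print("candidate pixels:", int(cand.sum()))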

  15. A new family of high-order compact upwind difference schemes with good spectral resolution

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Yao, Zhaohui; He, Feng; Shen, M. Y.

    2007-12-01

    This paper presents a new family of high-order compact upwind difference schemes. Unknowns included in the proposed schemes are not only the values of the function but also those of its first and higher derivatives. Derivative terms in the schemes appear only on the upwind side of the stencil. One can calculate all the first derivatives exactly as one solves explicit schemes when the boundary conditions of the problem are non-periodic. When the proposed schemes are applied to periodic problems, only periodic bi-diagonal matrix inversions or periodic block-bi-diagonal matrix inversions are required. Resolution optimization is used to enhance the spectral representation of the first derivative, and this produces a scheme with the highest spectral accuracy among all known compact schemes. For non-periodic boundary conditions, boundary schemes constructed with the aid of the assistant scheme ensure that the schemes not only possess stability for any selective length scale at every point in the computational domain but also satisfy the principle of optimal resolution. Also, an improved shock-capturing method is developed. Finally, both the effectiveness of the new hybrid method and the accuracy of the proposed schemes are verified by executing four benchmark test cases.

  16. A Minimal Three-Dimensional Tropical Cyclone Model.

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyan; Smith, Roger K.; Ulrich, Wolfgang

    2001-07-01

    A minimal 3D numerical model designed for basic studies of tropical cyclone behavior is described. The model is formulated in σ coordinates on an f- or β-plane and has three vertical levels, one characterizing a shallow boundary layer and the other two representing the upper and lower troposphere, respectively. It has three options for treating cumulus convection on the subgrid scale and a simple scheme for the explicit release of latent heat on the grid scale. The subgrid-scale schemes are based on the mass-flux models suggested by Arakawa and Ooyama in the late 1960s, but modified to include the effects of precipitation-cooled downdrafts. They differ from one another in the closure that determines the cloud-base mass flux. One closure is based on the assumption of boundary layer quasi-equilibrium proposed by Raymond and Emanuel. It is shown that a realistic hurricane-like vortex develops from a moderate strength initial vortex, even when the initial environment is slightly stable to deep convection. This is true for all three cumulus schemes as well as in the case where only the explicit release of latent heat is included. In all cases there is a period of gestation during which the boundary layer moisture in the inner core region increases on account of surface moisture fluxes, followed by a period of rapid deepening. Precipitation from the convection scheme dominates the explicit precipitation in the early stages of development, but this situation is reversed as the vortex matures. These findings are similar to those of Baik et al., who used the Betts-Miller parameterization scheme in an axisymmetric model with 11 levels in the vertical. The most striking difference between the model results using different convection schemes is the length of the gestation period, whereas the maximum intensity attained is similar for the three schemes. The calculations suggest the hypothesis that the period of rapid development in tropical cyclones is accompanied by a change in the character of deep convection in the inner core region from buoyantly driven, predominantly upright convection to slantwise forced moist ascent.

  17. Phylogenetic classification of Aureobasidium pullulans strains for production of pullulan and xylanase

    USDA-ARS?s Scientific Manuscript database

    This study tests the hypothesis that phylogenetic classification can predict whether A. pullulans strains will produce useful levels of the commercial polysaccharide, pullulan, or the valuable enzyme, xylanase. To test this hypothesis, 19 strains of A. pullulans with previously described phenotypes...

  18. Exploration of a Dynamic Merging Scheme for Precipitation Estimation over a Small Urban Catchment

    NASA Astrophysics Data System (ADS)

    Al-Azerji, Sherien; Rico-Ramirez, Miguel, ,, Dr.; Han, Dawei, ,, Prof.

    2016-04-01

    The accuracy of quantitative precipitation estimation is of significant importance for urban areas due to the potentially damaging consequences that can result from pluvial flooding. Improved accuracy could be accomplished by merging rain gauge measurements with weather radar data through different merging methods. Several factors may affect the accuracy of the merged data, and the gauge density used for merging is one of the most important. However, if there are no gauges inside the research area, then a gauge network outside the research area can be used for the merging. Generally speaking, the denser the rain gauge network is, the better the merging results that can be achieved. However, in practice, the rain gauge network around the research area is fixed, and the research question is about the optimal merging area. The hypothesis is that if the merging area is too small, there are fewer gauges for merging and thus the result would be poor. If the merging area is too large, gauges far away from the research area can be included in merging. However, due to their large distances, those gauges far away from the research area provide little relevant information to the study and may even introduce noise in merging. Therefore, an optimal merging area that produces the best merged rainfall estimation in the research area could exist. To test this hypothesis, the distance from the centre of the research area and the number of merging gauges around the research area were gradually increased and merging with a new domain of radar data was then performed. The performance of the new merging scheme was compared with a gridded interpolated rainfall from four experimental rain gauges installed inside the research area for validation. The result of this analysis shows that there is indeed an optimum distance from the centre of the research area and consequently an optimum number of rain gauges that produce the best merged rainfall data inside the research area. This study is of practical value for estimating rainfall in an urban catchment (when there are no gauges available inside the catchment) by merging weather radar with rain gauge data from outside of the catchment. This has not been reported in the literature before now.

  19. Formulating appropriate statistical hypotheses for treatment comparison in clinical trial design and analysis.

    PubMed

    Huang, Peng; Ou, Ai-hua; Piantadosi, Steven; Tan, Ming

    2014-11-01

    We discuss the problem of properly defining treatment superiority through the specification of hypotheses in clinical trials. The need to precisely define the notion of superiority in a one-sided hypothesis test problem has been well recognized by many authors. Ideally designed null and alternative hypotheses should correspond to a partition of all possible scenarios of underlying true probability models P={P(ω):ω∈Ω} such that the alternative hypothesis Ha={P(ω):ω∈Ωa} can be inferred upon the rejection of the null hypothesis Ho={P(ω):ω∈Ωo}. However, in many cases, tests are carried out and recommendations are made without a precise definition of superiority or a specification of the alternative hypothesis. Moreover, in some applications, the union of probability models specified by the chosen null and alternative hypothesis does not constitute the complete model collection P (i.e., Ho∪Ha is smaller than P). This not only imposes a strong non-validated assumption on the underlying true models, but also leads to different superiority claims depending on which test is used instead of scientific plausibility. Different ways to partition P for testing treatment superiority often have different implications for sample size, power, and significance in both efficacy and comparative effectiveness trial design. Such differences are often overlooked. We provide a theoretical framework for evaluating the statistical properties of different specifications of superiority in typical hypothesis testing. This can help investigators to select proper hypotheses for treatment comparison in clinical trial design. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Testing two temporal upscaling schemes for the estimation of the time variability of the actual evapotranspiration

    NASA Astrophysics Data System (ADS)

    Maltese, A.; Capodici, F.; Ciraolo, G.; La Loggia, G.

    2015-10-01

    The temporal availability of actual evapotranspiration estimates for grapes is an emerging issue, since vineyard farms are increasingly converted from rainfed to irrigated agricultural systems. The manuscript aims to verify the accuracy of actual evapotranspiration retrieval by coupling a single-source energy balance approach with two different temporal upscaling schemes. The first scheme tests the temporal upscaling of the main input variables, namely the NDVI, albedo and LST; the second scheme tests the temporal upscaling of the energy balance output, the actual evapotranspiration. The temporal upscaling schemes were implemented on: i) airborne remote sensing data acquired monthly during a whole irrigation season over a Sicilian vineyard; ii) low resolution MODIS products released daily or weekly; iii) meteorological data acquired by standard gauge stations. Daily MODIS LST products (MOD11A1) were disaggregated using the DisTrad model, 8-day black- and white-sky albedo products (MCD43A) were used to model the total albedo, and 8-day NDVI products (MOD13Q1) were modeled using the Fisher approach. Results were validated both in time and space. The temporal validation was carried out using the actual evapotranspiration measured in situ by a flux tower through the eddy covariance technique. The spatial validation involved airborne images acquired at different times from June to September 2008. The results indicate whether upscaling the energy balance inputs or its output performs better.

  1. High Order Schemes in BATS-R-US: Is it OK to Simplify Them?

    NASA Astrophysics Data System (ADS)

    Tóth, G.; Chen, Y.; van der Holst, B.; Daldorff, L. K. S.

    2014-09-01

    We describe a number of high order schemes and their simplified variants that have been implemented into the University of Michigan global magnetohydrodynamics code BATS-R-US. We compare the various schemes with each other and the legacy 2nd order TVD scheme for various test problems and two space physics applications. We find that the simplified schemes are often quite competitive with the more complex and expensive full versions, despite the fact that the simplified versions are only high order accurate for linear systems of equations. We find that all the high order schemes require some fixes to ensure positivity in the space physics applications. On the other hand, they produce superior results as compared with the second order scheme and/or produce the same quality of solution at a much reduced computational cost.

  2. Error reduction program: A progress report

    NASA Technical Reports Server (NTRS)

    Syed, S. A.

    1984-01-01

    Five finite difference schemes were evaluated for minimum numerical diffusion in an effort to identify and incorporate the best error reduction scheme into a 3D combustor performance code. Based on this evaluation, two finite volume method schemes were selected for further study. Both the quadratic upstream differencing scheme (QUDS) and the bounded skew upstream differencing scheme two (BSUDS2) were coded into a two dimensional computer code and their accuracy and stability determined by running several test cases. It was found that BSUDS2 was more stable than QUDS. It was also found that the accuracy of both schemes is dependent on the angle that the streamlines make with the mesh, with QUDS being more accurate at smaller angles and BSUDS2 more accurate at larger angles. The BSUDS2 scheme was selected for extension into three dimensions.

  3. The potential for increased power from combining P-values testing the same hypothesis.

    PubMed

    Ganju, Jitendra; Julie Ma, Guoguang

    2017-02-01

    The conventional approach to hypothesis testing for formal inference is to prespecify a single test statistic thought to be optimal. However, we usually have more than one test statistic in mind for testing the null hypothesis of no treatment effect but we do not know which one is the most powerful. Rather than relying on a single p-value, combining p-values from prespecified multiple test statistics can be used for inference. Combining functions include Fisher's combination test and the minimum p-value. Using randomization-based tests, the increase in power can be remarkable when compared with a single test and Simes's method. The versatility of the method is that it also applies when the number of covariates exceeds the number of observations. The increase in power is large enough to prefer combined p-values over a single p-value. The limitation is that the method does not provide an unbiased estimator of the treatment effect and does not apply to situations when the model includes treatment by covariate interaction.
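
    For concreteness, the sketch below shows two common combining functions for K pre-specified p-values, Fisher's combination statistic and the minimum p-value; the chi-square reference for Fisher's statistic assumes independent p-values, whereas the paper relies on permutation-based reference distributions:

      import numpy as np
      from scipy import stats

      def fisher_combination(pvals):
          stat = -2.0 * np.sum(np.log(pvals))
          # Chi-square reference with 2K degrees of freedom is exact only for independent p-values.
          return stat, stats.chi2.sf(stat, df=2 * len(pvals))

      def min_p(pvals):
          return np.min(pvals)

      pvals = np.array([0.04, 0.11, 0.20])
      print(fisher_combination(pvals), min_p(pvals))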

  4. Research on the supercapacitor support schemes for LVRT of variable-frequency drive in the thermal power plant

    NASA Astrophysics Data System (ADS)

    Han, Qiguo; Zhu, Kai; Shi, Wenming; Wu, Kuayu; Chen, Kai

    2018-02-01

    In order to solve the problem of low voltage ride through (LVRT) for the variable-frequency drives (VFDs) of major auxiliary equipment in a thermal power plant, a scheme in which a supercapacitor is connected in parallel with the DC link of the VFD is put forward; furthermore, two solutions, direct parallel support and voltage-boost parallel support of the supercapacitor, are proposed. The capacitor values for the relevant motor loads are calculated according to the law of energy conservation, and they are verified by Matlab simulation. Finally, a test prototype is set up, and the test results prove the feasibility of the proposed schemes.
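
    A minimal sketch of the energy-conservation sizing step mentioned above: the supercapacitor must supply the load power for the ride-through time while its voltage falls from the nominal DC-link level to the drive's undervoltage limit; all numbers are illustrative, not taken from the paper:

      def supercap_size(p_load_w, t_ride_s, v_max, v_min, efficiency=0.95):
          energy_needed = p_load_w * t_ride_s / efficiency          # J that must come from the capacitor
          # 0.5 * C * (v_max**2 - v_min**2) = energy_needed  =>  solve for C
          return 2.0 * energy_needed / (v_max ** 2 - v_min ** 2)    # farads

      C = supercap_size(p_load_w=75e3, t_ride_s=1.0, v_max=540.0, v_min=430.0)
      print(f"required capacitance: {C:.1f} F")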

  5. A test of the orthographic recoding hypothesis

    NASA Astrophysics Data System (ADS)

    Gaygen, Daniel E.

    2003-04-01

    The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.

  6. The picture superiority effect in conceptual implicit memory: a conceptual distinctiveness hypothesis.

    PubMed

    Hamilton, Maryellen; Geraci, Lisa

    2006-01-01

    According to leading theories, the picture superiority effect is driven by conceptual processing, yet this effect has been difficult to obtain using conceptual implicit memory tests. We hypothesized that the picture superiority effect results from conceptual processing of a picture's distinctive features rather than a picture's semantic features. To test this hypothesis, we used 2 conceptual implicit general knowledge tests; one cued conceptually distinctive features (e.g., "What animal has large eyes?") and the other cued semantic features (e.g., "What animal is the figurehead of Tootsie Roll?"). Results showed a picture superiority effect only on the conceptual test using distinctive cues, supporting our hypothesis that this effect is mediated by conceptual processing of a picture's distinctive features.

  7. Hypothesis testing for band size detection of high-dimensional banded precision matrices.

    PubMed

    An, Baiguo; Guo, Jianhua; Liu, Yufeng

    2014-06-01

    Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, crossvalidation is commonly used; however, we show that crossvalidation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.

  8. Why do mothers favor girls and fathers, boys? A hypothesis and a test of investment disparity.

    PubMed

    Godoy, Ricardo; Reyes-García, Victoria; McDade, Thomas; Tanner, Susan; Leonard, William R; Huanca, Tomás; Vadez, Vincent; Patel, Karishma

    2006-06-01

    Growing evidence suggests mothers invest more in girls than boys and fathers more in boys than girls. We develop a hypothesis that predicts preference for girls by the parent facing more resource constraints and preference for boys by the parent facing less constraint. We test the hypothesis with panel data from the Tsimane', a foraging-farming society in the Bolivian Amazon. Tsimane' mothers face more resource constraints than fathers. As predicted, mother's wealth protected girl's BMI, but father's wealth had weak effects on boy's BMI. Numerous tests yielded robust results, including those that controlled for fixed effects of child and household.

  9. Addendum to the article: Misuse of null hypothesis significance testing: Would estimation of positive and negative predictive values improve certainty of chemical risk assessment?

    PubMed

    Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf

    2015-03-01

    We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: They inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results or more extreme results if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R). The present document discusses challenges of estimating R.
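
    A minimal sketch of the PPV/NPV calculation for a single test, assuming significance level alpha, power 1 - beta, and an a priori probability R that the alternative is true; the numbers are illustrative:

      def ppv_npv(alpha, power, R):
          # PPV: fraction of significant results that are true positives.
          ppv = power * R / (power * R + alpha * (1.0 - R))
          # NPV: fraction of non-significant results that are true negatives.
          npv = (1.0 - alpha) * (1.0 - R) / ((1.0 - alpha) * (1.0 - R) + (1.0 - power) * R)
          return ppv, npv

      print(ppv_npv(alpha=0.05, power=0.8, R=0.1))   # e.g. only 10% of tested effects are real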

  10. Functional imaging of brain responses to different outcomes of hypothesis testing: revealed in a category induction task.

    PubMed

    Li, Fuhong; Cao, Bihua; Luo, Yuejia; Lei, Yi; Li, Hong

    2013-02-01

    Functional magnetic resonance imaging (fMRI) was used to examine differences in brain activation that occur when a person receives the different outcomes of hypothesis testing (HT). Participants were provided with a series of images of batteries and were asked to learn a rule governing what kinds of batteries were charged. Within each trial, the first two charged batteries were sequentially displayed, and participants would generate a preliminary hypothesis based on the perceptual comparison. Next, a third battery that served to strengthen, reject, or was irrelevant to the preliminary hypothesis was displayed. The fMRI results revealed that (1) no significant differences in brain activation were found between the 2 hypothesis-maintain conditions (i.e., strengthen and irrelevant conditions); and (2) compared with the hypothesis-maintain conditions, the hypothesis-reject condition activated the left medial frontal cortex, bilateral putamen, left parietal cortex, and right cerebellum. These findings are discussed in terms of the neural correlates of the subcomponents of HT and working memory manipulation. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Comparison of the AUSM(+) and H-CUSP Schemes for Turbomachinery Applications

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Liou, Meng-Sing

    2003-01-01

    Many turbomachinery CFD codes use second-order central-difference (C-D) schemes with artificial viscosity to control point decoupling and to capture shocks. While C-D schemes generally give accurate results, they can also exhibit minor numerical problems including overshoots at shocks and at the edges of viscous layers, and smearing of shocks and other flow features. In an effort to improve predictive capability for turbomachinery problems, two C-D codes developed by Chima, RVCQ3D and Swift, were modified by the addition of two upwind schemes: the AUSM+ scheme developed by Liou, et al., and the H-CUSP scheme developed by Tatsumi, et al. Details of the C-D scheme and the two upwind schemes are described, and results of three test cases are shown. Results for a 2-D transonic turbine vane showed that the upwind schemes eliminated viscous layer overshoots. Results for a 3-D turbine vane showed that the upwind schemes gave improved predictions of exit flow angles and losses, although the H-CUSP scheme predicted slightly higher losses than the other schemes. Results for a 3-D supersonic compressor (NASA rotor 37) showed that the AUSM+ scheme predicted exit distributions of total pressure and temperature that are not generally captured by C-D codes. All schemes showed similar convergence rates, but the upwind schemes required considerably more CPU time per iteration.

  12. Animal Models for Testing the DOHaD Hypothesis

    EPA Science Inventory

    Since the seminal work in human populations by David Barker and colleagues, several species of animals have been used in the laboratory to test the Developmental Origins of Health and Disease (DOHaD) hypothesis. Rats, mice, guinea pigs, sheep, pigs and non-human primates have bee...

  13. A "Projective" Test of the Golden Section Hypothesis.

    ERIC Educational Resources Information Center

    Lee, Chris; Adams-Webber, Jack

    1987-01-01

    In a projective test of the golden section hypothesis, 24 high school students rated themselves and 10 comic strip characters on the basis of 12 bipolar constructs. The overall proportion of cartoon figures which subjects assigned to the positive poles of the constructs was very close to the golden section. (Author/NB)

  14. Pasture succession in the Neotropics: extending the nucleation hypothesis into a matrix discontinuity hypothesis.

    PubMed

    Peterson, Chris J; Dosch, Jerald J; Carson, Walter P

    2014-08-01

    The nucleation hypothesis appears to explain widespread patterns of succession in tropical pastures, specifically the tendency for isolated trees to promote woody species recruitment. Still, the nucleation hypothesis has usually been tested explicitly for only short durations and in some cases isolated trees fail to promote woody recruitment. Moreover, at times, nucleation occurs in other key habitat patches. Thus, we propose an extension, the matrix discontinuity hypothesis: woody colonization will occur in focal patches that function to mitigate the herbaceous vegetation effects, thus providing safe sites or regeneration niches. We tested predictions of the classical nucleation hypothesis, the matrix discontinuity hypothesis, and a distance from forest edge hypothesis, in five abandoned pastures in Costa Rica, across the first 11 years of succession. Our findings confirmed the matrix discontinuity hypothesis: specifically, rotting logs and steep slopes significantly enhanced woody colonization. Surprisingly, isolated trees did not consistently significantly enhance recruitment; only larger trees did so. Finally, woody recruitment consistently decreased with distance from forest. Our results as well as results from others suggest that the nucleation hypothesis needs to be broadened beyond its historical focus on isolated trees or patches; the matrix discontinuity hypothesis focuses attention on a suite of key patch types or microsites that promote woody species recruitment. We argue that any habitat discontinuities that ameliorate the inhibition by dense graminoid layers will be foci for recruitment. Such patches could easily be manipulated to speed the transition of pastures to closed canopy forests.

  15. Humans have evolved specialized skills of social cognition: the cultural intelligence hypothesis.

    PubMed

    Herrmann, Esther; Call, Josep; Hernández-Lloreda, María Victoria; Hare, Brian; Tomasello, Michael

    2007-09-07

    Humans have many cognitive skills not possessed by their nearest primate relatives. The cultural intelligence hypothesis argues that this is mainly due to a species-specific set of social-cognitive skills, emerging early in ontogeny, for participating and exchanging knowledge in cultural groups. We tested this hypothesis by giving a comprehensive battery of cognitive tests to large numbers of two of humans' closest primate relatives, chimpanzees and orangutans, as well as to 2.5-year-old human children before literacy and schooling. Supporting the cultural intelligence hypothesis and contradicting the hypothesis that humans simply have more "general intelligence," we found that the children and chimpanzees had very similar cognitive skills for dealing with the physical world but that the children had more sophisticated cognitive skills than either of the ape species for dealing with the social world.

  16. A comparative study of Rosenbrock-type and implicit Runge-Kutta time integration for discontinuous Galerkin method for unsteady 3D compressible Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xiaodong; Xia, Yidong; Luo, Hong

    A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: 1) the explicit first stage, single diagonally implicit Runge-Kutta (ESDIRK3) scheme, and 2) the Rosenbrock-Wanner (ROW) schemes based on differential algebraic equations (DAEs) of index 2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they only require one approximate Jacobian matrix calculation every time step, thus considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. Numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index 2 can not only achieve the designed formal order of temporal convergence accuracy in a benchmark test, but also require significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.

  17. A comparative study of Rosenbrock-type and implicit Runge-Kutta time integration for discontinuous Galerkin method for unsteady 3D compressible Navier-Stokes equations

    DOE PAGES

    Liu, Xiaodong; Xia, Yidong; Luo, Hong; ...

    2016-10-05

    A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: 1) the explicit first stage, single diagonally implicit Runge-Kutta (ESDIRK3) scheme, and 2) the Rosenbrock-Wanner (ROW) schemes based on differential algebraic equations (DAEs) of index-2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they only require one approximate Jacobian matrix calculation per time step, thus considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. Numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index-2 not only achieves the designed formal order of temporal convergence accuracy in a benchmark test, but also requires significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.

  18. Quasideterministic generation of maximally entangled states of two mesoscopic atomic ensembles by adiabatic quantum feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Lisi, Antonio; De Siena, Silvio; Illuminati, Fabrizio

    2005-09-15

    We introduce an efficient, quasideterministic scheme to generate maximally entangled states of two atomic ensembles. The scheme is based on quantum nondemolition measurements of total atomic populations and on adiabatic quantum feedback conditioned on the measurement outputs. The high efficiency of the scheme is tested and confirmed numerically for ideal photodetection as well as in the presence of losses.

  19. A Search for Factors Causing Training Costs to Rise by Examining the U. S. Navy’s AT, AW, and AX Ratings during their First Enlistment Period

    DTIC Science & Technology

    1986-09-01

    Table of contents excerpt: HYPOTHESIS TEST; III. TIME TO GET RATED TWO FACTOR ANOVA RESULTS; IV. TIME TO GET RATED TUKEY'S PAIRED COMPARISON TEST RESULTS A; V. TIME TO GET RATED TUKEY'S PAIRED COMPARISON TEST RESULTS B; VI. SINGLE FACTOR ANOVA HYPOTHESIS TEST #1; VII. AT: TIME TO GET RATED ANOVA TEST RESULTS.

  20. Accurate Monotonicity-Preserving Schemes With Runge-Kutta Time Stepping

    NASA Technical Reports Server (NTRS)

    Suresh, A.; Huynh, H. T.

    1997-01-01

    A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. Numerical results for linear advection in one dimension as well as for the Euler equations confirm the high accuracy, good shock resolution, and computational efficiency of these schemes.

  1. Finite-volume scheme for anisotropic diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Es, Bram van, E-mail: bramiozo@gmail.com; FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research, The Netherlands; Koren, Barry

    In this paper, we apply a special finite-volume scheme, limited to smooth temperature distributions and Cartesian grids, to test the importance of connectivity of the finite volumes. The area of application is nuclear fusion plasma with field line aligned temperature gradients and extreme anisotropy. We apply the scheme to the anisotropic heat-conduction equation, and compare its results with those of existing finite-volume schemes for anisotropic diffusion. Also, we introduce a general model adaptation of the steady diffusion equation for extremely anisotropic diffusion problems with closed field lines.

  2. Sensory discrimination and intelligence: testing Spearman's other hypothesis.

    PubMed

    Deary, Ian J; Bell, P Joseph; Bell, Andrew J; Campbell, Mary L; Fazal, Nicola D

    2004-01-01

    At the centenary of Spearman's seminal 1904 article, his general intelligence hypothesis remains one of the most influential in psychology. Less well known is the article's other hypothesis that there is "a correspondence between what may provisionally be called 'General Discrimination' and 'General Intelligence' which works out with great approximation to one or absoluteness" (Spearman, 1904, p. 284). Studies that do not find high correlations between psychometric intelligence and single sensory discrimination tests do not falsify this hypothesis. This study is the first to address Spearman's general intelligence-general sensory discrimination hypothesis directly. It attempts to replicate his findings with a similar sample of schoolchildren. In a well-fitting structural equation model of the data, general intelligence and general discrimination correlated .92. In a reanalysis of data published by Acton and Schroeder (2001), general intelligence and general sensory ability correlated .68 in men and women. One hundred years after its conception, Spearman's other hypothesis achieves some confirmation. The association between general intelligence and general sensory ability remains to be replicated and explained.

  3. Dynamic test input generation for multiple-fault isolation

    NASA Technical Reports Server (NTRS)

    Schaefer, Phil

    1990-01-01

    Recent work in Causal Reasoning has provided practical techniques for multiple-fault diagnosis. These techniques provide a hypothesis/measurement diagnosis cycle. Using probabilistic methods, they choose the best measurements to make, then update fault hypotheses in response. For many applications such as computers and spacecraft, few measurement points may be accessible, or values may change quickly as the system under diagnosis operates. In these cases, a hypothesis/measurement cycle is insufficient. A technique is presented for a hypothesis/test-input/measurement diagnosis cycle. In contrast to generating tests a priori for determining device functionality, it dynamically generates tests in response to current knowledge about fault probabilities. It is shown how the mathematics previously used for measurement specification can be applied to the test input generation process. An example from an efficient implementation called Multi-Purpose Causal (MPC) is presented.

  4. A soft-hard combination-based cooperative spectrum sensing scheme for cognitive radio networks.

    PubMed

    Do, Nhu Tri; An, Beongku

    2015-02-13

    In this paper we propose a soft-hard combination scheme, called SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster based network in which Likelihood Ratio Test (LRT)-based soft combination is applied at each cluster, and weighted decision fusion rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: the structure of the SHC scheme reduces the complexity of cooperative detection which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of -15 dB). In addition, the computational complexity of the LRT is reduced since we derive the closed-form expression of the probability density function of LRT value. The SHC scheme also takes into account the different effects of large scale fading on different users in the wide area network. The simulation results show that the SHC scheme not only provides the better sensing performance compared to the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared to the conventional soft combination scheme using the LRT.
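
    The abstract does not give the LRT in closed form; the sketch below instead uses the standard energy detector, which coincides with the likelihood ratio test for a zero-mean Gaussian signal in Gaussian noise, to illustrate per-sensor soft-decision sensing at low SNR. The sample size, noise variance, and false-alarm target are illustrative assumptions, not parameters from the paper.

        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(0)
        N = 200           # samples per sensing window (assumed)
        sigma2 = 1.0      # known noise variance (assumed)
        pfa = 0.01        # target false-alarm probability (assumed)

        # Under H0 (noise only), T = sum(y^2)/sigma2 follows a chi-square law with N dof.
        threshold = chi2.ppf(1.0 - pfa, df=N)

        snr_db = -15.0
        signal_power = sigma2 * 10.0 ** (snr_db / 10.0)

        y_h0 = rng.normal(0.0, np.sqrt(sigma2), N)                  # primary user absent
        y_h1 = rng.normal(0.0, np.sqrt(sigma2 + signal_power), N)   # primary user present

        for label, y in (("H0", y_h0), ("H1", y_h1)):
            T = np.sum(y ** 2) / sigma2
            print(label, "statistic:", round(T, 1), "decide H1:", bool(T > threshold))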

  5. In silico experiment system for testing hypothesis on gene functions using three condition specific biological networks.

    PubMed

    Lee, Chai-Jin; Kang, Dongwon; Lee, Sangseon; Lee, Sunwon; Kang, Jaewoo; Kim, Sun

    2018-05-25

    Determining the functions of a gene requires time-consuming, expensive biological experiments. Scientists can speed up this experimental process if the literature information and biological networks can be adequately provided. In this paper, we present a web-based information system that can perform in silico experiments of computationally testing hypotheses on the function of a gene. A hypothesis that is specified in English by the user is converted to genes using a literature and knowledge mining system called BEST. Condition-specific TF, miRNA and PPI (protein-protein interaction) networks are automatically generated by projecting gene and miRNA expression data onto template networks. An in silico experiment then tests how well the target genes are connected from the knockout gene through the condition-specific networks. The test result visualizes paths from the knockout gene to the target genes in the three networks. Statistical and information-theoretic scores are provided on the resulting web page to help scientists either accept or reject the hypothesis being tested. Our web-based system was extensively tested using three data sets, namely E2f1, Lrrk2, and Dicer1 knockout data sets. We were able to reproduce gene functions reported in the original research papers. In addition, we comprehensively tested all disease names in MalaCards as hypotheses to show the effectiveness of our system. Our in silico experiment system can be very useful in suggesting biological mechanisms which can be further tested in vivo or in vitro. http://biohealth.snu.ac.kr/software/insilico/. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Application of the implicit MacCormack scheme to the PNS equations

    NASA Technical Reports Server (NTRS)

    Lawrence, S. L.; Tannehill, J. C.; Chaussee, D. S.

    1983-01-01

    The two-dimensional parabolized Navier-Stokes equations are solved using MacCormack's (1981) implicit finite-difference scheme. It is shown that this method for solving the parabolized Navier-Stokes equations does not require the inversion of block tridiagonal systems of algebraic equations and allows the original explicit scheme to be employed in those regions where implicit treatment is not needed. The finite-difference algorithm is discussed and the computational results for two laminar test cases are presented. Results obtained using this method for the case of a flat plate boundary layer are compared with those obtained using the conventional Beam-Warming scheme, as well as those obtained from a boundary layer code. The computed results for a more severe test of the method, the hypersonic flow past a 15 deg compression corner, are found to compare favorably with experiment and a numerical solution of the complete Navier-Stokes equations.

  7. A bee in the corridor: centering and wall-following

    NASA Astrophysics Data System (ADS)

    Serres, Julien R.; Masson, Guillaume P.; Ruffier, Franck; Franceschini, Nicolas

    2008-12-01

    In an attempt to better understand the mechanism underlying lateral collision avoidance in flying insects, we trained honeybees ( Apis mellifera) to fly through a large (95-cm wide) flight tunnel. We found that, depending on the entrance and feeder positions, honeybees would either center along the corridor midline or fly along one wall. Bees kept following one wall even when a major (150-cm long) part of the opposite wall was removed. These findings cannot be accounted for by the “optic flow balance” hypothesis that has been put forward to explain the typical bees’ “centering response” observed in narrower corridors. Both centering and wall-following behaviors are well accounted for, however, by a control scheme called the lateral optic flow regulator, i.e., a feedback system that strives to maintain the unilateral optic flow constant. The power of this control scheme is that it would allow the bee to guide itself visually in a corridor without having to measure its speed or distance from the walls.

  8. Heterogeneous Coupling between Interdependent Lattices Promotes the Cooperation in the Prisoner’s Dilemma Game

    PubMed Central

    Xia, Cheng-Yi; Meng, Xiao-Kun; Wang, Zhen

    2015-01-01

    In the research realm of game theory, interdependent networks have extended the content of spatial reciprocity, which needs the suitable coupling between networks. However, thus far, the vast majority of existing works just assume that the coupling strength between networks is symmetric. This hypothesis, to some extent, seems inconsistent with the ubiquitous observation of heterogeneity. Here, we study how the heterogeneous coupling strength, which characterizes the interdependency of utility between corresponding players of both networks, affects the evolution of cooperation in the prisoner’s dilemma game with two types of coupling schemes (symmetric and asymmetric ones). Compared with the traditional case, we show that heterogeneous coupling greatly promotes the collective cooperation. The symmetric scheme seems much better than the asymmetric case. Moreover, the role of varying amplitude of coupling strength is also studied on these two interdependent ways. Current findings are helpful for us to understand the evolution of cooperation within many real-world systems, in particular for the interconnected and interrelated systems. PMID:26102082

  9. A simple and efficient shear-flexible plate bending element

    NASA Technical Reports Server (NTRS)

    Chaudhuri, Reaz A.

    1987-01-01

    A shear-flexible triangular element formulation, which utilizes an assumed quadratic displacement potential energy approach and is numerically integrated using Gauss quadrature, is presented. The Reissner/Mindlin hypothesis of constant cross-sectional warping is directly applied to the three-dimensional elasticity theory to obtain a moderately thick-plate theory or constant shear-angle theory (CST), wherein the middle surface is no longer considered to be the reference surface and the two rotations are replaced by the two in-plane displacements as nodal variables. The resulting finite-element possesses 18 degrees of freedom (DOF). Numerical results are obtained for two different numerical integration schemes and a wide range of meshes and span-to-thickness ratios. These, when compared with available exact, series or finite-element solutions, demonstrate accuracy and rapid convergence characteristics of the present element. This is especially true in the case of thin to very thin plates, when the present element, used in conjunction with the reduced integration scheme, outperforms its counterpart, based on discrete Kirchhoff constraint theory (DKT).

  10. Methodology of problem-based learning engineering and technology and of its implementation with modern computer resources

    NASA Astrophysics Data System (ADS)

    Lebedev, A. A.; Ivanova, E. G.; Komleva, V. A.; Klokov, N. M.; Komlev, A. A.

    2017-01-01

    The considered method of learning the basics of microelectronic amplifier circuits and systems enables one to understand electrical processes more deeply, to grasp the relationship between static and dynamic characteristics and, finally, to bring the learning process closer to the cognitive process. The scheme of problem-based learning can be represented by the following sequence of procedures: a contradiction is perceived and revealed; cognitive motivation is provided by creating a problematic situation (a mental state of the student) that drives the desire to solve the problem and to raise the question "why?"; a hypothesis is formulated; solutions are searched for; an answer is found. Due to the complexity of the circuit architectures, modern methods of computer analysis and synthesis are also considered. Examples are given of analog circuits with improved performance engineered by students in the framework of student research work, using standard software and software developed at the Department of Microelectronics, MEPhI.

  11. Heterogeneous Coupling between Interdependent Lattices Promotes the Cooperation in the Prisoner's Dilemma Game.

    PubMed

    Xia, Cheng-Yi; Meng, Xiao-Kun; Wang, Zhen

    2015-01-01

    In the research realm of game theory, interdependent networks have extended the content of spatial reciprocity, which needs the suitable coupling between networks. However, thus far, the vast majority of existing works just assume that the coupling strength between networks is symmetric. This hypothesis, to some extent, seems inconsistent with the ubiquitous observation of heterogeneity. Here, we study how the heterogeneous coupling strength, which characterizes the interdependency of utility between corresponding players of both networks, affects the evolution of cooperation in the prisoner's dilemma game with two types of coupling schemes (symmetric and asymmetric ones). Compared with the traditional case, we show that heterogeneous coupling greatly promotes the collective cooperation. The symmetric scheme seems much better than the asymmetric case. Moreover, the role of varying amplitude of coupling strength is also studied on these two interdependent ways. Current findings are helpful for us to understand the evolution of cooperation within many real-world systems, in particular for the interconnected and interrelated systems.

  12. Integrating disparate lidar data at the national scale to assess the relationships between height above ground, land cover and ecoregions

    USGS Publications Warehouse

    Stoker, Jason M.; Cochrane, Mark A.; Roy, David P.

    2013-01-01

    With the acquisition of lidar data for over 30 percent of the US, it is now possible to assess the three-dimensional distribution of features at the national scale. This paper integrates over 350 billion lidar points from 28 disparate datasets into a national-scale database and evaluates whether height above ground is an important variable in the context of other national-scale layers, such as the US Geological Survey National Land Cover Database and the US Environmental Protection Agency ecoregions maps. While the results were not homoscedastic and the available data did not allow for a complete height census in any of the classes, it does appear that where lidar data were used, there were detectable differences in heights among many of these national classification schemes. This study supports the hypothesis that there were real, detectable differences in heights in certain national-scale classification schemes, despite height not being a variable used in any of the classification routines.

  13. Killeen's (2005) p_rep Coefficient: Logical and Mathematical Problems

    ERIC Educational Resources Information Center

    Maraun, Michael; Gabriel, Stephanie

    2010-01-01

    In his article, "An Alternative to Null-Hypothesis Significance Tests," Killeen (2005) urged the discipline to abandon the practice of p_obs-based null hypothesis testing and to quantify the signal-to-noise characteristics of experimental outcomes with replication probabilities. He described the coefficient that he…

  14. Using VITA Service Learning Experiences to Teach Hypothesis Testing and P-Value Analysis

    ERIC Educational Resources Information Center

    Drougas, Anne; Harrington, Steve

    2011-01-01

    This paper describes a hypothesis testing project designed to capture student interest and stimulate classroom interaction and communication. Using an online survey instrument, the authors collected student demographic information and data regarding university service learning experiences. Introductory statistics students performed a series of…

  15. A Rational Analysis of the Selection Task as Optimal Data Selection.

    ERIC Educational Resources Information Center

    Oaksford, Mike; Chater, Nick

    1994-01-01

    Experimental data on human reasoning in hypothesis-testing tasks is reassessed in light of a Bayesian model of optimal data selection in inductive hypothesis testing. The rational analysis provided by the model suggests that reasoning in such tasks may be rational rather than subject to systematic bias. (SLD)

  16. Random Effects Structure for Confirmatory Hypothesis Testing: Keep It Maximal

    ERIC Educational Resources Information Center

    Barr, Dale J.; Levy, Roger; Scheepers, Christoph; Tily, Harry J.

    2013-01-01

    Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the…

  17. The effects of rater bias and assessment method used to estimate disease severity on hypothesis testing

    USDA-ARS?s Scientific Manuscript database

    The effects of bias (over and underestimates) in estimates of disease severity on hypothesis testing using different assessment methods was explored. Nearest percent estimates (NPE), the Horsfall-Barratt (H-B) scale, and two different linear category scales (10% increments, with and without addition...

  18. A Multivariate Test of the Bott Hypothesis in an Urban Irish Setting

    ERIC Educational Resources Information Center

    Gordon, Michael; Downing, Helen

    1978-01-01

    Using a sample of 686 married Irish women in Cork City the Bott hypothesis was tested, and the results of a multivariate regression analysis revealed that neither network connectedness nor the strength of the respondent's emotional ties to the network had any explanatory power. (Author)

  19. Polarization, Definition, and Selective Media Learning.

    ERIC Educational Resources Information Center

    Tichenor, P. J.; And Others

    The traditional hypothesis that extreme attitudinal positions on controversial issues are likely to produce low understanding of messages on these issues--especially when the messages represent opposing views--is tested. Data for test of the hypothesis are from two field studies, each dealing with reader attitudes and decoding of one news article…

  20. The Lasting Effects of Introductory Economics Courses.

    ERIC Educational Resources Information Center

    Sanders, Philip

    1980-01-01

    Reports research which tests the Stigler Hypothesis. The hypothesis suggests that students who have taken introductory economics courses and those who have not show little difference in test performance five years after completing college. Results of the author's research illustrate that economics students do retain some knowledge of economics…

  1. Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis

    NASA Astrophysics Data System (ADS)

    Střelec, Luboš

    2011-09-01

    The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the efficient markets hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk, i.e. that returns are realizations of an IID sequence of random variables. Consequently, for verifying the weak form of the efficient market hypothesis, we can use, among others, distribution tests, i.e. tests of normality and/or graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (i.e. a test based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, this test has a zero breakdown value [2]. In other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study. Consequently, novel robust procedures for testing normality are presented in this paper to overcome the shortcomings of classical normality tests on financial data, which typically exhibit remote data points (outliers) and other deviations from normality. This study also discusses results of simulation power studies of these tests for normality against selected alternatives. Based on the outcome of the power simulation study, selected normality tests were then used to verify the weak form of efficiency in Central European stock markets.
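
    As a small illustration of the classical omnibus tests named above (not the robust variants proposed in the paper), the sketch below applies the Shapiro-Wilk and Jarque-Bera tests to simulated heavy-tailed daily returns with a few injected outliers; the data and parameters are invented for illustration.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        # Heavy-tailed returns plus a few remote data points standing in for outliers.
        returns = 0.01 * stats.t.rvs(df=4, size=1000, random_state=rng)
        returns[:3] = [0.15, -0.12, 0.18]

        sw_stat, sw_p = stats.shapiro(returns)        # Shapiro-Wilk omnibus test
        jb_stat, jb_p = stats.jarque_bera(returns)    # Jarque-Bera (skewness/kurtosis based)

        print(f"Shapiro-Wilk: W = {sw_stat:.4f}, p = {sw_p:.2e}")
        print(f"Jarque-Bera:  JB = {jb_stat:.1f}, p = {jb_p:.2e}")
        # Rejecting normality argues against the IID-Gaussian random-walk model,
        # though it is not by itself a rejection of weak-form efficiency.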

  2. Strategies for implementing genomic selection for feed efficiency in dairy cattle breeding schemes.

    PubMed

    Wallén, S E; Lillehammer, M; Meuwissen, T H E

    2017-08-01

    Alternative genomic selection and traditional BLUP breeding schemes were compared for the genetic improvement of feed efficiency in simulated Norwegian Red dairy cattle populations. The change in genetic gain over time and achievable selection accuracy were studied for milk yield and residual feed intake, as a measure of feed efficiency. When including feed efficiency in genomic BLUP schemes, it was possible to achieve high selection accuracies for genomic selection, and all genomic BLUP schemes gave better genetic gain for feed efficiency than BLUP using a pedigree relationship matrix. However, introducing a second trait in the breeding goal caused a reduction in the genetic gain for milk yield. When using contracted test herds with genotyped and feed efficiency recorded cows as a reference population, adding an additional 4,000 new heifers per year to the reference population gave accuracies that were comparable to a male reference population that used progeny testing with 250 daughters per sire. When the test herd consisted of 500 or 1,000 cows, lower genetic gain was found than using progeny test records to update the reference population. It was concluded that to improve difficult to record traits, the use of contracted test herds that had additional recording (e.g., measurements required to calculate feed efficiency) is a viable option, possibly through international collaborations. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  3. Implementation of the high-order schemes QUICK and LECUSSO in the COMMIX-1C Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakai, K.; Sun, J.G.; Sha, W.T.

    Multidimensional analysis computer programs based on the finite volume method, such as COMMIX-1C, have been commonly used to simulate thermal-hydraulic phenomena in engineering systems such as nuclear reactors. In COMMIX-1C, first-order schemes with respect to both space and time are used. In many situations, such as flow recirculations and stratifications with steep gradients of the velocity and temperature fields, however, high-order difference schemes are necessary for an accurate prediction of the fields. For these reasons, two second-order finite difference numerical schemes, QUICK (Quadratic Upstream Interpolation for Convective Kinematics) and LECUSSO (Local Exact Consistent Upwind Scheme of Second Order), have been implemented in the COMMIX-1C computer code. The formulations were derived for general three-dimensional flows with nonuniform grid sizes. Numerical oscillation analyses for QUICK and LECUSSO were performed. To damp the unphysical oscillations which occur in calculations with high-order schemes at high mesh Reynolds numbers, a new FRAM (Filtering Remedy and Methodology) scheme was developed and implemented. To be consistent with the high-order schemes, the pressure equation and the boundary conditions for all the conservation equations were also modified to be of second order. The new capabilities in the code are listed. Test calculations were performed to validate the implementation of the high-order schemes. They include the test of the one-dimensional nonlinear Burgers equation, two-dimensional scalar transport in two impinging streams, von Kármán vortex shedding, shear-driven cavity flow, Couette flow, and circular pipe flow. The calculated results were compared with available data; the agreement is good.
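
    For reference, a sketch of the QUICK face interpolation on a uniform one-dimensional grid (the code's actual implementation handles nonuniform three-dimensional grids and adds the FRAM filter, none of which is reproduced here); the sample data are arbitrary.

        import numpy as np

        def quick_face_value(phi_u, phi_c, phi_d):
            """QUICK face value for flow from cell C toward cell D, with U the
            far-upstream cell (uniform grid):
                phi_face = 6/8*phi_C + 3/8*phi_D - 1/8*phi_U
            """
            return 0.75 * phi_c + 0.375 * phi_d - 0.125 * phi_u

        phi = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # cell-centred values, linear profile
        # Face between cells 2 and 3 with positive velocity: U = cell 1, C = cell 2, D = cell 3.
        print(quick_face_value(phi[1], phi[2], phi[3]))   # 2.5: exact for linear data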

  4. Concerns regarding a call for pluralism of information theory and hypothesis testing

    USGS Publications Warehouse

    Lukacs, P.M.; Thompson, W.L.; Kendall, W.L.; Gould, W.R.; Doherty, P.F.; Burnham, K.P.; Anderson, D.R.

    2007-01-01

    1. Stephens et al. (2005) argue for 'pluralism' in statistical analysis, combining null hypothesis testing and information-theoretic (I-T) methods. We show that I-T methods are more informative even in single-variable problems and we provide an ecological example. 2. I-T methods allow inferences to be made from multiple models simultaneously. We believe multimodel inference is the future of data analysis, which cannot be achieved with null hypothesis-testing approaches. 3. We argue for a stronger emphasis on critical thinking in science in general and less reliance on exploratory data analysis and data dredging. Deriving alternative hypotheses is central to science; deriving a single interesting science hypothesis and then comparing it to a default null hypothesis (e.g. 'no difference') is not an efficient strategy for gaining knowledge. We think this single-hypothesis strategy has been relied upon too often in the past. 4. We clarify misconceptions presented by Stephens et al. (2005). 5. We think inference should be made about models, directly linked to scientific hypotheses, and their parameters conditioned on data, Prob(Hj | data). I-T methods provide a basis for this inference. Null hypothesis testing merely provides a probability statement about the data conditioned on a null model, Prob(data | H0). 6. Synthesis and applications. I-T methods provide a more informative approach to inference. I-T methods provide a direct measure of evidence for or against hypotheses and a means to consider simultaneously multiple hypotheses as a basis for rigorous inference. Progress in our science can be accelerated if modern methods can be used intelligently; this includes various I-T and Bayesian methods.
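
    As a concrete illustration of the I-T approach advocated here (with invented data and models, not the authors' ecological example), the sketch below fits an intercept-only model and a linear-trend model to the same data and converts the AIC difference into Akaike weights, i.e. relative evidence for each hypothesis rather than a p-value against a null.

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(0, 10, 50)
        y = 2.0 + 0.5 * x + rng.normal(0, 1.0, x.size)   # data generated with a trend

        def gaussian_aic(y, yhat, k):
            """AIC for a least-squares fit with k parameters (including sigma)."""
            n = y.size
            rss = np.sum((y - yhat) ** 2)
            loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
            return 2 * k - 2 * loglik

        # Hypothesis 1: intercept-only ("no effect of x")
        aic_null = gaussian_aic(y, np.full_like(y, y.mean()), k=2)
        # Hypothesis 2: linear effect of x
        coef = np.polyfit(x, y, 1)
        aic_linear = gaussian_aic(y, np.polyval(coef, x), k=3)

        delta = np.array([aic_null, aic_linear]) - min(aic_null, aic_linear)
        weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
        print("Akaike weights [intercept-only, linear]:", np.round(weights, 3))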

  5. Testing quantum mechanics against macroscopic realism using the output of χ(2) nonlinearity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podoshvedov, Sergey A.; Kim, Jaewan

    2006-09-15

    We suggest an all-optical scheme to generate entangled superposition of a single photon with macroscopic entangled states for testing macroscopic realism. The scheme consists of a source of single photons, a Mach-Zehnder interferometer in the arms of which a system of coupled down-converters with type-I phase matching is inserted, and a beam splitter for the other auxiliary modes of the scheme. We use quantization of the pumping modes, depletion of the coherent states passing through the system, and an interference effect in the pumping modes in the process of erasing which-path information of the single photon on exit from the Mach-Zehnder interferometer. We show that the macroscopic fields of the output superposition are distinguishable states. This scheme generates a macroscopic entangled state that violates Bell's inequality. Moreover, a detailed analysis concerning the change of amplitudes of the entangled superposition by means of repeating this process many times is accomplished. We show that our scheme works without photon-number-resolving detection and is robust to detector inefficiency.

  6. Application of Central Upwind Scheme for Solving Special Relativistic Hydrodynamic Equations

    PubMed Central

    Yousaf, Muhammad; Ghaffar, Tayabia; Qamar, Shamsul

    2015-01-01

    The accurate modeling of various features in high energy astrophysical scenarios requires the solution of the Einstein equations together with those of special relativistic hydrodynamics (SRHD). Such models are more complicated than the non-relativistic ones due to the nonlinear relations between the conserved and state variables. A high-resolution shock-capturing central upwind scheme is implemented to solve the given set of equations. The proposed technique uses the precise information of local propagation speeds to avoid the excessive numerical diffusion. The second order accuracy of the scheme is obtained with the use of MUSCL-type initial reconstruction and Runge-Kutta time stepping method. After a discussion of the equations solved and of the techniques employed, a series of one and two-dimensional test problems are carried out. To validate the method and assess its accuracy, the staggered central and the kinetic flux-vector splitting schemes are also applied to the same model. The scheme is robust and efficient. Its results are comparable to those obtained from the sophisticated algorithms, even in the case of highly relativistic two-dimensional test problems. PMID:26070067

  7. Single-cone finite-difference schemes for the (2+1)-dimensional Dirac equation in general electromagnetic textures

    NASA Astrophysics Data System (ADS)

    Pötz, Walter

    2017-11-01

    A single-cone finite-difference lattice scheme is developed for the (2+1)-dimensional Dirac equation in presence of general electromagnetic textures. The latter is represented on a (2+1)-dimensional staggered grid using a second-order-accurate finite difference scheme. A Peierls-Schwinger substitution to the wave function is used to introduce the electromagnetic (vector) potential into the Dirac equation. Thereby, the single-cone energy dispersion and gauge invariance are carried over from the continuum to the lattice formulation. Conservation laws and stability properties of the formal scheme are identified by comparison with the scheme for zero vector potential. The placement of magnetization terms is inferred from consistency with the one for the vector potential. Based on this formal scheme, several numerical schemes are proposed and tested. Elementary examples for single-fermion transport in the presence of in-plane magnetization are given, using material parameters typical for topological insulator surfaces.

  8. An Energy Efficient Cooperative Hierarchical MIMO Clustering Scheme for Wireless Sensor Networks

    PubMed Central

    Nasim, Mehwish; Qaisar, Saad; Lee, Sungyoung

    2012-01-01

    In this work, we present an energy efficient hierarchical cooperative clustering scheme for wireless sensor networks. Communication cost is a crucial factor in depleting the energy of sensor nodes. In the proposed scheme, nodes cooperate to form clusters at each level of network hierarchy ensuring maximal coverage and minimal energy expenditure with relatively uniform distribution of load within the network. Performance is enhanced by cooperative multiple-input multiple-output (MIMO) communication ensuring energy efficiency for WSN deployments over large geographical areas. We test our scheme using TOSSIM and compare the proposed scheme with cooperative multiple-input multiple-output (CMIMO) clustering scheme and traditional multihop Single-Input-Single-Output (SISO) routing approach. Performance is evaluated on the basis of number of clusters, number of hops, energy consumption and network lifetime. Experimental results show significant energy conservation and increase in network lifetime as compared to existing schemes. PMID:22368459

  9. A New Built-in Self Test Scheme for Phase-Locked Loops Using Internal Digital Signals

    NASA Astrophysics Data System (ADS)

    Kim, Youbean; Kim, Kicheol; Kim, Incheol; Kang, Sungho

    Testing PLLs (phase-locked loops) is becoming an important issue that affects both time-to-market and production cost of electronic systems. Though a PLL is the most common mixed-signal building block, it is very difficult to test due to internal analog blocks and signals. In this paper, we propose a new PLL BIST (built-in self test) using the distorted frequency detector that uses only internal digital signals. The proposed BIST does not need to load any analog nodes of the PLL. Therefore, it provides an efficient defect-oriented structural test scheme, reduced area overhead, and improved test quality compared with previous approaches.

  10. High-Resolution NU-WRF Simulations of a Deep Convective-Precipitation System During MC3E. Part 1: Comparisons Between Goddard Microphysics Schemes and Observations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Wu, Di; Lang, Stephen; Chern, Jiundar; Peters-Lidard, Christa; Fridlind, Ann; Matsui, Toshihisa

    2015-01-01

    The Goddard microphysics scheme was recently improved by adding a 4th ice class (frozen drops/hail). This new 4ICE scheme was implemented and tested in the Goddard Cumulus Ensemble model (GCE) for an intense continental squall line and a moderate, less-organized continental case. Simulated peak radar reflectivity profiles were improved both in intensity and shape for both cases, as were the overall reflectivity probability distributions versus observations. In this study, the new Goddard 4ICE scheme is implemented into the regional-scale NASA Unified Weather Research and Forecasting model (NU-WRF) and tested on an intense mesoscale convective system that occurred during the Midlatitude Continental Convective Clouds Experiment (MC3E). The NU-WRF simulated radar reflectivities, rainfall intensities, and vertical and horizontal structure using the new 4ICE scheme agree as well as or significantly better with observations than when using previous versions of the Goddard 3ICE (graupel or hail) schemes. In the 4ICE scheme, the bin microphysics-based rain evaporation correction produces more erect convective cores, while modification of the unrealistic collection of ice by dry hail produces narrow and intense cores, allowing more slow-falling snow to be transported rearward. Together with a revised snow size mapping, the 4ICE scheme produces a more horizontally stratified trailing stratiform region with a broad, more coherent light rain area. In addition, the NU-WRF 4ICE simulated radar reflectivity distributions are consistent with and generally superior to those using the GCE due to the less restrictive open lateral boundaries.

  11. WENO schemes on arbitrary mixed-element unstructured meshes in three space dimensions

    NASA Astrophysics Data System (ADS)

    Tsoutsanis, P.; Titarev, V. A.; Drikakis, D.

    2011-02-01

    The paper extends weighted essentially non-oscillatory (WENO) methods to three dimensional mixed-element unstructured meshes, comprising tetrahedral, hexahedral, prismatic and pyramidal elements. Numerical results illustrate the convergence rates and non-oscillatory properties of the schemes for various smooth and discontinuous solutions test cases and the compressible Euler equations on various types of grids. Schemes of up to fifth order of spatial accuracy are considered.

  12. On the Total Variation of High-Order Semi-Discrete Central Schemes for Conservation Laws

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron

    2004-01-01

    We discuss a new fifth-order, semi-discrete, central-upwind scheme for solving one-dimensional systems of conservation laws. This scheme combines a fifth-order WENO reconstruction, a semi-discrete central-upwind numerical flux, and a strong stability preserving Runge-Kutta method. We test our method with various examples, and give particular attention to the evolution of the total variation of the approximations.
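
    For reference, a minimal sketch of the strong-stability-preserving Runge-Kutta step referred to above, in the classical three-stage Shu-Osher form; the fifth-order WENO reconstruction and the central-upwind flux are abstracted into a generic semi-discrete operator L and are not reproduced here.

        def ssp_rk3_step(u, dt, L):
            """One third-order strong-stability-preserving RK step (Shu-Osher form).
            u: current solution array; dt: time step; L: callable with du/dt = L(u)."""
            u1 = u + dt * L(u)
            u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
            return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

    Each stage is a convex combination of forward-Euler updates, which is what lets the time integrator inherit the stability (e.g. total-variation) bounds of the underlying spatial scheme.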

  13. AN ADVANCED LEAKAGE SCHEME FOR NEUTRINO TREATMENT IN ASTROPHYSICAL SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perego, A.; Cabezón, R. M.; Käppeli, R., E-mail: albino.perego@physik.tu-darmstadt.de

    We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows a very good qualitative and a partial quantitative agreement for key quantities from collapse to a few hundreds of milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme, coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.

  14. Benchmarking and the laboratory

    PubMed Central

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  15. Testing the status-legitimacy hypothesis: A multilevel modeling approach to the perception of legitimacy in income distribution in 36 nations.

    PubMed

    Caricati, Luca

    2017-01-01

    The status-legitimacy hypothesis was tested by analyzing cross-national data about social inequality. Several indicators were used as indexes of social advantage: social class, personal income, and self-position in the social hierarchy. Moreover, inequality and freedom in nations, as indexed by Gini and by the human freedom index, were considered. Results from 36 nations worldwide showed no support for the status-legitimacy hypothesis. The perception that income distribution was fair tended to increase as social advantage increased. Moreover, national context increased the difference between advantaged and disadvantaged people in the perception of social fairness: Contrary to the status-legitimacy hypothesis, disadvantaged people were more likely than advantaged people to perceive income distribution as too large, and this difference increased in nations with greater freedom and equality. The implications for the status-legitimacy hypothesis are discussed.

  16. Tests of the Giant Impact Hypothesis

    NASA Technical Reports Server (NTRS)

    Jones, J. H.

    1998-01-01

    The giant impact hypothesis has gained popularity as a means of explaining a volatile-depleted Moon that still has a chemical affinity to the Earth. As Taylor's Axiom decrees, the best models of lunar origin are testable, but this is difficult with the giant impact model. The energy associated with the impact would be sufficient to totally melt and partially vaporize the Earth. And this means that there should be no geological vestige of earlier times. Accordingly, it is important to devise tests that may be used to evaluate the giant impact hypothesis. Three such tests are discussed here. None of these is supportive of the giant impact model, but neither do they disprove it.

  17. Genetics and recent human evolution.

    PubMed

    Templeton, Alan R

    2007-07-01

    Starting with "mitochondrial Eve" in 1987, genetics has played an increasingly important role in studies of the last two million years of human evolution. It initially appeared that genetic data resolved the basic models of recent human evolution in favor of the "out-of-Africa replacement" hypothesis in which anatomically modern humans evolved in Africa about 150,000 years ago, started to spread throughout the world about 100,000 years ago, and subsequently drove to complete genetic extinction (replacement) all other human populations in Eurasia. Unfortunately, many of the genetic studies on recent human evolution have suffered from scientific flaws, including misrepresenting the models of recent human evolution, focusing upon hypothesis compatibility rather than hypothesis testing, committing the ecological fallacy, and failing to consider a broader array of alternative hypotheses. Once these flaws are corrected, there is actually little genetic support for the out-of-Africa replacement hypothesis. Indeed, when genetic data are used in a hypothesis-testing framework, the out-of-Africa replacement hypothesis is strongly rejected. The model of recent human evolution that emerges from a statistical hypothesis-testing framework does not correspond to any of the traditional models of human evolution, but it is compatible with fossil and archaeological data. These studies also reveal that any one gene or DNA region captures only a small part of human evolutionary history, so multilocus studies are essential. As more and more loci became available, genetics will undoubtedly offer additional insights and resolutions of human evolution.

  18. Composite scheme using localized relaxation with non-standard finite difference method for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Kumar, Vivek; Raghurama Rao, S. V.

    2008-04-01

    Non-standard finite difference methods (NSFDM) introduced by Mickens [ Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994] are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws, by a novel utilization of the decoupled equations using characteristic variables. In the second part of this paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature in capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers-Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791-797] recently introduced a NSFDM in conservative form. This method captures the shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy to this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal of Numerical Analysis 35 (6) (1998) 2250-2271] in which the accurate NSFDM is used as the basic scheme and localized relaxation NSFDM is used as the supporting scheme which acts like a filter. Relaxation schemes introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications in Pure and Applied Mathematics 48 (1995) 235-276] are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term. The relaxation parameter ( λ) is chosen locally on the three point stencil of grid which makes the proposed method more efficient. This composite scheme overcomes the problem of unphysical expansion shocks and captures the shock waves with an accuracy better than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons with popular numerical methods like Roe scheme and ENO schemes.

  19. A survey of nested grid techniques and their potential for use within the MASS weather prediction model

    NASA Technical Reports Server (NTRS)

    Koch, Steven E.; Mcqueen, Jeffery T.

    1987-01-01

    A survey of various one- and two-way interactive nested grid techniques used in hydrostatic numerical weather prediction models is presented and the advantages and disadvantages of each method are discussed. The techniques for specifying the lateral boundary conditions for each nested grid scheme are described in detail. Averaging and interpolation techniques used when applying the coarse mesh grid (CMG) and fine mesh grid (FMG) interface conditions during two-way nesting are discussed separately. The survey shows that errors are commonly generated at the boundary between the CMG and FMG due to boundary formulation or specification discrepancies. Methods used to control this noise include application of smoothers, enhanced diffusion, or damping-type time integration schemes to model variables. The results from this survey provide the information needed to decide which one-way and two-way nested grid schemes merit future testing with the Mesoscale Atmospheric Simulation System (MASS) model. An analytically specified baroclinic wave will be used to conduct systematic tests of the chosen schemes since this will allow for objective determination of the interfacial noise in the kind of meteorological setting for which MASS is designed. Sample diagnostic plots from initial tests using the analytic wave are presented to illustrate how the model-generated noise is ascertained. These plots will be used to compare the accuracy of the various nesting schemes when incorporated into the MASS model.

  20. Age Dedifferentiation Hypothesis: Evidence from the WAIS III.

    ERIC Educational Resources Information Center

    Juan-Espinosa, Manuel; Garcia, Luis F.; Escorial, Sergio; Rebollo, Irene; Colom, Roberto; Abad, Francisco J.

    2002-01-01

    Used the Spanish standardization of the Wechsler Adult Intelligence Scale III (WAIS III) (n=1,369) to test the age dedifferentiation hypothesis. Results show no changes in the percentage of variance accounted for by "g" and four group factors when restriction of range is controlled. Discusses an age indifferentiation hypothesis. (SLD)

  1. Development and evaluation of a multi-locus sequence typing scheme for Mycoplasma synoviae.

    PubMed

    Dijkman, R; Feberwee, A; Landman, W J M

    2016-08-01

    Reproducible molecular Mycoplasma synoviae typing techniques with sufficient discriminatory power may help to expand knowledge on its epidemiology and contribute to the improvement of control and eradication programmes of this mycoplasma species. The present study describes the development and validation of a novel multi-locus sequence typing (MLST) scheme for M. synoviae. Thirteen M. synoviae isolates originating from different poultry categories, farms and lesions, were subjected to whole genome sequencing. Their sequences were compared to that of M. synoviae reference strain MS53. A high number of single nucleotide polymorphisms (SNPs) indicating considerable genetic diversity were identified. SNPs were present in over 40 putative target genes for MLST of which five target genes were selected (nanA, uvrA, lepA, ruvB and ugpA) for the MLST scheme. This scheme was evaluated analysing 209 M. synoviae samples from different countries, categories of poultry, farms and lesions. Eleven clonal clusters and 76 different sequence types (STs) were obtained. Clustering occurred following geographical origin, supporting the hypothesis of regional population evolution. M. synoviae samples obtained from epidemiologically linked outbreaks often harboured the same ST. In contrast, multiple M. synoviae lineages were found in samples originating from swollen joints or oviducts from hens that produce eggs with eggshell apex abnormalities indicating that further research is needed to identify the genetic factors of M. synoviae that may explain its variations in tissue tropism and disease inducing potential. Furthermore, MLST proved to have a higher discriminatory power compared to variable lipoprotein and haemagglutinin A typing, which generated 50 different genotypes on the same database.
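
    To make the MLST idea concrete: each isolate is reduced to an allele number at each of the five loci, and the ordered allele profile defines the sequence type (ST). The allele numbers and ST labels below are invented for illustration and are not part of the published M. synoviae scheme.

        # Hypothetical allele profiles: one allele number per locus, in this fixed order.
        LOCI = ("nanA", "uvrA", "lepA", "ruvB", "ugpA")

        st_database = {
            (1, 1, 2, 1, 3): "ST-1",
            (1, 2, 2, 1, 3): "ST-2",
            (4, 1, 5, 2, 3): "ST-3",
        }

        def assign_st(profile, database):
            """Return the sequence type for an allele profile, or flag a novel ST."""
            return database.get(tuple(profile), "novel ST (submit to curator)")

        isolate = {"nanA": 1, "uvrA": 2, "lepA": 2, "ruvB": 1, "ugpA": 3}
        profile = [isolate[locus] for locus in LOCI]
        print(assign_st(profile, st_database))   # -> ST-2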

  2. Hypothesis tests for the detection of constant speed radiation moving sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir

    2015-07-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, which are inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated means and variances of the signals delivered by the different channels have shown significant gains in terms of the tradeoff between detection sensitivity and false-alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that, in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive background, and a vehicle source carrier under the same respectively high and low count rate radioactive background, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter for signal-to-noise ratio variations between 2 and 0.8. (authors)
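
    A minimal sketch of a Poisson-based count test of the kind the abstract refers to (single channel, single window): under H0 the count is Poisson with the assumed known background mean, and an alarm is raised when the one-sided tail probability falls below a preset false-alarm level. The rates, window, and threshold are illustrative assumptions; the paper's temporally correlated multi-channel formulation is not reproduced.

        from scipy.stats import poisson

        background_rate = 50.0    # expected background counts per second (assumed)
        window = 1.0              # integration window in seconds (assumed)
        alpha = 1e-3              # false-alarm probability per test (assumed)

        mu0 = background_rate * window
        observed = 78             # counts registered in the window

        # One-sided exact Poisson test of H0: mean = mu0 against H1: mean > mu0.
        p_value = poisson.sf(observed - 1, mu0)   # P(N >= observed | H0)
        print(f"p-value = {p_value:.3e}, alarm = {p_value < alpha}")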

  3. Multiple Hypothesis Testing for Experimental Gingivitis Based on Wilcoxon Signed Rank Statistics

    PubMed Central

    Preisser, John S.; Sen, Pranab K.; Offenbacher, Steven

    2011-01-01

    Dental research often involves repeated multivariate outcomes on a small number of subjects for which there is interest in identifying outcomes that exhibit change in their levels over time as well as to characterize the nature of that change. In particular, periodontal research often involves the analysis of molecular mediators of inflammation for which multivariate parametric methods are highly sensitive to outliers and deviations from Gaussian assumptions. In such settings, nonparametric methods may be favored over parametric ones. Additionally, there is a need for statistical methods that control an overall error rate for multiple hypothesis testing. We review univariate and multivariate nonparametric hypothesis tests and apply them to longitudinal data to assess changes over time in 31 biomarkers measured from the gingival crevicular fluid in 22 subjects whereby gingivitis was induced by temporarily withholding tooth brushing. To identify biomarkers that can be induced to change, multivariate Wilcoxon signed rank tests for a set of four summary measures based upon area under the curve are applied for each biomarker and compared to their univariate counterparts. Multiple hypothesis testing methods with choice of control of the false discovery rate or strong control of the family-wise error rate are examined. PMID:21984957
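
    A small sketch of the univariate building block, on simulated data: a Wilcoxon signed rank test per biomarker on paired baseline versus induced-gingivitis measurements, followed by Benjamini-Hochberg control of the false discovery rate across the 31 biomarkers. The multivariate signed rank tests on area-under-the-curve summaries used in the paper are not reproduced.

        import numpy as np
        from scipy.stats import wilcoxon
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(7)
        n_subjects, n_biomarkers = 22, 31

        baseline = rng.lognormal(mean=0.0, sigma=1.0, size=(n_subjects, n_biomarkers))
        induced = baseline * rng.lognormal(mean=0.2, sigma=0.5, size=(n_subjects, n_biomarkers))

        p_values = np.array([
            wilcoxon(induced[:, j], baseline[:, j]).pvalue   # paired signed rank test
            for j in range(n_biomarkers)
        ])

        reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
        print(f"{reject.sum()} of {n_biomarkers} biomarkers change (FDR 5%)")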

  4. Evaluation of temperature history of a spherical nanosystem irradiated with various short-pulse laser sources

    NASA Astrophysics Data System (ADS)

    Lahiri, Arnab; Mondal, Pranab K.

    2018-04-01

    The spatiotemporal thermal response and the characteristics of the net entropy production rate of a gold nanosphere (radius: 50-200 nm), subjected to a short-pulse, femtosecond laser, are reported. In order to correctly illustrate the temperature history of laser-metal interaction(s) at picosecond transients with a comprehensive single macroscale temperature definition, and to further understand how the thermophysical responses of the single-phase lag (SPL) and dual-phase lag (DPL) frameworks (with various lag ratios) differ, governing energy equations derived from these benchmark non-Fourier frameworks are numerically solved and a thermodynamic assessment under both the classical irreversible thermodynamics (CIT) and extended irreversible thermodynamics (EIT) frameworks is subsequently carried out. Under the SPL and DPL frameworks with small lag ratio, thermophysical anomalies such as temperature overshooting, characterized by an adverse temperature gradient, are observed to violate the local thermodynamic equilibrium (LTE) hypothesis. The EIT framework, however, justifies the compatibility of temperature overshooting with the second law of thermodynamics under a nonequilibrium paradigm. The DPL framework with higher lag ratio was, however, observed to remain free from temperature overshooting and shows suitable consistency with the LTE hypothesis. In order to solve the dimensional non-Fourier governing energy equation with volumetric laser-irradiation source term(s), the lattice Boltzmann method (LBM) is extended and a three-time-level, fully implicit, second-order accurate finite difference method (FDM) is illustrated. For all situations under observation, the LBM scheme is found to be computationally superior to the remaining FDM schemes. With detailed prediction of the maximum temperature rise and the corresponding peaking time by all the numerical schemes, the effects of the nanosphere radius, the laser fluence, and laser irradiation with multiple pulses on thermal energy transport and lagging behavior (if any) are further elucidated at different radial locations of the gold nanosphere. Finally, efforts are made to address the thermophysical characteristics when an effective thermal conductivity (with temporal and size effects) is considered instead of the usual bulk thermal conductivity.

  5. Measurement and analysis of chatter in a compliant model of a drillstring equipped with a PDC bit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elsayed, M.A.; Raymond, D.W.

    1999-11-09

    Typical laboratory testing of Polycrystalline Diamond Compact (PDC) bits is performed on relatively rigid setups. Even in hard rock, PDC bits exhibit reasonable life using such testing schemes. Unfortunately, field experience indicates otherwise. In this paper, the authors show that introducing compliance in testing setups provides a better simulation of actual field conditions. Using such a scheme, they show that chatter can be severe even in softer rock, such as sandstone, and very destructive to the cutters in hard rock, such as Sierra White granite.

  6. Novel Image Encryption Scheme Based on Chebyshev Polynomial and Duffing Map

    PubMed Central

    2014-01-01

    We present a novel image encryption algorithm using Chebyshev polynomial based on permutation and substitution and Duffing map based on substitution. Comprehensive security analysis has been performed on the designed scheme using key space analysis, visual testing, histogram analysis, information entropy calculation, correlation coefficient analysis, differential analysis, key sensitivity test, and speed test. The study demonstrates that the proposed image encryption algorithm shows the advantages of a key space of more than 10^113 and a desirable level of security based on the good statistical results and theoretical arguments. PMID:25143970

  7. A default Bayesian hypothesis test for mediation.

    PubMed

    Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan

    2015-03-01

    In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).

  8. Knowledge Base Refinement as Improving an Incorrect and Incomplete Domain Theory

    DTIC Science & Technology

    1990-04-01

    ...Ginsberg et al., 1985), and RL (Fu and Buchanan, 1985), which perform empirical induction over a library of test cases. This chapter describes a new... state knowledge. Examples of high-level goals are: to test a hypothesis, to differentiate between several plausible hypotheses, to ask a clarifying...

  9. Studies of Inviscid Flux Schemes for Acoustics and Turbulence Problems

    NASA Technical Reports Server (NTRS)

    Morris, Chris

    2013-01-01

    Five different central difference schemes, based on a conservative differencing form of the Kennedy and Gruber skew-symmetric scheme, were compared with six different upwind schemes based on primitive variable reconstruction and the Roe flux. These eleven schemes were tested on a one-dimensional acoustic standing wave problem, the Taylor-Green vortex problem and a turbulent channel flow problem. The central schemes were generally very accurate and stable, provided the grid stretching rate was kept below 10%. At near-DNS grid resolutions, the results were comparable to reference DNS calculations. At coarser grid resolutions, the need for an LES SGS model became apparent. There was a noticeable improvement moving from CD-2 to CD-4, and higher-order schemes appear to yield clear benefits on coarser grids. The UB-7 and CU-5 upwind schemes also performed very well at near-DNS grid resolutions. The UB-5 upwind scheme does not do as well, but does appear to be suitable for well-resolved DNS. The UF-2 and UB-3 upwind schemes, which have significant dissipation over a wide spectral range, appear to be poorly suited for DNS or LES.

  10. A robust hypothesis test for the sensitive detection of constant speed radiation moving sources

    NASA Astrophysics Data System (ADS)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane; Méchin, Laurence

    2015-09-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Test hypothesis methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm. It also guarantees that the optimal coverage factor for this compromise remains stable regardless of signal-to-noise ratio variations between 2 and 0.8, therefore allowing the final user to parametrize the test with the sole prior knowledge of background amplitude.
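
    The record contrasts a count-based (Poisson) test with its empirically estimated Gaussian counterpart. The Python sketch below is a much-simplified, single-window illustration of that contrast; it is not the disclosed correlation test, and the background rate, window length, and alarm threshold are hypothetical.

        import numpy as np
        from scipy.stats import poisson, norm

        def poisson_alarm(counts, bkg_rate, window_s, alpha=1e-3):
            """Alarm if the summed count is improbably high under a Poisson background model."""
            total = int(np.sum(counts))
            mu = bkg_rate * window_s
            p_value = poisson.sf(total - 1, mu)    # P(N >= total | background only)
            return p_value < alpha, p_value

        def gaussian_alarm(counts, bkg_rate, window_s, alpha=1e-3):
            """Empirical counterpart: Gaussian approximation to the same count statistic."""
            total = np.sum(counts)
            mu = bkg_rate * window_s
            p_value = norm.sf((total - mu) / np.sqrt(mu))
            return p_value < alpha, p_value

        rng = np.random.default_rng(1)
        bkg_rate = 0.5                                   # counts per second (hypothetical)
        window = rng.poisson(bkg_rate, size=10)          # ten 1 s bins of background
        window[4:7] += rng.poisson(2.0, size=3)          # weak moving source passing by
        print(poisson_alarm(window, bkg_rate, len(window)))
        print(gaussian_alarm(window, bkg_rate, len(window)))

    At low count rates the two p-values diverge noticeably, which is one way to see why exploiting the Poisson nature of the counting signal matters for the sensitivity/false-alarm compromise.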

  11. The [Geo]Scientific Method; Hypothesis Testing and Geoscience Proposal Writing for Students

    ERIC Educational Resources Information Center

    Markley, Michelle J.

    2010-01-01

    Most undergraduate-level geoscience texts offer a paltry introduction to the nuanced approach to hypothesis testing that geoscientists use when conducting research and writing proposals. Fortunately, there are a handful of excellent papers that are accessible to geoscience undergraduates. Two historical papers by the eminent American geologists G.…

  12. Mental Abilities and School Achievement: A Test of a Mediation Hypothesis

    ERIC Educational Resources Information Center

    Vock, Miriam; Preckel, Franzis; Holling, Heinz

    2011-01-01

    This study analyzes the interplay of four cognitive abilities--reasoning, divergent thinking, mental speed, and short-term memory--and their impact on academic achievement in school in a sample of adolescents in grades seven to 10 (N = 1135). Based on information processing approaches to intelligence, we tested a mediation hypothesis, which states…

  13. The Relation between Parental Values and Parenting Behavior: A Test of the Kohn Hypothesis.

    ERIC Educational Resources Information Center

    Luster, Tom; Rhoades, Kelly

    To investigate how values influence parenting beliefs and practices, a test was made of Kohn's hypothesis that parents valuing self-direction emphasize the supportive function of parenting, while parents valuing conformity emphasize control of unsanctioned behaviors. Participating in the study were 65 mother-infant dyads. Infants ranged in age…

  14. Chromosome Connections: Compelling Clues to Common Ancestry

    ERIC Educational Resources Information Center

    Flammer, Larry

    2013-01-01

    Students compare banding patterns on hominid chromosomes and see striking evidence of their common ancestry. To test this, human chromosome no. 2 is matched with two shorter chimpanzee chromosomes, leading to the hypothesis that human chromosome 2 resulted from the fusion of the two shorter chromosomes. Students test that hypothesis by looking for…

  15. POTENTIAL FOR INVASION OF UNDERGROUND SOURCES OF DRINKING WATER THROUGH MUD-PLUGGED WELLS: AN EXPERIMENTAL APPRAISAL

    EPA Science Inventory

    The main objective of the feasibility study described here was to test the hypothesis that properly plugged wells are effectively sealed by drilling mud. In the process of testing the hypothesis, evidence about the dynamics of building mud cake on the wellbore face was obtained, as ...

  16. A test of the predator satiation hypothesis, acorn predator size, and acorn preference

    Treesearch

    C.H. Greenberg; S.J. Zarnoch

    2018-01-01

    Mast seeding is hypothesized to satiate seed predators with heavy production and reduce populations with crop failure, thereby increasing seed survival. Preference for red or white oak acorns could influence recruitment among oak species. We tested the predator satiation hypothesis, acorn preference, and predator size by concurrently...

  17. The Need for Nuance in the Null Hypothesis Significance Testing Debate

    ERIC Educational Resources Information Center

    Häggström, Olle

    2017-01-01

    Null hypothesis significance testing (NHST) provides an important statistical toolbox, but there are a number of ways in which it is often abused and misinterpreted, with bad consequences for the reliability and progress of science. Parts of the contemporary NHST debate, especially in the psychological sciences, are reviewed, and a suggestion is made…

  18. Acorn Caching in Tree Squirrels: Teaching Hypothesis Testing in the Park

    ERIC Educational Resources Information Center

    McEuen, Amy B.; Steele, Michael A.

    2012-01-01

    We developed an exercise for a university-level ecology class that teaches hypothesis testing by examining acorn preferences and caching behavior of tree squirrels (Sciurus spp.). This exercise is easily modified to teach concepts of behavioral ecology for earlier grades, particularly high school, and provides students with a theoretical basis for…

  19. Shaping Up the Practice of Null Hypothesis Significance Testing.

    ERIC Educational Resources Information Center

    Wainer, Howard; Robinson, Daniel H.

    2003-01-01

    Discusses criticisms of null hypothesis significance testing (NHST), suggesting that historical use of NHST was reasonable, and current users should read Sir Ronald Fisher's applied work. Notes that modifications to NHST and interpretations of its outcomes might better suit the needs of modern science. Concludes that NHST is most often useful as…

  20. SOME EFFECTS OF DOGMATISM IN ELEMENTARY SCHOOL PRINCIPALS AND TEACHERS.

    ERIC Educational Resources Information Center

    BENTZEN, MARY M.

    The hypothesis that ratings on congeniality as a coworker given to teachers will be in part a function of the organizational status of the rater was tested. A secondary problem was to test the hypothesis that dogmatic subjects more than nondogmatic subjects would exhibit cognitive behavior which indicated (1) greater distinction between positive…

  1. Thou Shalt Not Bear False Witness against Null Hypothesis Significance Testing

    ERIC Educational Resources Information Center

    García-Pérez, Miguel A.

    2017-01-01

    Null hypothesis significance testing (NHST) has been the subject of debate for decades and alternative approaches to data analysis have been proposed. This article addresses this debate from the perspective of scientific inquiry and inference. Inference is an inverse problem and application of statistical methods cannot reveal whether effects…

  2. Test-potentiated learning: three independent replications, a disconfirmed hypothesis, and an unexpected boundary condition.

    PubMed

    Wissman, Kathryn T; Rawson, Katherine A

    2018-04-01

    Arnold and McDermott [(2013). Test-potentiated learning: Distinguishing between direct and indirect effects of testing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 940-945] isolated the indirect effects of testing and concluded that encoding is enhanced to a greater extent following more versus fewer practice tests, referred to as test-potentiated learning. The current research provided further evidence for test-potentiated learning and evaluated the covert retrieval hypothesis as an alternative explanation for the observed effect. Learners initially studied foreign language word pairs and then completed either one or five practice tests before restudy occurred. Results of greatest interest concern performance on test trials following restudy for items that were not correctly recalled on the test trials that preceded restudy. Results replicate Arnold and McDermott (2013) by demonstrating that more versus fewer tests potentiate learning when trial time is limited. Results also provide strong evidence against the covert retrieval hypothesis concerning why the effect occurs (i.e., it does not reflect differential covert retrieval during pre-restudy trials). In addition, outcomes indicate that the magnitude of the test-potentiated learning effect decreases as trial length increases, revealing an unexpected boundary condition to test-potentiated learning.

  3. Correcting power and p-value calculations for bias in diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Landman, Bennett A

    2013-07-01

    Diffusion tensor imaging (DTI) provides quantitative parametric maps sensitive to tissue microarchitecture (e.g., fractional anisotropy, FA). These maps are estimated through computational processes and subject to random distortions including variance and bias. Traditional statistical procedures commonly used for study planning (including power analyses and p-value/alpha-rate thresholds) specifically model variability, but neglect potential impacts of bias. Herein, we quantitatively investigate the impacts of bias in DTI on hypothesis test properties (power and alpha-rate) using a two-sided hypothesis testing framework. We present a theoretical evaluation of bias on hypothesis test properties, evaluate the bias estimation technique SIMEX for DTI hypothesis testing using simulated data, and evaluate the impacts of bias on spatially varying power and alpha rates in an empirical study of 21 subjects. Bias is shown to inflate alpha rates, distort the power curve, and cause significant power loss even in empirical settings where the expected difference in bias between groups is zero. These adverse effects can be attenuated by properly accounting for bias in the calculation of power and p-values. Copyright © 2013 Elsevier Inc. All rights reserved.
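
    The alpha-rate inflation and power distortion described here follow directly from shifting the sampling distribution of the test statistic by the bias. Assuming a simple two-sided z-test with a known standard error (the standard error, bias, and effect values below are hypothetical), a minimal Python sketch of that calculation is:

        from scipy.stats import norm

        def rejection_probability(delta, bias, se, alpha=0.05):
            """Probability that a two-sided z-test rejects H0: effect = 0
            when the estimator is Normal(delta + bias, se**2)."""
            z = norm.ppf(1 - alpha / 2)
            shift = (delta + bias) / se
            return norm.cdf(-z - shift) + 1 - norm.cdf(z - shift)

        se = 0.02  # hypothetical standard error of an FA group difference
        print("nominal alpha, no bias :", rejection_probability(0.00, 0.00, se))
        print("alpha rate with bias   :", rejection_probability(0.00, 0.01, se))
        print("power, true diff 0.05  :", rejection_probability(0.05, 0.00, se))
        print("power, diff 0.05 + bias:", rejection_probability(0.05, -0.01, se))

    Setting delta to zero gives the realized alpha rate under bias, which illustrates the inflation described above; folding an estimate of the bias into the critical value is what restores the nominal properties.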

  4. Beyond health aid: would an international equalization scheme for universal health coverage serve the international collective interest?

    PubMed Central

    2014-01-01

    It has been argued that the international community is moving ‘beyond aid’. International co-financing in the international collective interest is expected to replace altruistically motivated foreign aid. The World Health Organization promotes ‘universal health coverage’ as the overarching health goal for the next phase of the Millennium Development Goals. In order to provide a basic level of health care coverage, at least some countries will need foreign aid for decades to come. If international co-financing of global public goods is replacing foreign aid, is universal health coverage a hopeless endeavor? Or would universal health coverage somehow serve the international collective interest? Using the Sustainable Development Solutions Network proposal to finance universal health coverage as a test case, we examined the hypothesis that national social policies face the threat of a ‘race to the bottom’ due to global economic integration and that this threat could be mitigated through international social protection policies that include international cross-subsidies – a kind of ‘equalization’ at the international level. The evidence for the race to the bottom theory is inconclusive. We seem to be witnessing a ‘convergence to the middle’. However, the ‘middle’ where ‘convergence’ of national social policies is likely to occur may not be high enough to keep income inequality in check. The implementation of the international equalization scheme proposed by the Sustainable Development Solutions Network would make it possible to ensure universal health coverage at a cost of US$55 in low-income countries, the minimum cost estimated by the World Health Organization. The domestic efforts expected from low- and middle-income countries are far more substantial than the international co-financing efforts expected from high-income countries. This would contribute to ‘convergence’ of national social policies at a higher level. We therefore submit that the proposed international equalization scheme should not be considered as foreign aid, but rather as an international collective effort to protect and promote national social policy in times of global economic integration: thus serving the international collective interest. PMID:24886583

  5. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
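
    A minimal sketch of the first of these three comparisons, assuming the forecast specifies a Poisson-distributed expected number of events (the rate and observed count below are hypothetical):

        from scipy.stats import poisson

        def number_test(observed, predicted_rate):
            """Two one-sided Poisson probabilities for the count comparison:
            evidence of too few events (delta1) or too many events (delta2)."""
            delta1 = poisson.cdf(observed, predicted_rate)      # P(N <= observed)
            delta2 = poisson.sf(observed - 1, predicted_rate)   # P(N >= observed)
            return delta1, delta2

        # Hypothetical forecast: 12.4 events expected in the test region and period
        print(number_test(observed=7, predicted_rate=12.4))

    A small value of either tail probability indicates the forecast over- or under-predicts the observed seismicity; the likelihood and likelihood-ratio comparisons require the full rate density rather than just its integral.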

  6. Hypothesis testing and earthquake prediction.

    PubMed Central

    Jackson, D D

    1996-01-01

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663

  7. Multiple optimality criteria support Ornithoscelida

    NASA Astrophysics Data System (ADS)

    Parry, Luke A.; Baron, Matthew G.; Vinther, Jakob

    2017-10-01

    A recent study of early dinosaur evolution using equal-weights parsimony recovered a scheme of dinosaur interrelationships and classification that differed from historical consensus in a single, but significant, respect: Ornithischia and Saurischia were not recovered as monophyletic sister-taxa, but rather Ornithischia and Theropoda formed a novel clade named Ornithoscelida. However, these analyses only used maximum parsimony, and numerous recent simulation studies have questioned the accuracy of parsimony under equal weights. Here, we provide additional support for this alternative hypothesis using a Bayesian implementation of the Mkv model, as well as through a number of additional parsimony analyses, including implied weighting. Using Bayesian inference and implied weighting, we recover the same fundamental topology for Dinosauria as the original study, with a monophyletic Ornithoscelida, demonstrating that the main suite of methods used in morphological phylogenetics recovers this novel hypothesis. This result was further scrutinized through the systematic exclusion of different character sets. Novel characters from the original study (those not taken or adapted from previous phylogenetic studies) were found to be more important for resolving the relationships within Dinosauromorpha than the relationships within Dinosauria. Reanalysis of a modified version of the character matrix that supports the Ornithischia-Saurischia dichotomy under maximum parsimony also supports this hypothesis under implied weighting, but not under the Mkv model, with both Theropoda and Sauropodomorpha becoming paraphyletic with respect to Ornithischia.

  8. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing continued to be used as an efficient mechanism for removal of uncorrelated faults and of common-cause faults of variable span. Work also continued on software reliability estimation methods based on non-random sampling and on the relationship between software reliability and code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were finished, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance testing scheme.

  9. Mismatch or cumulative stress: toward an integrated hypothesis of programming effects.

    PubMed

    Nederhof, Esther; Schmidt, Mathias V

    2012-07-16

    This paper integrates the cumulative stress hypothesis with the mismatch hypothesis, taking into account individual differences in sensitivity to programming. According to the cumulative stress hypothesis, individuals are more likely to suffer from disease as adversity accumulates. According to the mismatch hypothesis, individuals are more likely to suffer from disease if a mismatch occurs between the early programming environment and the later adult environment. These seemingly contradicting hypotheses are integrated into a new model proposing that the cumulative stress hypothesis applies to individuals who were not or only to a small extent programmed by their early environment, while the mismatch hypothesis applies to individuals who experienced strong programming effects. Evidence for the main effects of adversity as well as evidence for the interaction between adversity in early and later life is presented from human observational studies and animal models. Next, convincing evidence for individual differences in sensitivity to programming is presented. We extensively discuss how our integrated model can be tested empirically in animal models and human studies, inviting researchers to test this model. Furthermore, this integrated model should tempt clinicians and other intervenors to interpret symptoms as possible adaptations from an evolutionary biology perspective. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Greater Biopsy Core Number Is Associated With Improved Biochemical Control in Patients Treated With Permanent Prostate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bittner, Nathan; Merrick, Gregory S., E-mail: gmerrick@urologicresearchinstitute.or; Galbreath, Robert W.

    2010-11-15

    Purpose: Standard prostate biopsy schemes underestimate Gleason score in a significant percentage of cases. Extended biopsy improves diagnostic accuracy and provides more reliable prognostic information. In this study, we tested the hypothesis that greater biopsy core number should result in improved treatment outcome through better tailoring of therapy. Methods and Materials: From April 1995 to May 2006, 1,613 prostate cancer patients were treated with permanent brachytherapy. Patients were divided into five groups stratified by the number of prostate biopsy cores (≤6, 7-9, 10-12, 13-20, and >20 cores). Biochemical progression-free survival (bPFS), cause-specific survival (CSS), and overall survival (OS) were evaluated as a function of core number. Results: The median patient age was 66 years, and the median preimplant prostate-specific antigen was 6.5 ng/mL. The overall 10-year bPFS, CSS, and OS were 95.6%, 98.3%, and 78.6%, respectively. When bPFS was analyzed as a function of core number, the 10-year bPFS for patients with >20, 13-20, 10-12, 7-9 and ≤6 cores was 100%, 100%, 98.3%, 95.8%, and 93.0% (p < 0.001), respectively. When evaluated by treatment era (1995-2000 vs. 2001-2006), the number of biopsy cores remained a statistically significant predictor of bPFS. On multivariate analysis, the number of biopsy cores was predictive of bPFS but did not predict for CSS or OS. Conclusion: Greater biopsy core number was associated with a statistically significant improvement in bPFS. Comprehensive regional sampling of the prostate may enhance diagnostic accuracy compared to a standard biopsy scheme, resulting in better tailoring of therapy.

  11. The Relation Among the Likelihood Ratio-, Wald-, and Lagrange Multiplier Tests and Their Applicability to Small Samples,

    DTIC Science & Technology

    1982-04-01

    ...S. (1979), "Conflict Among Criteria for Testing Hypothesis: Extension and Comments," Econometrica, 47, 203-207; Breusch, T. S. and Pagan, A. R. (1980)...; Savin, N. E. (1977), "Conflict Among Criteria for Testing Hypothesis in the Multivariate Linear Regression Model," Econometrica, 45, 1263-1278; Breusch, T...

  12. Finite-sample and asymptotic sign-based tests for parameters of non-linear quantile regression with Markov noise

    NASA Astrophysics Data System (ADS)

    Sirenko, M. A.; Tarasenko, P. F.; Pushkarev, M. I.

    2017-01-01

    One of the most noticeable features of sign-based statistical procedures is the opportunity to build an exact test for simple hypothesis testing of parameters in a regression model. In this article, we expanded a sign-based approach to the nonlinear case with dependent noise. The examined model is a multi-quantile regression, which makes it possible to test hypotheses not only about regression parameters, but about noise parameters as well.

  13. Convective and microphysics parameterization impact on simulating heavy rainfall in Semarang (case study on February 12th, 2015)

    NASA Astrophysics Data System (ADS)

    Faridatussafura, Nurzaka; Wandala, Agie

    2018-05-01

    The meteorological model WRF-ARW version 3.8.1 is used for simulating the heavy rainfall in Semarang that occurred on February 12th, 2015. Two different convective schemes and two different microphysics schemes in a nested configuration were chosen. The sensitivity of those schemes in capturing the extreme weather event has been tested. GFS data were used for the initial and boundary conditions. Verification of the twenty-four-hour accumulated rainfall using GSMaP satellite data shows that the Kain-Fritsch convective scheme and the Lin microphysics scheme form the best combination among those tested. This combination also gives the highest success ratio value in placing the high-intensity rainfall area. Based on the ROC diagram, KF-Lin shows the best performance in detecting high-intensity rainfall. However, the combination still has a high bias value.

  14. Audio signal encryption using chaotic Hénon map and lifting wavelet transforms

    NASA Astrophysics Data System (ADS)

    Roy, Animesh; Misra, A. P.

    2017-12-01

    We propose an audio signal encryption scheme based on the chaotic Hénon map. The scheme mainly comprises two phases: one is the preprocessing stage where the audio signal is transformed into data by the lifting wavelet scheme and the other in which the transformed data is encrypted by chaotic data set and hyperbolic functions. Furthermore, we use dynamic keys and consider the key space size to be large enough to resist any kind of cryptographic attacks. A statistical investigation is also made to test the security and the efficiency of the proposed scheme.
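
    The sketch below is only a toy stand-in for the proposed scheme: it generates a key-dependent Hénon-map sequence and uses it to permute samples, omitting the lifting-wavelet preprocessing and the hyperbolic-function diffusion stage; the function names, key values, and test signal are illustrative.

        import numpy as np

        def henon_sequence(n, a=1.4, b=0.3, x0=0.1, y0=0.3, burn_in=1000):
            """Generate n values (x coordinate) of the chaotic Henon map."""
            x, y = x0, y0
            out = np.empty(n + burn_in)
            for i in range(n + burn_in):
                x, y = 1 - a * x * x + b * y, x
                out[i] = x
            return out[burn_in:]

        def scramble(signal, key=(0.1, 0.3)):
            """Permute samples with a key-dependent chaotic ordering (illustrative only)."""
            chaos = henon_sequence(len(signal), x0=key[0], y0=key[1])
            perm = np.argsort(chaos)
            return signal[perm], perm

        def unscramble(scrambled, perm):
            original = np.empty_like(scrambled)
            original[perm] = scrambled
            return original

        audio = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)  # hypothetical 1 s tone
        enc, perm = scramble(audio)
        assert np.allclose(unscramble(enc, perm), audio)

    In a full scheme the permutation order would be reconstructed from the secret key rather than transmitted, and a diffusion stage would be layered on top so that statistical structure of the plaintext does not survive in the ciphertext.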

  15. No evidence for the 'expensive-tissue hypothesis' from an intraspecific study in a highly variable species.

    PubMed

    Warren, D L; Iglesias, T L

    2012-06-01

    The 'expensive-tissue hypothesis' states that investment in one metabolically costly tissue necessitates decreased investment in other tissues and has been one of the keystone concepts used in studying the evolution of metabolically expensive tissues. The trade-offs expected under this hypothesis have been investigated in comparative studies in a number of clades, yet support for the hypothesis is mixed. Nevertheless, the expensive-tissue hypothesis has been used to explain everything from the evolution of the human brain to patterns of reproductive investment in bats. The ambiguous support for the hypothesis may be due to interspecific differences in selection, which could lead to spurious results both positive and negative. To control for this, we conduct a study of trade-offs within a single species, Thalassoma bifasciatum, a coral reef fish that exhibits more intraspecific variation in a single tissue (testes) than is seen across many of the clades previously analysed in studies of tissue investment. This constitutes a robust test of the constraints posited under the expensive-tissue hypothesis that is not affected by many of the factors that may confound interspecific studies. However, we find no evidence of trade-offs between investment in testes and investment in liver or brain, which are typically considered to be metabolically expensive. Our results demonstrate that the frequent rejection of the expensive-tissue hypothesis may not be an artefact of interspecific differences in selection and suggests that organisms may be capable of compensating for substantial changes in tissue investment without sacrificing mass in other expensive tissues. © 2012 The Authors. Journal of Evolutionary Biology © 2012 European Society For Evolutionary Biology.

  16. Risk-Based, Hypothesis-Driven Framework for Hydrological Field Campaigns with Case Studies

    NASA Astrophysics Data System (ADS)

    Harken, B.; Rubin, Y.

    2014-12-01

    There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration or plume travel time. These predictions often have significant bearing on a decision that must be made. Examples include: how to allocate limited remediation resources between contaminated groundwater sites or where to place a waste repository site. Answering such questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in EPM predictions stems from uncertainty in model parameters, which can be reduced by measurements taken in field campaigns. The costly nature of field measurements motivates a rational basis for determining a measurement strategy that is optimal with respect to the uncertainty in the EPM prediction. The tool of hypothesis testing allows this uncertainty to be quantified by computing the significance of the test resulting from a proposed field campaign. The significance of the test gives a rational basis for determining the optimality of a proposed field campaign. This hypothesis testing framework is demonstrated and discussed using various synthetic case studies. This study involves contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a specified location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical amount of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. The optimality of different field campaigns is assessed by computing the significance of the test resulting from each one. Evaluating the level of significance caused by a field campaign involves steps including likelihood-based inverse modeling and semi-analytical conditional particle tracking.

  17. Modeling surface trapped river plumes: A sensitivity study

    USGS Publications Warehouse

    Hyatt, Jason; Signell, Richard P.

    2000-01-01

    To better understand the requirements for realistic regional simulation of river plumes in the Gulf of Maine, we test the sensitivity of the Blumberg-Mellor hydrodynamic model to choice of advection scheme, grid resolution, and wind, using idealized geometry and forcing. The test case discharges 1500 m³/s of fresh water into a uniform 32 psu ocean along a straight shelf at 43° north. The water depth is 15 m at the coast and increases linearly to 190 m at a distance of 100 km offshore. Constant discharge runs are conducted in the presence of an ambient alongshore current and with and without periodic alongshore wind forcing. Advection methods tested are CENTRAL, UPWIND, the standard Smolarkiewicz MPDATA and a recursive MPDATA scheme. For the no-wind runs, the UPWIND advection scheme performs poorly for grid resolutions typically used in regional simulations (grid spacing of 1-2 km, comparable to or slightly less than the internal Rossby radius, and vertical resolution of 10% of the water column), damping out much of the plume structure. The CENTRAL difference scheme also has problems when wind forcing is neglected, and generates too much structure, shedding eddies of numerical origin. When a weak 5 cm/s ambient current is present in the no-wind case, both the CENTRAL and standard MPDATA schemes produce a false fresh- and dense-water source just upstream of the river inflow due to a standing two-grid-length oscillation in the salinity field. The recursive MPDATA scheme completely eliminates the false dense water source, and produces results closest to the grid-converged solution. The results are shown to be very sensitive to vertical grid resolution, and the presence of wind forcing dramatically changes the nature of the plume simulations. The implication of these idealized tests for realistic simulations is discussed, as well as ramifications on previous studies of idealized plume models.

  18. Application of high-order numerical schemes and Newton-Krylov method to two-phase drift-flux model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    This study concerns the application and solver robustness of the Newton-Krylov method in solving two-phase flow drift-flux model problems using high-order numerical schemes. In our previous studies, the Newton-Krylov method has been proven a promising solver for two-phase flow drift-flux model problems. However, those studies were limited to first-order numerical schemes. Moreover, the previous approach to treating the drift-flux closure correlations was later revealed to cause deteriorated solver convergence performance when the mesh was highly refined, and also when higher-order numerical schemes were employed. In this study, a second-order spatial discretization scheme that had been tested with the two-fluid two-phase flow model was extended to solve drift-flux model problems. In order to improve solver robustness, and therefore efficiency, a new approach was proposed that treats the mean drift velocity of the gas phase as a primary nonlinear variable of the equation system. With this new approach, significant improvement in solver robustness was achieved. With a highly refined mesh, the proposed treatment along with the Newton-Krylov solver was extensively tested with two-phase flow problems that cover a wide range of thermal-hydraulic conditions. Satisfactory convergence performance was observed for all test cases. Numerical verification was then performed in the form of mesh convergence studies, from which the expected orders of accuracy were obtained for both the first-order and the second-order spatial discretization schemes. Finally, the drift-flux model, along with the numerical methods presented, was validated with three sets of flow boiling experiments that cover different flow channel geometries (round tube, rectangular tube, and rod bundle) and a wide range of test conditions (pressure, mass flux, wall heat flux, inlet subcooling and outlet void fraction).
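
    For readers unfamiliar with the Jacobian-free Newton-Krylov pattern, the Python sketch below solves a small toy nonlinear system (a 1D Bratu-type problem, not the drift-flux equations) with SciPy's newton_krylov; the residual, mesh size, and tolerance are illustrative.

        import numpy as np
        from scipy.optimize import newton_krylov

        def residual(u):
            """Toy nonlinear residual standing in for a discretized flow system:
            a 1D reaction-diffusion balance F(u) = 0 with zero Dirichlet boundaries."""
            f = np.empty_like(u)
            h = 1.0 / (len(u) + 1)
            f[0] = (u[1] - 2 * u[0]) / h**2 + np.exp(u[0])
            f[-1] = (u[-2] - 2 * u[-1]) / h**2 + np.exp(u[-1])
            f[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 + np.exp(u[1:-1])
            return f

        u0 = np.zeros(50)                      # initial guess
        solution = newton_krylov(residual, u0, f_tol=1e-8)
        print("max residual:", np.max(np.abs(residual(solution))))

    The appeal of the approach, also exploited in the record above, is that only residual evaluations are needed: the Krylov solver probes the Jacobian through finite-difference directional derivatives, so changing which quantities are treated as primary nonlinear variables does not require re-deriving an analytic Jacobian.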

  19. Application of high-order numerical schemes and Newton-Krylov method to two-phase drift-flux model

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2017-08-07

    This study concerns the application and solver robustness of the Newton-Krylov method in solving two-phase flow drift-flux model problems using high-order numerical schemes. In our previous studies, the Newton-Krylov method has been proven a promising solver for two-phase flow drift-flux model problems. However, those studies were limited to first-order numerical schemes. Moreover, the previous approach to treating the drift-flux closure correlations was later revealed to cause deteriorated solver convergence performance when the mesh was highly refined, and also when higher-order numerical schemes were employed. In this study, a second-order spatial discretization scheme that had been tested with the two-fluid two-phase flow model was extended to solve drift-flux model problems. In order to improve solver robustness, and therefore efficiency, a new approach was proposed that treats the mean drift velocity of the gas phase as a primary nonlinear variable of the equation system. With this new approach, significant improvement in solver robustness was achieved. With a highly refined mesh, the proposed treatment along with the Newton-Krylov solver was extensively tested with two-phase flow problems that cover a wide range of thermal-hydraulic conditions. Satisfactory convergence performance was observed for all test cases. Numerical verification was then performed in the form of mesh convergence studies, from which the expected orders of accuracy were obtained for both the first-order and the second-order spatial discretization schemes. Finally, the drift-flux model, along with the numerical methods presented, was validated with three sets of flow boiling experiments that cover different flow channel geometries (round tube, rectangular tube, and rod bundle) and a wide range of test conditions (pressure, mass flux, wall heat flux, inlet subcooling and outlet void fraction).

  20. The mimetic finite difference method for the Landau–Lifshitz equation

    DOE PAGES

    Kim, Eugenia Hail; Lipnikov, Konstantin Nikolayevich

    2017-01-01

    The Landau–Lifshitz equation describes the dynamics of the magnetization inside ferromagnetic materials. This equation is highly nonlinear and has a non-convex constraint (the magnitude of the magnetization is constant), which poses interesting challenges in developing numerical methods. We develop and analyze explicit and implicit mimetic finite difference schemes for this equation. These schemes work on general polytopal meshes, which provide enormous flexibility to model magnetic devices with various shapes. A projection on the unit sphere is used to preserve the magnitude of the magnetization. We also provide a proof that shows the exchange energy is decreasing in certain conditions. The developed schemes are tested on general meshes that include distorted and randomized meshes. The numerical experiments include a test proposed by the National Institute of Standards and Technology and a test showing formation of domain wall structures in a thin film.
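
    The projection idea mentioned here (step the dynamics, then renormalize so the magnetization keeps unit length) can be shown in a deliberately tiny setting. The Python sketch below integrates Landau-Lifshitz-type dynamics for a single spin in a constant field; it is not the mimetic finite difference scheme of the paper, and the damping constant, field, and step size are hypothetical.

        import numpy as np

        def ll_step(m, h_eff, dt, alpha=0.1):
            """One explicit step of Landau-Lifshitz dynamics for a single spin,
            followed by projection back onto the unit sphere."""
            precession = -np.cross(m, h_eff)
            damping = -alpha * np.cross(m, np.cross(m, h_eff))
            m_new = m + dt * (precession + damping)
            return m_new / np.linalg.norm(m_new)   # enforce |m| = 1

        m = np.array([1.0, 0.0, 0.0])
        h = np.array([0.0, 0.0, 1.0])              # hypothetical constant effective field
        for _ in range(10000):
            m = ll_step(m, h, dt=1e-3)
        print(m)  # relaxes toward the field direction while staying on the unit sphere

    Without the projection (or an equivalent constraint-preserving formulation), the explicit update slowly drifts off the unit sphere, which is exactly the non-convex constraint the record identifies as the numerical difficulty.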

  1. The mimetic finite difference method for the Landau–Lifshitz equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Eugenia Hail; Lipnikov, Konstantin Nikolayevich

    The Landau–Lifshitz equation describes the dynamics of the magnetization inside ferromagnetic materials. This equation is highly nonlinear and has a non-convex constraint (the magnitude of the magnetization is constant), which poses interesting challenges in developing numerical methods. We develop and analyze explicit and implicit mimetic finite difference schemes for this equation. These schemes work on general polytopal meshes, which provide enormous flexibility to model magnetic devices with various shapes. A projection on the unit sphere is used to preserve the magnitude of the magnetization. We also provide a proof that shows the exchange energy is decreasing in certain conditions. The developed schemes are tested on general meshes that include distorted and randomized meshes. The numerical experiments include a test proposed by the National Institute of Standards and Technology and a test showing formation of domain wall structures in a thin film.

  2. Mechanisms of eyewitness suggestibility: tests of the explanatory role hypothesis.

    PubMed

    Rindal, Eric J; Chrobak, Quin M; Zaragoza, Maria S; Weihing, Caitlin A

    2017-10-01

    In a recent paper, Chrobak and Zaragoza (Journal of Experimental Psychology: General, 142(3), 827-844, 2013) proposed the explanatory role hypothesis, which posits that the likelihood of developing false memories for post-event suggestions is a function of the explanatory function the suggestion serves. In support of this hypothesis, they provided evidence that participant-witnesses were especially likely to develop false memories for their forced fabrications when their fabrications helped to explain outcomes they had witnessed. In three experiments, we test the generality of the explanatory role hypothesis as a mechanism of eyewitness suggestibility by assessing whether this hypothesis can predict suggestibility errors in (a) situations where the post-event suggestions are provided by the experimenter (as opposed to fabricated by the participant), and (b) across a variety of memory measures and measures of recollective experience. In support of the explanatory role hypothesis, participants were more likely to subsequently freely report (E1) and recollect the suggestions as part of the witnessed event (E2, source test) when the post-event suggestion helped to provide a causal explanation for a witnessed outcome than when it did not serve this explanatory role. Participants were also less likely to recollect the suggestions as part of the witnessed event (on measures of subjective experience) when their explanatory strength had been reduced by the presence of an alternative explanation that could explain the same outcome (E3, source test + warning). Collectively, the results provide strong evidence that the search for explanatory coherence influences people's tendency to misremember witnessing events that were only suggested to them.

  3. A New Time-Space Accurate Scheme for Hyperbolic Problems. 1; Quasi-Explicit Case

    NASA Technical Reports Server (NTRS)

    Sidilkover, David

    1998-01-01

    This paper presents a new discretization scheme for hyperbolic systems of conservation laws. It satisfies the TVD property and relies on the new high-resolution mechanism which is compatible with the genuinely multidimensional approach proposed recently. This work can be regarded as a first step towards extending the genuinely multidimensional approach to unsteady problems. Discontinuity capturing capabilities and accuracy of the scheme are verified by a set of numerical tests.
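
    As background on what the TVD property demands in practice, the sketch below implements a standard one-dimensional MUSCL scheme with a minmod limiter and SSP-RK2 time stepping for linear advection. It is a generic textbook construction, not the genuinely multidimensional scheme of the paper, and the grid, CFL number, and initial pulse are illustrative.

        import numpy as np

        def minmod(a, b):
            return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        def rhs(u, a, dx):
            """MUSCL reconstruction with a minmod limiter and upwind flux
            for u_t + a u_x = 0 (a > 0), periodic boundaries."""
            slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
            flux = a * (u + 0.5 * slope)           # left state at the i+1/2 face
            return -(flux - np.roll(flux, 1)) / dx

        nx, a = 200, 1.0
        x = np.linspace(0, 1, nx, endpoint=False)
        dx = x[1] - x[0]
        u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # square pulse: a demanding test
        dt = 0.4 * dx / a
        for _ in range(int(1.0 / dt)):                   # SSP-RK2 keeps the update TVD
            u1 = u + dt * rhs(u, a, dx)
            u = 0.5 * (u + u1 + dt * rhs(u1, a, dx))
        print("min/max after one period:", u.min(), u.max())  # no new extrema appear

    The limiter is what prevents spurious oscillations near the discontinuities, at the cost of some local clipping; genuinely multidimensional high-resolution mechanisms aim to achieve the same non-oscillatory behavior without the dimension-by-dimension construction used here.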

  4. Dynamic Restarting Schemes for Eigenvalue Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Simon, Horst D.

    1999-03-10

    In studies of the restarted Davidson method, a dynamic thick-restart scheme was found to be excellent in improving the overall effectiveness of the eigenvalue method. This paper extends the study of the dynamic thick-restart scheme to the Lanczos method for symmetric eigenvalue problems and systematically explores a range of heuristics and strategies. We conduct a series of numerical tests to determine their relative strengths and weaknesses on a class of electronic structure calculation problems.
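
    SciPy's ARPACK wrapper provides a convenient restarted-Lanczos baseline against which restart strategies like the one studied here can be framed. The sketch below is illustrative only: the matrix is a random stand-in rather than an electronic-structure Hamiltonian, and the ncv parameter plays the role of the restart subspace size.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh

        # Hypothetical stand-in for a large symmetric problem:
        # a sparse 1D Laplacian plus a random diagonal "potential".
        n = 2000
        rng = np.random.default_rng(0)
        diag = 2.0 + rng.uniform(0, 0.5, n)
        A = sp.diags([diag, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csr")

        # ARPACK's implicitly restarted Lanczos; a larger ncv means fewer but
        # more expensive restarts, the trade-off restart heuristics try to balance.
        vals, vecs = eigsh(A, k=5, which="SA", ncv=40, tol=1e-10)
        print("five smallest eigenvalues:", vals)

    Dynamic restart strategies adapt this subspace size (and which Ritz vectors are retained) during the iteration instead of fixing it up front, which is the behavior the record reports as most effective.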

  5. Effects of dividing attention during encoding on perceptual priming of unfamiliar visual objects.

    PubMed

    Soldan, Anja; Mangels, Jennifer A; Cooper, Lynn A

    2008-11-01

    According to the distractor-selection hypothesis (Mulligan, 2003), dividing attention during encoding reduces perceptual priming when responses to non-critical (i.e., distractor) stimuli are selected frequently and simultaneously with critical stimulus encoding. Because direct support for this hypothesis comes exclusively from studies using familiar word stimuli, the present study tested whether the predictions of the distractor-selection hypothesis extend to perceptual priming of unfamiliar visual objects using the possible/impossible object decision test. Consistent with the distractor-selection hypothesis, Experiments 1 and 2 found no reduction in priming when the non-critical stimuli were presented infrequently and non-synchronously with the critical target stimuli, even though explicit recognition memory was reduced. In Experiment 3, non-critical stimuli were presented frequently and simultaneously during encoding of critical stimuli; however, no decrement in priming was detected, even when encoding time was reduced. These results suggest that priming in the possible/impossible object decision test is relatively immune to reductions in central attention and that not all aspects of the distractor-selection hypothesis generalise to priming of unfamiliar visual objects. Implications for theoretical models of object decision priming are discussed.

  6. Effects of dividing attention during encoding on perceptual priming of unfamiliar visual objects

    PubMed Central

    Soldan, Anja; Mangels, Jennifer A.; Cooper, Lynn A.

    2008-01-01

    According to the distractor-selection hypothesis (Mulligan, 2003), dividing attention during encoding reduces perceptual priming when responses to non-critical (i.e., distractor) stimuli are selected frequently and simultaneously with critical stimulus encoding. Because direct support for this hypothesis comes exclusively from studies using familiar word stimuli, the present study tested whether the predictions of the distractor-selection hypothesis extend to perceptual priming of unfamiliar visual objects using the possible/impossible object-decision test. Consistent with the distractor-selection hypothesis, Experiments 1 and 2 found no reduction in priming when the non-critical stimuli were presented infrequently and non-synchronously with the critical target stimuli, even though explicit recognition memory was reduced. In Experiment 3, non-critical stimuli were presented frequently and simultaneously during encoding of critical stimuli; however, no decrement in priming was detected, even when encoding time was reduced. These results suggest that priming in the possible/impossible object-decision test is relatively immune to reductions in central attention and that not all aspects of the distractor-selection hypothesis generalize to priming of unfamiliar visual objects. Implications for theoretical models of object-decision priming are discussed. PMID:18821167

  7. Experimental investigation of an astronaut maneuvering scheme.

    NASA Technical Reports Server (NTRS)

    Kane, T. R.; Headrick, M. R.; Yatteau, J. D.

    1972-01-01

    A new concept for astronaut maneuvering in space is proposed, and an experimental study undertaken to test this concept is described. The series of experiments performed appears to promise advantages over previously proposed schemes in terms of propellant economy, system weight, reliability, and safety. The simulation tests established the feasibility of the proposed maneuvering concept by showing that test subjects were able to place their bodies sufficiently near the reference position to avoid excessive angular momentum build-up; that no difficulties were encountered in selecting self-rotation maneuvers suitable for effecting desired changes in orientation; and that the execution of these maneuvers produced predicted reorientations without tiring the test subjects significantly.

  8. Quaternion normalization in additive EKF for spacecraft attitude determination

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, I. Y.; Deutschmann, J.; Markley, F. L.

    1991-01-01

    This work introduces, examines, and compares several quaternion normalization algorithms, which are shown to be an effective stage in the application of the additive extended Kalman filter (EKF) to spacecraft attitude determination based on vector measurements. Two new normalization schemes are introduced. They are compared with one another and with the known brute-force normalization scheme, and their efficiency is examined. Simulated satellite data are used to demonstrate the performance of all three schemes. A fourth scheme is suggested for future research. Although the schemes were tested for spacecraft attitude determination, the conclusions are general and hold for attitude determination of any three-dimensional body that is based on vector measurements, uses an additive EKF for estimation, and uses the quaternion to specify the attitude.
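
    For concreteness, the brute-force variant mentioned above amounts to a single projection back onto the unit quaternion sphere after each measurement update; the two new schemes introduced in the record are not reproduced here, and the numbers below are hypothetical.

        import numpy as np

        def normalize_quaternion(q):
            """Brute-force normalization: project the updated quaternion estimate
            back onto the unit sphere after the additive EKF measurement update."""
            return q / np.linalg.norm(q)

        # After an additive update the quaternion estimate drifts off unit norm.
        q_updated = np.array([0.702, 0.003, -0.014, 0.713])   # hypothetical post-update state
        q_hat = normalize_quaternion(q_updated)
        print(q_hat, np.linalg.norm(q_hat))   # norm is restored to 1

    Because the additive EKF treats the four quaternion components as unconstrained states, some normalization stage of this kind is needed to keep the estimate a valid attitude representation; the schemes compared in the record differ in where and how this correction is applied relative to the covariance update.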

  9. A robust watermarking scheme using lifting wavelet transform and singular value decomposition

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Anuj; Verma, Deval; Verma, Vivek Singh

    2017-01-01

    The present paper proposes a robust image watermarking scheme using lifting wavelet transform (LWT) and singular value decomposition (SVD). Second level LWT is applied on host/cover image to decompose into different subbands. SVD is used to obtain singular values of watermark image and then these singular values are updated with the singular values of LH2 subband. The algorithm is tested on a number of benchmark images and it is found that the present algorithm is robust against different geometric and image processing operations. A comparison of the proposed scheme is performed with other existing schemes and observed that the present scheme is better not only in terms of robustness but also in terms of imperceptibility.
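
    A hedged Python sketch of the embedding step is given below. It uses PyWavelets' filter-bank DWT as a stand-in for the lifting wavelet transform, picks the level-2 horizontal-detail subband as a stand-in for LH2, and adds scaled watermark singular values to the host subband's singular values; the images, the scaling factor alpha, and the wavelet choice are all illustrative, and the extraction step is omitted.

        import numpy as np
        import pywt

        def embed_watermark(host, watermark, alpha=0.05, wavelet="haar"):
            """Embed singular values of the watermark into a level-2 detail subband
            of the host image, then reconstruct the watermarked image."""
            coeffs = pywt.wavedec2(host, wavelet, level=2)
            cA2, (cH2, cV2, cD2), level1 = coeffs
            Uh, Sh, Vh = np.linalg.svd(cH2, full_matrices=False)
            Sw = np.linalg.svd(watermark, compute_uv=False)
            S_marked = Sh + alpha * Sw[: len(Sh)]
            cH2_marked = (Uh * S_marked) @ Vh
            return pywt.waverec2([cA2, (cH2_marked, cV2, cD2), level1], wavelet)

        rng = np.random.default_rng(0)
        host = rng.uniform(0, 255, (256, 256))   # hypothetical cover image
        wm = rng.uniform(0, 255, (64, 64))       # hypothetical watermark
        marked = embed_watermark(host, wm)
        print("embedding distortion (RMSE):", np.sqrt(np.mean((marked - host) ** 2)))

    Embedding in the singular values of a detail subband is what gives such schemes their robustness: common geometric and filtering attacks perturb the singular values far less than they perturb individual coefficients, while a small alpha keeps the distortion imperceptible.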

  10. Sex and Class Differences in Parent-Child Interaction: A Test of Kohn's Hypothesis

    ERIC Educational Resources Information Center

    Gecas, Viktor; Nye, F. Ivan

    1974-01-01

    This paper focuses on Melvin Kohn's suggestive hypothesis that white-collar parents stress the development of internal standards of conduct in their children while blue-collar parents are more likely to react on the basis of the consequences of the child's behavior. This hypothesis was supported. (Author)

  11. Assess the Critical Period Hypothesis in Second Language Acquisition

    ERIC Educational Resources Information Center

    Du, Lihong

    2010-01-01

    The Critical Period Hypothesis aims to investigate the reason for significant difference between first language acquisition and second language acquisition. Over the past few decades, researchers carried out a series of studies to test the validity of the hypothesis. Although there were certain limitations in these studies, most of their results…

  12. Further Evidence on the Weak and Strong Versions of the Screening Hypothesis in Greece.

    ERIC Educational Resources Information Center

    Lambropoulos, Haris S.

    1992-01-01

    Uses Greek data for 1981 and 1985 to test screening hypothesis by replicating method proposed by Psacharopoulos. Credentialism, or sheepskin effect of education, directly challenges human capital theory, which views education as a productivity augmenting process. Results do not support the strong version of the screening hypothesis and suggest…

  13. A Clinical Evaluation of the Competing Sources of Input Hypothesis

    ERIC Educational Resources Information Center

    Fey, Marc E.; Leonard, Laurence B.; Bredin-Oja, Shelley L.; Deevy, Patricia

    2017-01-01

    Purpose: Our purpose was to test the competing sources of input (CSI) hypothesis by evaluating an intervention based on its principles. This hypothesis proposes that children's use of main verbs without tense is the result of their treating certain sentence types in the input (e.g., "Was 'she laughing'?") as models for declaratives…

  14. Experimental comparisons of hypothesis test and moving average based combustion phase controllers.

    PubMed

    Gao, Jinwu; Wu, Yuhu; Shen, Tielong

    2016-11-01

    For engine control, combustion phase is the most effective and direct parameter for improving fuel efficiency. In this paper, a statistical control strategy based on a hypothesis test criterion is discussed. Taking the location of peak pressure (LPP) as the combustion phase indicator, a statistical model of LPP is first proposed, and the controller design method is then discussed on the basis of both Z and T tests. For comparison, a moving average based control strategy is also presented and implemented in this study. Experiments on a spark ignition gasoline engine at various operating conditions show that the hypothesis test based controller is able to regulate LPP close to the set point while maintaining a rapid transient response, and the variance of LPP is also well constrained. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
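
    A simplified sketch of the Z-test-based control idea: the mean LPP over a window of engine cycles is compared against the set point, and a spark-timing correction is applied only when the deviation is statistically significant. The gain, significance level, and sample values are illustrative assumptions, not the paper's calibration.

```python
import numpy as np
from scipy import stats

def z_test_lpp_controller(lpp_samples, set_point, sigma, alpha=0.05, gain=0.5):
    """One update of a hypothesis-test-based combustion phase controller."""
    n = len(lpp_samples)
    mean_lpp = np.mean(lpp_samples)
    z = (mean_lpp - set_point) / (sigma / np.sqrt(n))
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))
    if p_value < alpha:
        # Reject H0 (LPP on target): correct spark timing proportionally
        return -gain * (mean_lpp - set_point)
    return 0.0  # keep current timing; observed deviation is within noise

# Example: LPP samples (deg ATDC) from 50 consecutive cycles
rng = np.random.default_rng(0)
samples = rng.normal(10.5, 2.0, size=50)    # cycle-to-cycle variation of LPP
print(z_test_lpp_controller(samples, set_point=8.0, sigma=2.0))
```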

  15. Use of Pearson's Chi-Square for Testing Equality of Percentile Profiles across Multiple Populations.

    PubMed

    Johnson, William D; Beyl, Robbie A; Burton, Jeffrey H; Johnson, Callie M; Romer, Jacob E; Zhang, Lei

    2015-08-01

    In large sample studies where distributions may be skewed and not readily transformed to symmetry, it may be of greater interest to compare different distributions in terms of percentiles rather than means. For example, it may be more informative to compare two or more populations with respect to their within-population distributions by testing the hypothesis that their respective 10th, 50th, and 90th percentiles are equal. As a generalization of the median test, the proposed test statistic is asymptotically distributed as chi-square with degrees of freedom dependent upon the number of percentiles tested and the constraints of the null hypothesis. Results from simulation studies are used to validate the nominal 0.05 significance level under the null hypothesis, and asymptotic power properties that are suitable for testing equality of percentile profiles against selected profile discrepancies for a variety of underlying distributions. A pragmatic example is provided to illustrate the comparison of the percentile profiles for four body mass index distributions.
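
    A simplified analogue of the idea above: for each requested percentile, observations are classified as below or above the pooled-sample percentile, and a chi-square test of homogeneity is run across groups (a generalization of the median test). The exact statistic and degrees of freedom in the paper may differ; this is an illustrative sketch only.

```python
import numpy as np
from scipy.stats import chi2_contingency

def percentile_profile_test(groups, percentiles=(10, 50, 90)):
    """Chi-square homogeneity test at each pooled percentile (median-test style)."""
    pooled = np.concatenate(groups)
    results = {}
    for p in percentiles:
        cutoff = np.percentile(pooled, p)
        table = np.array([[np.sum(g <= cutoff), np.sum(g > cutoff)] for g in groups])
        chi2, pval, dof, _ = chi2_contingency(table)
        results[p] = (chi2, pval)
    return results

# Example: compare three skewed (lognormal) samples, loosely like BMI distributions
rng = np.random.default_rng(1)
samples = [rng.lognormal(3.2, s, size=500) for s in (0.15, 0.15, 0.25)]
for p, (chi2, pval) in percentile_profile_test(samples).items():
    print(f"{p}th percentile: chi2={chi2:.2f}, p={pval:.4f}")
```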

  16. A Dynamic Model for Prediction of Psoriasis Management by Blue Light Irradiation

    PubMed Central

    Félix Garza, Zandra C.; Liebmann, Joerg; Born, Matthias; Hilbers, Peter A. J.; van Riel, Natal A. W.

    2017-01-01

    Clinical investigations prove that blue light irradiation reduces the severity of psoriasis vulgaris. Nevertheless, the mechanisms involved in the management of this condition remain poorly defined. Despite the encouraging results of the clinical studies, no clear guidelines are specified in the literature for the irradiation regime of blue light-based therapy for psoriasis. We investigated the underlying mechanism of blue light irradiation of psoriatic skin, and tested the hypothesis that regulation of proliferation is a key process. We implemented a mechanistic model of cellular epidermal dynamics to analyze whether a temporary decrease of keratinocyte hyper-proliferation can explain the outcome of phototherapy with blue light. Our results suggest that the main effect of blue light on keratinocytes is on the proliferative cells. They show that the decrease in the keratinocytes' proliferative capacity is sufficient to induce a transient decrease in the severity of psoriasis. To study the impact of the therapeutic regime on the efficacy of psoriasis treatment, we performed simulations for different combinations of the treatment parameters, i.e., length of treatment, fluence (also referred to as dose), and intensity. These simulations indicate that high efficacy is achieved by regimes with long duration and high fluence levels, regardless of the chosen intensity. Our modeling approach constitutes a framework for testing diverse hypotheses on the underlying mechanism of blue light-based phototherapy, and for designing effective strategies for the treatment of psoriasis. PMID:28184200

  17. A distributed fault-detection and diagnosis system using on-line parameter estimation

    NASA Technical Reports Server (NTRS)

    Guo, T.-H.; Merrill, W.; Duyar, A.

    1991-01-01

    The development of a model-based fault-detection and diagnosis system (FDD) is reviewed. The system can be used as an integral part of an intelligent control system. It determines the faults of a system from comparison of the measurements of the system with a priori information represented by the model of the system. The method of modeling a complex system is described, and a description of diagnosis models which include process faults is presented. There are three distinct classes of fault modes covered by the system performance model equation: actuator faults, sensor faults, and performance degradation. A system equation for a complete model that describes all three classes of faults is given. The strategy for detecting the fault and estimating the fault parameters using a distributed on-line parameter identification scheme is presented. A two-step approach is proposed. The first step is composed of a group of hypothesis testing modules (HTMs) operating in parallel, one testing each class of faults. The second step is the fault diagnosis module, which checks all the information obtained from the HTM level, isolates the fault, and determines its magnitude. The proposed FDD system was demonstrated by applying it to detect actuator and sensor faults added to a simulation of the Space Shuttle Main Engine. The simulation results show that the proposed FDD system can adequately detect the faults and estimate their magnitudes.
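
    A minimal sketch of the two-step structure described above: a bank of hypothesis testing modules, one per fault class, scores the residual between measurements and the nominal model, and a diagnosis step isolates the fault with the strongest evidence. The fault signatures, threshold, and residual model are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

FAULT_CLASSES = ("actuator", "sensor", "degradation")

def htm_score(residual, signature):
    """Project the residual onto a fault signature direction (one HTM)."""
    return float(np.abs(residual @ signature) / np.linalg.norm(signature))

def diagnose(residual, signatures, threshold=3.0):
    """Step 1: run all HTMs in parallel. Step 2: isolate the largest score."""
    scores = {name: htm_score(residual, sig) for name, sig in signatures.items()}
    best = max(scores, key=scores.get)
    if scores[best] < threshold:
        return "no fault", scores
    return best, scores

# Example with a synthetic residual dominated by the 'sensor' signature
signatures = {name: np.eye(3)[i] for i, name in enumerate(FAULT_CLASSES)}
residual = np.array([0.2, 5.1, 0.4])
print(diagnose(residual, signatures))
```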

  18. 'Good genes as heterozygosity': the major histocompatibility complex and mate choice in Atlantic salmon (Salmo salar).

    PubMed

    Landry, C; Garant, D; Duchesne, P; Bernatchez, L

    2001-06-22

    According to the theory of mate choice based on heterozygosity, mates should choose each other in order to increase the heterozygosity of their offspring. In this study, we tested the 'good genes as heterozygosity' hypothesis of mate choice by documenting the mating patterns of wild Atlantic salmon (Salmo salar) using both major histocompatibility complex (MHC) and microsatellite loci. Specifically, we tested the null hypotheses that mate choice in Atlantic salmon is not dependent on the relatedness between potential partners or on the MHC similarity between mates. Three parameters were assessed: (i) the number of shared alleles between partners (x and y) at the MHC (M(xy)), (ii) the MHC amino-acid genotypic distance between mates' genotypes (AA(xy)), and (iii) genetic relatedness between mates (r(xy)). We found that Atlantic salmon choose their mates in order to increase the heterozygosity of their offspring at the MHC and, more specifically, at the peptide-binding region, presumably in order to provide them with better defence against parasites and pathogens. This was supported by a significant difference between the observed and expected AA(xy) (p = 0.0486). Furthermore, mate choice was not a mechanism of overall inbreeding avoidance as genetic relatedness supported a random mating scheme (p = 0.445). This study provides the first evidence that MHC genes influence mate choice in fish.

  19. Building fast well-balanced two-stage numerical schemes for a model of two-phase flows

    NASA Astrophysics Data System (ADS)

    Thanh, Mai Duc

    2014-06-01

    We present a set of well-balanced two-stage schemes for an isentropic model of two-phase flows arising from the modeling of deflagration-to-detonation transition in granular materials. The first stage absorbs the source term in nonconservative form into the equilibria. In the second stage, these equilibria are composed into a numerical flux formed by a convex combination of the numerical flux of a stable Lax-Friedrichs-type scheme and that of a higher-order Richtmyer-type scheme. Numerical schemes constructed in this way are expected to have an attractive property: they are fast and stable. Tests show that the method works as the parameter is increased up to the CFL value, and so any value of the parameter between zero and this value is expected to work as well. All the schemes in this family are shown to capture stationary waves and preserve the positivity of the volume fractions. The special values of the parameter 0, 1/2, 1/(1+CFL), and CFL in this family define the Lax-Friedrichs-type, FAST1, FAST2, and FAST3 schemes, respectively. These schemes are shown to give a desirable accuracy. The errors and the CPU time of these schemes and of the Roe-type scheme are calculated and compared. The constructed schemes are shown to be well-balanced and faster than the Roe-type scheme.
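
    A sketch of the convex flux combination for a scalar conservation law (Burgers' flux), standing in for the isentropic two-phase model of the paper. The first-order Lax-Friedrichs flux and the two-step Richtmyer flux are blended with a parameter theta in [0, CFL]; the scalar setting and the example values are assumptions for illustration.

```python
import numpy as np

def combined_flux(u, dt, dx, theta, f=lambda u: 0.5 * u**2):
    """Convex combination of Lax-Friedrichs and Richtmyer numerical fluxes."""
    uL, uR = u[:-1], u[1:]
    # First-order Lax-Friedrichs-type flux
    flux_lf = 0.5 * (f(uL) + f(uR)) - 0.5 * (dx / dt) * (uR - uL)
    # Second-order Richtmyer (two-step Lax-Wendroff) flux
    u_half = 0.5 * (uL + uR) - 0.5 * (dt / dx) * (f(uR) - f(uL))
    flux_rm = f(u_half)
    return (1.0 - theta) * flux_lf + theta * flux_rm

# One explicit update of the interior cells for a smooth initial profile
x = np.linspace(0.0, 1.0, 101)
u = np.sin(2 * np.pi * x)
dt, dx, theta = 0.004, x[1] - x[0], 0.5   # theta = 1/2 is the 'FAST1' value
F = combined_flux(u, dt, dx, theta)
u[1:-1] -= dt / dx * (F[1:] - F[:-1])
print(u[:5])
```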

  20. On selecting evidence to test hypotheses: A theory of selection tasks.

    PubMed

    Ragni, Marco; Kola, Ilir; Johnson-Laird, Philip N

    2018-05-21

    How individuals choose evidence to test hypotheses is a long-standing puzzle. According to an algorithmic theory that we present, it is based on dual processes: individuals' intuitions depending on mental models of the hypothesis yield selections of evidence matching instances of the hypothesis, but their deliberations yield selections of potential counterexamples to the hypothesis. The results of 228 experiments using Wason's selection task corroborated the theory's predictions. Participants made dependent choices of items of evidence: the selections in 99 experiments were significantly more redundant (using Shannon's measure) than those of 10,000 simulations of each experiment based on independent selections. Participants tended to select evidence corresponding to instances of hypotheses, or to its counterexamples, or to both. Given certain contents, instructions, or framings of the task, they were more likely to select potential counterexamples to the hypothesis. When participants received feedback about their selections in the "repeated" selection task, they switched from selections of instances of the hypothesis to selection of potential counterexamples. These results eliminated most of the 15 alternative theories of selecting evidence. In a meta-analysis, the model theory yielded a better fit of the results of 228 experiments than the one remaining theory based on reasoning rather than meaning. We discuss the implications of the model theory for hypothesis testing and for a well-known paradox of confirmation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
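
    A generic illustration of "Shannon's measure" of redundancy for selection patterns in the four-card task: redundancy is 1 - H/Hmax over the distribution of observed patterns, compared against a baseline simulated from independent selections at the same marginal rates. The bookkeeping and example counts are assumptions, not the authors' data.

```python
import numpy as np

def redundancy(patterns):
    """Shannon redundancy, 1 - H/Hmax, of a sample of 4-card selection patterns."""
    patterns = [tuple(p) for p in patterns]
    counts = np.array([patterns.count(p) for p in set(patterns)], dtype=float)
    probs = counts / counts.sum()
    h = -np.sum(probs * np.log2(probs))
    return 1.0 - h / np.log2(16)     # 16 possible patterns for 4 cards

rng = np.random.default_rng(2)
# Dependent choices: most participants pick the 'p' and 'q' cards together
observed = [(1, 0, 0, 1)] * 60 + [(1, 0, 0, 0)] * 25 + [(1, 1, 0, 1)] * 15
# Independent baseline: simulate selections from the marginal selection rates
marginals = np.mean(observed, axis=0)
simulated = (rng.random((100, 4)) < marginals).astype(int)
print(redundancy(observed), redundancy(simulated))
```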

  1. Does Maltreatment Beget Maltreatment? A Systematic Review of the Intergenerational Literature

    PubMed Central

    Thornberry, Terence P.; Knight, Kelly E.; Lovegrove, Peter J.

    2014-01-01

    In this paper, we critically review the literature testing the cycle of maltreatment hypothesis which posits continuity in maltreatment across adjacent generations. That is, we examine whether a history of maltreatment victimization is a significant risk factor for the later perpetration of maltreatment. We begin by establishing 11 methodological criteria that studies testing this hypothesis should meet. They include such basic standards as using representative samples, valid and reliable measures, prospective designs, and different reporters for each generation. We identify 47 studies that investigated this issue and then evaluate them with regard to the 11 methodological criteria. Overall, most of these studies report findings consistent with the cycle of maltreatment hypothesis. Unfortunately, at the same time, few of them satisfy the basic methodological criteria that we established; indeed, even the stronger studies in this area only meet about half of them. Moreover, the methodologically stronger studies present mixed support for the hypothesis. As a result, the positive association often reported in the literature appears to be based largely on the methodologically weaker designs. Based on our systematic methodological review, we conclude that this small and methodologically weak body of literature does not provide a definitive test of the cycle of maltreatment hypothesis. We conclude that it is imperative to develop more robust and methodologically adequate assessments of this hypothesis to more accurately inform the development of prevention and treatment programs. PMID:22673145

  2. Optimizing Aircraft Availability: Where to Spend Your Next O&M Dollar

    DTIC Science & Technology

    2010-03-01

    patterns of variance are present. In addition, we use the Breusch-Pagan test to statistically determine whether homoscedasticity exists. For this... Breusch-Pagan test, large p-values are preferred so that we may accept the null hypothesis of normality. Failure to meet the fourth assumption is... Next, we show the residual by predicted plot and the Breusch-Pagan test for constant variance of the residuals. The null hypothesis is that the

  3. Estimating Required Contingency Funds for Construction Projects using Multiple Linear Regression

    DTIC Science & Technology

    2006-03-01

    Breusch-Pagan test, in which the null hypothesis states that the residuals have constant variance. The alternate hypothesis is that the residuals do not... variance, the Breusch-Pagan test provides statistical evidence that the assumption is justified. For the proposed model, the p-value is 0.173... entire test sample.

  4. Bayesian Hypothesis Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Stephen A.; Sigeti, David E.

    These are a set of slides about Bayesian hypothesis testing, where many hypotheses are tested. The conclusions are the following: The value of the Bayes factor obtained when using the median of the posterior marginal is almost the minimum value of the Bayes factor. The value of τ² which minimizes the Bayes factor is a reasonable choice for this parameter. This allows a likelihood ratio to be computed which is the least favorable to H0.
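
    A hedged sketch of the "least favorable to H0" idea, for the simple case H0: mu = 0 versus H1: mu ~ N(0, τ²) with a normal estimate and known standard error; the Bayes factor is minimized numerically over τ². The specific model is an assumption for illustration, since the slides do not fix it in this record.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def min_bayes_factor(xbar, se):
    """Bayes factor BF01 minimized over the prior variance tau^2 (least favorable to H0)."""
    def bf01(tau2):
        # Marginal likelihoods: N(xbar; 0, se^2) under H0 vs N(xbar; 0, se^2 + tau2) under H1
        return norm.pdf(xbar, 0, se) / norm.pdf(xbar, 0, np.sqrt(se**2 + tau2))
    res = minimize_scalar(bf01, bounds=(1e-8, 100 * se**2), method='bounded')
    return res.fun, res.x

bf, tau2 = min_bayes_factor(xbar=1.2, se=0.5)   # |z| = 2.4
print(f"minimum BF01 = {bf:.3f} at tau^2 = {tau2:.3f}")
```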

  5. Computer-Assisted Problem Solving in School Mathematics

    ERIC Educational Resources Information Center

    Hatfield, Larry L.; Kieren, Thomas E.

    1972-01-01

    A test of the hypothesis that writing and using computer programs related to selected mathematical content positively affects performance on those topics. Results particularly support the hypothesis. (MM)

  6. [Examination of the hypothesis 'the factors and mechanisms of superiority'].

    PubMed

    Sierra-Fitzgerald, O; Quevedo-Caicedo, J; López-Calderón, M G

    INTRODUCTION. The hypothesis of Geschwind and Galaburda suggests that specific cognitive superiority arises as a result of an alteration in development of the nervous system. In this article we review the coexistence of superiority and inferiority. PATIENTS AND METHODS. A study was made of six children aged between 6 and 8 years at the Instituto de Bellas Artes Antonio Maria Valencia in Cali, Colombia, with an educational level between second and third grade of primary school and of medium-low socioeconomic status. The children were considered to have superior musical ability by music experts, which is how the concept of superiority was tested. The concept of inferiority was tested by neuropsychological test scores 1.5 SD below normal for the same age. We estimated the perinatal neurological risk in each case. Subsequently the children's general intelligence and specific cognitive abilities were evaluated; for the former, the WISC-R and MSCA were used. The neuropsychological profiles were obtained by broad evaluation using a verbal fluency test, a token test, the Boston vocabulary test, the Wechsler memory scale, a sequential verbal memory test, a superimposed figures test, the Piaget-Head battery, the Rey-Osterrieth complex figure, and the Wisconsin card sorting test. The RESULTS showed slight to moderate deficits in practical constructional ability and mild defects of memory and conceptual abilities. In general the results supported the hypothesis tested. The mechanisms of superiority proposed in the classical hypothesis mainly involve the contralateral hemisphere; in this study the ipsilateral mechanism was more important.

  7. Mothers Who Kill Their Offspring: Testing Evolutionary Hypothesis in a 110-Case Italian Sample

    ERIC Educational Resources Information Center

    Camperio Ciani, Andrea S.; Fontanesi, Lilybeth

    2012-01-01

    Objectives: This research aimed to identify incidents of mothers in Italy killing their own children and to test an adaptive evolutionary hypothesis to explain their occurrence. Methods: 110 cases of mothers killing 123 of their own offspring from 1976 to 2010 were analyzed. Each case was classified using 13 dichotomic variables. Descriptive…

  8. Religious Socialization: A Test of the Channeling Hypothesis of Parental Influence on Adolescent Faith Maturity.

    ERIC Educational Resources Information Center

    Martin, Todd F.; White, James M.; Perlman, Daniel

    2003-01-01

    This study used a sample of 2,379 seventh through twelfth graders in 5 Protestant denominations to test the hypothesis that parental influences on religious faith are mediated through peer selection and congregation selection. Findings revealed that peer and parental influence remained stable during the adolescent years. Parental influence did not…

  9. Bayesian Hypothesis Testing for Psychologists: A Tutorial on the Savage-Dickey Method

    ERIC Educational Resources Information Center

    Wagenmakers, Eric-Jan; Lodewyckx, Tom; Kuriyal, Himanshu; Grasman, Raoul

    2010-01-01

    In the field of cognitive psychology, the "p"-value hypothesis test has established a stranglehold on statistical reporting. This is unfortunate, as the "p"-value provides at best a rough estimate of the evidence that the data provide for the presence of an experimental effect. An alternative and arguably more appropriate measure of evidence is…

  10. Parenting as a Dynamic Process: A Test of the Resource Dilution Hypothesis

    ERIC Educational Resources Information Center

    Strohschein, Lisa; Gauthier, Anne H.; Campbell, Rachel; Kleparchuk, Clayton

    2008-01-01

    In this paper, we tested the resource dilution hypothesis, which posits that, because parenting resources are finite, the addition of a new sibling depletes parenting resources for other children in the household. We estimated growth curve models on the self-reported parenting practices of mothers using four waves of data collected biennially…

  11. Robust Means Modeling: An Alternative for Hypothesis Testing of Independent Means under Variance Heterogeneity and Nonnormality

    ERIC Educational Resources Information Center

    Fan, Weihua; Hancock, Gregory R.

    2012-01-01

    This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…

  12. Plant disease severity assessment - How rater bias, assessment method and experimental design affect hypothesis testing and resource use efficiency

    USDA-ARS?s Scientific Manuscript database

    The impact of rater bias and assessment method on hypothesis testing was studied for different experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed ‘balanced’, and those ...

  13. Is Variability in Mate Choice Similar for Intelligence and Personality Traits? Testing a Hypothesis about the Evolutionary Genetics of Personality

    ERIC Educational Resources Information Center

    Stone, Emily A.; Shackelford, Todd K.; Buss, David M.

    2012-01-01

    This study tests the hypothesis presented by Penke, Denissen, and Miller (2007a) that condition-dependent traits, including intelligence, attractiveness, and health, are universally and uniformly preferred as characteristics in a mate relative to traits that are less indicative of condition, including personality traits. We analyzed…

  14. The Effect of Repetitive Saccade Execution on the Attention Network Test: Enhancing Executive Function with a Flick of the Eyes

    ERIC Educational Resources Information Center

    Edlin, James M.; Lyle, Keith B.

    2013-01-01

    The simple act of repeatedly looking left and right can enhance subsequent cognition, including divergent thinking, detection of matching letters from visual arrays, and memory retrieval. One hypothesis is that saccade execution enhances subsequent cognition by altering attentional control. To test this hypothesis, we compared performance…

  15. The Genesis of Pedophilia: Testing the "Abuse-to-Abuser" Hypothesis.

    ERIC Educational Resources Information Center

    Fedoroff, J. Paul; Pinkus, Shari

    1996-01-01

    This study tested three versions of the "abuse-to-abuser" hypothesis by comparing men with personal histories of sexual abuse and men without sexual abuse histories. There was a statistically non-significant trend for assaulted offenders to be more likely as adults to commit genital assaults on children. Implications for the abuse-to-abuser…

  16. 2004-2006 Puget Sound Traffic Choices Study

    Science.gov Websites

    The 2004-2006 Puget Sound Traffic Choices Study tested the hypothesis that time-of-day variable tolling changes travel choices, as part of a Federal Highway Administration pilot project on congestion-based tolling. Methodology: To test the hypothesis, the study…

  17. Assessment of Theory of Mind in Children with Communication Disorders: Role of Presentation Mode

    ERIC Educational Resources Information Center

    van Buijsen, Marit; Hendriks, Angelique; Ketelaars, Mieke; Verhoeven, Ludo

    2011-01-01

    Children with communication disorders have problems with both language and social interaction. The theory-of-mind hypothesis provides an explanation for these problems, and different tests have been developed to test this hypothesis. However, different modes of presentation are used in these tasks, which make the results difficult to compare. In…

  18. Does mediator use contribute to the spacing effect for cued recall? Critical tests of the mediator hypothesis.

    PubMed

    Morehead, Kayla; Dunlosky, John; Rawson, Katherine A; Bishop, Melissa; Pyc, Mary A

    2018-04-01

    When study is spaced across sessions (versus massed within a single session), final performance is greater after spacing. This spacing effect may have multiple causes, and according to the mediator hypothesis, part of the effect can be explained by the use of mediator-based strategies. This hypothesis proposes that when study is spaced across sessions, rather than massed within a session, more mediators will be generated that are longer lasting and hence more mediators will be available to support criterion recall. In two experiments, participants were randomly assigned to study paired associates using either a spaced or massed schedule. They reported strategy use for each item during study trials and during the final test. Consistent with the mediator hypothesis, participants who had spaced (as compared to massed) practice reported using more mediators on the final test. This use of effective mediators also statistically accounted for some - but not all of - the spacing effect on final performance.

  19. A more powerful test based on ratio distribution for retention noninferiority hypothesis.

    PubMed

    Deng, Ling; Chen, Gang

    2013-03-11

    Rothmann et al. (2003) proposed a method for the statistical inference of the fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined as a ratio of the new treatment effect versus the control effect in the context of a time-to-event endpoint. One of the major concerns using this method in the design of an NI trial is that, with a limited sample size, the power of the study is usually very low. This makes an NI trial impractical, particularly when using a time-to-event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under the null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption used in Rothmann's test, that the observed control effect is always positive, that is, the observed hazard ratio for placebo over the control is greater than 1, is no longer necessary. Without assuming equal variance under the null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced by using the proposed ratio test for a fraction retention NI hypothesis.

  20. Linear and nonlinear properties of numerical methods for the rotating shallow water equations

    NASA Astrophysics Data System (ADS)

    Eldred, Chris

    The shallow water equations provide a useful analogue of the fully compressible Euler equations since they have similar conservation laws, many of the same types of waves and a similar (quasi-) balanced state. It is desirable that numerical models possess similar properties, and the prototypical example of such a scheme is the 1981 Arakawa and Lamb (AL81) staggered (C-grid) total energy and potential enstrophy conserving scheme, based on the vector invariant form of the continuous equations. However, this scheme is restricted to a subset of logically square, orthogonal grids. The current work extends the AL81 scheme to arbitrary non-orthogonal polygonal grids, by combining Hamiltonian methods (work done by Salmon, Gassmann, Dubos and others) and Discrete Exterior Calculus (Thuburn, Cotter, Dubos, Ringler, Skamarock, Klemp and others). It is also possible to obtain these properties (along with arguably superior wave dispersion properties) through the use of a collocated (Z-grid) scheme based on the vorticity-divergence form of the continuous equations. Unfortunately, existing examples of these schemes in the literature for general, spherical grids either contain computational modes or do not conserve total energy and potential enstrophy. This dissertation extends an existing scheme for planar grids to spherical grids, through the use of Nambu brackets (as pioneered by Rick Salmon). To compare these two schemes, the linear modes (balanced states, stationary modes and propagating modes; with and without dissipation) are examined on both uniform planar grids (square, hexagonal) and quasi-uniform spherical grids (geodesic, cubed-sphere). In addition to evaluating the linear modes, the results of the two schemes applied to a set of standard shallow water test cases and a recently developed forced-dissipative turbulence test case from John Thuburn (intended to evaluate the suitability of schemes as the basis for a climate model) on both hexagonal-pentagonal icosahedral grids and cubed-sphere grids are presented. Finally, some remarks and thoughts about the suitability of these two schemes as the basis for atmospheric dynamical core development are given.

  1. Investigating the scale-adaptivity of a shallow cumulus parameterization scheme with LES

    NASA Astrophysics Data System (ADS)

    Brast, Maren; Schemann, Vera; Neggers, Roel

    2017-04-01

    In this study we investigate the scale-adaptivity of a new parameterization scheme for shallow cumulus clouds in the gray zone. The Eddy-Diffusivity Multiple Mass-Flux (or ED(MF)n ) scheme is a bin-macrophysics scheme, in which subgrid transport is formulated in terms of discretized size densities. While scale-adaptivity in the ED-component is achieved using a pragmatic blending approach, the MF-component is filtered such that only the transport by plumes smaller than the grid size is maintained. For testing, ED(MF)n is implemented in a large-eddy simulation (LES) model, replacing the original subgrid-scheme for turbulent transport. LES thus plays the role of a non-hydrostatic testing ground, which can be run at different resolutions to study the behavior of the parameterization scheme in the boundary-layer gray zone. In this range convective cumulus clouds are partially resolved. We find that at high resolutions the clouds and the turbulent transport are predominantly resolved by the LES, and the transport represented by ED(MF)n is small. This partitioning changes towards coarser resolutions, with the representation of shallow cumulus clouds becoming exclusively carried by the ED(MF)n. The way the partitioning changes with grid-spacing matches the results of previous LES studies, suggesting some scale-adaptivity is captured. Sensitivity studies show that a scale-inadaptive ED component stays too active at high resolutions, and that the results are fairly insensitive to the number of transporting updrafts in the ED(MF)n scheme. Other assumptions in the scheme, such as the distribution of updrafts across sizes and the value of the area fraction covered by updrafts, are found to affect the location of the gray zone.

  2. Comparing a single-day swabbing regimen with an established 3-day protocol for MRSA decolonization control.

    PubMed

    Frickmann, H; Schwarz, N G; Hahn, A; Ludyga, A; Warnke, P; Podbielski, A

    2018-05-01

    Success of methicillin-resistant Staphylococcus aureus (MRSA) decolonization procedures is usually verified by control swabs of the colonized body region. This prospective controlled study compared a single-day regimen with a well-established 3-day scheme for noninferiority and adherence to the testing scheme. Two sampling schemes for screening MRSA patients of a single study cohort at a German tertiary-care hospital 2 days after decolonization were compared regarding their ability to identify MRSA colonization in throat or nose. In each patient, three nose and three throat swabs were taken at 3- to 4-hour intervals during screening day 1, and in the same patients once daily on days 1, 2 and 3. Swabs were analysed using chromogenic agar and broth enrichment. The study aimed to investigate whether the single-day swabbing scheme is not inferior to the 3-day scheme with a 15% noninferiority margin. One hundred sixty patients were included, comprising 105 and 101 patients with results on all three swabs for decolonization screening of the nose and throat, respectively. Noninferiority of the single-day swabbing scheme was confirmed for both pharyngeal and nasal swabs, with 91.8% and 89% agreement, respectively. The absolute difference of positivity rates between the swabbing regimens was 0.025 (95% confidence interval -0.082 to 0.131) for the nose and 0.006 (-0.102 to 0.114) for the pharynx, as calculated with McNemar's test for matched or paired data. Compliance with the single-day scheme was better, with 12% lacking second-day swabs and 27% lacking third-day swabs from the nostrils. The better adherence to the single-day screening scheme with noninferiority suggests its implementation as the new gold standard. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
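
    A generic sketch of a paired-data noninferiority check of the kind described above: a Wald confidence interval for the difference in positivity rates from McNemar-style discordant counts is compared against the 15% margin. The discordant counts below are hypothetical placeholders, not the study's data, and the authors' exact computation may differ.

```python
import numpy as np
from scipy.stats import norm

def paired_noninferiority(b, c, n, margin=0.15, alpha=0.05):
    """Noninferiority check for paired detection rates.

    b = detected by the 3-day scheme only, c = detected by the single-day
    scheme only, n = number of paired patients.
    """
    diff = (b - c) / n                       # 3-day minus single-day positivity rate
    var = (b + c) / n**2 - (b - c)**2 / n**3
    z = norm.ppf(1 - alpha / 2)
    lo, hi = diff - z * np.sqrt(var), diff + z * np.sqrt(var)
    return diff, (lo, hi), hi < margin       # noninferior if upper bound < margin

# Hypothetical discordant counts for n = 105 nasal-swab patients
print(paired_noninferiority(b=8, c=5, n=105))
```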

  3. Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme

    DOE PAGES

    Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.; ...

    2016-11-07

    Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton's method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. Performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.
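
    A minimal serial sketch of the Krylov-Schwarz idea: GMRES accelerated by an additive Schwarz-type preconditioner. A 1D Laplacian stands in for the power-system Jacobian, and non-overlapping block Jacobi is used as a simplified (zero-overlap) stand-in for the paper's overlapping restricted additive Schwarz preconditioner, which runs in parallel.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, nblocks = 400, 8
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# Factor each diagonal block once (one 'subdomain' per block)
bounds = np.linspace(0, n, nblocks + 1, dtype=int)
lu_blocks = [spla.splu(A[s:e, s:e].tocsc()) for s, e in zip(bounds[:-1], bounds[1:])]

def apply_prec(r):
    """Additive combination of local block solves."""
    z = np.zeros_like(r)
    for (s, e), lu in zip(zip(bounds[:-1], bounds[1:]), lu_blocks):
        z[s:e] = lu.solve(r[s:e])
    return z

M = spla.LinearOperator((n, n), matvec=apply_prec)
x, info = spla.gmres(A, b, M=M, restart=30, atol=1e-10)
print(info, np.linalg.norm(A @ x - b))   # info == 0 means GMRES converged
```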

  4. Fault-tolerant Greenberger-Horne-Zeilinger paradox based on non-Abelian anyons.

    PubMed

    Deng, Dong-Ling; Wu, Chunfeng; Chen, Jing-Ling; Oh, C H

    2010-08-06

    We propose a scheme to test the Greenberger-Horne-Zeilinger paradox based on braidings of non-Abelian anyons, which are exotic quasiparticle excitations of topological states of matter. Because topological ordered states are robust against local perturbations, this scheme is in some sense "fault-tolerant" and might close the detection inefficiency loophole problem in previous experimental tests of the Greenberger-Horne-Zeilinger paradox. In turn, the construction of the Greenberger-Horne-Zeilinger paradox reveals the nonlocal property of non-Abelian anyons. Our results indicate that the non-Abelian fractional statistics is a pure quantum effect and cannot be described by local realistic theories. Finally, we present a possible experimental implementation of the scheme based on the anyonic interferometry technologies.

  5. Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.

    Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton's method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. Performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.

  6. Economic evaluation of progeny-testing and genomic selection schemes for small-sized nucleus dairy cattle breeding programs in developing countries.

    PubMed

    Kariuki, C M; Brascamp, E W; Komen, H; Kahi, A K; van Arendonk, J A M

    2017-03-01

    In developing countries, minimal and erratic performance and pedigree recording impede implementation of large-sized breeding programs. Small-sized nucleus programs offer an alternative but rely on their economic performance for their viability. We investigated the economic performance of 2 alternative small-sized dairy nucleus programs [i.e., progeny testing (PT) and genomic selection (GS)] over a 20-yr investment period. The nucleus was made up of 453 male and 360 female animals distributed in 8 non-overlapping age classes. Each year 10 active sires and 100 elite dams were selected. Populations of commercial recorded cows (CRC) of sizes 12,592 and 25,184 were used to produce test daughters in PT or to create a reference population in GS, respectively. Economic performance was defined as gross margins, calculated as discounted revenues minus discounted costs following a single generation of selection. Revenues were calculated as cumulative discounted expressions (CDE, kg) × 0.32 (€/kg of milk) × 100,000 (size of the commercial population). Genetic superiorities were deterministically simulated using a pseudo-BLUP index, and CDE were determined using gene flow. Costs were for one generation of selection. Results show that GS schemes had higher cumulated genetic gain in the commercial cow population and higher gross margins compared with PT schemes. Gross margins were between 3.2- and 5.2-fold higher for GS, depending on the size of the CRC population. The increase in gross margin was mostly due to a decreased generation interval and lower running costs in GS schemes. In PT schemes many bulls are culled before selection. We therefore also compared 2 schemes in which semen was stored instead of keeping live bulls. As expected, semen storage resulted in an increase in gross margins in PT schemes, but gross margins remained lower than those of GS schemes. We conclude that implementation of small-sized GS breeding schemes can be economically viable for developing countries. © The Authors. Published by the Federation of Animal Science Societies and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
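
    A worked illustration of the gross-margin formula quoted above: revenues are cumulative discounted expressions (kg of milk per commercial cow) times the milk price times the size of the commercial population, minus discounted scheme costs. The CDE and cost figures below are hypothetical placeholders, not values from the paper.

```python
def gross_margin(cde_kg, cost_eur, milk_price=0.32, herd_size=100_000):
    """Gross margin for one generation of selection: CDE x price x herd size - costs."""
    return cde_kg * milk_price * herd_size - cost_eur

# Hypothetical comparison of a progeny-testing and a genomic-selection scheme
pt = gross_margin(cde_kg=55.0, cost_eur=1_200_000)
gs = gross_margin(cde_kg=110.0, cost_eur=900_000)
print(pt, gs, gs / pt)   # GS gross margin relative to PT
```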

  7. Report on Pairing-based Cryptography.

    PubMed

    Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily

    2015-01-01

    This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST's position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed.

  8. Report on Pairing-based Cryptography

    PubMed Central

    Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily

    2015-01-01

    This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST’s position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed. PMID:26958435

  9. Choice of no-slip curved boundary condition for lattice Boltzmann simulations of high-Reynolds-number flows.

    PubMed

    Sanjeevi, Sathish K P; Zarghami, Ahad; Padding, Johan T

    2018-04-01

    Various curved no-slip boundary conditions available in literature improve the accuracy of lattice Boltzmann simulations compared to the traditional staircase approximation of curved geometries. Usually, the required unknown distribution functions emerging from the solid nodes are computed based on the known distribution functions using interpolation or extrapolation schemes. On using such curved boundary schemes, there will be mass loss or gain at each time step during the simulations, especially apparent at high Reynolds numbers, which is called mass leakage. Such an issue becomes severe in periodic flows, where the mass leakage accumulation would affect the computed flow fields over time. In this paper, we examine mass leakage of the most well-known curved boundary treatments for high-Reynolds-number flows. Apart from the existing schemes, we also test different forced mass conservation schemes and a constant density scheme. The capability of each scheme is investigated and, finally, recommendations for choosing a proper boundary condition scheme are given for stable and accurate simulations.

  10. Determining the Impact of Personal Mobility Carbon Allowance Schemes in Transportation Networks

    DOE PAGES

    Aziz, H. M. Abdul; Ukkusuri, Satish V.; Zhan, Xianyuan

    2016-10-17

    Personal mobility carbon allowance (PMCA) schemes are designed to reduce carbon consumption from transportation networks. PMCA schemes influence the travel decision process of users and accordingly impact the system metrics including travel time and greenhouse gas (GHG) emissions. Here, we develop a multi-user class dynamic user equilibrium model to evaluate the transportation system performance when a PMCA scheme is implemented. The results using the Sioux-Falls test network indicate that PMCA schemes can achieve the emissions reduction goals for transportation networks. Further, users characterized by high value of travel time are found to be less sensitive to carbon budget in the context of work trips. Results also show that a PMCA scheme can lead to higher emissions for a path compared with the case without PMCA because of flow redistribution. The developed network equilibrium model allows us to examine the change in system states at different carbon allocation levels and to design parameters of PMCA schemes accounting for population heterogeneity.

  11. Determining the Impact of Personal Mobility Carbon Allowance Schemes in Transportation Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, H. M. Abdul; Ukkusuri, Satish V.; Zhan, Xianyuan

    Personal mobility carbon allowance (PMCA) schemes are designed to reduce carbon consumption from transportation networks. PMCA schemes influence the travel decision process of users and accordingly impact the system metrics including travel time and greenhouse gas (GHG) emissions. Here, we develop a multi-user class dynamic user equilibrium model to evaluate the transportation system performance when a PMCA scheme is implemented. The results using the Sioux-Falls test network indicate that PMCA schemes can achieve the emissions reduction goals for transportation networks. Further, users characterized by high value of travel time are found to be less sensitive to carbon budget in the context of work trips. Results also show that a PMCA scheme can lead to higher emissions for a path compared with the case without PMCA because of flow redistribution. The developed network equilibrium model allows us to examine the change in system states at different carbon allocation levels and to design parameters of PMCA schemes accounting for population heterogeneity.

  12. Choice of no-slip curved boundary condition for lattice Boltzmann simulations of high-Reynolds-number flows

    NASA Astrophysics Data System (ADS)

    Sanjeevi, Sathish K. P.; Zarghami, Ahad; Padding, Johan T.

    2018-04-01

    Various curved no-slip boundary conditions available in literature improve the accuracy of lattice Boltzmann simulations compared to the traditional staircase approximation of curved geometries. Usually, the required unknown distribution functions emerging from the solid nodes are computed based on the known distribution functions using interpolation or extrapolation schemes. On using such curved boundary schemes, there will be mass loss or gain at each time step during the simulations, especially apparent at high Reynolds numbers, which is called mass leakage. Such an issue becomes severe in periodic flows, where the mass leakage accumulation would affect the computed flow fields over time. In this paper, we examine mass leakage of the most well-known curved boundary treatments for high-Reynolds-number flows. Apart from the existing schemes, we also test different forced mass conservation schemes and a constant density scheme. The capability of each scheme is investigated and, finally, recommendations for choosing a proper boundary condition scheme are given for stable and accurate simulations.

  13. On the application of ENO scheme with subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1991-01-01

    Two approaches are used to extend the essentially non-oscillatory (ENO) schemes to treat conservation laws with stiff source terms. One approach is the application of the Strang time-splitting method. Here the basic ENO scheme and the Harten modification using subcell resolution (SR), the ENO/SR scheme, are extended in this way. The other approach is a direct method and a modification of the ENO/SR. Here the technique of ENO reconstruction with subcell resolution is used to locate the discontinuity within a cell, and the time evolution is then accomplished by solving the differential equation along characteristics locally and advancing in the characteristic direction. This scheme is denoted ENO/SRCD (subcell resolution - characteristic direction). All the schemes are tested on the equation of LeVeque and Yee (NASA-TM-100075, 1988) modeling reacting flow problems. Numerical results show that these schemes handle this intriguing model problem very well, especially ENO/SRCD, which produces perfect resolution at the discontinuity.

  14. Numerical viscosity and resolution of high-order weighted essentially nonoscillatory schemes for compressible flows with high Reynolds numbers.

    PubMed

    Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye

    2003-10-01

    A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, or similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (therefore, the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution to these small-scale structures, the numerical viscosity inherent in the scheme must be small enough so that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU time efficient to reach the same resolution, both for the one-dimensional and two-dimensional test problems.

  15. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Hixon, Duane; Sankar, L. N.

    1993-01-01

    During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist, such as the transonic small disturbance analyses (TSD), transonic full potential equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for acceleration of a Newton iteration time marching scheme for unsteady 2-D and 3-D compressible viscous flow calculations; from preliminary calculations, this will provide up to a 65 percent reduction in the computer time requirements over the existing class of explicit and implicit time marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector processor architectures of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.

  16. An improvement in mass flux convective parameterizations and its impact on seasonal simulations using a coupled model

    NASA Astrophysics Data System (ADS)

    Elsayed Yousef, Ahmed; Ehsan, M. Azhar; Almazroui, Mansour; Assiri, Mazen E.; Al-Khalaf, Abdulrahman K.

    2017-02-01

    A new closure and a modified detrainment for the simplified Arakawa-Schubert (SAS) cumulus parameterization scheme are proposed. In the modified convective scheme, named the King Abdulaziz University (KAU) scheme, the closure depends on both the buoyancy force and the environment mean relative humidity. A lateral entrainment rate varying with environment relative humidity is proposed, which tends to suppress convection in a dry atmosphere. The detrainment rate also varies with environment relative humidity. The KAU scheme has been tested in a single column model (SCM) and implemented in a coupled global climate model (CGCM). Increased coupling between environment and clouds in the KAU scheme results in improved sensitivity of the depth and strength of convection to environmental humidity compared to the original SAS scheme. The new scheme improves precipitation simulation with better representations of moisture and temperature, especially during suppressed convection periods. The KAU scheme implemented in the Seoul National University (SNU) CGCM shows improved precipitation over the tropics. The simulated precipitation pattern over the Arabian Peninsula and Northeast African region is also improved.

  17. Total energy and potential enstrophy conserving schemes for the shallow water equations using Hamiltonian methods - Part 1: Derivation and properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldred, Christopher; Randall, David

    The shallow water equations provide a useful analogue of the fully compressible Euler equations since they have similar characteristics: conservation laws, inertia-gravity and Rossby waves, and a (quasi-) balanced state. In order to obtain realistic simulation results, it is desirable that numerical models have discrete analogues of these properties. Two prototypical examples of such schemes are the 1981 Arakawa and Lamb (AL81) C-grid total energy and potential enstrophy conserving scheme, and the 2007 Salmon (S07) Z-grid total energy and potential enstrophy conserving scheme. Unfortunately, the AL81 scheme is restricted to logically square, orthogonal grids, and the S07 scheme is restricted to uniform square grids. The current work extends the AL81 scheme to arbitrary non-orthogonal polygonal grids and the S07 scheme to arbitrary orthogonal spherical polygonal grids in a manner that allows for both total energy and potential enstrophy conservation, by combining Hamiltonian methods (work done by Salmon, Gassmann, Dubos, and others) and discrete exterior calculus (Thuburn, Cotter, Dubos, Ringler, Skamarock, Klemp, and others). Lastly, detailed results of the schemes applied to standard test cases are deferred to part 2 of this series of papers.

  18. Memoirs of a frequent flier: Phylogenomics reveals 18 long-distance dispersals between North America and South America in the popcorn flowers (Amsinckiinae).

    PubMed

    Guilliams, C Matt; Hasenstab-Lehman, Kristen E; Mabry, Makenzie E; Simpson, Michael G

    2017-11-23

    American amphitropical disjunction (AAD) is an important but understudied New World biogeographic pattern in which related plants occur in extratropical North America and South America, but are absent in the intervening tropics. Subtribe Amsinckiinae (Boraginaceae) is one of the richest groups of plants displaying the AAD pattern. Here, we infer a time-calibrated molecular phylogeny of the group to evaluate the number, timing, and directionality of AAD events, which yields generalizable insights into the mechanism of AAD. We perform a phylogenomic analysis of 139 samples of subtribe Amsinckiinae and infer divergence times using two calibration schemes: with only fossil calibrations and with fossils plus a secondary calibration from a recent family level analysis. Biogeographic analysis was performed in the R package BioGeoBEARS. We document 18 examples of AAD in the Amsinckiinae. Inferred divergence times of these AAD examples were strongly asynchronous, ranging from Miocene (17.1 million years ago [Ma]) to Pleistocene (0.33 Ma), with most (12) occurring <5 Ma. Four events occurred 10-5 Ma, during the second rise of the Andes. All AAD examples had a North America to South America directionality. Second only to the hyperdiverse Poaceae in number of documented AAD examples, the Amsinckiinae is an ideal system for the study of AAD. Asynchronous divergence times support the hypothesis of long-distance dispersal by birds as the mechanism of AAD in the subtribe and more generally. Further comparative phylogenomic studies may permit biogeographic hypothesis testing and examination of the relationship between AAD and fruit morphology, reproductive biology, and ploidy. © 2017 Botanical Society of America.

  19. Risk pathways among traumatic stress, posttraumatic stress disorder symptoms, and alcohol and drug problems: a test of four hypotheses.

    PubMed

    Haller, Moira; Chassin, Laurie

    2014-09-01

    The present study utilized longitudinal data from a community sample (n = 377; 166 trauma-exposed; 54% males; 73% non-Hispanic Caucasian; 22% Hispanic; 5% other ethnicity) to test whether pretrauma substance use problems increase risk for trauma exposure (high-risk hypothesis) or posttraumatic stress disorder (PTSD) symptoms (susceptibility hypothesis), whether PTSD symptoms increase risk for later alcohol/drug problems (self-medication hypothesis), and whether the association between PTSD symptoms and alcohol/drug problems is attributable to shared risk factors (shared vulnerability hypothesis). Logistic and negative binomial regressions were performed in a path analysis framework. Results provided the strongest support for the self-medication hypothesis, such that PTSD symptoms predicted higher levels of later alcohol and drug problems, over and above the influences of pretrauma family risk factors, pretrauma substance use problems, trauma exposure, and demographic variables. Results partially supported the high-risk hypothesis, such that adolescent substance use problems increased risk for assaultive violence exposure but did not influence overall risk for trauma exposure. There was no support for the susceptibility hypothesis. Finally, there was little support for the shared vulnerability hypothesis. Neither trauma exposure nor preexisting family adversity accounted for the link between PTSD symptoms and later substance use problems. Rather, PTSD symptoms mediated the effect of pretrauma family adversity on later alcohol and drug problems, thereby supporting the self-medication hypothesis. These findings make important contributions to better understanding the directions of influence among traumatic stress, PTSD symptoms, and substance use problems.

  20. Access-in-turn test architecture for low-power test application

    NASA Astrophysics Data System (ADS)

    Wang, Weizheng; Wang, JinCheng; Wang, Zengyun; Xiang, Lingyun

    2017-03-01

    This paper presents a novel access-in-turn test architecture (AIT-TA) for testing very large scale integrated (VLSI) designs. In the proposed scheme, each scan cell in a chain receives test data from the shift-in line in turn while pushing its test response to the shift-out line. This largely solves the power problem of the conventional scan architecture and significantly suppresses the switching activity during shift and capture operations, with acceptable hardware overhead. It can therefore support test application at much higher operating frequencies, resulting in shorter test application time. The proposed approach extends the architecture of conventional scan flip-flops and is backward compatible with existing test pattern generation and simulation techniques. Experimental results obtained for some of the larger ISCAS'89 and ITC'99 benchmark circuits illustrate the effectiveness of the proposed low-power test application scheme.
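
    A back-of-the-envelope toggle count makes the switching-activity argument concrete: rippling a full pattern through a scan chain toggles many cells per cycle, whereas updating one addressed cell per cycle bounds the toggles by the chain length. This is only a toy model under simplifying assumptions, not a model of the AIT-TA circuit itself.

    ```python
    # Toy comparison of switching activity (bit toggles in the scan cells) for
    # (a) a conventional serial shift, where the pattern ripples through the
    # chain, and (b) an access-in-turn style load, where only the addressed
    # cell changes per cycle. Illustrative model only, not the AIT-TA hardware.
    import random

    random.seed(1)
    N = 64                                    # scan chain length
    pattern = [random.randint(0, 1) for _ in range(N)]

    def toggles(old, new):
        return sum(o != n for o, n in zip(old, new))

    # (a) conventional serial shift: N cycles, the pattern ripples through the chain.
    conv, state = 0, [0] * N
    for bit in reversed(pattern):
        new_state = [bit] + state[:-1]        # shift by one position
        conv += toggles(state, new_state)
        state = new_state

    # (b) access-in-turn style load: each cycle only one cell takes its new value.
    ait, state = 0, [0] * N
    for i, bit in enumerate(pattern):
        new_state = state[:]
        new_state[i] = bit
        ait += toggles(state, new_state)
        state = new_state

    print(f"toggles, conventional shift: {conv}")
    print(f"toggles, access-in-turn:     {ait}")   # at most N, typically far fewer
    ```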

  1. Test of the Brink-Axel Hypothesis for the Pygmy Dipole Resonance

    NASA Astrophysics Data System (ADS)

    Martin, D.; von Neumann-Cosel, P.; Tamii, A.; Aoi, N.; Bassauer, S.; Bertulani, C. A.; Carter, J.; Donaldson, L.; Fujita, H.; Fujita, Y.; Hashimoto, T.; Hatanaka, K.; Ito, T.; Krugmann, A.; Liu, B.; Maeda, Y.; Miki, K.; Neveling, R.; Pietralla, N.; Poltoratska, I.; Ponomarev, V. Yu.; Richter, A.; Shima, T.; Yamamoto, T.; Zweidinger, M.

    2017-11-01

    The gamma strength function and level density of 1⁻ states in 96Mo have been extracted from a high-resolution study of the (p⃗,p⃗′) reaction at 295 MeV and extreme forward angles. By comparison with compound nucleus γ decay experiments, this allows a test of the generalized Brink-Axel hypothesis in the energy region of the pygmy dipole resonance. The Brink-Axel hypothesis is commonly assumed in astrophysical reaction network calculations and states that the gamma strength function in nuclei is independent of the structure of the initial and final state. The present results validate the Brink-Axel hypothesis for 96Mo and provide independent confirmation of the methods used to separate the gamma strength function and level density in γ decay experiments.

  2. Total energy and potential enstrophy conserving schemes for the shallow water equations using Hamiltonian methods - Part 1: Derivation and properties

    NASA Astrophysics Data System (ADS)

    Eldred, Christopher; Randall, David

    2017-02-01

    The shallow water equations provide a useful analogue of the fully compressible Euler equations since they have similar characteristics: conservation laws, inertia-gravity and Rossby waves, and a (quasi-) balanced state. In order to obtain realistic simulation results, it is desirable that numerical models have discrete analogues of these properties. Two prototypical examples of such schemes are the 1981 Arakawa and Lamb (AL81) C-grid total energy and potential enstrophy conserving scheme, and the 2007 Salmon (S07) Z-grid total energy and potential enstrophy conserving scheme. Unfortunately, the AL81 scheme is restricted to logically square, orthogonal grids, and the S07 scheme is restricted to uniform square grids. The current work extends the AL81 scheme to arbitrary non-orthogonal polygonal grids and the S07 scheme to arbitrary orthogonal spherical polygonal grids in a manner that allows for both total energy and potential enstrophy conservation, by combining Hamiltonian methods (work done by Salmon, Gassmann, Dubos, and others) and discrete exterior calculus (Thuburn, Cotter, Dubos, Ringler, Skamarock, Klemp, and others). Detailed results of the schemes applied to standard test cases are deferred to part 2 of this series of papers.
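
    For reference, the continuous system and the two quadratic invariants that such schemes are designed to conserve can be stated compactly. This is the standard vector-invariant form of the rotating shallow water equations over a flat bottom, not the discrete formulation derived in the paper.

    ```latex
    % Rotating shallow water equations in vector-invariant form, with the two
    % conserved quadratic quantities: total energy E and potential enstrophy Z.
    \begin{align}
      &\partial_t \mathbf{u} + q\,h\,\mathbf{k}\times\mathbf{u}
        + \nabla\!\Big(g h + \tfrac{1}{2}\lvert\mathbf{u}\rvert^{2}\Big) = 0,
      \qquad
      \partial_t h + \nabla\cdot(h\,\mathbf{u}) = 0,
      \qquad
      q = \frac{\zeta + f}{h},\\[2pt]
      &E = \int \Big(\tfrac{1}{2}\,h\,\lvert\mathbf{u}\rvert^{2}
        + \tfrac{1}{2}\,g\,h^{2}\Big)\,\mathrm{d}A,
      \qquad
      Z = \int \tfrac{1}{2}\,h\,q^{2}\,\mathrm{d}A .
    \end{align}
    ```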

  3. An Optimally Stable and Accurate Second-Order SSP Runge-Kutta IMEX Scheme for Atmospheric Applications

    NASA Astrophysics Data System (ADS)

    Rokhzadi, Arman; Mohammadian, Abdolmajid; Charron, Martin

    2018-01-01

    The objective of this paper is to develop an optimized implicit-explicit (IMEX) Runge-Kutta scheme for atmospheric applications, focusing on stability and accuracy. Following the common terminology, the proposed method is called IMEX-SSP2(2,3,2), as it has second-order accuracy and is composed of a diagonally implicit two-stage part and an explicit three-stage part. The scheme possesses the Strong Stability Preserving (SSP) property for both parts. It is applied to the nonhydrostatic compressible Boussinesq equations in two different arrangements: (i) semi-implicit and (ii) Horizontally Explicit-Vertically Implicit (HEVI) forms. The new scheme preserves the SSP property over larger regions of absolute monotonicity than the well-studied schemes in the same class. In addition, numerical tests confirm that IMEX-SSP2(2,3,2) improves the maximum stable time step as well as the accuracy and computational cost compared to other schemes in the same class. It is demonstrated that the A-stability property, together with the satisfaction of the "second-stage order" and stiffly accurate conditions, leads the proposed scheme to better performance than existing schemes for the applications examined herein.
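
    The implicit-explicit splitting that such schemes build on can be shown in its most elementary form. The sketch below uses first-order IMEX Euler on a stiff scalar test problem purely to illustrate the idea of advancing the stiff term implicitly and the non-stiff term explicitly; it is not the paper's optimized second-order IMEX-SSP2(2,3,2) tableau.

    ```python
    # First-order IMEX Euler on u' = -lam*u + sin(t): the stiff linear term is
    # treated implicitly, the non-stiff forcing explicitly. Illustrates the
    # splitting idea only; the paper's IMEX-SSP2(2,3,2) uses a specific
    # optimized second-order tableau that is not reproduced here.
    import numpy as np

    lam = 1.0e4            # stiff coefficient
    dt = 0.01              # far larger than 1/lam, yet the step remains stable
    t, u = 0.0, 1.0

    def f_explicit(t, u):
        return np.sin(t)   # non-stiff part, advanced explicitly

    for _ in range(1000):
        # u_new = u + dt*(f_explicit(t, u) - lam*u_new)  =>  solve for u_new
        u = (u + dt * f_explicit(t, u)) / (1.0 + dt * lam)
        t += dt

    print(f"t = {t:.2f}, u = {u:.6e}")   # decays toward the slow quasi-steady solution
    ```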

  4. Congruence analysis of geodetic networks - hypothesis tests versus model selection by information criteria

    NASA Astrophysics Data System (ADS)

    Lehmann, Rüdiger; Lösler, Michael

    2017-12-01

    Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the appropriate model is to use a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory, in which the Akaike information criterion (AIC) or one of its alternatives is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that both work but behave differently, sometimes even producing different results. Hypothesis tests are well established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as the AIC.
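
    The information-criterion route can be illustrated on a toy deformation problem: fit a "no deformation" model and a "deformation between epochs" model to repeated height observations and compare AIC = 2k - 2 ln L. The data and models below are synthetic with Gaussian errors; they are not the paper's levelling network or the Delft data set.

    ```python
    # Toy AIC-based model selection between a "no deformation" model and a
    # "deformation (offset between epochs)" model for a repeatedly observed
    # height. Synthetic data with Gaussian errors, illustrative only.
    import numpy as np

    rng = np.random.default_rng(42)
    sigma = 2.0  # mm
    epoch1 = 100.0 + rng.normal(0, sigma, 20)      # heights, epoch 1
    epoch2 = 103.0 + rng.normal(0, sigma, 20)      # heights, epoch 2 (3 mm uplift)
    y = np.concatenate([epoch1, epoch2])

    def gauss_loglik(residuals):
        n = residuals.size
        s2 = np.mean(residuals**2)                 # ML estimate of the variance
        return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

    # Null model: one common height (k = 2 parameters: mean + variance).
    r0 = y - y.mean()
    aic0 = 2 * 2 - 2 * gauss_loglik(r0)

    # Alternative: separate heights per epoch (k = 3: two means + variance).
    r1 = np.concatenate([epoch1 - epoch1.mean(), epoch2 - epoch2.mean()])
    aic1 = 2 * 3 - 2 * gauss_loglik(r1)

    print(f"AIC no deformation: {aic0:.1f}")
    print(f"AIC deformation:    {aic1:.1f}")
    print("selected:", "deformation" if aic1 < aic0 else "no deformation")
    ```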

  5. A Simple Qualitative Analysis Scheme for Several Environmentally Important Elements

    ERIC Educational Resources Information Center

    Lambert, Jack L.; Meloan, Clifton E.

    1977-01-01

    Describes a scheme that uses precipitation, gas evolution, complex ion formation, and flame tests to analyze for the following ions: Hg(I), Hg(II), Sb(III), Cr(III), Pb(II), Sr(II), Cu(II), Cd(II), As(III), chloride, nitrate, and sulfate. (MLH)

  6. Best Hiding Capacity Scheme for Variable Length Messages Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bajaj, Ruchika; Bedi, Punam; Pal, S. K.

    Steganography is the art of hiding information in such a way that the presence of hidden messages cannot be detected. Besides the security of the data, the quantity of data that can be hidden in a single cover medium is also very important. We present a secure data hiding scheme with high embedding capacity for messages of variable length, based on Particle Swarm Optimization. This technique finds the best pixel positions in the cover image for hiding the secret data. In the proposed scheme, k bits of the secret message are substituted into the k least significant bits of an image pixel, where k varies from 1 to 4 depending on the message length. The proposed scheme is tested and the results compared with simple LSB substitution and uniform 4-bit LSB hiding (with PSO) for the test images Nature, Baboon, Lena and Kitty. The experimental study confirms that the proposed method achieves high data hiding capacity, maintains imperceptibility, and minimizes the distortion between the cover image and the resulting stego image.
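
    The k-bit LSB substitution step itself takes only a few lines. The sketch below embeds and recovers a short bit string in a grayscale image array with a fixed k, and omits the Particle Swarm Optimization search for the best pixel positions, which is the actual contribution of the scheme; names and data are illustrative.

    ```python
    # Minimal k-bit LSB substitution (embed + extract) on a grayscale image array.
    # Pixels are used in raster order and k is fixed here; the PSO search for
    # optimal pixel positions and the length-dependent k of the paper are omitted.
    # Assumes len(message_bits) is a multiple of k.
    import numpy as np

    def embed(cover, message_bits, k=2):
        stego = cover.copy().ravel()
        mask = (1 << k) - 1
        groups = [message_bits[i:i + k] for i in range(0, len(message_bits), k)]
        for i, bits in enumerate(groups):
            val = int("".join(map(str, bits)), 2)
            stego[i] = (int(stego[i]) & ~mask) | val      # replace the k LSBs
        return stego.reshape(cover.shape)

    def extract(stego, n_bits, k=2):
        flat = stego.ravel()
        mask = (1 << k) - 1
        bits = []
        for i in range((n_bits + k - 1) // k):
            bits.extend(int(b) for b in format(int(flat[i]) & mask, f"0{k}b"))
        return bits[:n_bits]

    cover = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
    msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
    stego = embed(cover, msg, k=2)
    assert extract(stego, len(msg), k=2) == msg
    print("max pixel change:", int(np.max(np.abs(stego.astype(int) - cover.astype(int)))))
    ```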

  7. Coded excitation for infrared non-destructive testing of carbon fiber reinforced plastics.

    PubMed

    Mulaveesala, Ravibabu; Venkata Ghali, Subbarao

    2011-05-01

    This paper proposes Barker coded excitation for defect detection using infrared non-destructive testing. The capability of the proposed excitation scheme is highlighted with a recently introduced correlation-based post-processing approach and compared with the existing phase-based analysis, taking the signal-to-noise ratio into consideration. The applicability of the proposed scheme has been experimentally validated on a carbon fiber reinforced plastic specimen containing flat bottom holes located at different depths.
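
    The pulse-compression property that motivates coded excitation is easy to verify numerically: the autocorrelation of the Barker-13 sequence has a mainlobe of 13 and sidelobes of magnitude at most 1, so a matched filter concentrates the energy of the distributed excitation into a sharp peak. The snippet below is a generic matched-filter illustration, not the paper's thermal-wave processing chain.

    ```python
    # Matched filtering (cross-correlation) with the Barker-13 code: the
    # compressed output has a mainlobe of 13 and sidelobes of magnitude <= 1,
    # which is why coded excitation improves the signal-to-noise ratio of the
    # recovered response. Generic illustration, not the thermography pipeline.
    import numpy as np

    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

    # Echo buried in noise: the code delayed by 20 samples plus Gaussian noise.
    rng = np.random.default_rng(3)
    received = np.zeros(100)
    received[20:20 + barker13.size] += barker13
    received += rng.normal(0, 0.5, received.size)

    compressed = np.correlate(received, barker13, mode="valid")   # matched filter
    print("autocorrelation peak:", np.correlate(barker13, barker13, mode="valid")[0])  # 13.0
    print("estimated delay:", int(np.argmax(compressed)))          # ~20
    ```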

  8. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
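
    A toy problem conveys the simultaneous-update idea: each outer cycle performs one relaxation sweep of the "flow" solve and one gradient step on the design parameter, instead of nesting a fully converged flow solve inside every design update. The model problem, names, and step size below are illustrative assumptions, not the schemes of the paper.

    ```python
    # Toy illustration of simultaneously updating a "flow" variable and a design
    # parameter: one fixed-point iteration of the flow solve and one gradient
    # step on the design parameter per cycle, rather than an inner-outer loop.
    # The model problem and names are illustrative only.

    def flow_iteration(u, a):
        # One relaxation sweep toward the "flow solution" u*(a) = a**2 (toy model).
        return u + 0.5 * (a**2 - u)

    a, u = 0.5, 0.0
    step = 0.05
    for it in range(200):
        u = flow_iteration(u, a)                  # advance the flow solution
        dJ_da = 2.0 * (u - 4.0) * 2.0 * a         # dJ/da for J = (u - 4)^2, u ~ a**2
        a -= step * dJ_da                         # advance the design parameter
    print(f"design parameter a = {a:.4f}, flow variable u = {u:.4f}")   # a -> 2, u -> 4
    ```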

  9. A Nested Modeling Scheme for High-resolution Simulation of the Aquitard Compaction in a Regional Groundwater Extraction Field

    NASA Astrophysics Data System (ADS)

    Aichi, M.; Tokunaga, T.

    2006-12-01

    In fields that have experienced both significant drawdown/land subsidence and a subsequent recovery of the groundwater potential, the temporal change of the effective stress in the clayey layers is not simple. Conducting consolidation tests on core samples is a straightforward approach to determining the pre-consolidation stress. However, especially in urban areas, the cost of boring and the limited availability of boring sites make it difficult to carry out a sufficient number of tests. Numerical simulation that reproduces the stress history can contribute to selecting boring sites and complement the results of the laboratory tests. To trace the effective stress profile in the clayey layers by numerical simulation, the discretization within those layers must be fine. At the same time, the modeled domain should be large enough to capture the effect of regional groundwater extraction. Here, we developed a new scheme to reduce memory consumption based on a domain decomposition technique. A finite element model of coupled groundwater flow and land subsidence is used for the local model, and a finite difference groundwater flow model is used for the regional model. The local model is discretized with a fine mesh in the clayey layers to reproduce the temporal change of pore pressure, while the regional model is discretized with a relatively coarse mesh to reproduce the effect of regional groundwater extraction on the groundwater flow. We tested this scheme by comparing its results with those from a finely gridded model covering the entire calculation domain. The difference between the two models was small enough that the new scheme can be used for practical problems.

  10. Short-Term Effects of Different Loading Schemes in Fitness-Related Resistance Training.

    PubMed

    Eifler, Christoph

    2016-07-01

    Eifler, C. Short-term effects of different loading schemes in fitness-related resistance training. J Strength Cond Res 30(7): 1880-1889, 2016-The purpose of this investigation was to analyze the short-term effects of different loading schemes in fitness-related resistance training and to identify the most effective loading method for advanced recreational athletes. The investigation was designed as a longitudinal field-test study. Two hundred healthy mature subjects with at least 12 months' experience in resistance training were randomized into 4 samples of 50 subjects each. Gender distribution was homogeneous across all samples. Training effects were quantified by 10 repetition maximum (10RM) and 1 repetition maximum (1RM) testing (pre-post-test design). Over a period of 6 weeks, a standardized resistance training protocol with 3 training sessions per week was carried out. Testing and training included 8 resistance training exercises in a standardized order. The following loading schemes were randomly assigned to the samples: constant load (CL) with a constant volume of repetitions, increasing load (IL) with a decreasing volume of repetitions, decreasing load (DL) with an increasing volume of repetitions, and daily changing load and volume of repetitions (DCL). For all loading schemes, significant strength gains (p < 0.001) were noted for all resistance training exercises and both dependent variables (10RM, 1RM). In all cases, DCL produced significantly higher strength gains (p < 0.001) than CL, IL, and DL. There were no significant differences in strength gains between CL, IL, and DL. The present data indicate that resistance training following DCL is more effective for advanced recreational athletes than CL, IL, or DL. Considering that DCL is widely unknown in fitness-related resistance training, the present data indicate that there is potential for improving resistance training in commercial fitness clubs.

  11. Verlet scheme non-conservativeness for simulation of spherical particles collisional dynamics and method of its compensation

    NASA Astrophysics Data System (ADS)

    Savin, Andrei V.; Smirnov, Petr G.

    2018-05-01

    Simulation of the collisional dynamics of a large ensemble of monodisperse particles by the discrete element method is considered. The Verlet scheme is used for integration of the equations of motion. The finite-difference scheme is found to be non-conservative to a degree that depends on the time step, which is equivalent to the appearance of a purely numerical energy source during collisions. A method to compensate for this source is proposed and tested.
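
    The effect is easy to reproduce for a single binary collision: integrate a linear-spring contact with velocity Verlet and compare the kinetic energy of relative motion before and after the contact for different time steps. The parameters and contact law below are illustrative assumptions, not the authors' discrete element code or their compensation method.

    ```python
    # Velocity Verlet integration of a head-on collision between two equal spheres
    # interacting through a linear repulsive spring while overlapping. The kinetic
    # energy after the contact differs from the pre-contact value, and the
    # discrepancy grows with the time step: the purely numerical energy "source"
    # discussed in the paper. Parameters are illustrative only.
    import numpy as np

    def collide(dt, k=1.0e5, m=1.0, v0=1.0):
        # Relative coordinate x (gap between the surfaces): contact when x < 0.
        x, v = 0.05, -v0                       # start just before contact
        mu = m / 2.0                           # reduced mass of the pair
        force = lambda x: -k * x if x < 0.0 else 0.0
        a = force(x) / mu
        for _ in range(int(1.0 / dt)):
            x += v * dt + 0.5 * a * dt**2      # velocity Verlet, position update
            a_new = force(x) / mu
            v += 0.5 * (a + a_new) * dt        # velocity update, averaged acceleration
            a = a_new
        return 0.5 * mu * v**2                 # kinetic energy after the collision

    e0 = 0.5 * 0.5 * 1.0**2                    # pre-collision kinetic energy (relative motion)
    for dt in (1e-5, 1e-4, 5e-4):
        print(f"dt = {dt:.0e}  relative energy error = {collide(dt) / e0 - 1.0:+.2e}")
    ```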

  12. The Impact of Microphysics on Intensity and Structure of Hurricanes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Shi, Jainn; Lang, Steve; Peters-Lidard, Christa

    2006-01-01

    During the past decade, both research and operational numerical weather prediction models, e.g., the Weather Research and Forecasting (WRF) model, have started using more complex microphysical schemes originally developed for high-resolution cloud resolving models (CRMs) with horizontal resolutions of 1-2 km or less. WRF is a next-generation mesoscale forecast model and assimilation system that incorporates a modern software framework, advanced dynamics, numerics and data assimilation techniques, a multiple moveable nesting capability, and improved physics packages. The WRF model can be used for a wide range of applications, from idealized research to operational forecasting, with an emphasis on horizontal grid sizes in the range of 1-10 km. The current WRF includes several different microphysics options such as the Lin et al. (1983), WSM 6-class and Thompson microphysics schemes. We have recently implemented three sophisticated cloud microphysics schemes into WRF. These cloud microphysics schemes have been extensively tested and applied to different mesoscale systems in different geographical locations, and their performance has been compared with that of other WRF microphysics options. We are performing sensitivity tests using WRF to examine the impact of six different cloud microphysical schemes on hurricane track, intensity and rainfall forecasts. We are also performing inline tracer calculations to understand the physical processes (i.e., the boundary layer and each quadrant within the boundary layer) related to the development and structure of hurricanes.

  13. A two-step hierarchical hypothesis set testing framework, with applications to gene expression data on ordered categories

    PubMed Central

    2014-01-01

    Background In complex large-scale experiments, in addition to simultaneously considering a large number of features, multiple hypotheses are often being tested for each feature. This leads to a problem of multi-dimensional multiple testing. For example, in gene expression studies over ordered categories (such as time-course or dose-response experiments), interest is often in testing differential expression across several categories for each gene. In this paper, we consider a framework for testing multiple sets of hypotheses, which can be applied to a wide range of problems. Results We adopt the concept of the overall false discovery rate (OFDR) for controlling false discoveries on the hypothesis set level. Based on an existing procedure for identifying differentially expressed gene sets, we discuss a general two-step hierarchical hypothesis set testing procedure, which controls the overall false discovery rate under independence across hypothesis sets. In addition, we discuss the concept of the mixed-directional false discovery rate (mdFDR), and extend the general procedure to enable directional decisions for two-sided alternatives. We applied the framework to the case of microarray time-course/dose-response experiments, and proposed three procedures for testing differential expression and making multiple directional decisions for each gene. Simulation studies confirm the control of the OFDR and mdFDR by the proposed procedures under independence and positive correlations across genes. Simulation results also show that two of our new procedures achieve higher power than previous methods. Finally, the proposed methodology is applied to a microarray dose-response study, to identify 17 β-estradiol sensitive genes in breast cancer cells that are induced at low concentrations. Conclusions The framework we discuss provides a platform for multiple testing procedures covering situations involving two (or potentially more) sources of multiplicity. The framework is easy to use and adaptable to various practical settings that frequently occur in large-scale experiments. Procedures generated from the framework are shown to maintain control of the OFDR and mdFDR, quantities that are especially relevant in the case of multiple hypothesis set testing. The procedures work well in both simulations and real datasets, and are shown to have better power than existing methods. PMID:24731138
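
    The two-step structure of such procedures can be sketched generically: screen the hypothesis sets (e.g., genes) with a Benjamini-Hochberg step on set-level p-values, then test the within-set hypotheses only for the selected sets at a level scaled by the fraction of sets selected. The code below is a schematic illustration of this idea under independence, with simulated p-values; it is not the exact procedures proposed in the paper.

    ```python
    # Schematic two-step hierarchical testing: step 1 screens hypothesis sets
    # with Benjamini-Hochberg on set-level p-values; step 2 tests the individual
    # hypotheses only within the selected sets at a level scaled by the
    # selection fraction. Generic sketch, not the paper's procedures.
    import numpy as np

    def benjamini_hochberg(pvals, q):
        """Boolean rejection mask for the Benjamini-Hochberg procedure at level q."""
        p = np.asarray(pvals)
        order = np.argsort(p)
        thresh = q * np.arange(1, p.size + 1) / p.size
        below = p[order] <= thresh
        k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
        reject = np.zeros(p.size, dtype=bool)
        reject[order[:k]] = True
        return reject

    rng = np.random.default_rng(7)
    n_sets, n_per_set, q = 200, 3, 0.05

    # Step 1: screen hypothesis sets (e.g. genes) using set-level p-values.
    set_pvals = rng.uniform(size=n_sets)
    set_pvals[:20] = rng.uniform(0.0, 0.001, 20)      # 20 truly "active" sets
    selected = benjamini_hochberg(set_pvals, q)
    frac_selected = selected.sum() / n_sets

    # Step 2: within each selected set, test components at a reduced level.
    within_level = q * frac_selected
    n_within_rejections = 0
    for s in np.nonzero(selected)[0]:
        within_p = rng.uniform(0.0, 0.01, n_per_set)   # placeholder component p-values
        n_within_rejections += int(np.sum(within_p <= within_level / n_per_set))

    print(f"sets selected in step 1: {int(selected.sum())} of {n_sets}")
    print(f"within-set rejections in step 2: {n_within_rejections}")
    ```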

  14. Charge Management in LISA Pathfinder: The Continuous Discharging Experiment

    NASA Astrophysics Data System (ADS)

    Ewing, Becca Elizabeth

    2018-01-01

    Test mass charging is a significant source of excess force and force noise in LISA Pathfinder (LPF). The planned design scheme for mitigation of charge induced force noise in LISA is a continuous discharge by UV light illumination. We report on analysis of a charge management experiment on-board LPF conducted during December 2016. We discuss the measurement of test mass charging noise with and without continuous UV illumination, in addition to the dynamic response in the continuous discharge scheme. Results of the continuous discharge system will be discussed for their application to operating LISA with lower test mass charge.

  15. Alcohol dependence and opiate dependence: lack of relationship in mice.

    PubMed

    Goldstein, A; Judson, B A

    1971-04-16

    According to a recently proposed hypothesis, physical dependence upon alcohol is due to the formation of an endogenous opiate. We tested the hypothesis by determining whether or not ethanol-dependent mice would show typical opiate-dependent behavior (withdrawal jumping syndrome) when challenged with the opiate antagonist naloxone. Our results do not support the hypothesis.

  16. On the Flexibility of Social Source Memory: A Test of the Emotional Incongruity Hypothesis

    ERIC Educational Resources Information Center

    Bell, Raoul; Buchner, Axel; Kroneisen, Meike; Giang, Trang

    2012-01-01

    A popular hypothesis in evolutionary psychology posits that reciprocal altruism is supported by a cognitive module that helps cooperative individuals to detect and remember cheaters. Consistent with this hypothesis, a source memory advantage for faces of cheaters (better memory for the cheating context in which these faces were encountered) was…

  17. The Interviewer as Hypothesis Tester: The Effects of Impressions of an Applicant on Interviewer Questioning Strategy.

    ERIC Educational Resources Information Center

    Sackett, Paul R.

    1982-01-01

    Recent findings suggest individuals seek evidence to confirm initial hypotheses about other people, and that seeking confirmatory evidence makes it likely that a hypothesis will be confirmed. Examined the generalizability of these findings to the employment interview. Consistent use of confirmatory hypothesis testing strategies was not found.…

  18. Generation of Interpersonal Stressful Events: The Role of Poor Social Skills and Early Physical Maturation in Young Adolescents--The TRAILS Study

    ERIC Educational Resources Information Center

    Bakker, Martin P.; Ormel, Johan; Lindenberg, Siegwart; Verhulst, Frank C.; Oldehinkel, Albertine J.

    2011-01-01

    This study developed two specifications of the social skills deficit stress generation hypothesis: the "gender-incongruence" hypothesis to predict peer victimization and the "need for autonomy" hypothesis to predict conflict with authorities. These hypotheses were tested in a prospective large population cohort of 2,064 Dutch…

  19. The EpiOcular™ Eye Irritation Test is the Method of Choice for the In Vitro Eye Irritation Testing of Agrochemical Formulations: Correlation Analysis of EpiOcular Eye Irritation Test and BCOP Test Data According to the UN GHS, US EPA and Brazil ANVISA Classification Schemes.

    PubMed

    Kolle, Susanne N; Rey Moreno, Maria Cecilia; Mayer, Winfried; van Cott, Andrew; van Ravenzwaay, Bennard; Landsiedel, Robert

    2015-07-01

    The Bovine Corneal Opacity and Permeability (BCOP) test is commonly used for the identification of severe ocular irritants (GHS Category 1), but it is not recommended for the identification of ocular irritants (GHS Category 2). The incorporation of human reconstructed tissue model-based tests into a tiered test strategy to identify ocular non-irritants and replace the Draize rabbit eye irritation test has been suggested (OECD TG 405). The value of the EpiOcular™ Eye Irritation Test (EIT) for the prediction of ocular non-irritants (GHS No Category) has been demonstrated, and an OECD Test Guideline (TG) was drafted in 2014. The purpose of this study was to evaluate whether the BCOP test, in conjunction with corneal histopathology (as suggested for the evaluation of the depth of injury) and/or the EpiOcular-EIT, could be used to predict the eye irritation potential of agrochemical formulations according to the UN GHS, US EPA and Brazil ANVISA classification schemes. We have assessed opacity, permeability and histopathology in the BCOP assay, and relative tissue viability in the EpiOcular-EIT, for 97 agrochemical formulations with available in vivo eye irritation data. By using the OECD TG 437 protocol for liquids, the BCOP test did not result in a sufficient number of correct predictions of severe ocular irritants for any of the three classification schemes. This lack of sensitivity could be mitigated somewhat by the inclusion of corneal histopathology, but the relative viability in the EpiOcular-EIT clearly outperformed the BCOP test for all three classification schemes. The predictive capacity of the EpiOcular-EIT for ocular non-irritants (UN GHS No Category) for the 97 agrochemical formulations tested (91% sensitivity, 72% specificity and 82% accuracy for UN GHS classification) was comparable to that obtained in the formal validation exercise underlying the OECD draft TG. We therefore conclude that the EpiOcular-EIT is currently the best in vitro method for the prediction of the eye irritation potential of liquid agrochemical formulations. © 2015 FRAME.
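
    The reported performance figures are the usual confusion-matrix summaries; the snippet below shows how sensitivity, specificity, and accuracy follow from counts of correct and incorrect classifications. The counts used are hypothetical round numbers, not the study's classification table.

    ```python
    # Sensitivity, specificity and accuracy from a 2x2 confusion matrix for a
    # binary "irritant vs. non-irritant" call. The counts are illustrative only.
    def performance(tp, fn, tn, fp):
        sensitivity = tp / (tp + fn)          # irritants correctly identified
        specificity = tn / (tn + fp)          # non-irritants correctly identified
        accuracy = (tp + tn) / (tp + fn + tn + fp)
        return sensitivity, specificity, accuracy

    sens, spec, acc = performance(tp=45, fn=5, tn=36, fp=14)   # hypothetical counts
    print(f"sensitivity {sens:.0%}, specificity {spec:.0%}, accuracy {acc:.0%}")
    ```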

  20. An Energy Decaying Scheme for Nonlinear Dynamics of Shells

    NASA Technical Reports Server (NTRS)

    Bottasso, Carlo L.; Bauchau, Olivier A.; Choi, Jou-Young; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    A novel integration scheme for the nonlinear dynamics of geometrically exact shells is developed based on the inextensible director assumption. The new algorithm is designed so as to imply the strict decay of the system's total mechanical energy at each time step, and consequently unconditional stability is achieved in the nonlinear regime. Furthermore, the scheme features tunable high-frequency numerical damping and is therefore stiffly accurate. The method is tested for a finite element spatial formulation of shells based on mixed interpolations of strain tensorial components and on a two-parameter representation of director rotations. The robustness of the scheme is illustrated with the help of numerical examples.
