Sample records for yield statistically reliable

  1. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.

  2. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  3. Making statistical inferences about software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1988-01-01

    Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.
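
    A minimal sketch of the failure model described above, with made-up fault detection rates: each fault's detection time is an independent exponential draw, the observed failure times are their order statistics, and the residual failure rate after each fix is the sum of the rates of the faults still present.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical per-fault detection rates (failures per unit of random testing time).
    rates = np.array([2.0, 1.0, 0.5, 0.2, 0.05])

    # Detection time of each fault is exponential with its own rate;
    # the observed failure times are the order statistics of these draws.
    detection_times = rng.exponential(1.0 / rates)
    failure_times = np.sort(detection_times)

    # After the k-th failure is observed and the fault fixed, the program's
    # residual failure rate is the sum of the rates of the remaining faults.
    order = np.argsort(detection_times)
    for k, t in enumerate(failure_times, start=1):
        remaining = np.delete(rates, order[:k])
        print(f"failure {k} at t={t:.2f}, residual rate={remaining.sum():.2f}")
    ```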

  4. Statistical emulators of maize, rice, soybean and wheat yields from global gridded crop models

    DOE PAGES

    Blanc, Élodie

    2017-01-26

    This study provides statistical emulators of crop yields based on global gridded crop model simulations from the Inter-Sectoral Impact Model Intercomparison Project Fast Track project. The ensemble of simulations is used to build a panel of annual crop yields from five crop models and corresponding monthly summer weather variables for over a century at the grid cell level globally. This dataset is then used to estimate, for each crop and gridded crop model, the statistical relationship between yields, temperature, precipitation and carbon dioxide. This study considers a new functional form to better capture the non-linear response of yields to weather, especially for extreme temperature and precipitation events, and now accounts for the effect of soil type. In- and out-of-sample validations show that the statistical emulators are able to replicate spatial patterns of crop yield levels and changes over time projected by crop models reasonably well, although the accuracy of the emulators varies by model and by region. This study therefore provides a reliable and accessible alternative to global gridded crop yield models. By emulating crop yields for several models using parsimonious equations, the tools provide a computationally efficient method to account for uncertainty in climate change impact assessments.

  5. Statistical emulators of maize, rice, soybean and wheat yields from global gridded crop models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanc, Élodie

    This study provides statistical emulators of crop yields based on global gridded crop model simulations from the Inter-Sectoral Impact Model Intercomparison Project Fast Track project. The ensemble of simulations is used to build a panel of annual crop yields from five crop models and corresponding monthly summer weather variables for over a century at the grid cell level globally. This dataset is then used to estimate, for each crop and gridded crop model, the statistical relationship between yields, temperature, precipitation and carbon dioxide. This study considers a new functional form to better capture the non-linear response of yields to weather, especially for extreme temperature and precipitation events, and now accounts for the effect of soil type. In- and out-of-sample validations show that the statistical emulators are able to replicate spatial patterns of crop yield levels and changes over time projected by crop models reasonably well, although the accuracy of the emulators varies by model and by region. This study therefore provides a reliable and accessible alternative to global gridded crop yield models. By emulating crop yields for several models using parsimonious equations, the tools provide a computationally efficient method to account for uncertainty in climate change impact assessments.
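
    The two records above describe grid-cell emulators fitted to crop-model output. A toy sketch of that idea, with synthetic data and an assumed quadratic response surface standing in for the paper's functional form:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500  # synthetic grid-cell-years

    temp = rng.normal(22, 3, n)       # growing-season mean temperature (deg C)
    prec = rng.gamma(4, 50, n)        # growing-season precipitation (mm)
    co2 = rng.uniform(350, 700, n)    # atmospheric CO2 (ppm)

    # Pretend these yields came from a process-based crop model.
    yield_t = (8 - 0.05 * (temp - 20) ** 2 + 0.004 * prec - 5e-6 * prec ** 2
               + 0.002 * (co2 - 360) + rng.normal(0, 0.3, n))

    # Emulator: quadratic response surface fitted by ordinary least squares.
    X = np.column_stack([np.ones(n), temp, temp ** 2, prec, prec ** 2, co2])
    beta, *_ = np.linalg.lstsq(X, yield_t, rcond=None)

    pred = X @ beta
    r2 = 1 - np.sum((yield_t - pred) ** 2) / np.sum((yield_t - yield_t.mean()) ** 2)
    print("in-sample R^2 of the emulator:", round(r2, 3))
    ```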

  6. Interrater reliability: the kappa statistic.

    PubMed

    McHugh, Mary L

    2012-01-01

    The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued use of percent agreement due to its inability to account for chance agreement. He introduced Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, the kappa can range from -1 to +1. While the kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen's suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
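
    For reference, the two statistics contrasted above can be computed from a 2x2 table of two raters' judgments; the table below is hypothetical. Percent agreement ignores chance agreement, while Cohen's kappa corrects for it.

    ```python
    import numpy as np

    # Hypothetical confusion matrix: rows = rater A, columns = rater B.
    table = np.array([[40, 5],
                      [10, 45]], dtype=float)
    n = table.sum()

    percent_agreement = np.trace(table) / n

    # Chance agreement from the marginal proportions of each rater.
    p_chance = (table.sum(axis=1) / n) @ (table.sum(axis=0) / n)
    kappa = (percent_agreement - p_chance) / (1 - p_chance)

    print(f"percent agreement = {percent_agreement:.2f}, kappa = {kappa:.2f}")
    ```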

  7. Strategy for continuous improvement in IC manufacturability, yield, and reliability

    NASA Astrophysics Data System (ADS)

    Dreier, Dean J.; Berry, Mark; Schani, Phil; Phillips, Michael; Steinberg, Joe; DePinto, Gary

    1993-01-01

    Continual improvements in yield, reliability and manufacturability are the measure of a fab and ultimately result in Total Customer Satisfaction. A new organizational and technical methodology for continuous defect reduction has been established in a formal feedback loop, which relies on yield and reliability, failed bit map analysis, analytical tools, inline monitoring, cross-functional teams and a defect engineering group. The strategy requires the fastest possible detection, identification and implementation of corrective actions. Feedback cycle time is minimized at all points to improve yield and reliability and reduce costs, essential for competitiveness in the memory business. The payoff was a 9.4X reduction in defectivity and a 6.2X improvement in reliability of 256 K fast SRAMs over 20 months.

  8. Statistical methodology: II. Reliability and validity assessment in study design, Part B.

    PubMed

    Karras, D J

    1997-02-01

    Validity measures the correspondence between a test and other purported measures of the same or similar qualities. When a reference standard exists, a criterion-based validity coefficient can be calculated. If no such standard is available, the concepts of content and construct validity may be used, but quantitative analysis may not be possible. The Pearson and Spearman tests of correlation are often used to assess the correspondence between tests, but do not account for measurement biases and may yield misleading results. Techniques that measure intertest differences may be more meaningful in validity assessment, and the kappa statistic is useful for analyzing categorical variables. Questionnaires often can be designed to allow quantitative assessment of reliability and validity, although this may be difficult. Inclusion of homogeneous questions is necessary to assess reliability. Analysis is enhanced by using Likert scales or similar techniques that yield ordinal data. Validity assessment of questionnaires requires careful definition of the scope of the test and comparison with previously validated tools.

  9. Reliability of reservoir firm yield determined from the historical drought of record

    USGS Publications Warehouse

    Archfield, S.A.; Vogel, R.M.

    2005-01-01

    The firm yield of a reservoir is typically defined as the maximum yield that could have been delivered without failure during the historical drought of record. In the future, reservoirs will experience droughts that are either more or less severe than the historical drought of record. The question addressed here is what the reliability of such systems will be when operated at the firm yield. To address this question, we examine the reliability of 25 hypothetical reservoirs sited across five locations in the central and western United States. These locations provided a continuous 756-month streamflow record spanning the same time interval. The firm yield of each reservoir was estimated from the historical drought of record at each location. To determine the steady-state monthly reliability of each firm-yield estimate, 12,000-month synthetic records were generated using the moving-blocks bootstrap method. Bootstrapping was repeated 100 times for each reservoir to obtain an average steady-state monthly reliability R, defined as the number of months the reservoir did not fail divided by the total number of months. Values of R were greater than 0.99 for 60 percent of the study reservoirs; the other 40 percent ranged from 0.95 to 0.98. Estimates of R were highly correlated with both the level of development (ratio of firm yield to average streamflow) and average lag-1 monthly autocorrelation. Together these two predictors explained 92 percent of the variability in R, with the level of development alone explaining 85 percent of the variability. Copyright ASCE 2005.
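
    A simplified sketch of the procedure described above, under several assumptions: a single reservoir with a basic monthly storage balance, a firm yield and capacity taken as given, and a moving-blocks bootstrap used to build 12,000-month synthetic inflow records from which the steady-state monthly reliability R (non-failure months divided by total months) is averaged over replicates.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_reliability(inflow, yield_demand, capacity):
        """Fraction of months the reservoir meets the demand (simple mass balance)."""
        storage, failures = capacity, 0
        for q in inflow:
            storage = min(storage + q - yield_demand, capacity)
            if storage < 0:
                failures += 1
                storage = 0.0
        return 1 - failures / len(inflow)

    # Hypothetical 756-month historical inflow record (units arbitrary).
    historical = rng.gamma(shape=2.0, scale=50.0, size=756)

    firm_yield = 80.0     # assumed firm yield estimated from the drought of record
    capacity = 2000.0     # assumed storage capacity
    block = 24            # moving-block length in months

    # Moving-blocks bootstrap: resample overlapping blocks to preserve autocorrelation.
    reps = []
    for _ in range(100):
        starts = rng.integers(0, len(historical) - block, size=12000 // block)
        synthetic = np.concatenate([historical[s:s + block] for s in starts])
        reps.append(simulate_reliability(synthetic, firm_yield, capacity))

    print("mean steady-state monthly reliability R:", round(np.mean(reps), 3))
    ```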

  10. Statistical Evaluations of Variations in Dairy Cows’ Milk Yields as a Precursor of Earthquakes

    PubMed Central

    Yamauchi, Hiroyuki; Hayakawa, Masashi; Asano, Tomokazu; Ohtani, Nobuyo; Ohta, Mitsuaki

    2017-01-01

    Simple Summary: There are many reports of abnormal changes occurring in various natural systems prior to earthquakes. Unusual animal behavior is one of these abnormalities; however, there are few objective indicators and to date, reliability has remained uncertain. We found that milk yields of dairy cows decreased prior to an earthquake in our previous case study. In this study, we examined the reliability of decreases in milk yields as a precursor for earthquakes using long-term observation data. In the results, milk yields decreased approximately three weeks before earthquakes. We have come to the conclusion that dairy cow milk yields have applicability as an objectively observable unusual animal behavior prior to earthquakes, and dairy cows respond to some physical or chemical precursors of earthquakes. Abstract: Previous studies have provided quantitative data regarding unusual animal behavior prior to earthquakes; however, few studies include long-term, observational data. Our previous study revealed that the milk yields of dairy cows decreased prior to an extremely large earthquake. To clarify whether the milk yields decrease prior to earthquakes, we examined the relationship between earthquakes of various magnitudes and daily milk yields. The observation period was one year. In the results, cross-correlation analyses revealed a significant negative correlation between earthquake occurrence and milk yields approximately three weeks beforehand. Approximately a week and a half beforehand, a positive correlation was revealed, and the correlation gradually receded to zero as the day of the earthquake approached. Future studies that use data from a longer observation period are needed because this study only considered ten earthquakes and therefore does not have strong statistical power. Additionally, we compared the milk yields with the subionospheric very low frequency/low frequency (VLF/LF) propagation data indicating ionospheric perturbations. The results showed

  11. Comparison of statistical models for analyzing wheat yield time series.

    PubMed

    Michel, Lucie; Makowski, David

    2013-01-01

    The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha⁻¹ year⁻¹ in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale.

  12. Comparison of Statistical Models for Analyzing Wheat Yield Time Series

    PubMed Central

    Michel, Lucie; Makowski, David

    2013-01-01

    The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha⁻¹ year⁻¹ in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale. PMID:24205280
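
    A minimal sketch of one of the model classes compared in the two records above: Holt's linear-trend exponential smoothing (the non-seasonal Holt-Winters variant suited to annual yields) fitted with statsmodels to a synthetic national wheat-yield series; the data and settings are illustrative only.

    ```python
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(3)
    years = np.arange(1961, 2013)
    # Synthetic yield series (t/ha): upward trend that flattens, plus noise.
    yields = 2.0 + 0.06 * np.minimum(years - 1961, 35) + rng.normal(0, 0.25, len(years))

    # Holt's linear trend model (additive trend, no seasonal component).
    model = ExponentialSmoothing(yields, trend="add", seasonal=None).fit()
    forecast = model.forecast(5)
    print("forecast yields for the next 5 years:", np.round(forecast, 2))
    ```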

  13. Statistical Tests of Reliability of NDE

    NASA Technical Reports Server (NTRS)

    Baaklini, George Y.; Klima, Stanley J.; Roth, Don J.; Kiser, James D.

    1987-01-01

    Capabilities of advanced material-testing techniques analyzed. Collection of four reports illustrates statistical method for characterizing flaw-detecting capabilities of sophisticated nondestructive evaluation (NDE). Method used to determine reliability of several state-of-the-art NDE techniques for detecting failure-causing flaws in advanced ceramic materials considered for use in automobiles, airplanes, and space vehicles.

  14. Negative impacts of climate change on cereal yields: statistical evidence from France

    NASA Astrophysics Data System (ADS)

    Gammans, Matthew; Mérel, Pierre; Ortiz-Bobea, Ariel

    2017-05-01

    In several world regions, climate change is predicted to negatively affect crop productivity. The recent statistical yield literature emphasizes the importance of flexibly accounting for the distribution of growing-season temperature to better represent the effects of warming on crop yields. We estimate a flexible statistical yield model using a long panel from France to investigate the impacts of temperature and precipitation changes on wheat and barley yields. Winter varieties appear sensitive to extreme cold after planting. All yields respond negatively to an increase in spring-summer temperatures and are a decreasing function of precipitation around historical precipitation levels. Crop yields are predicted to be negatively affected by climate change under a wide range of climate models and emissions scenarios. Under warming scenario RCP8.5 and holding growing areas and technology constant, our model ensemble predicts a 21.0% decline in winter wheat yield, a 17.3% decline in winter barley yield, and a 33.6% decline in spring barley yield by the end of the century. Uncertainty from climate projections dominates uncertainty from the statistical model. Finally, our model predicts that continuing technology trends would counterbalance most of the effects of climate change.

  15. Statistics-related and reliability-physics-related failure processes in electronics devices and products

    NASA Astrophysics Data System (ADS)

    Suhir, E.

    2014-05-01

    The well-known and widely used experimental reliability "passport" of a mass-manufactured electronic or photonic product — the bathtub curve — reflects the combined contribution of the statistics-related and reliability-physics (physics-of-failure)-related processes. As time progresses, the first process results in a decreasing failure rate, while the second process, associated with material aging and degradation, leads to an increased failure rate. An attempt has been made in this analysis to assess the level of the reliability-physics-related aging process from the available bathtub curve (diagram). It is assumed that the products of interest underwent burn-in testing and therefore the obtained bathtub curve does not contain the infant mortality portion. It has also been assumed that the two random processes in question are statistically independent, and that the failure rate of the physical process can be obtained by deducting the theoretically assessed statistical failure rate from the bathtub curve ordinates. In the numerical example carried out, the Rayleigh distribution was used for the statistical failure rate, for the sake of a relatively simple illustration. The developed methodology can be used in reliability-physics evaluations, when there is a need to better understand the roles of the statistics-related and reliability-physics-related irreversible random processes in reliability evaluations. Future work should include investigations of how powerful and flexible methods and approaches of statistical mechanics can be effectively employed, in addition to reliability-physics techniques, to model the operational reliability of electronic and photonic products.
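
    A small sketch of the subtraction the abstract describes: an assumed statistics-related failure rate (here a simple exponential decay standing in for the paper's Rayleigh-based example) is deducted from hypothetical bathtub-curve ordinates to leave the physics-of-failure component.

    ```python
    import numpy as np

    # Hypothetical bathtub-curve ordinates (failures per 1000 device-hours) after burn-in.
    t = np.linspace(0, 10, 11)                      # years in service
    bathtub = 0.8 * np.exp(-t / 2.0) + 0.05 * t + 0.3

    # Assumed statistics-related failure rate (decreasing with time); the paper's
    # numerical example used a Rayleigh law, which is not reproduced here.
    lam_stat = 0.8 * np.exp(-t / 2.0)

    # Physics-of-failure (aging/degradation) component obtained by subtraction.
    lam_physics = bathtub - lam_stat
    for ti, lp in zip(t, lam_physics):
        print(f"t = {ti:4.1f} yr   physics-related rate = {lp:.3f}")
    ```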

  16. Notes on numerical reliability of several statistical analysis programs

    USGS Publications Warehouse

    Landwehr, J.M.; Tasker, Gary D.

    1999-01-01

    This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.

  17. Statistical Analysis of Large Simulated Yield Datasets for Studying Climate Effects

    NASA Technical Reports Server (NTRS)

    Makowski, David; Asseng, Senthold; Ewert, Frank; Bassu, Simona; Durand, Jean-Louis; Martre, Pierre; Adam, Myriam; Aggarwal, Pramod K.; Angulo, Carlos; Baron, Christian

    2015-01-01

    Many studies have been carried out during the last decade to study the effect of climate change on crop yields and other key crop characteristics. In these studies, one or several crop models were used to simulate crop growth and development for different climate scenarios that correspond to different projections of atmospheric CO2 concentration, temperature, and rainfall changes (Semenov et al., 1996; Tubiello and Ewert, 2002; White et al., 2011). The Agricultural Model Intercomparison and Improvement Project (AgMIP; Rosenzweig et al., 2013) builds on these studies with the goal of using an ensemble of multiple crop models in order to assess effects of climate change scenarios for several crops in contrasting environments. These studies generate large datasets, including thousands of simulated crop yield data. They include series of yield values obtained by combining several crop models with different climate scenarios that are defined by several climatic variables (temperature, CO2, rainfall, etc.). Such datasets potentially provide useful information on the possible effects of different climate change scenarios on crop yields. However, it is sometimes difficult to analyze these datasets and to summarize them in a useful way due to their structural complexity; simulated yield data can differ among contrasting climate scenarios, sites, and crop models. Another issue is that it is not straightforward to extrapolate the results obtained for the scenarios to alternative climate change scenarios not initially included in the simulation protocols. Additional dynamic crop model simulations for new climate change scenarios are an option but this approach is costly, especially when a large number of crop models are used to generate the simulated data, as in AgMIP. Statistical models have been used to analyze responses of measured yield data to climate variables in past studies (Lobell et al., 2011), but the use of a statistical model to analyze yields simulated by complex

  18. Impact of Rating Scale Categories on Reliability and Fit Statistics of the Malay Spiritual Well-Being Scale using Rasch Analysis.

    PubMed

    Daher, Aqil Mohammad; Ahmad, Syed Hassan; Winn, Than; Selamat, Mohd Ikhsan

    2015-01-01

    Few studies have employed the item response theory in examining reliability. We conducted this study to examine the effect of Rating Scale Categories (RSCs) on the reliability and fit statistics of the Malay Spiritual Well-Being Scale, employing the Rasch model. The Malay Spiritual Well-Being Scale (SWBS), with the original six RSCs and with newly structured three- and four-category RSCs, was distributed randomly among three different samples of 50 participants each. The mean age of respondents in the three samples ranged between 36 and 39 years old. The majority was female in all samples, and Islam was the most prevalent religion among the respondents. The predominating race was Malay, followed by Chinese and Indian. The original six RSCs indicated better targeting of 0.99 and the smallest model error of 0.24. The Infit Mnsq (mean square) and Zstd (Z standard) of the six RSCs were 1.1 and -0.1, respectively. The six RSCs achieved the highest person and item reliabilities of 0.86 and 0.85, respectively. These reliabilities yielded the highest person (2.46) and item (2.38) separation indices compared to the other RSCs. The person and item reliability and, to a lesser extent, the fit statistics, were better with the six RSCs compared to the four and three RSCs.
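
    The separation indices quoted above can be reproduced approximately from the reported reliabilities using the standard Rasch relation G = sqrt(R / (1 - R)); the short check below assumes only that relation.

    ```python
    import math

    def separation_index(reliability):
        """Rasch separation index G from a reliability coefficient R."""
        return math.sqrt(reliability / (1 - reliability))

    print(round(separation_index(0.86), 2))  # person separation; abstract reports 2.46
    print(round(separation_index(0.85), 2))  # item separation; abstract reports 2.38
    ```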

  19. Statistics-based model for prediction of chemical biosynthesis yield from Saccharomyces cerevisiae

    PubMed Central

    2011-01-01

    Background: The robustness of Saccharomyces cerevisiae in facilitating industrial-scale production of ethanol extends its utilization as a platform to synthesize other metabolites. Metabolic engineering strategies, typically via pathway overexpression and deletion, continue to play a key role for optimizing the conversion efficiency of substrates into the desired products. However, chemical production titer or yield remains difficult to predict based on reaction stoichiometry and mass balance. We sampled a large space of data of chemical production from S. cerevisiae, and developed a statistics-based model to calculate production yield using input variables that represent the number of enzymatic steps in the key biosynthetic pathway of interest, metabolic modifications, cultivation modes, nutrition and oxygen availability. Results: Based on the production data of about 40 chemicals produced from S. cerevisiae, metabolic engineering methods, nutrient supplementation, and fermentation conditions described therein, we generated mathematical models with numerical and categorical variables to predict production yield. Statistically, the models showed that: 1. Chemical production from central metabolic precursors decreased exponentially with increasing number of enzymatic steps for biosynthesis (>30% loss of yield per enzymatic step, P-value = 0); 2. Categorical variables of gene overexpression and knockout improved product yield by 2- to 4-fold (P-value < 0.1); 3. Addition of a notable amount of intermediate precursors or nutrients improved product yield by over 5-fold (P-value < 0.05); 4. Performing the cultivation in a well-controlled bioreactor enhanced the yield of product by 3-fold (P-value < 0.05); 5. Contribution of oxygen to product yield was not statistically significant. Yield calculations for various chemicals using the linear model were in fairly good agreement with the experimental values. The model generally underestimated the ethanol production as
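
    A toy sketch of the kind of regression the abstract describes: log yield modeled as a linear function of the number of enzymatic steps plus 0/1 indicators for engineering and cultivation choices. The data are synthetic; the point is only that a per-step coefficient near log(0.7) on the log scale corresponds to the reported ~30% yield loss per additional enzymatic step.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 40  # roughly the number of chemicals in the study

    steps = rng.integers(1, 10, n)            # enzymatic steps from a central precursor
    overexpr = rng.integers(0, 2, n)          # gene overexpression (0/1)
    bioreactor = rng.integers(0, 2, n)        # well-controlled bioreactor (0/1)

    # Synthetic "true" process: ~30% yield loss per step, boosts from the two factors.
    log_yield = (np.log(0.7) * steps + np.log(3.0) * overexpr
                 + np.log(3.0) * bioreactor + rng.normal(0, 0.3, n))

    X = np.column_stack([np.ones(n), steps, overexpr, bioreactor])
    beta, *_ = np.linalg.lstsq(X, log_yield, rcond=None)

    print("estimated yield retained per extra step:", round(np.exp(beta[1]), 2))
    print("estimated fold-change from bioreactor:", round(np.exp(beta[3]), 1))
    ```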

  20. What's holding us back? Raising the alfalfa yield bar

    USDA-ARS?s Scientific Manuscript database

    Measuring yield of commodity crops is easy – weight and moisture content are determined on delivery. Consequently, reports of production or yield for grain crops can be made reliably to the agencies that track crop production, such as the USDA-National Agricultural Statistics Service (NASS). The s...

  1. Local yield stress statistics in model amorphous solids

    NASA Astrophysics Data System (ADS)

    Barbot, Armand; Lerbinger, Matthias; Hernandez-Garcia, Anier; García-García, Reinaldo; Falk, Michael L.; Vandembroucq, Damien; Patinet, Sylvain

    2018-03-01

    We develop and extend a method presented by Patinet, Vandembroucq, and Falk [Phys. Rev. Lett. 117, 045501 (2016), 10.1103/PhysRevLett.117.045501] to compute the local yield stresses at the atomic scale in model two-dimensional Lennard-Jones glasses produced via differing quench protocols. This technique allows us to sample the plastic rearrangements in a nonperturbative manner for different loading directions on a well-controlled length scale. Plastic activity upon shearing correlates strongly with the locations of low yield stresses in the quenched states. This correlation is higher in more structurally relaxed systems. The distribution of local yield stresses is also shown to strongly depend on the quench protocol: the more relaxed the glass, the higher the local plastic thresholds. Analysis of the magnitude of local plastic relaxations reveals that stress drops follow exponential distributions, justifying the hypothesis of an average characteristic amplitude often conjectured in mesoscopic or continuum models. The amplitude of the local plastic rearrangements increases on average with the yield stress, regardless of the system preparation. The local yield stress varies with the shear orientation tested and strongly correlates with the plastic rearrangement locations when the system is sheared correspondingly. It is thus argued that plastic rearrangements are the consequence of shear transformation zones encoded in the glass structure that possess weak slip planes along different orientations. Finally, we justify the length scale employed in this work and extract the yield threshold statistics as a function of the size of the probing zones. This method makes it possible to derive physically grounded models of plasticity for amorphous materials by directly revealing the relevant details of the shear transformation zones that mediate this process.

  2. Identification of reliable gridded reference data for statistical downscaling methods in Alberta

    NASA Astrophysics Data System (ADS)

    Eum, H. I.; Gupta, A.

    2017-12-01

    Climate models provide essential information to assess impacts of climate change at regional and global scales. However, statistical downscaling methods have been applied to prepare climate model data for various applications such as hydrologic and ecologic modelling at a watershed scale. As the reliability and (spatial and temporal) resolution of statistically downscaled climate data depend mainly on the reference data, identifying the most reliable reference data is crucial for statistical downscaling. A growing number of gridded climate products are available for key climate variables which are the main input data to regional modelling systems. However, inconsistencies in these climate products, for example, different combinations of climate variables, varying data domains and data lengths, and data accuracy varying with physiographic characteristics of the landscape, have caused significant challenges in selecting the most suitable reference climate data for various environmental studies and modelling. Employing various observation-based daily gridded climate products available in the public domain, i.e. thin-plate spline regression products (ANUSPLIN and TPS), an inverse distance method (Alberta Townships), a numerical climate model (North American Regional Reanalysis) and an optimum interpolation technique (Canadian Precipitation Analysis), this study evaluates the accuracy of the climate products at each grid point by comparing with the Adjusted and Homogenized Canadian Climate Data (AHCCD) observations for precipitation, minimum and maximum temperature over the province of Alberta. Based on the performance of climate products at AHCCD stations, we ranked the reliability of these publicly available climate products corresponding to the elevations of stations discretized into several classes. According to the rank of climate products for each elevation class, we identified the most reliable climate products based on the elevation of target points. A web-based system

  3. The Estimation of the IRT Reliability Coefficient and Its Lower and Upper Bounds, with Comparisons to CTT Reliability Statistics

    ERIC Educational Resources Information Center

    Kim, Seonghoon; Feldt, Leonard S.

    2010-01-01

    The primary purpose of this study is to investigate the mathematical characteristics of the test reliability coefficient ρ_XX' as a function of item response theory (IRT) parameters and present the lower and upper bounds of the coefficient. Another purpose is to examine relative performances of the IRT reliability statistics and two…

  4. Surveys Assessing Students' Attitudes toward Statistics: A Systematic Review of Validity and Reliability

    ERIC Educational Resources Information Center

    Nolan, Meaghan M.; Beran, Tanya; Hecker, Kent G.

    2012-01-01

    Students with positive attitudes toward statistics are likely to show strong academic performance in statistics courses. Multiple surveys measuring students' attitudes toward statistics exist; however, a comparison of the validity and reliability of interpretations based on their scores is needed. A systematic review of relevant electronic…

  5. Statistical modeling of SRAM yield performance and circuit variability

    NASA Astrophysics Data System (ADS)

    Cheng, Qi; Chen, Yijian

    2015-03-01

    In this paper, we develop statistical models to investigate SRAM yield performance and circuit variability in the presence of a self-aligned multiple patterning (SAMP) process. It is assumed that SRAM fins are fabricated by a positive-tone (spacer-is-line) self-aligned sextuple patterning (SASP) process which accommodates two types of spacers, while gates are fabricated by a more pitch-relaxed self-aligned quadruple patterning (SAQP) process which only allows one type of spacer. A number of possible inverter and SRAM structures are identified and the related circuit multi-modality is studied using the developed failure-probability and yield models. It is shown that SRAM circuit yield is significantly impacted by the multi-modality of fins' spatial variations in a SRAM cell. The sensitivity of 6-transistor SRAM read/write failure probability to SASP process variations is calculated and the specific circuit type with the highest probability of failing in the reading/writing operation is identified. Our study suggests that the 6-transistor SRAM configuration may not be scalable to 7-nm half pitch and more robust SRAM circuit design needs to be researched.

  6. Texture and haptic cues in slant discrimination: reliability-based cue weighting without statistically optimal cue combination

    NASA Astrophysics Data System (ADS)

    Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.

    2005-05-01

    A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination - we find reliability-based reweighting but not statistically optimal cue combination.
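
    The statistically optimal combination the study tests against is the minimum-variance rule, in which each cue is weighted by its inverse variance; a small sketch under that standard assumption, with invented slant estimates:

    ```python
    import numpy as np

    # Hypothetical slant estimates (degrees) and their variances for two cues.
    estimates = np.array([32.0, 38.0])        # texture cue, haptic cue
    variances = np.array([4.0, 9.0])          # texture is the more reliable cue here

    weights = (1 / variances) / np.sum(1 / variances)
    combined = weights @ estimates
    combined_var = 1 / np.sum(1 / variances)

    print("weights:", np.round(weights, 2))              # reliable cue gets more weight
    print("combined estimate:", round(combined, 1))      # closer to the reliable cue
    print("combined variance:", round(combined_var, 2))  # smaller than either cue alone
    ```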

  7. Reliable probabilities through statistical post-processing of ensemble predictions

    NASA Astrophysics Data System (ADS)

    Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2013-04-01

    We develop post-processing or calibration approaches based on linear regression that make ensemble forecasts more reliable. We enforce climatological reliability in the sense that the total variability of the prediction is equal to the variability of the observations. Second, we impose ensemble reliability such that the spread around the ensemble mean of the observation coincides with the one of the ensemble members. In general the attractors of the model and reality are inhomogeneous. Therefore ensemble spread displays a variability not taken into account in standard post-processing methods. We overcome this by weighting the ensemble by a variable error. The approaches are tested in the context of the Lorenz 96 model (Lorenz 1996). The forecasts become more reliable at short lead times as reflected by a flatter rank histogram. Our best method turns out to be superior to well-established methods like EVMOS (Van Schaeybroeck and Vannitsem, 2011) and Nonhomogeneous Gaussian Regression (Gneiting et al., 2005). References [1] Gneiting, T., Raftery, A. E., Westveld, A., Goldman, T., 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Weather Rev. 133, 1098-1118. [2] Lorenz, E. N., 1996: Predictability - a problem partly solved. Proceedings, Seminar on Predictability ECMWF. 1, 1-18. [3] Van Schaeybroeck, B., and S. Vannitsem, 2011: Post-processing through linear regression, Nonlin. Processes Geophys., 18, 147.

  8. Implementing and testing theoretical fission fragment yields in a Hauser-Feshbach statistical decay framework

    NASA Astrophysics Data System (ADS)

    Jaffke, Patrick; Möller, Peter; Stetcu, Ionel; Talou, Patrick; Schmitt, Christelle

    2018-03-01

    We implement fission fragment yields, calculated using Brownian shape-motion on a macroscopic-microscopic potential energy surface in six dimensions, into the Hauser-Feshbach statistical decay code CGMF. This combination allows us to test the impact of utilizing theoretically-calculated fission fragment yields on the subsequent prompt neutron and γ-ray emission. We draw connections between the fragment yields and the total kinetic energy TKE of the fission fragments and demonstrate that the use of calculated yields can introduce a difference in the 〈TKE〉 and, thus, the prompt neutron multiplicity ν, as compared with experimental fragment yields. We deduce the uncertainty on the 〈TKE〉 and ν from this procedure and identify possible applications.

  9. A systematic review of statistical methods used to test for reliability of medical instruments measuring continuous variables.

    PubMed

    Zaki, Rafdzah; Bulgiba, Awang; Nordin, Noorhaire; Azina Ismail, Noor

    2013-06-01

    Reliability measures precision or the extent to which test results can be replicated. This is the first ever systematic review to identify statistical methods used to measure reliability of equipment measuring continuous variables. This study also aims to highlight inappropriate statistical methods used in reliability analysis and their implications for medical practice. In 2010, five electronic databases were searched between 2007 and 2009 to look for reliability studies. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and finally 42 fitted the inclusion criteria. The Intra-class Correlation Coefficient (ICC) is the most popular method, with 25 (60%) studies having used this method, followed by comparison of means (8, or 19%). Out of 25 studies using the ICC, only 7 (28%) reported the confidence intervals and types of ICC used. Most studies (71%) also tested the agreement of instruments. This study finds that the Intra-class Correlation Coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue, and be able to correctly perform analysis in reliability studies.
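
    For readers unfamiliar with the statistic, here is a minimal sketch of a two-way random-effects, single-measure ICC (often written ICC(2,1)) computed directly from the mean squares of a subjects-by-raters table using the standard Shrout-Fleiss form; the measurements are invented.

    ```python
    import numpy as np

    # Hypothetical measurements: rows = subjects, columns = raters/instruments.
    x = np.array([[10.1, 10.4, 10.0],
                  [12.3, 12.8, 12.5],
                  [ 9.7,  9.9,  9.6],
                  [11.5, 11.9, 11.8],
                  [10.9, 11.2, 11.0]])
    n, k = x.shape

    grand = x.mean()
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # raters
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))

    icc_2_1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                    + k * (ms_cols - ms_err) / n)
    print("ICC(2,1) =", round(icc_2_1, 3))
    ```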

  10. Estimating the impact of mineral aerosols on crop yields in food insecure regions using statistical crop models

    NASA Astrophysics Data System (ADS)

    Hoffman, A.; Forest, C. E.; Kemanian, A.

    2016-12-01

    A significant number of food-insecure nations exist in regions of the world where dust plays a large role in the climate system. While the impacts of common climate variables (e.g. temperature, precipitation, ozone, and carbon dioxide) on crop yields are relatively well understood, the impact of mineral aerosols on yields has not yet been thoroughly investigated. This research aims to develop the data and tools to progress our understanding of mineral aerosol impacts on crop yields. Suspended dust affects crop yields by altering the amount and type of radiation reaching the plant and modifying local temperature and precipitation, while dust events (i.e. dust storms) affect crop yields by depleting the soil of nutrients or by defoliating plants via particle abrasion. The impact of dust on yields is modeled statistically because we are uncertain which impacts will dominate the response on the national and regional scales considered in this study. Multiple linear regression is used in a number of large-scale statistical crop modeling studies to estimate yield responses to various climate variables. In alignment with previous work, we develop linear crop models, but build upon this simple method of regression with machine-learning techniques (e.g. random forests) to identify important statistical predictors and isolate how dust affects yields on the scales of interest. To perform this analysis, we develop a crop-climate dataset for maize, soybean, groundnut, sorghum, rice, and wheat for the regions of West Africa, East Africa, South Africa, and the Sahel. Random forest regression models consistently model historic crop yields better than the linear models. In several instances, the random forest models accurately capture the temperature and precipitation threshold behavior in crops. Additionally, improving agricultural technology has caused a well-documented positive trend that dominates time series of global and regional yields. This trend is often removed before regression with
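
    A compact sketch of the model comparison described above, using scikit-learn: a linear regression and a random forest fitted to synthetic yield anomalies with a built-in temperature threshold that the forest can capture and the linear model cannot. All variables and values are illustrative.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    n = 600
    temp = rng.normal(27, 3, n)            # growing-season temperature (deg C)
    prec = rng.gamma(3, 120, n)            # growing-season precipitation (mm)

    # Synthetic yield anomaly with threshold behavior: heat above 30 C is damaging.
    y = (-0.15 * np.maximum(temp - 30, 0) + 0.002 * prec - 1e-6 * prec ** 2
         + rng.normal(0, 0.1, n))

    X = np.column_stack([temp, prec])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for name, model in [("linear", LinearRegression()),
                        ("random forest", RandomForestRegressor(random_state=0))]:
        model.fit(X_tr, y_tr)
        print(name, "R^2 on held-out data:", round(model.score(X_te, y_te), 2))
    ```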

  11. Building a statistical emulator for prediction of crop yield response to climate change: a global gridded panel data set approach

    NASA Astrophysics Data System (ADS)

    Mistry, Malcolm; De Cian, Enrica; Wing, Ian Sue

    2015-04-01

    There is widespread concern that trends and variability in weather induced by climate change will detrimentally affect global agricultural productivity and food supplies. Reliable quantification of the risks of negative impacts at regional and global scales is a critical research need, which has so far been met by forcing state-of-the-art global gridded crop models with outputs of global climate model (GCM) simulations in exercises such as the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP)-Fastrack. Notwithstanding such progress, it remains challenging to use these simulation-based projections to assess agricultural risk because their gridded fields of crop yields are fundamentally denominated as discrete combinations of warming scenarios, GCMs and crop models, and not as model-specific or model-averaged yield response functions of meteorological shifts, which may have their own independent probability of occurrence. By contrast, the empirical climate economics literature has adeptly represented agricultural responses to meteorological variables as reduced-form statistical response surfaces which identify the crop productivity impacts of additional exposure to different intervals of temperature and precipitation [cf Schlenker and Roberts, 2009]. This raises several important questions: (1) what do the equivalent reduced-form statistical response surfaces look like for crop model outputs, (2) do they exhibit systematic variation over space (e.g., crop suitability zones) or across crop models with different characteristics, (3) how do they compare to estimates based on historical observations, and (4) what are the implications for the characterization of climate risks? We address these questions by estimating statistical yield response functions for four major crops (maize, rice, wheat and soybeans) over the historical period (1971-2004) as well as future climate change scenarios (2005-2099) using ISIMIP-Fastrack data for five GCMs and seven crop models

  12. Statistical modelling of grapevine yield in the Port Wine region under present and future climate conditions

    NASA Astrophysics Data System (ADS)

    Santos, João A.; Malheiro, Aureliano C.; Karremann, Melanie K.; Pinto, Joaquim G.

    2011-03-01

    The impact of projected climate change on wine production was analysed for the Demarcated Region of Douro, Portugal. A statistical grapevine yield model (GYM) was developed using climate parameters as predictors. Statistically significant correlations were identified between annual yield and monthly mean temperatures and monthly precipitation totals during the growing cycle. These atmospheric factors control grapevine yield in the region, with the GYM explaining 50.4% of the total variance in the yield time series in recent decades. Anomalously high March rainfall (during budburst, shoot and inflorescence development) favours yield, as well as anomalously high temperatures and low precipitation amounts in May and June (May: flowering and June: berry development). The GYM was applied to a regional climate model output, which was shown to realistically reproduce the GYM predictors. Finally, using ensemble simulations under the A1B emission scenario, projections for GYM-derived yield in the Douro Region, and for the whole of the twenty-first century, were analysed. A slight upward trend in yield is projected to occur until about 2050, followed by a steep and continuous increase until the end of the twenty-first century, when yield is projected to be about 800 kg/ha above current values. While this estimate is based on meteorological parameters alone, changes due to elevated CO2 may further enhance this effect. In spite of the associated uncertainties, it can be stated that projected climate change may significantly benefit wine yield in the Douro Valley.

  13. Statistical rice yield modeling using blended MODIS-Landsat based crop phenology metrics in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, C. R.; Chen, C. F.; Nguyen, S. T.; Lau, K. V.

    2015-12-01

    Taiwan is a populated island with a majority of residents settled in the western plains where soils are suitable for rice cultivation. Rice is not only the most important commodity, but also plays a critical role for agricultural and food marketing. Information on rice production is thus important for policymakers to devise timely plans for ensuring sustainable socioeconomic development. Because rice fields in Taiwan are generally small, and yet crop monitoring requires information on crop phenology matched to the spatiotemporal resolution of satellite data, this study used Landsat-MODIS fusion data for rice yield modeling in Taiwan. We processed the data for the first crop (Feb-Mar to Jun-Jul) and the second crop (Aug-Sep to Nov-Dec) in 2014 through five main steps: (1) data pre-processing to account for geometric and radiometric errors of Landsat data, (2) Landsat-MODIS data fusion using the spatial-temporal adaptive reflectance fusion model, (3) construction of the smooth time-series enhanced vegetation index 2 (EVI2), (4) rice yield modeling using EVI2-based crop phenology metrics, and (5) error verification. A comparison between EVI2 derived from the fusion image and that from the reference Landsat image indicated close agreement between the two datasets (R2 > 0.8). We analysed smooth EVI2 curves to extract phenology metrics or phenological variables for establishment of rice yield models. The results indicated that the established yield models significantly explained more than 70% of the variability in the data (p-value < 0.001). The comparison results between the estimated yields and the government's yield statistics for the first and second crops indicated a close significant relationship between the two datasets (R2 > 0.8), in both cases. The root mean square error (RMSE) and mean absolute error (MAE) used to measure the model accuracy revealed the consistency between the estimated yields and the government's yield statistics. This
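
    The two accuracy metrics quoted at the end of the abstract are straightforward to compute; a small sketch with hypothetical estimated versus reported yields:

    ```python
    import numpy as np

    # Hypothetical rice yields (t/ha): model estimates vs. government statistics.
    estimated = np.array([5.9, 6.3, 5.4, 6.8, 6.1])
    reported = np.array([6.1, 6.0, 5.6, 6.5, 6.2])

    rmse = np.sqrt(np.mean((estimated - reported) ** 2))
    mae = np.mean(np.abs(estimated - reported))
    print(f"RMSE = {rmse:.2f} t/ha, MAE = {mae:.2f} t/ha")
    ```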

  14. Multivariate Statistical Analysis of Cigarette Design Feature Influence on ISO TNCO Yields.

    PubMed

    Agnew-Heard, Kimberly A; Lancaster, Vicki A; Bravo, Roberto; Watson, Clifford; Walters, Matthew J; Holman, Matthew R

    2016-06-20

    The aim of this study is to explore how differences in cigarette physical design parameters influence tar, nicotine, and carbon monoxide (TNCO) yields in mainstream smoke (MSS) using the International Organization for Standardization (ISO) smoking regimen. Standardized smoking methods were used to evaluate 50 U.S. domestic brand cigarettes and a reference cigarette representing a range of TNCO yields in MSS collected from linear smoking machines using a nonintense smoking regimen. Multivariate statistical methods were used to form clusters of cigarettes based on their ISO TNCO yields and then to explore the relationship between the ISO generated TNCO yields and the nine cigarette physical design parameters between and within each cluster simultaneously. The ISO generated TNCO yields in MSS are 1.1-17.0 mg tar/cigarette, 0.1-2.2 mg nicotine/cigarette, and 1.6-17.3 mg CO/cigarette. Cluster analysis divided the 51 cigarettes into five discrete clusters based on their ISO TNCO yields. No one physical parameter dominated across all clusters. Predicting ISO machine generated TNCO yields based on these nine physical design parameters is complex due to the correlation among and between the nine physical design parameters and TNCO yields. From these analyses, it is estimated that approximately 20% of the variability in the ISO generated TNCO yields comes from other parameters (e.g., filter material, filter type, inclusion of expanded or reconstituted tobacco, and tobacco blend composition, along with differences in tobacco leaf origin and stalk positions and added ingredients). A future article will examine the influence of these physical design parameters on TNCO yields under a Canadian Intense (CI) smoking regimen. Together, these papers will provide a more robust picture of the design features that contribute to TNCO exposure across the range of real world smoking patterns.

  15. Statistical analysis of corn yields responding to climate variability at various spatio-temporal resolutions

    NASA Astrophysics Data System (ADS)

    Jiang, H.; Lin, T.

    2017-12-01

    Rain-fed corn production systems are subject to sub-seasonal variations of precipitation and temperature during the growing season. Because each growth phase has its own inherent physiological processes, plants require different optimal environmental conditions during each phase. However, this temporal heterogeneity in response to climate variability over the crop lifecycle is often simplified and fixed as a constant response in large-scale statistical modeling analyses. To capture the time-variant growing requirements in large-scale statistical analysis, we develop and compare statistical models at various spatial and temporal resolutions to quantify the relationship between corn yield and weather factors for 12 corn belt states from 1981 to 2016. The study compares three spatial resolutions (county, agricultural district, and state scale) and three temporal resolutions (crop growth phase, monthly, and growing season) to characterize the effects of spatial and temporal variability. Our results show that the agricultural district model together with growth phase resolution can explain 52% of the variation in corn yield caused by temperature and precipitation variability. It provides a practical model structure balancing the overfitting problem of county-specific models and the weak explanatory power of state-specific models. In the US corn belt, precipitation has a positive impact on corn yield throughout the growing season except for the vegetative stage, while extreme heat attains its highest sensitivity from the silking to dough phases. The results show the northern counties in the corn belt area are less affected by extreme heat but are more vulnerable to water deficiency.

  16. Improving yield and reliability of FIB modifications using electrical testing

    NASA Astrophysics Data System (ADS)

    Desplats, Romain; Benbrik, Jamel; Benteo, Bruno; Perdu, Philippe

    1998-08-01

    Focused Ion Beam (FIB) technology has two main areas of application for ICs: modification and preparation for technological analysis. The most solicited area is modification. This involves physically modifying a circuit by cutting lines and creating new ones in order to change the electrical function of the circuit. IC planar technologies have an increasing number of metal interconnections, making FIB modifications more complex and decreasing their chances of success. The yield of FIB operations on ICs reflects a downward trend that requires a greater number of circuits to be modified in order to successfully correct a small number of them. This extends the duration of the work, which is not compatible with production-line turnaround times. To respond to this problem, two solutions can be defined: either reducing the duration of each FIB operation or increasing the success rate of FIB modifications. Since reducing the time depends mainly on FIB operator experience, ensuring a higher success rate represents the more crucial aspect, as both experienced and novice operators could benefit from this improvement. In order to ensure successful modifications, it is necessary to control each step of a FIB operation. To do this, we have developed a new method using in situ electrical testing which has a direct impact on the yield of FIB modifications. We will present this innovative development through a real case study of a CMOS ASIC for high-speed communications. Monitoring the electrical behavior at each step in a FIB operation makes it possible to reduce the number of circuits to be modified and consequently reduces system costs thanks to better yield control. Knowing the internal electrical behavior also gives us indications about the impact on reliability of FIB-modified circuits. Finally, this approach can be applied to failure analysis and FIB operations on flip chip circuits.

  17. Evaluation of General Classes of Reliability Estimators Often Used in Statistical Analyses of Quasi-Experimental Designs

    NASA Astrophysics Data System (ADS)

    Saini, K. K.; Sehgal, R. K.; Sethi, B. L.

    2008-10-01

    In this paper, major reliability estimators are analyzed and their comparative results are discussed. Their strengths and weaknesses are evaluated in this case study. Each of the reliability estimators has certain advantages and disadvantages. Inter-rater reliability is one of the best ways to estimate reliability when your measure is an observation. However, it requires multiple raters or observers. As an alternative, you could look at the correlation of ratings of the same single observer repeated on two different occasions. Each of the reliability estimators will give a different value for reliability. In general, the test-retest and inter-rater reliability estimates will be lower in value than the parallel forms and internal consistency ones because they involve measuring at different times or with different raters. Reliability estimates are often used in statistical analyses of quasi-experimental designs.
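
    Two of the estimators discussed above are easy to illustrate: test-retest reliability as the correlation between two administrations, and internal consistency as Cronbach's alpha over item scores. The data below are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Test-retest: the same observer's ratings on two occasions.
    occasion1 = rng.normal(50, 10, 30)
    occasion2 = occasion1 + rng.normal(0, 4, 30)
    test_retest_r = np.corrcoef(occasion1, occasion2)[0, 1]

    # Internal consistency: Cronbach's alpha over k item scores per respondent.
    items = rng.normal(0, 1, (100, 1)) + rng.normal(0, 0.7, (100, 5))  # 5 correlated items
    k = items.shape[1]
    alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                           / items.sum(axis=1).var(ddof=1))

    print("test-retest r =", round(test_retest_r, 2))
    print("Cronbach's alpha =", round(alpha, 2))
    ```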

  18. fMRI reliability: influences of task and experimental design.

    PubMed

    Bennett, Craig M; Miller, Michael B

    2013-12-01

    As scientists, it is imperative that we understand not only the power of our research tools to yield results, but also their ability to obtain similar results over time. This study is an investigation into how common decisions made during the design and analysis of a functional magnetic resonance imaging (fMRI) study can influence the reliability of the statistical results. To that end, we gathered back-to-back test-retest fMRI data during an experiment involving multiple cognitive tasks (episodic recognition and two-back working memory) and multiple fMRI experimental designs (block, event-related genetic sequence, and event-related m-sequence). Using these data, we were able to investigate the relative influences of task, design, statistical contrast (task vs. rest, target vs. nontarget), and statistical thresholding (unthresholded, thresholded) on fMRI reliability, as measured by the intraclass correlation (ICC) coefficient. We also utilized data from a second study to investigate test-retest reliability after an extended, six-month interval. We found that all of the factors above were statistically significant, but that they had varying levels of influence on the observed ICC values. We also found that these factors could interact, increasing or decreasing the relative reliability of certain Task × Design combinations. The results suggest that fMRI reliability is a complex construct whose value may be increased or decreased by specific combinations of factors.

  19. Statistical significance test for transition matrices of atmospheric Markov chains

    NASA Technical Reports Server (NTRS)

    Vautard, Robert; Mo, Kingtse C.; Ghil, Michael

    1990-01-01

    Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
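
    A hedged sketch of the general idea of such a Monte Carlo test (not the authors' exact procedure): the observed count of one regime-to-regime transition is compared against counts from many shuffled sequences, which preserve the regime frequencies but destroy temporal ordering. The regime sequence below is synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    regimes = rng.integers(0, 3, size=500)        # stand-in observed regime sequence
    i, j = 0, 1                                   # transition of interest: regime 0 -> 1

    def transition_count(seq, i, j):
        return int(np.sum((seq[:-1] == i) & (seq[1:] == j)))

    observed = transition_count(regimes, i, j)

    # Null distribution: shuffling keeps marginal frequencies but removes memory.
    null = np.array([transition_count(rng.permutation(regimes), i, j)
                     for _ in range(2000)])
    p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
    print(f"observed {observed} transitions 0->1, one-sided p = {p_value:.3f}")
    ```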

  20. A Statistical Testing Approach for Quantifying Software Reliability; Application to an Example System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu, Tsong-Lun; Varuttamaseni, Athi; Baek, Joo-Seok

    The U.S. Nuclear Regulatory Commission (NRC) encourages the use of probabilistic risk assessment (PRA) technology in all regulatory matters, to the extent supported by the state-of-the-art in PRA methods and data. Although much has been accomplished in the area of risk-informed regulation, risk assessment for digital systems has not been fully developed. The NRC established a plan for research on digital systems to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems in the PRAs of nuclear power plants (NPPs), and (2) incorporating digital systems in the NRC's risk-informed licensing and oversight activities. Under NRC's sponsorship, Brookhaven National Laboratory (BNL) explored approaches for addressing the failures of digital instrumentation and control (I and C) systems in the current NPP PRA framework. Specific areas investigated included PRA modeling of digital hardware, development of a philosophical basis for defining software failure, and identification of desirable attributes of quantitative software reliability methods. Based on the earlier research, statistical testing is considered a promising method for quantifying software reliability. This paper describes a statistical software testing approach for quantifying software reliability and applies it to the loop-operating control system (LOCS) of an experimental loop of the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL).

  1. Statistical model selection for better prediction and discovering science mechanisms that affect reliability

    DOE PAGES

    Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.

    2015-08-19

    Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs and second organizing the data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. Fifth, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.

  2. Reliability and statistical power analysis of cortical and subcortical FreeSurfer metrics in a large sample of healthy elderly.

    PubMed

    Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz

    2015-03-01

    FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
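
    As a rough illustration of the kind of a priori power analysis the released tool supports, the sketch below uses a normal-approximation formula for the per-group sample size needed to detect a 10% group difference with a two-sample test; the mean and standard deviation are hypothetical, not the paper's FreeSurfer estimates.

    ```python
    from scipy.stats import norm

    mean, sd = 2.5, 0.25          # e.g. cortical thickness in mm (hypothetical values)
    delta = 0.10 * mean           # 10% group difference to detect
    alpha, power = 0.05, 0.80

    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n_per_group = 2 * ((z_a + z_b) * sd / delta) ** 2   # normal-approximation sample size
    print(f"approximately {n_per_group:.0f} subjects per group")
    ```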

  3. Statistical equivalence and test-retest reliability of delay and probability discounting using real and hypothetical rewards.

    PubMed

    Matusiewicz, Alexis K; Carter, Anne E; Landes, Reid D; Yi, Richard

    2013-11-01

    Delay discounting (DD) and probability discounting (PD) refer to the reduction in the subjective value of outcomes as a function of delay and uncertainty, respectively. Elevated measures of discounting are associated with a variety of maladaptive behaviors, and confidence in the validity of these measures is imperative. The present research examined (1) the statistical equivalence of discounting measures when rewards were hypothetical or real, and (2) their 1-week reliability. While previous research has partially explored these issues using the low threshold of nonsignificant difference, the present study fully addressed this issue using the more-compelling threshold of statistical equivalence. DD and PD measures were collected from 28 healthy adults using real and hypothetical $50 rewards during each of two experimental sessions, one week apart. Analyses using area-under-the-curve measures revealed a general pattern of statistical equivalence, indicating equivalence of real/hypothetical conditions as well as 1-week reliability. Exceptions are identified and discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
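
    The sketch below illustrates a paired two one-sided tests (TOST) equivalence check of the sort implied by "statistical equivalence", applied to area-under-the-curve discounting measures; the data and the ±0.1 equivalence margin are invented for illustration and are not the study's values.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    auc_real = rng.uniform(0.2, 0.8, size=28)             # AUC with real rewards
    auc_hypo = auc_real + rng.normal(0, 0.05, size=28)    # AUC with hypothetical rewards
    margin = 0.1                                          # equivalence bounds: +/- 0.1 AUC units

    diff = auc_hypo - auc_real
    n = len(diff)
    se = diff.std(ddof=1) / np.sqrt(n)

    # Two one-sided tests: reject both "diff <= -margin" and "diff >= +margin".
    t_lower = (diff.mean() + margin) / se
    t_upper = (diff.mean() - margin) / se
    p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    p_tost = max(p_lower, p_upper)
    print(f"TOST p = {p_tost:.4f}; equivalent at alpha=0.05: {p_tost < 0.05}")
    ```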

  4. Statistical Relations for Yield Degradation in Inertial Confinement Fusion

    NASA Astrophysics Data System (ADS)

    Woo, K. M.; Betti, R.; Patel, D.; Gopalaswamy, V.

    2017-10-01

    In inertial confinement fusion (ICF), the yield-over-clean (YOC) is a quantity commonly used to assess the performance of an implosion with respect to the degradation caused by asymmetries. The YOC also determines the Lawson parameter used to identify the onset of ignition and the level of alpha heating in ICF implosions. In this work, we show that the YOC is a unique function of the residual kinetic energy in the compressed shell (with respect to the 1-D case) regardless of the asymmetry spectrum. This result is derived using a simple model of the deceleration phase as well as through an extensive set of 3-D radiation-hydrodynamics simulations using the code DEC3D. The latter has been recently upgraded to include a 3-D spherical moving mesh, the HYPRE solver for 3-D radiation transport and piecewise-parabolic method for robust shock-capturing hydrodynamic simulations. DEC3D is used to build a synthetic single-mode database to study the behavior of yield degradation caused by Rayleigh-Taylor instabilities in the deceleration phase. The relation between YOC and residual kinetic energy is compared with the result in an adiabatic implosion model. The statistical expression of YOC is also applied to the ignition criterion in the presence of multidimensional nonuniformities. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  5. InfoDROUGHT: Technical reliability assessment using crop yield data at the Spanish-national level

    NASA Astrophysics Data System (ADS)

    Contreras, Sergio; Garcia-León, David; Hunink, Johannes E.

    2017-04-01

    Drought monitoring (DM) is a key component of risk-centered drought preparedness plans and drought policies. InfoDROUGHT (www.infosequia.es) is a site- and user-tailored, fully integrated DM system which combines functionalities for: a) the operational satellite-based weekly 1-km tracking of the severity and spatial extent of drought impacts, and b) the interactive and fast query and delivery of drought information through a web-mapping service. InfoDROUGHT has a flexible and modular structure. The calibration (threshold definitions) and validation of the system are performed by combining expert knowledge with auxiliary impact assessments and datasets. Different technical solutions (basic or advanced versions) or deployment options (open-standard or restricted-authenticated) can be purchased by end-users and customers according to their needs. In this analysis, the technical reliability of InfoDROUGHT and its performance in detecting drought impacts on agriculture were evaluated for the 2003-2014 period by exploring and quantifying the relationships between the drought severity indices reported by InfoDROUGHT and the annual yield anomalies observed for different rainfed crops (maize, wheat, barley) in Spain. We hypothesize a positive relationship between the crop anomalies and the drought severity level detected by InfoDROUGHT. Annual yield anomalies were computed at the province administrative level as the difference between the annual yield reported by the Spanish Annual Survey of Crop Acreages and Yields (ESYRCE database) and the mean annual yield estimated over the study period. Yield anomalies were finally compared against greenness-based and thermal-based drought indices (VCI and TCI, respectively) to check the coherence of the outputs with the stated hypothesis. InfoDROUGHT has been partly funded by the Spanish Ministry of Economy and Competitiveness through a Torres-Quevedo grant, and by the H2020-EU project "Bridging the Gap for Innovations in

  6. Mathematical and statistical analysis of the effect of boron on yield parameters of wheat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rawashdeh, Hamzeh; Sala, Florin; Boldea, Marius

    The main objective of this research is to investigate the effect of foliar applications of boron at different growth stages on yield and yield parameters of wheat. The contribution of boron to achieving the yield parameters is described by second-degree polynomial equations, with high statistical confidence (p < 0.01; F theoretical < F calculated, according to the ANOVA test, for α = 0.05). Regression analysis, based on the R² values obtained, made it possible to evaluate the particular contribution of boron to the realization of yield parameters. This was lower for spike length (R² = 0.812) and thousand-seed weight (R² = 0.850) and higher for the number of spikelets (R² = 0.936) and the number of seeds per spike (R² = 0.960). These results confirm that boron plays an important part in achieving the number of seeds per spike in wheat, as the contribution of this element to the process of flower fertilization is well known. With regard to productivity elements, the contribution of macroelements to yield quantity is clear, the contribution of B alone being R² = 0.868.
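
    A small sketch of the kind of fit reported above: a second-degree polynomial of a yield parameter against boron dose, with R² computed from the residuals. Doses and yield values are hypothetical.

    ```python
    import numpy as np

    boron = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])                   # kg/ha, hypothetical doses
    seeds_per_spike = np.array([38.0, 41.5, 44.0, 45.5, 46.0, 45.2])   # hypothetical means

    coeffs = np.polyfit(boron, seeds_per_spike, deg=2)                 # y = a*x^2 + b*x + c
    fitted = np.polyval(coeffs, boron)

    ss_res = np.sum((seeds_per_spike - fitted) ** 2)
    ss_tot = np.sum((seeds_per_spike - seeds_per_spike.mean()) ** 2)
    r_squared = 1 - ss_res / ss_tot
    print(f"quadratic fit: a={coeffs[0]:.2f}, b={coeffs[1]:.2f}, c={coeffs[2]:.2f}, R2={r_squared:.3f}")
    ```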

  7. Statistical physics of the yielding transition in amorphous solids.

    PubMed

    Karmakar, Smarajit; Lerner, Edan; Procaccia, Itamar

    2010-11-01

    The art of making structural, polymeric, and metallic glasses is rapidly developing, with many applications. A limitation is that under increasing external strain all amorphous solids (like their crystalline counterparts) have a finite yield stress that cannot be exceeded without effecting a plastic response, which typically leads to mechanical failure. Understanding this is crucial for assessing the risk of failure of glassy materials under mechanical loads. Here we show that the statistics of the energy barriers ΔE that need to be surmounted change from a probability distribution function (pdf) that goes smoothly to zero as ΔE → 0 to a pdf which is finite at ΔE = 0. This fundamental change implies a dramatic transition in the mechanical stability properties with respect to external strain. We derive exact results for the scaling exponents that characterize the magnitudes of average energy and stress drops in plastic events as a function of system size.

  8. Reliable yields of public water-supply wells in the fractured-rock aquifers of central Maryland, USA

    NASA Astrophysics Data System (ADS)

    Hammond, Patrick A.

    2018-02-01

    Most studies of fractured-rock aquifers are about analytical models used for evaluating aquifer tests or numerical methods for describing groundwater flow, but there have been few investigations on how to estimate the reliable long-term drought yields of individual hard-rock wells. During the drought period of 1998 to 2002, many municipal water suppliers in the Piedmont/Blue Ridge areas of central Maryland (USA) had to institute water restrictions due to declining well yields. Previous estimates of the yields of those wells were commonly based on extrapolating drawdowns, measured during short-term single-well hydraulic pumping tests, to the first primary water-bearing fracture in a well. The extrapolations were often made from pseudo-equilibrium phases, frequently resulting in substantially over-estimated well yields. The methods developed in the present study to predict yields consist of extrapolating drawdown data from infinite acting radial flow periods or by fitting type curves of other conceptual models to the data, using diagnostic plots, inverse analysis and derivative analysis. Available drawdowns were determined by the positions of transition zones in crystalline rocks or thin-bedded consolidated sandstone/limestone layers (reservoir rocks). Aquifer dewatering effects were detected by type-curve matching of step-test data or by breaks in the drawdown curves constructed from hydraulic tests. Operational data were then used to confirm the predicted yields and compared to regional groundwater levels to determine seasonal variations in well yields. Such well yield estimates are needed by hydrogeologists and water engineers for the engineering design of water systems, but should be verified by the collection of long-term monitoring data.

  9. Selection and Reporting of Statistical Methods to Assess Reliability of a Diagnostic Test: Conformity to Recommended Methods in a Peer-Reviewed Journal

    PubMed Central

    Park, Ji Eun; Han, Kyunghwa; Sung, Yu Sub; Chung, Mi Sun; Koo, Hyun Jung; Yoon, Hee Mang; Choi, Young Jun; Lee, Seung Soo; Kim, Kyung Won; Shin, Youngbin; An, Suah; Cho, Hyo-Min

    2017-01-01

    Objective To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Materials and Methods Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Results Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Conclusion Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology is necessary. PMID:29089821

  10. Incorporating uncertainty into the ranking of SPARROW model nutrient yields from Mississippi/Atchafalaya River basin watersheds

    USGS Publications Warehouse

    Robertson, Dale M.; Schwarz, Gregory E.; Saad, David A.; Alexander, Richard B.

    2009-01-01

    Excessive loads of nutrients transported by tributary rivers have been linked to hypoxia in the Gulf of Mexico. Management efforts to reduce the hypoxic zone in the Gulf of Mexico and improve the water quality of rivers and streams could benefit from targeting nutrient reductions toward watersheds with the highest nutrient yields delivered to sensitive downstream waters. One challenge is that most conventional watershed modeling approaches (e.g., mechanistic models) used in these management decisions do not consider uncertainties in the predictions of nutrient yields and their downstream delivery. The increasing use of parameter estimation procedures to statistically estimate model coefficients, however, allows uncertainties in these predictions to be reliably estimated. Here, we use a robust bootstrapping procedure applied to the results of a previous application of the hybrid statistical/mechanistic watershed model SPARROW (Spatially Referenced Regression On Watershed attributes) to develop a statistically reliable method for identifying “high priority” areas for management, based on a probabilistic ranking of delivered nutrient yields from watersheds throughout a basin. The method is designed to be used by managers to prioritize watersheds where additional stream monitoring and evaluations of nutrient-reduction strategies could be undertaken. Our ranking procedure incorporates information on the confidence intervals of model predictions and the corresponding watershed rankings of the delivered nutrient yields. From this quantified uncertainty, we estimate the probability that individual watersheds are among a collection of watersheds that have the highest delivered nutrient yields. We illustrate the application of the procedure to 818 eight-digit Hydrologic Unit Code watersheds in the Mississippi/Atchafalaya River basin by identifying 150 watersheds having the highest delivered nutrient yields to the Gulf of Mexico. Highest delivered yields were from
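
    A minimal sketch that mimics the spirit of this probabilistic ranking (not the exact SPARROW bootstrap): given predicted yields and assumed prediction uncertainties, repeated draws estimate each watershed's probability of falling among the top-k delivered yields.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_watersheds, k, n_sims = 50, 10, 5000
    pred_yield = rng.lognormal(mean=3.0, sigma=0.5, size=n_watersheds)   # kg/km2/yr, hypothetical
    pred_se = 0.3 * pred_yield                                           # assumed prediction uncertainty

    # Draw plausible realizations and count how often each watershed ranks in the top k.
    draws = rng.normal(pred_yield, pred_se, size=(n_sims, n_watersheds))
    top_k = np.argsort(-draws, axis=1)[:, :k]
    prob_top_k = np.bincount(top_k.ravel(), minlength=n_watersheds) / n_sims

    for idx in np.argsort(-prob_top_k)[:5]:
        print(f"watershed {idx:2d}: P(top {k}) = {prob_top_k[idx]:.2f}")
    ```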

  11. Climate-based statistical regression models for crop yield forecasting of coffee in humid tropical Kerala, India

    NASA Astrophysics Data System (ADS)

    Jayakumar, M.; Rajavel, M.; Surendran, U.

    2016-12-01

    A study on the variability of coffee yield of both Coffea arabica and Coffea canephora as influenced by climate parameters (rainfall (RF), maximum temperature (Tmax), minimum temperature (Tmin), and mean relative humidity (RH)) was undertaken at the Regional Coffee Research Station, Chundale, Wayanad, Kerala State, India. The coffee yield data of 30 years (1980 to 2009) revealed that the yield of coffee fluctuates with variations in the climatic parameters. Among the species, productivity was higher for C. canephora than for C. arabica in most of the years. The maximum yield of C. canephora (2040 kg ha-1) was recorded in 2003-2004, and a declining trend in yield was noticed in recent years. Similarly, the maximum yield of C. arabica (1745 kg ha-1) was recorded in 1988-1989, and decreased yields were noticed in the subsequent years till 1997-1998 due to year-to-year variability in climate. The highest correlation coefficient was found between the yield of C. arabica and maximum temperature during January (0.7), and between C. arabica yield and RH during July (0.4). Yield of C. canephora had the highest correlation with maximum temperature, RH and rainfall during February. A statistical regression model between selected climatic parameters and the yields of C. arabica and C. canephora was developed to forecast the yield of coffee in Wayanad district in Kerala. The model was validated for the years 2010, 2011, and 2012 with the coffee yield data obtained during those years, and the prediction was found to be good.
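
    A hedged sketch of a climate-based yield regression of this general form: annual yield regressed on a few monthly climate predictors by ordinary least squares, then applied to a new season. The predictors and numbers are illustrative, not the station's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_years = 30
    tmax_jan = rng.normal(29, 1.5, n_years)        # January maximum temperature (deg C)
    rh_july = rng.normal(85, 5, n_years)           # July relative humidity (%)
    rain_feb = rng.gamma(2.0, 30.0, n_years)       # February rainfall (mm)
    yield_kg = 900 + 25 * (tmax_jan - 29) - 4 * (rh_july - 85) + rng.normal(0, 60, n_years)

    # Ordinary least squares with an intercept column.
    X = np.column_stack([np.ones(n_years), tmax_jan, rh_july, rain_feb])
    beta, *_ = np.linalg.lstsq(X, yield_kg, rcond=None)

    new_year = np.array([1.0, 30.2, 82.0, 55.0])   # next season's observed climate (hypothetical)
    print(f"coefficients: {np.round(beta, 2)}")
    print(f"forecast yield: {new_year @ beta:.0f} kg/ha")
    ```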

  12. Design of high-reliability low-cost amorphous silicon modules for high energy yield

    NASA Astrophysics Data System (ADS)

    Jansen, Kai W.; Varvar, Anthony; Twesme, Edward; Berens, Troy; Dhere, Neelkanth G.

    2008-08-01

    For PV modules to fulfill their intended purpose, they must generate sufficient economic return over their lifetime to justify their initial cost. Not only must modules be manufactured at a low cost/Wp with a high energy yield (kWh/kWp), they must also be designed to withstand the significant environmental stresses experienced throughout their 25+ year lifetime. Based on field experience, the most common factors affecting the lifetime energy yield of glass-based amorphous silicon (a-Si) modules have been identified; these include: 1) light-induced degradation; 2) moisture ingress and thin film corrosion; 3) transparent conductive oxide (TCO) delamination; and 4) glass breakage. The current approaches to mitigating the effect of these degradation mechanisms are discussed and the accelerated tests designed to simulate some of the field failures are described. In some cases, novel accelerated tests have been created to facilitate the development of improved manufacturing processes, including a unique test to screen for TCO delamination. Modules using the most reliable designs are tested in high voltage arrays at customer and internal test sites, as well as at independent laboratories. Data from tests at the Florida Solar Energy Center has shown that a-Si tandem modules can demonstrate an energy yield exceeding 1200 kWh/kWp/yr in a subtropical climate. In the same study, the test arrays demonstrated low long-term power loss over two years of data collection, after initial stabilization. The absolute power produced by the test arrays varied seasonally by approximately +/-7%, as expected.

  13. Reliable mortality statistics for Turkey: Are we there yet?

    PubMed

    Özdemir, Raziye; Rao, Chalapati; Öcek, Zeliha; Dinç Horasan, Gönül

    2015-06-10

    The Turkish government has implemented several reforms to improve the Turkish Statistical Institute Death Reporting System (TURKSTAT-DRS) since 2009. However, there has been no assessment to evaluate the impact of these reforms on causes of death statistics. This study attempted to analyse the impact of these reforms on the TURKSTAT-DRS for Turkey, and in the case of Izmir, one of the most developed provinces in Turkey. The evaluation framework comprised three main components each with specific criteria. Firstly, data from TURKSTAT for Turkey and Izmir for the periods 2001-2008 and 2009-2013 were assessed in terms of the following dimensions that represent quality of mortality statistics (a. completeness of death registration, b. trends in proportions of deaths with ill-defined causes). Secondly, the quality of information recorded on individual death certificates from Izmir in 2010 was analysed for a. missing information, b. timeliness of death notifications and c. characteristics of deaths with ill-defined causes. Finally, TURKSTAT data were analysed to estimate life tables and summary mortality indicators for Turkey and Izmir, as well as the leading causes-of-death in Turkey in 2013. Registration of adult deaths in Izmir as well as at the national level for Turkey has considerably improved since the introduction of reforms in 2009, along with marked decline in the proportions of deaths assigned ill-defined causes. Death certificates from Izmir indicated significant gaps in recorded information for demographic as well as epidemiological variables, particularly for infant deaths, and in the detailed recording of causes of death. Life expectancy at birth estimated from local data is 3-4 years higher than similar estimates for Turkey from international studies, and this requires further investigation and confirmation. The TURKSTAT-DRS is now an improved source of mortality and cause of death statistics for Turkey. The reliability and validity of TURKSTAT data needs

  14. Plausible rice yield losses under future climate warming.

    PubMed

    Zhao, Chuang; Piao, Shilong; Wang, Xuhui; Huang, Yao; Ciais, Philippe; Elliott, Joshua; Huang, Mengtian; Janssens, Ivan A; Li, Tao; Lian, Xu; Liu, Yongwen; Müller, Christoph; Peng, Shushi; Wang, Tao; Zeng, Zhenzhong; Peñuelas, Josep

    2016-12-19

    Rice is the staple food for more than 50% of the world's population [1-3]. Reliable prediction of changes in rice yield is thus central for maintaining global food security. This is an extraordinary challenge. Here, we compare the sensitivity of rice yield to temperature increase derived from field warming experiments and three modelling approaches: statistical models, local crop models and global gridded crop models. Field warming experiments produce a substantial rice yield loss under warming, with an average temperature sensitivity of -5.2 ± 1.4% K⁻¹. Local crop models give a similar sensitivity (-6.3 ± 0.4% K⁻¹), but statistical and global gridded crop models both suggest less negative impacts of warming on yields (-0.8 ± 0.3% and -2.4 ± 3.7% K⁻¹, respectively). Using data from field warming experiments, we further propose a conditional probability approach to constrain the large range of global gridded crop model results for the future yield changes in response to warming by the end of the century (from -1.3% to -9.3% K⁻¹). The constraint implies a more negative response to warming (-8.3 ± 1.4% K⁻¹) and reduces the spread of the model ensemble by 33%. This yield reduction exceeds that estimated by the International Food Policy Research Institute assessment (-4.2 to -6.4% K⁻¹) (ref. 4). Our study suggests that without CO₂ fertilization, effective adaptation and genetic improvement, severe rice yield losses are plausible under intensive climate warming scenarios.

  15. Using operational data to estimate the reliable yields of water-supply wells

    NASA Astrophysics Data System (ADS)

    Misstear, Bruce D. R.; Beeson, Sarah

    The reliable yield of a water-supply well depends on many different factors, including the properties of the well and the aquifer; the capacities of the pumps, raw-water mains, and treatment works; the interference effects from other wells; and the constraints imposed by abstraction licences, water quality, and environmental issues. A relatively simple methodology for estimating reliable yields has been developed that takes into account all of these factors. The methodology is based mainly on an analysis of water-level and source-output data, where such data are available. Good operational data are especially important when dealing with wells in shallow, unconfined, fissure-flow aquifers, where actual well performance may vary considerably from that predicted using a more analytical approach. Key issues in the yield-assessment process are the identification of a deepest advisable pumping water level, and the collection of the appropriate well, aquifer, and operational data. Although developed for water-supply operators in the United Kingdom, this approach to estimating the reliable yields of water-supply wells using operational data should be applicable to a wide range of hydrogeological conditions elsewhere.

  16. Precision, Reliability, and Effect Size of Slope Variance in Latent Growth Curve Models: Implications for Statistical Power Analysis

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher

    2018-01-01

    Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377

  17. Statistical complex fatigue data for SAE 4340 steel and its use in design by reliability

    NASA Technical Reports Server (NTRS)

    Kececioglu, D.; Smith, J. L.

    1970-01-01

    A brief description of the complex fatigue machines used in the test program is presented. The data generated from these machines are given and discussed. Two methods of obtaining strength distributions from the data are also discussed. Then follows a discussion of the construction of statistical fatigue diagrams and their use in designing by reliability. Finally, some of the problems encountered in the test equipment and a corrective modification are presented.

  18. High precision and high yield fabrication of dense nanoparticle arrays onto DNA origami at statistically independent binding sites

    NASA Astrophysics Data System (ADS)

    Takabayashi, Sadao; Klein, William P.; Onodera, Craig; Rapp, Blake; Flores-Estrada, Juan; Lindau, Elias; Snowball, Lejmarc; Sam, Joseph T.; Padilla, Jennifer E.; Lee, Jeunghoon; Knowlton, William B.; Graugnard, Elton; Yurke, Bernard; Kuang, Wan; Hughes, William L.

    2014-10-01

    High precision, high yield, and high density self-assembly of nanoparticles into arrays is essential for nanophotonics. Spatial deviations as small as a few nanometers can alter the properties of near-field coupled optical nanostructures. Several studies have reported assemblies of few nanoparticle structures with controlled spacing using DNA nanostructures with variable yield. Here, we report multi-tether design strategies and attachment yields for homo- and hetero-nanoparticle arrays templated by DNA origami nanotubes. Nanoparticle attachment yield via DNA hybridization is comparable with streptavidin-biotin binding. Independent of the number of binding sites, >97% site-occupation was achieved with four tethers and 99.2% site-occupation is theoretically possible with five tethers. The interparticle distance was within 2 nm of all design specifications and the nanoparticle spatial deviations decreased with interparticle spacing. Modified geometric, binomial, and trinomial distributions indicate that site-bridging, steric hindrance, and electrostatic repulsion were not dominant barriers to self-assembly and both tethers and binding sites were statistically independent at high particle densities.

  19. Assessing the Kansas water-level monitoring program: An example of the application of classical statistics to a geological problem

    USGS Publications Warehouse

    Davis, J.C.

    2000-01-01

    Geologists may feel that geological data are not amenable to statistical analysis, or at best require specialized approaches such as nonparametric statistics and geostatistics. However, there are many circumstances, particularly in systematic studies conducted for environmental or regulatory purposes, where traditional parametric statistical procedures can be beneficial. An example is the application of analysis of variance to data collected in an annual program of measuring groundwater levels in Kansas. Influences such as well conditions, operator effects, and use of the water can be assessed and wells that yield less reliable measurements can be identified. Such statistical studies have resulted in yearly improvements in the quality and reliability of the collected hydrologic data. Similar benefits may be achieved in other geological studies by the appropriate use of classical statistical tools.
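
    A small illustration of the kind of parametric analysis described above: a one-way analysis of variance testing whether measurement discrepancies differ between observers (synthetic data, not the Kansas program's).

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    # Discrepancy between repeat water-level measurements (ft) for three observers.
    observer_a = rng.normal(0.05, 0.10, 40)
    observer_b = rng.normal(0.05, 0.10, 40)
    observer_c = rng.normal(0.20, 0.10, 40)   # systematically less reliable observer

    f_stat, p_value = stats.f_oneway(observer_a, observer_b, observer_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
    ```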

  20. Satellite-based assessment of grassland yields

    NASA Astrophysics Data System (ADS)

    Grant, K.; Siegmund, R.; Wagner, M.; Hartmann, S.

    2015-04-01

    Cutting date and frequency are important parameters determining grassland yields, in addition to the effects of weather, soil conditions, plant composition and fertilisation. Because accurate and area-wide data on grassland yields are currently not available, cutting frequency can be used to estimate yields. In this project, a method to detect cutting dates via surface changes in radar images is developed. The combination of this method with a grassland yield model will result in more reliable and region-wide numbers for grassland yields. For the test phase of the monitoring project, a study area situated southeast of Munich, Germany, was chosen due to its high density of managed grassland. To determine grassland cutting, robust amplitude change detection techniques are used that evaluate radar amplitude or backscatter statistics before and after the cutting event. CosmoSkyMed and Sentinel-1A data were analysed. All detected cuts were verified against in-situ measurements recorded in a GIS database. Although the SAR systems had various acquisition geometries, the number of detected grassland cuts was quite similar. Of 154 tested grassland plots, covering in total 436 ha, 116 and 111 cuts were detected using CosmoSkyMed and Sentinel-1A radar data, respectively. Further improvement of radar data processing, as well as additional analyses with larger sample numbers and wider land-surface coverage, will follow to optimise the method and to validate and generalise the results of this feasibility study. The automation of this method will then allow for an area-wide and cost-efficient cutting-date detection service, improving grassland yield models.

  1. Using a time-series statistical framework to quantify trends and abrupt change in US corn, soybean, and wheat yields from 1970-2016

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Ives, A. R.; Turner, M. G.; Kucharik, C. J.

    2017-12-01

    Previous studies have identified global agricultural regions where "stagnation" of long-term crop yield increases has occurred. These studies have used a variety of simple statistical methods that often ignore important aspects of time series regression modeling. These methods can lead to differing and contradictory results, which creates uncertainty regarding food security given rapid global population growth. Here, we present a new statistical framework incorporating time series-based algorithms into standard regression models to quantify spatiotemporal yield trends of US maize, soybean, and winter wheat from 1970-2016. Our primary goal was to quantify spatial differences in yield trends for these three crops using USDA county-level data. This information was used to identify regions experiencing the largest changes in the rate of yield increases over time, and to determine whether abrupt shifts in the rate of yield increases have occurred. Although crop yields continue to increase in most maize-, soybean-, and winter wheat-growing areas, yield increases have stagnated in some key agricultural regions during the most recent 15 to 16 years: some maize-growing areas, except for the northern Great Plains, have shown a significant trend towards smaller annual yield increases; soybean has maintained consistent long-term yield gains in the northern Great Plains, the Midwest, and the southeast US, but has experienced a shift to smaller annual increases in other regions; winter wheat maintained a moderate annual increase in eastern South Dakota and eastern US locations, but showed a decline in the magnitude of annual increases across the central Great Plains and western US regions. Our results suggest that there were abrupt shifts in the rate of annual yield increases in a variety of US regions among the three crops. The framework presented here can be broadly applied to additional yield trend analyses for different crops and regions of the Earth.
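
    A hedged sketch of one ingredient of such a framework: a piecewise linear trend fitted to a synthetic county yield series, with the breakpoint year chosen by minimizing the residual sum of squares to flag a possible shift in the rate of yield gain. The published framework also handles autocorrelation and other time-series details not shown here.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    years = np.arange(1970, 2017)
    # Synthetic maize yield (t/ha): strong gains, then smaller gains after 2000.
    yield_t = np.where(years < 2000,
                       6.0 + 0.12 * (years - 1970),
                       9.6 + 0.04 * (years - 2000)) + rng.normal(0, 0.4, years.size)

    def rss_with_break(brk):
        # Intercept, pre-break slope, and slope change after the candidate break year.
        X = np.column_stack([np.ones(years.size),
                             years - years[0],
                             np.maximum(years - brk, 0)])
        beta, *_ = np.linalg.lstsq(X, yield_t, rcond=None)
        resid = yield_t - X @ beta
        return float(resid @ resid), beta

    candidates = years[5:-5]                         # keep a few years on each side
    best_brk = min(candidates, key=lambda b: rss_with_break(b)[0])
    _, beta = rss_with_break(best_brk)
    print(f"estimated break year: {best_brk}, "
          f"slope before {beta[1]:.3f}, after {beta[1] + beta[2]:.3f} t/ha/yr")
    ```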

  2. The effect of the labile organic fraction in food waste and the substrate/inoculum ratio on anaerobic digestion for a reliable methane yield.

    PubMed

    Kawai, Minako; Nagao, Norio; Tajima, Nobuaki; Niwa, Chiaki; Matsuyama, Tatsushi; Toda, Tatsuki

    2014-04-01

    The influence of the labile organic fraction (LOF) on anaerobic digestion of food waste was investigated at different S/I ratios of 0.33, 0.5, 1.0, 2.0 and 4.0 g-VS(substrate)/g-VS(inoculum). Two types of substrate, standard food waste (Substrate 1) and standard food waste with the supernatant (containing the LOF) removed (Substrate 2), were used. The highest methane yield of 435 ml-CH4 g-VS⁻¹ for Substrate 1 was observed at the lowest S/I ratio, while the methane yields at the other S/I ratios were 38-73% lower than the highest yield due to acidification. The methane yields for Substrate 2 were relatively stable under all S/I conditions, although the maximum methane yield was low compared with Substrate 1. These results showed that the LOF in food waste causes acidification, but also contributes to high methane yields, suggesting that a low S/I ratio (<0.33) is required to obtain a reliable methane yield from food waste compared to other organic substrates. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. An Acoustic Charge Transport Imager for High Definition Television Applications: Reliability Modeling and Parametric Yield Prediction of GaAs Multiple Quantum Well Avalanche Photodiodes. Degree awarded Oct. 1997

    NASA Technical Reports Server (NTRS)

    Hunt, W. D.; Brennan, K. F.; Summers, C. J.; Yun, Ilgu

    1994-01-01

    Reliability modeling and parametric yield prediction of GaAs/AlGaAs multiple quantum well (MQW) avalanche photodiodes (APDs), which are of interest as an ultra-low noise image capture mechanism for high definition systems, have been investigated. First, the effect of various doping methods on the reliability of GaAs/AlGaAs multiple quantum well (MQW) avalanche photodiode (APD) structures fabricated by molecular beam epitaxy is investigated. Reliability is examined by accelerated life tests by monitoring dark current and breakdown voltage. Median device lifetime and the activation energy of the degradation mechanism are computed for undoped, doped-barrier, and doped-well APD structures. Lifetimes for each device structure are examined via a statistically designed experiment. Analysis of variance shows that dark-current is affected primarily by device diameter, temperature and stressing time, and breakdown voltage depends on the diameter, stressing time and APD type. It is concluded that the undoped APD has the highest reliability, followed by the doped well and doped barrier devices, respectively. To determine the source of the degradation mechanism for each device structure, failure analysis using the electron-beam induced current method is performed. This analysis reveals some degree of device degradation caused by ionic impurities in the passivation layer, and energy-dispersive spectrometry subsequently verified the presence of ionic sodium as the primary contaminant. However, since all device structures are similarly passivated, sodium contamination alone does not account for the observed variation between the differently doped APDs. This effect is explained by the dopant migration during stressing, which is verified by free carrier concentration measurements using the capacitance-voltage technique.

  4. Economic impacts of climate change on agriculture: a comparison of process-based and statistical yield models

    NASA Astrophysics Data System (ADS)

    Moore, Frances C.; Baldos, Uris Lantz C.; Hertel, Thomas

    2017-06-01

    A large number of studies have been published examining the implications of climate change for agricultural productivity that, broadly speaking, can be divided into process-based modeling and statistical approaches. Despite a general perception that results from these methods differ substantially, there have been few direct comparisons. Here we use a database of yield impact studies compiled for the IPCC Fifth Assessment Report (Porter et al 2014) to systematically compare results from process-based and empirical studies. Controlling for differences in representation of CO2 fertilization between the two methods, we find little evidence for differences in the yield response to warming. The magnitude of CO2 fertilization is instead a much larger source of uncertainty. Based on this set of impact results, we find a very limited potential for on-farm adaptation to reduce yield impacts. We use the Global Trade Analysis Project (GTAP) global economic model to estimate welfare consequences of yield changes and find negligible welfare changes for warming of 1 °C-2 °C if CO2 fertilization is included and large negative effects on welfare without CO2. Uncertainty bounds on welfare changes are highly asymmetric, showing substantial probability of large declines in welfare for warming of 2 °C-3 °C even including the CO2 fertilization effect.

  5. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.

  6. Reliability and validity of a nutrition and physical activity environmental self-assessment for child care

    PubMed Central

    Benjamin, Sara E; Neelon, Brian; Ball, Sarah C; Bangdiwala, Shrikant I; Ammerman, Alice S; Ward, Dianne S

    2007-01-01

    Background Few assessment instruments have examined the nutrition and physical activity environments in child care, and none are self-administered. Given the emerging focus on child care settings as a target for intervention, a valid and reliable measure of the nutrition and physical activity environment is needed. Methods To measure inter-rater reliability, 59 child care center directors and 109 staff completed the self-assessment concurrently, but independently. Three weeks later, a repeat self-assessment was completed by a sub-sample of 38 directors to assess test-retest reliability. To assess criterion validity, a researcher-administered environmental assessment was conducted at 69 centers and was compared to a self-assessment completed by the director. A weighted kappa test statistic and percent agreement were calculated to assess agreement for each question on the self-assessment. Results For inter-rater reliability, kappa statistics ranged from 0.20 to 1.00 across all questions. Test-retest reliability of the self-assessment yielded kappa statistics that ranged from 0.07 to 1.00. The inter-quartile kappa statistic ranges for inter-rater and test-retest reliability were 0.45 to 0.63 and 0.27 to 0.45, respectively. When percent agreement was calculated, questions ranged from 52.6% to 100% for inter-rater reliability and 34.3% to 100% for test-retest reliability. Kappa statistics for validity ranged from -0.01 to 0.79, with an inter-quartile range of 0.08 to 0.34. Percent agreement for validity ranged from 12.9% to 93.7%. Conclusion This study provides estimates of criterion validity, inter-rater reliability and test-retest reliability for an environmental nutrition and physical activity self-assessment instrument for child care. Results indicate that the self-assessment is a stable and reasonably accurate instrument for use with child care interventions. We therefore recommend the Nutrition and Physical Activity Self-Assessment for Child Care (NAP SACC

  7. Ambiguities and conflicting results: the limitations of the kappa statistic in establishing the interrater reliability of the Irish nursing minimum data set for mental health: a discussion paper.

    PubMed

    Morris, Roisin; MacNeela, Padraig; Scott, Anne; Treacy, Pearl; Hyde, Abbey; O'Brien, Julian; Lehwaldt, Daniella; Byrne, Anne; Drennan, Jonathan

    2008-04-01

    In a study to establish the interrater reliability of the Irish Nursing Minimum Data Set (I-NMDS) for mental health, difficulties relating to the choice of reliability test statistic were encountered. The objective of this paper is to highlight the difficulties associated with testing interrater reliability for an ordinal scale using a relatively homogeneous sample and the recommended kw statistic. One pair of mental health nurses completed the I-NMDS for mental health for a total of 30 clients attending a mental health day centre over a two-week period. Data were analysed using the kw and percentage agreement statistics. A total of 34 of the 38 I-NMDS for mental health variables with lower than acceptable kw reliability scores achieved acceptable levels of reliability according to their percentage agreement scores. The study findings implied that, due to the homogeneity of the sample, low variability within the data resulted in the 'base rate problem' associated with the use of the kw statistic. Conclusions point to the interpretation of kw in tandem with percentage agreement scores. Suggestions that kw scores were low due to chance agreement and that one should strive to use a study sample with known variability are queried.
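
    A minimal sketch of the 'base rate problem' at issue: with a homogeneous sample, two raters can agree on 90% of cases yet kappa can be near zero or even negative because chance agreement is already very high. The ratings are synthetic and the sketch uses unweighted Cohen's kappa rather than the weighted kw reported in the study.

    ```python
    import numpy as np

    def cohen_kappa(r1, r2, categories):
        p_o = np.mean(r1 == r2)                              # observed agreement
        p_e = sum(np.mean(r1 == c) * np.mean(r2 == c)        # chance agreement
                  for c in categories)
        return (p_o - p_e) / (1 - p_e)

    # 30 clients on a binary item; nearly all fall into category 0 (low variability).
    rater1 = np.array([0] * 29 + [1])
    rater2 = rater1.copy()
    rater2[[0, 1, 29]] = [1, 1, 0]                           # disagreement on 3 of 30 clients

    print(f"percent agreement = {100 * np.mean(rater1 == rater2):.0f}%")
    print(f"Cohen's kappa     = {cohen_kappa(rater1, rater2, [0, 1]):.2f}")
    ```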

  8. Using statistical model to simulate the impact of climate change on maize yield with climate and crop uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Zhao, Yanxia; Wang, Chunyi; Chen, Sining

    2017-11-01

    Assessing the impact of climate change on crop production while considering uncertainties is essential for properly identifying sustainable agricultural practices and making sound decisions about them. In this study, we employed 24 climate projections, consisting of the combinations of eight GCMs and three emission scenarios, to represent climate projection uncertainty, and two statistical crop models with 100 parameter sets each to represent parameter uncertainty within the crop models. The goal of this study was to evaluate the impact of climate change on maize (Zea mays L.) yield at three locations (Benxi, Changling, and Hailun) across Northeast China (NEC) in the periods 2010-2039 and 2040-2069, taking 1976-2005 as the baseline period. The multi-model ensemble method is an effective way to deal with these uncertainties. The results of the ensemble simulations showed that maize yield reductions were less than 5 % in both future periods relative to the baseline. To further understand the contributions of individual sources of uncertainty, such as climate projections and crop model parameters, to the ensemble yield simulations, variance decomposition was performed. The results indicated that the uncertainty from climate projections was much larger than that contributed by crop model parameters. Increased ensemble yield variance revealed increasing uncertainty in the yield simulations in the future periods.
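
    A hedged sketch of a simple variance decomposition for an ensemble of yield changes simulated under several climate projections and many crop-parameter sets (synthetic numbers): total ensemble variance is split into a between-projection part and a within-projection (parameter) part, echoing the comparison of uncertainty sources described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_projections, n_param_sets = 24, 100
    # Each projection shifts the mean yield change; parameters add a smaller spread.
    projection_effect = rng.normal(-3.0, 4.0, size=n_projections)   # % yield change
    yield_change = projection_effect[:, None] + rng.normal(0, 1.5, size=(n_projections, n_param_sets))

    between = yield_change.mean(axis=1).var(ddof=0)    # variance across projection means
    within = yield_change.var(axis=1, ddof=0).mean()   # average within-projection (parameter) spread
    total = between + within                           # law of total variance (equal cell counts)

    print(f"climate-projection share of variance: {between / total:.0%}")
    print(f"crop-parameter share of variance:     {within / total:.0%}")
    ```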

  9. Experimental programme on absolute fission fragment yields with the lohengrin spectrometer: New optical and statistical methodologies

    NASA Astrophysics Data System (ADS)

    Abdelaziz, Chebboubi; Grégoire, Kessedjian; Olivier, Serot; Sylvain, Julien-Laferriere; Christophe, Sage; Florence, Martin; Olivier, Méplan; David, Bernard; Olivier, Litaize; Aurélien, Blanc; Herbert, Faust; Paolo, Mutti; Ulli, Köster; Alain, Letourneau; Thomas, Materna; Michal, Rapala

    2017-09-01

    The study of fission yields has a major impact on the characterization and understanding of the fission process and is mandatory for reactor applications. In the past, with the LOHENGRIN spectrometer of the ILL, priority has been given to studies in the light fission-fragment mass range. The LPSC, in collaboration with ILL and CEA, has developed a measurement program on symmetric and heavy-mass fission fragment distributions. The combination of measurements with an ionisation chamber and Ge detectors is necessary to describe precisely the heavy fission fragment region in mass and charge. Recently, new measurements of fission yields and kinetic energy distributions have been made on the 233U(nth,f) reaction. The focus of this work has been on the new optical and statistical methodology and on the self-normalization of the data, which provides new absolute measurements, independent of any libraries, together with the associated experimental covariance matrix.

  10. Common pitfalls in statistical analysis: Clinical versus statistical significance

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In clinical research, study results, which are statistically significant are often interpreted as being clinically important. While statistical significance indicates the reliability of the study results, clinical significance reflects its impact on clinical practice. The third article in this series exploring pitfalls in statistical analysis clarifies the importance of differentiating between statistical significance and clinical significance. PMID:26229754

  11. Test-retest reliability of high angular resolution diffusion imaging acquisition within medial temporal lobe connections assessed via tract based spatial statistics, probabilistic tractography and a novel graph theory metric.

    PubMed

    Kuhn, T; Gullett, J M; Nguyen, P; Boutzoukas, A E; Ford, A; Colon-Perez, L M; Triplett, W; Carney, P R; Mareci, T H; Price, C C; Bauer, R M

    2016-06-01

    This study examined the reliability of high angular resolution diffusion tensor imaging (HARDI) data collected on a single individual across several sessions using the same scanner. HARDI data was acquired for one healthy adult male at the same time of day on ten separate days across a one-month period. Environmental factors (e.g. temperature) were controlled across scanning sessions. Tract Based Spatial Statistics (TBSS) was used to assess session-to-session variability in measures of diffusion, fractional anisotropy (FA) and mean diffusivity (MD). To address reliability within specific structures of the medial temporal lobe (MTL; the focus of an ongoing investigation), probabilistic tractography segmented the Entorhinal cortex (ERc) based on connections with Hippocampus (HC), Perirhinal (PRc) and Parahippocampal (PHc) cortices. Streamline tractography generated edge weight (EW) metrics for the aforementioned ERc connections and, as comparison regions, connections between left and right rostral and caudal anterior cingulate cortex (ACC). Coefficients of variation (CoV) were derived for the surface area and volumes of these ERc connectivity-defined regions (CDR) and for EW across all ten scans, expecting that scan-to-scan reliability would yield low CoVs. TBSS revealed no significant variation in FA or MD across scanning sessions. Probabilistic tractography successfully reproduced histologically-verified adjacent medial temporal lobe circuits. Tractography-derived metrics displayed larger ranges of scanner-to-scanner variability. Connections involving HC displayed greater variability than metrics of connection between other investigated regions. By confirming the test retest reliability of HARDI data acquisition, support for the validity of significant results derived from diffusion data can be obtained.
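
    A short illustration of the scan-to-scan coefficient of variation (CoV) used above to summarize test-retest stability of a tractography metric across ten sessions; the edge-weight values are synthetic.

    ```python
    import numpy as np

    # One connection's edge weight measured in each of ten scanning sessions (synthetic).
    edge_weight = np.array([0.121, 0.118, 0.125, 0.119, 0.123,
                            0.117, 0.124, 0.120, 0.122, 0.126])

    cov = edge_weight.std(ddof=1) / edge_weight.mean()
    print(f"CoV = {100 * cov:.1f}%")   # lower values indicate better scan-to-scan reliability
    ```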

  12. AMMI adjustment for statistical analysis of an international wheat yield trial.

    PubMed

    Crossa, J; Fox, P N; Pfeiffer, W H; Rajaram, S; Gauch, H G

    1991-01-01

    Multilocation trials are important for the CIMMYT Bread Wheat Program in producing high-yielding, adapted lines for a wide range of environments. This study investigated procedures for improving predictive success of a yield trial, grouping environments and genotypes into homogeneous subsets, and determining the yield stability of 18 CIMMYT bread wheats evaluated at 25 locations. Additive Main effects and Multiplicative Interaction (AMMI) analysis gave more precise estimates of genotypic yields within locations than means across replicates. This precision facilitated formation by cluster analysis of more cohesive groups of genotypes and locations for biological interpretation of interactions than occurred with unadjusted means. Locations were clustered into two subsets for which genotypes with positive interactions manifested in high, stable yields were identified. The analyses highlighted superior selections with both broad and specific adaptation.

  13. Interrelation Between Safety Factors and Reliability

    NASA Technical Reports Server (NTRS)

    Elishakoff, Isaac; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    An evaluation was performed to establish relationships between safety factors and reliability. Results obtained show that the use of the safety factor is not contradictory to the employment of the probabilistic methods. In many cases the safety factors can be directly expressed by the required reliability levels. However, there is a major difference that must be emphasized: whereas the safety factors are allocated in an ad hoc manner, the probabilistic approach offers a unified mathematical framework. The establishment of the interrelation between the concepts opens an avenue to specify safety factors based on reliability. In cases where there are several failure modes, the allocation of safety factors should be based on having the same reliability associated with each failure mode. This immediately suggests that probabilistic methods can eliminate existing over-design or under-design. The report includes three parts: Part 1-Random Actual Stress and Deterministic Yield Stress; Part 2-Deterministic Actual Stress and Random Yield Stress; Part 3-Both Actual Stress and Yield Stress Are Random.

  14. Evaluating the capabilities of watershed-scale models in estimating sediment yield at field-scale.

    PubMed

    Sommerlot, Andrew R; Nejadhashemi, A Pouyan; Woznicki, Sean A; Giri, Subhasis; Prohaska, Michael D

    2013-09-30

    Many watershed model interfaces have been developed in recent years for predicting field-scale sediment loads. They share the goal of providing data for decisions aimed at improving watershed health and the effectiveness of water quality conservation efforts. The objectives of this study were to: 1) compare three watershed-scale models (Soil and Water Assessment Tool (SWAT), Field_SWAT, and the High Impact Targeting (HIT) model) against a calibrated field-scale model (RUSLE2) in estimating sediment yield from 41 randomly selected agricultural fields within the River Raisin watershed; 2) evaluate the statistical significance among models; 3) assess the watershed models' capabilities in identifying areas of concern at the field level; 4) evaluate the reliability of the watershed-scale models for field-scale analysis. The SWAT model produced the most similar estimates to RUSLE2 by providing the closest median and the lowest absolute error in sediment yield predictions, while the HIT model estimates were the worst. Concerning statistically significant differences between models, SWAT was the only model found to be not significantly different from the calibrated RUSLE2 at α = 0.05. Meanwhile, all models were incapable of identifying priority areas similar to those of the RUSLE2 model. Overall, SWAT provided the most correct estimates (51%) within the uncertainty bounds of RUSLE2 and is the most reliable among the studied models, while HIT is the least reliable. The results of this study suggest that caution should be exercised when using watershed-scale models for field-level decision-making, and that field-specific data are of paramount importance. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Normal yield tables for red alder.

    Treesearch

    Norman P. Worthington; Floyd A. Johnson; George R. Staebler; William J. Lloyd

    1960-01-01

    Increasing interest in the management of red alder (Alnus rubra) has created a need for reliable yield information. Existing yield tables for red alder have been very useful as interim sources of information, but they are generally inadequate for current and prospective management needs. The advisory committee for the Station's Olympia...

  16. System statistical reliability model and analysis

    NASA Technical Reports Server (NTRS)

    Lekach, V. S.; Rood, H.

    1973-01-01

    A digital computer code was developed to simulate the time-dependent behavior of the 5-kwe reactor thermoelectric system. The code was used to determine lifetime sensitivity coefficients for a number of system design parameters, such as thermoelectric module efficiency and degradation rate, radiator absorptivity and emissivity, fuel element barrier defect constant, beginning-of-life reactivity, etc. A probability distribution (mean and standard deviation) was estimated for each of these design parameters. Then, error analysis was used to obtain a probability distribution for the system lifetime (mean = 7.7 years, standard deviation = 1.1 years). From this, the probability that the system will achieve the design goal of 5 years lifetime is 0.993. This value represents an estimate of the degradation reliability of the system.
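
    The quoted 0.993 can be reproduced with a one-line calculation if, as the abstract implies, the system lifetime is treated as normally distributed with the stated mean and standard deviation; the normality assumption is ours, not spelled out in the record.

```python
# Probability of meeting the 5-year design goal for a normal lifetime distribution.
from scipy.stats import norm

mean_life, sd_life, design_goal = 7.7, 1.1, 5.0            # years
print(norm.sf(design_goal, loc=mean_life, scale=sd_life))  # ~0.993
```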

  17. Reliability of Autism-Tics, AD/HD, and other Comorbidities (A-TAC) inventory in a test-retest design.

    PubMed

    Larson, Tomas; Kerekes, Nóra; Selinus, Eva Norén; Lichtenstein, Paul; Gumpert, Clara Hellner; Anckarsäter, Henrik; Nilsson, Thomas; Lundström, Sebastian

    2014-02-01

    The Autism-Tics, AD/HD, and other Comorbidities (A-TAC) inventory is used in epidemiological research to assess neurodevelopmental problems and coexisting conditions. Although the A-TAC has been applied in various populations, data on retest reliability are limited. The objective of the present study was to present additional reliability data. The A-TAC was administered by lay assessors and was completed on two occasions by parents of 400 individual twins, with an average interval of 70 days between test sessions. Intra- and inter-rater reliability were analysed with intraclass correlations and Cohen's kappa. A-TAC showed excellent test-retest intraclass correlations for both autism spectrum disorder and attention deficit hyperactivity disorder (each at .84). Most modules in the A-TAC had intra- and inter-rater reliability intraclass correlation coefficients of .60 or higher. Cohen's kappa indicated acceptable reliability. The current study provides statistical evidence that the A-TAC yields good test-retest reliability in a population-based cohort of children.

  18. Assessment of Reliable Change Using 95% Credible Intervals for the Differences in Proportions: A Statistical Analysis for Case-Study Methodology.

    PubMed

    Unicomb, Rachael; Colyvas, Kim; Harrison, Elisabeth; Hewat, Sally

    2015-06-01

    Case-study methodology studying change is often used in the field of speech-language pathology, but it can be criticized for not being statistically robust. Yet with the heterogeneous nature of many communication disorders, case studies allow clinicians and researchers to closely observe and report on change. Such information is valuable and can further inform large-scale experimental designs. In this research note, a statistical analysis for case-study data is outlined that employs a modification to the Reliable Change Index (Jacobson & Truax, 1991). The relationship between reliable change and clinical significance is discussed. Example data are used to guide the reader through the use and application of this analysis. A method of analysis is detailed that is suitable for assessing change in measures with binary categorical outcomes. The analysis is illustrated using data from one individual, measured before and after treatment for stuttering. The application of this approach to assess change in categorical, binary data has potential application in speech-language pathology. It enables clinicians and researchers to analyze results from case studies for their statistical and clinical significance. This new method addresses a gap in the research design literature, that is, the lack of analysis methods for noncontinuous data (such as counts, rates, proportions of events) that may be used in case-study designs.
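
    For context, the sketch below shows the standard Jacobson and Truax (1991) Reliable Change Index for a continuous measure, which is the starting point the article modifies for binary outcomes; the pre/post scores, baseline SD, and reliability value are hypothetical, and the binary-data modification itself is not reproduced here.

```python
# Classical Reliable Change Index for a continuous measure. All numbers are invented.
import math

def reliable_change_index(pre, post, sd_baseline, reliability):
    se_measure = sd_baseline * math.sqrt(1.0 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2.0) * se_measure                      # SE of the difference score
    return (post - pre) / s_diff

rci = reliable_change_index(pre=42.0, post=30.0, sd_baseline=10.0, reliability=0.85)
print(rci, abs(rci) > 1.96)   # |RCI| > 1.96 -> change unlikely to be measurement error alone
```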

  19. Exponential order statistic models of software reliability growth

    NASA Technical Reports Server (NTRS)

    Miller, D. R.

    1985-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinsky-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples as well. Various characterizations, properties and examples of this class of models are developed and presented.
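
    A small simulation makes the model concrete: each latent fault gets an independent exponential detection time, and the observed failure times are simply those times in sorted order. The per-fault rates below are arbitrary; with equal rates the simulation reduces to the Jelinsky-Moranda special case named above.

```python
# Simulating failure times as order statistics of independent exponentials.
import numpy as np

rng = np.random.default_rng(42)
fault_rates = np.linspace(0.05, 0.5, 30)          # per-fault detection rates (assumed)
detection_times = rng.exponential(1.0 / fault_rates)
failure_times = np.sort(detection_times)          # observed reliability-growth failure times

print(failure_times[:5])                          # earliest observed failures
```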

  20. Nanowire growth process modeling and reliability models for nanodevices

    NASA Astrophysics Data System (ADS)

    Fathi Aghdam, Faranak

    Nowadays, nanotechnology is becoming an inescapable part of everyday life. The big barrier to its rapid growth is our inability to produce nanoscale materials in a reliable and cost-effective way. In fact, the current yield of nano-devices is very low (around 10%), which makes fabrication of nano-devices very expensive and uncertain. To overcome this challenge, the first and most important step is to investigate how to control nano-structure synthesis variations. The main directions of reliability research in nanotechnology can be classified either from a material perspective or from a device perspective. The first direction focuses on restructuring materials and/or optimizing process conditions at the nano-level (nanomaterials). The other direction is linked to nano-devices and includes the creation of nano-electronic and electro-mechanical systems at nano-level architectures by taking into account the reliability of future products. In this dissertation, we have investigated two topics on both nano-materials and nano-devices. In the first research work, we studied the optimization of one of the most important nanowire growth processes using statistical methods. Research on nanowire growth with patterned arrays of catalyst has shown that the wire-to-wire spacing is an important factor affecting the quality of resulting nanowires. To improve the process yield and the length uniformity of fabricated nanowires, it is important to reduce the resource competition between nanowires during the growth process. We have proposed a physical-statistical nanowire-interaction model considering the shadowing effect and shared substrate diffusion area to determine the optimal pitch that would ensure the minimum competition between nanowires. A sigmoid function is used in the model, and the least squares estimation method is used to estimate the model parameters. The estimated model is then used to determine the optimal spatial arrangement of catalyst arrays
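
    The sigmoid-plus-least-squares step can be sketched as follows; the pitch values, the "quality" response, and the parameter names are invented stand-ins for whatever the dissertation actually measures, so this only illustrates the fitting mechanics.

```python
# Fitting a sigmoid by least squares, in the spirit of the physical-statistical
# nanowire-interaction model described above. Data and parameter names are made up.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(pitch, lower, upper, midpoint, slope):
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (pitch - midpoint)))

pitch = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6])            # catalyst spacing, um
quality = np.array([0.12, 0.18, 0.35, 0.60, 0.78, 0.88, 0.93, 0.95])  # e.g. length uniformity

params, _ = curve_fit(sigmoid, pitch, quality, p0=[0.1, 1.0, 0.8, 5.0])
print(dict(zip(["lower", "upper", "midpoint", "slope"], np.round(params, 3))))
```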

  1. Assuring reliability program effectiveness.

    NASA Technical Reports Server (NTRS)

    Ball, L. W.

    1973-01-01

    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  2. Brazilian Soybean Yields and Yield Gaps Vary with Farm Size

    NASA Astrophysics Data System (ADS)

    Jeffries, G. R.; Cohn, A.; Griffin, T. S.; Bragança, A.

    2017-12-01

    Understanding the farm size-specific characteristics of crop yields and yield gaps may help to improve yields by enabling better targeting of technical assistance and agricultural development programs. Linking remote sensing-based yield estimates with property boundaries provides a novel view of the relationship between farm size and yield structure (yield magnitude, gaps, and stability over time). A growing literature documents variations in yield gaps, but largely ignores the role of farm size as a factor shaping yield structure. Research on the inverse farm size-productivity relationship (IR) theory - that small farms are more productive than large ones all else equal - has documented that yield magnitude may vary by farm size, but has not considered other yield structure characteristics. We examined farm size - yield structure relationships for soybeans in Brazil for years 2001-2015. Using out-of-sample soybean yield predictions from a statistical model, we documented 1) gaps between the 95th percentile of attained yields and mean yields within counties and individual fields, and 2) yield stability defined as the standard deviation of time-detrended yields at given locations. We found a direct relationship between soy yields and farm size at the national level, while the strength and the sign of the relationship varied by region. Soybean yield gaps were found to be inversely related to farm size metrics, even when yields were only compared to farms of similar size. The relationship between farm size and yield stability was nonlinear, with mid-sized farms having the most stable yields. The work suggests that farm size is an important factor in understanding yield structure and that opportunities for improving soy yields in Brazil are greatest among smaller farms.
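
    The two yield-structure metrics described above are straightforward to compute once yield series are in hand; the sketch below uses synthetic yields, a simple linear detrend, and an invented county distribution purely to show the calculations.

```python
# Illustrative computation of the two yield-structure metrics used above:
# (1) yield gap = 95th-percentile attained yield minus mean yield within a county,
# (2) yield stability = standard deviation of time-detrended yields at a location.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2001, 2016)
yields = 2.6 + 0.03 * (years - 2001) + rng.normal(0, 0.25, size=years.size)  # t/ha, synthetic

county_yields = rng.normal(2.9, 0.4, size=500)           # field-level yields in one county
yield_gap = np.percentile(county_yields, 95) - county_yields.mean()

trend = np.polyval(np.polyfit(years, yields, 1), years)  # linear detrending
stability = np.std(yields - trend)

print(f"yield gap = {yield_gap:.2f} t/ha, stability (sd of detrended yields) = {stability:.2f}")
```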

  3. Multivariate Statistical Models for Predicting Sediment Yields from Southern California Watersheds

    USGS Publications Warehouse

    Gartner, Joseph E.; Cannon, Susan H.; Helsel, Dennis R.; Bandurraga, Mark

    2009-01-01

    Debris-retention basins in Southern California are frequently used to protect communities and infrastructure from the hazards of flooding and debris flow. Empirical models that predict sediment yields are used to determine the size of the basins. Such models have been developed using analyses of records of the amount of material removed from debris retention basins, associated rainfall amounts, measures of watershed characteristics, and wildfire extent and history. In this study we used multiple linear regression methods to develop two updated empirical models to predict sediment yields for watersheds located in Southern California. The models are based on both new and existing measures of volume of sediment removed from debris retention basins, measures of watershed morphology, and characterization of burn severity distributions for watersheds located in Ventura, Los Angeles, and San Bernardino Counties. The first model presented reflects conditions in watersheds located throughout the Transverse Ranges of Southern California and is based on volumes of sediment measured following single storm events with known rainfall conditions. The second model presented is specific to conditions in Ventura County watersheds and was developed using volumes of sediment measured following multiple storm events. To relate sediment volumes to triggering storm rainfall, a rainfall threshold was developed to identify storms likely to have caused sediment deposition. A measured volume of sediment deposited by numerous storms was parsed among the threshold-exceeding storms based on relative storm rainfall totals. The predictive strength of the two models developed here, and of previously-published models, was evaluated using a test dataset consisting of 65 volumes of sediment yields measured in Southern California. The evaluation indicated that the model developed using information from single storm events in the Transverse Ranges best predicted sediment yields for watersheds in San

  4. Grapevine canopy reflectance and yield

    NASA Technical Reports Server (NTRS)

    Minden, K. A.; Philipson, W. R.

    1982-01-01

    Field spectroradiometric and airborne multispectral scanner data were applied in a study of Concord grapevines. Spectroradiometric measurements of 18 experimental vines were collected on three dates during one growing season. Spectral reflectance, determined at 30 intervals from 0.4 to 1.1 microns, was correlated with vine yield, pruning weight, clusters/vine, and nitrogen input. One date of airborne multispectral scanner data (11 channels) was collected over commercial vineyards, and the average radiance values for eight vineyard sections were correlated with the corresponding average yields. Although some correlations were significant, they were inadequate for developing a reliable yield prediction model.

  5. A reliability study on brain activation during active and passive arm movements supported by an MRI-compatible robot.

    PubMed

    Estévez, Natalia; Yu, Ningbo; Brügger, Mike; Villiger, Michael; Hepp-Reymond, Marie-Claude; Riener, Robert; Kollias, Spyros

    2014-11-01

    In neurorehabilitation, longitudinal assessment of arm movement related brain function in patients with motor disability is challenging due to variability in task performance. MRI-compatible robots monitor and control task performance, yielding more reliable evaluation of brain function over time. The main goals of the present study were first to define the brain network activated while performing active and passive elbow movements with an MRI-compatible arm robot (MaRIA) in healthy subjects, and second to test the reproducibility of this activation over time. For the fMRI analysis two models were compared. In model 1 movement onset and duration were included, whereas in model 2 force and range of motion were added to the analysis. Reliability of brain activation was tested with several statistical approaches applied on individual and group activation maps and on summary statistics. The activated network included mainly the primary motor cortex, primary and secondary somatosensory cortex, superior and inferior parietal cortex, medial and lateral premotor regions, and subcortical structures. Reliability analyses revealed robust activation for active movements with both fMRI models and all the statistical methods used. Imposed passive movements also elicited mainly robust brain activation for individual and group activation maps, and reliability was improved by including additional force and range of motion using model 2. These findings demonstrate that the use of robotic devices, such as MaRIA, can be useful to reliably assess arm movement related brain activation in longitudinal studies and may contribute in studies evaluating therapies and brain plasticity following injury in the nervous system.

  6. A standard for test reliability in group research.

    PubMed

    Ellis, Jules L

    2013-03-01

    Many authors adhere to the rule that test reliabilities should be at least .70 or .80 in group research. This article introduces a new standard according to which reliabilities can be evaluated. This standard is based on the costs or time of the experiment and of administering the test. For example, if test administration costs are 7 % of the total experimental costs, the efficient value of the reliability is .93. If the actual reliability of a test is equal to this efficient reliability, the test size maximizes the statistical power of the experiment, given the costs. As a standard in experimental research, it is proposed that the reliability of the dependent variable be close to the efficient reliability. Adhering to this standard will enhance the statistical power and reduce the costs of experiments.

  7. Evaluation of the CEAS model for barley yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1981-01-01

    The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and that performance, as indicated by the root mean square errors, is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.

  8. LACIE: Wheat yield models for the USSR

    NASA Technical Reports Server (NTRS)

    Sakamoto, C. M.; Leduc, S. K.

    1977-01-01

    A quantitative model determining the relationship between weather conditions and wheat yield in the U.S.S.R. was studied to provide early, reliable forecasts of the size of the U.S.S.R. wheat harvest. Separate models are developed for spring wheat and for winter wheat. Differences in yield potential and responses to stress conditions and cultural improvements necessitate models for each class.

  9. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values are identified. The reliability is computed as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
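
    The stress-strength computation, and its sensitivity to the assumed distributions, can be illustrated as follows; the stress and strength parameters are invented, and the lognormal alternative is just one example of a distribution that is nearly indistinguishable from the assumed normal.

```python
# Stress-strength reliability R = P(strength > stress). For normal stress and strength
# a closed form exists; a Monte Carlo check with a moment-matched lognormal strength
# illustrates the sensitivity the abstract warns about. Numbers are invented.
import numpy as np
from scipy.stats import norm

mu_s, sd_s = 300.0, 30.0      # stress (MPa)
mu_S, sd_S = 450.0, 35.0      # strength (MPa)

r_normal = norm.cdf((mu_S - mu_s) / np.hypot(sd_S, sd_s))

rng = np.random.default_rng(7)
n = 1_000_000
stress = rng.normal(mu_s, sd_s, n)
sigma_ln = np.sqrt(np.log(1 + (sd_S / mu_S) ** 2))                      # same mean and sd
strength = rng.lognormal(np.log(mu_S) - sigma_ln**2 / 2, sigma_ln, n)

print(f"R (both normal)        = {r_normal:.5f}")
print(f"R (lognormal strength) = {(strength > stress).mean():.5f}")
```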

  10. Assessment of NDE reliability data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.

    1975-01-01

    Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
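
    The binomial detection-probability calculation can be sketched with a one-sided Clopper-Pearson lower bound; the 28-hits-in-29-trials example and the 95% confidence level are hypothetical, not taken from the report.

```python
# One-sided lower confidence bound on probability of detection (POD) from
# hit/miss data using the binomial (Clopper-Pearson) relation.
from scipy.stats import beta

def pod_lower_bound(hits, trials, confidence=0.95):
    if hits == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, hits, trials - hits + 1)

print(f"POD >= {pod_lower_bound(28, 29):.3f} at 95% confidence")   # ~0.85
```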

  11. The reliability of cause-of-death coding in The Netherlands.

    PubMed

    Harteloh, Peter; de Bruin, Kim; Kardaun, Jan

    2010-08-01

    Cause-of-death statistics are a major source of information for epidemiological research or policy decisions. Information on the reliability of these statistics is important for interpreting trends in time or differences between populations. Variations in coding the underlying cause of death could hinder the attribution of observed differences to determinants of health. Therefore we studied the reliability of cause-of-death statistics in The Netherlands. We performed a double coding study. Death certificates from the month of May 2005 were coded again in 2007. Each death certificate was coded manually by four coders. Reliability was measured by calculating agreement between coders (intercoder agreement) and by calculating the consistency of each individual coder over time (intracoder agreement). Our analysis covered 10,833 death certificates. The intercoder agreement of four coders on the underlying cause of death was 78%. In 2.2% of the cases coders agreed on a change of the code assigned in 2005. The (mean) intracoder agreement of four coders was 89%. Agreement was associated with the specificity of the ICD-10 code (chapter, three digits, four digits), the age of the deceased, the number of coders and the number of diseases reported on the death certificate. The reliability of cause-of-death statistics turned out to be high (>90%) for major causes of death such as cancers and acute myocardial infarction. For chronic diseases, such as diabetes and renal insufficiency, reliability was low (<70%). The reliability of cause-of-death statistics varies by ICD-10 code/chapter. A statistical office should provide coders with (additional) rules for coding diseases with a low reliability and evaluate these rules regularly. Users of cause-of-death statistics should exercise caution when interpreting causes of death with a low reliability. Studies of reliability should take into account the number of coders involved and the number of codes on a death certificate.

  12. Patients and Medical Statistics

    PubMed Central

    Woloshin, Steven; Schwartz, Lisa M; Welch, H Gilbert

    2005-01-01

    BACKGROUND People are increasingly presented with medical statistics. There are no existing measures to assess their level of interest or confidence in using medical statistics. OBJECTIVE To develop 2 new measures, the STAT-interest and STAT-confidence scales, and assess their reliability and validity. DESIGN Survey with retest after approximately 2 weeks. SUBJECTS Two hundred and twenty-four people were recruited from advertisements in local newspapers, an outpatient clinic waiting area, and a hospital open house. MEASURES We developed and revised 5 items on interest in medical statistics and 3 on confidence understanding statistics. RESULTS Study participants were mostly college graduates (52%); 25% had a high school education or less. The mean age was 53 (range 20 to 84) years. Most paid attention to medical statistics (6% paid no attention). The mean (SD) STAT-interest score was 68 (17) and ranged from 15 to 100. Confidence in using statistics was also high: the mean (SD) STAT-confidence score was 65 (19) and ranged from 11 to 100. STAT-interest and STAT-confidence scores were moderately correlated (r=.36, P<.001). Both scales demonstrated good test–retest repeatability (r=.60, .62, respectively), internal consistency reliability (Cronbach's α=0.70 and 0.78), and usability (individual item nonresponse ranged from 0% to 1.3%). Scale scores correlated only weakly with scores on a medical data interpretation test (r=.15 and .26, respectively). CONCLUSION The STAT-interest and STAT-confidence scales are usable and reliable. Interest and confidence were only weakly related to the ability to actually use data. PMID:16307623

  13. Improving precision of forage yield trials: A case study

    USDA-ARS?s Scientific Manuscript database

    Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to several facto...

  14. Statistical Bayesian method for reliability evaluation based on ADT data

    NASA Astrophysics Data System (ADS)

    Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong

    2018-05-01

    Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict products' reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data; the latter is the more popular. However, limitations remain, such as an imprecise solution process and imprecise estimation of the degradation rate, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the existing Bayesian solution to this problem loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen; second, the initial values of the degradation model and the parameters of the prior and posterior distributions at each stress level are calculated by updating and iterating the estimates; third, the lifetime and reliability values are estimated on the basis of the estimated parameters; finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
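
    A stripped-down version of the Wiener-process workflow looks like the sketch below; it is not the paper's Bayesian updating scheme, and the increments, acceleration factor, and failure threshold are all assumed values used only to show the drift/diffusion estimation and the first-passage reliability formula.

```python
# Minimal Wiener-process ADT sketch: estimate drift/diffusion from accelerated
# degradation increments, then compute reliability at a use condition via the
# first-passage (inverse Gaussian) formula. All data are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
dt = 10.0                                    # hours between measurements
true_drift, sigma = 0.08, 0.15               # at the accelerated stress level
increments = rng.normal(true_drift * dt, sigma * np.sqrt(dt), size=200)

drift_hat = increments.mean() / dt           # ML estimates for a Wiener process
sigma_hat = increments.std(ddof=1) / np.sqrt(dt)

accel_factor = 20.0                          # assumed acceleration model output
drift_use = drift_hat / accel_factor         # drift at normal operating conditions

def wiener_reliability(t, threshold, mu, sig):
    """P(first passage to threshold > t) for a Wiener process with positive drift."""
    a = (mu * t - threshold) / (sig * np.sqrt(t))
    b = -(mu * t + threshold) / (sig * np.sqrt(t))
    return 1.0 - (norm.cdf(a) + np.exp(2 * mu * threshold / sig**2) * norm.cdf(b))

print(wiener_reliability(t=5000.0, threshold=30.0, mu=drift_use, sig=sigma_hat))
```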

  15. Fusion yield: Guderley model and Tsallis statistics

    NASA Astrophysics Data System (ADS)

    Haubold, H. J.; Kumar, D.

    2011-02-01

    The reaction rate probability integral is extended from the Maxwell-Boltzmann approach to a more general approach by using the pathway model introduced by Mathai in 2005 (A pathway to matrix-variate gamma and normal densities. Linear Algebr. Appl. 396, 317-328). The extended thermonuclear reaction rate is obtained in closed form via a Meijer G-function, and the resulting G-function is represented as a solution of a homogeneous linear differential equation. A physical model for the hydrodynamical process in a fusion plasma-compressed and laser-driven spherical shock wave is used for evaluating the fusion energy integral by integrating the extended thermonuclear reaction rate integral over the temperature. The result obtained is compared with the standard fusion yield obtained by Haubold and John in 1981 (Analytical representation of the thermonuclear reaction rate and fusion energy production in a spherical plasma shock wave. Plasma Phys. 23, 399-411). An interpretation for the pathway parameter is also given.

  16. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric method?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.

  17. Evaluation of Solid Rocket Motor Component Data Using a Commercially Available Statistical Software Package

    NASA Technical Reports Server (NTRS)

    Stefanski, Philip L.

    2015-01-01

    Commercially available software packages today allow users to quickly perform the routine evaluations of (1) descriptive statistics to numerically and graphically summarize both sample and population data, (2) inferential statistics that draws conclusions about a given population from samples taken of it, (3) probability determinations that can be used to generate estimates of reliability allowables, and finally (4) the setup of designed experiments and analysis of their data to identify significant material and process characteristics for application in both product manufacturing and performance enhancement. This paper presents examples of analysis and experimental design work that has been conducted using Statgraphics® statistical software to obtain useful information with regard to solid rocket motor propellants and internal insulation material. Data were obtained from a number of programs (Shuttle, Constellation, and Space Launch System) and sources that include solid propellant burn rate strands, tensile specimens, sub-scale test motors, full-scale operational motors, rubber insulation specimens, and sub-scale rubber insulation analog samples. Besides facilitating the experimental design process to yield meaningful results, statistical software has demonstrated its ability to quickly perform complex data analyses and yield significant findings that might otherwise have gone unnoticed. One caveat to these successes is that useful results not only derive from the inherent power of the software package, but also from the skill and understanding of the data analyst.

  18. Uncertainty quantification and reliability assessment in operational oil spill forecast modeling system.

    PubMed

    Hou, Xianlong; Hodges, Ben R; Feng, Dongyu; Liu, Qixiao

    2017-03-15

    As oil transport increases in the Texas bays, greater risk of ship collisions becomes a challenge, with oil spill accidents as a possible consequence. To minimize the ecological damage and optimize rapid response, emergency managers need to be informed of how fast and where oil will spread as soon as possible after a spill. State-of-the-art operational oil spill forecast modeling systems have moved oil spill response into a new stage. However, uncertainty due to predicted data inputs often compromises the reliability of the forecast result, leading to misdirection in contingency planning. Thus, understanding forecast uncertainty and reliability becomes significant. In this paper, Monte Carlo simulation is implemented to provide parameters to generate forecast probability maps. The oil spill forecast uncertainty is thus quantified by comparing the forecast probability map and the associated hindcast simulation. A HyosPy-based simple statistic model is developed to assess the reliability of an oil spill forecast in terms of belief degree. The technologies developed in this study create a prototype for uncertainty and reliability analysis in numerical oil spill forecast modeling systems, helping emergency managers improve the capability of real-time operational oil spill response and impact assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
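
    The Monte Carlo probability-map idea can be illustrated with a toy particle-advection ensemble; the forcing perturbations, grid, and release point below are invented and bear no relation to the HyosPy system or the Texas bays data.

```python
# Toy Monte Carlo forecast-probability map: each ensemble member advects an oil
# particle cloud under perturbed forcing, and the map records the fraction of
# members reaching each grid cell. Forcing values and grid are entirely invented.
import numpy as np

rng = np.random.default_rng(11)
n_members, n_particles, n_steps, dt = 200, 300, 48, 0.5          # 24 h at 30-min steps
grid = np.zeros((50, 50))                                        # 50 x 50 cells, 1 km each

for _ in range(n_members):
    u = 0.8 + rng.normal(0, 0.2)                                 # perturbed drift (km/h)
    v = 0.3 + rng.normal(0, 0.2)
    xy = np.full((n_particles, 2), 5.0)                          # release point (km)
    for _ in range(n_steps):
        xy += np.array([u, v]) * dt + rng.normal(0, 0.1, xy.shape)   # advection + diffusion
    ix = np.clip(xy.astype(int), 0, 49)
    hit = np.zeros_like(grid, dtype=bool)
    hit[ix[:, 0], ix[:, 1]] = True                               # cells touched by this member
    grid += hit

probability_map = grid / n_members                               # P(oil reaches cell within 24 h)
print(probability_map.max())
```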

  19. Determining Functional Reliability of Pyrotechnic Mechanical Devices

    NASA Technical Reports Server (NTRS)

    Bement, Laurence J.; Multhaup, Herbert A.

    1997-01-01

    This paper describes a new approach for evaluating mechanical performance and predicting the mechanical functional reliability of pyrotechnic devices. Not included are other possible failure modes, such as the initiation of the pyrotechnic energy source. The generally accepted go/no-go statistical approach, which requires hundreds or thousands of consecutive, successful tests on identical components for reliability predictions, routinely ignores the physics of failure. The approach described in this paper begins with measuring, understanding and controlling mechanical performance variables. Then, the energy required to accomplish the function is compared to that delivered by the pyrotechnic energy source to determine mechanical functional margin. Finally, the data collected in establishing functional margin is analyzed to predict mechanical functional reliability, using small-sample statistics. A careful application of this approach can provide considerable cost improvements and understanding over that of go/no-go statistics. Performance and the effects of variables can be defined, and reliability predictions can be made by evaluating 20 or fewer units. The application of this approach to a pin puller used on a successful NASA mission is provided as an example.

  20. Reliability and validity of the McDonald Play Inventory.

    PubMed

    McDonald, Ann E; Vigen, Cheryl

    2012-01-01

    This study examined the ability of a two-part self-report instrument, the McDonald Play Inventory, to reliably and validly measure the play activities and play styles of 7- to 11-yr-old children and to discriminate between the play of neurotypical children and children with known learning and developmental disabilities. A total of 124 children ages 7-11, recruited from a convenience sample, and a subsample of 17 parents participated in this study. Reliability estimates yielded moderate correlations for internal consistency, total test intercorrelations, and test-retest reliability. Validity estimates were established for content and construct validity. The results suggest that a self-report instrument yields reliable and valid measures of a child's perceived play performance and discriminates between the play of children with and without disabilities. Copyright © 2012 by the American Occupational Therapy Association, Inc.

  1. A hierarchical spatial model for well yield in complex aquifers

    NASA Astrophysics Data System (ADS)

    Montgomery, J.; O'sullivan, F.

    2017-12-01

    Efficiently siting and managing groundwater wells requires reliable estimates of the amount of water that can be produced, or the well yield. This can be challenging to predict in highly complex, heterogeneous fractured aquifers due to the uncertainty around local hydraulic properties. Promising statistical approaches have been advanced in recent years. For instance, kriging and multivariate regression analysis have been applied to well test data with limited but encouraging levels of prediction accuracy. Additionally, some analytical solutions to diffusion in homogeneous porous media have been used to infer "effective" properties consistent with observed flow rates or drawdown. However, this is an under-specified inverse problem with substantial and irreducible uncertainty. We describe a flexible machine learning approach capable of combining diverse datasets with constraining physical and geostatistical models for improved well yield prediction accuracy and uncertainty quantification. Our approach can be implemented within a hierarchical Bayesian framework using Markov Chain Monte Carlo, which allows for additional sources of information to be incorporated in priors to further constrain and improve predictions and reduce the model order. We demonstrate the usefulness of this approach using data from over 7,000 wells in a fractured bedrock aquifer.
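
    A minimal version of the hierarchical Bayesian idea, with partial pooling of log well yields across regions and a simple Gibbs-type MCMC sampler, might look like the sketch below; the synthetic data, the fixed variance values, and the region structure are all assumptions, not the 7,000-well dataset or the authors' model.

```python
# Toy hierarchical Bayesian sketch for log well yield with partial pooling across
# regions, sampled by Gibbs updates (a simple MCMC scheme). Within-well and
# between-region variances are fixed here for brevity.
import numpy as np

rng = np.random.default_rng(5)
n_regions, wells_per_region = 8, 40
true_region_means = rng.normal(1.0, 0.6, n_regions)               # log(yield) by region
y = true_region_means[:, None] + rng.normal(0, 0.8, (n_regions, wells_per_region))

sigma2, tau2 = 0.8**2, 0.6**2                                     # assumed known variances
theta = y.mean(axis=1).copy()                                     # region effects
mu = theta.mean()                                                 # global mean
samples = []

for it in range(4000):
    prec = wells_per_region / sigma2 + 1.0 / tau2                 # theta_i | y, mu
    mean = (y.sum(axis=1) / sigma2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / n_regions))      # mu | theta (flat prior)
    if it >= 1000:                                                # discard burn-in
        samples.append(theta.copy())

posterior_mean = np.mean(samples, axis=0)                         # shrunk region estimates
print(np.round(posterior_mean, 2))
```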

  2. Application of a truncated normal failure distribution in reliability testing

    NASA Technical Reports Server (NTRS)

    Groves, C., Jr.

    1968-01-01

    A statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
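
    Using a normal distribution truncated at zero as the time-to-failure law, reliability at any age is simply the upper tail of that distribution; the mean and spread below are illustrative only.

```python
# Reliability from a normal time-to-failure distribution truncated at zero.
from scipy.stats import truncnorm

mu, sd = 1000.0, 400.0                        # hours (hypothetical)
a, b = (0.0 - mu) / sd, float("inf")          # truncate at t = 0, standardized bounds
failure_dist = truncnorm(a, b, loc=mu, scale=sd)

for t in (500.0, 1000.0, 1500.0):
    print(f"R({t:.0f} h) = {failure_dist.sf(t):.3f}")
```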

  3. A multivariate model and statistical method for validating tree grade lumber yield equations

    Treesearch

    Donald W. Seegrist

    1975-01-01

    Lumber yields within lumber grades can be described by a multivariate linear model. A method for validating lumber yield prediction equations when there are several tree grades is presented. The method is based on multivariate simultaneous test procedures.

  4. Statistical Optimization of 1,3-Propanediol (1,3-PD) Production from Crude Glycerol by Considering Four Objectives: 1,3-PD Concentration, Yield, Selectivity, and Productivity.

    PubMed

    Supaporn, Pansuwan; Yeom, Sung Ho

    2018-04-30

    This study investigated the biological conversion of crude glycerol, generated as a by-product of a commercial biodiesel production plant, to 1,3-propanediol (1,3-PD). Statistical analysis was employed to derive a statistical model for the individual and interactive effects of glycerol, (NH4)2SO4, trace elements, pH, and cultivation time on the four objectives: 1,3-PD concentration, yield, selectivity, and productivity. Optimum conditions for each objective with its maximum value were predicted by statistical optimization, and experiments under the optimum conditions verified the predictions. In addition, by systematic analysis of the values of the four objectives, the optimum conditions for 1,3-PD concentration (49.8 g/L initial glycerol, 4.0 g/L of (NH4)2SO4, 2.0 mL/L of trace elements, pH 7.5, and 11.2 h of cultivation time) were determined to be the global optimum culture conditions for 1,3-PD production. Under these conditions, we could achieve high 1,3-PD yield (47.4%), 1,3-PD selectivity (88.8%), and 1,3-PD productivity (2.1 g/L/h) as well as a high 1,3-PD concentration (23.6 g/L).

  5. Reliability of reference distances used in photogrammetry.

    PubMed

    Aksu, Muge; Kaya, Demet; Kocadereli, Ilken

    2010-07-01

    To determine the reliability of the reference distances used for photogrammetric assessment. The sample consisted of 100 subjects with a mean age of 22.97 +/- 2.98 years. Five lateral and four frontal parameters were measured directly on the subjects' faces. For photogrammetric assessment, two reference distances for the profile view and three reference distances for the frontal view were established. Standardized photographs were taken and all the parameters that had been measured directly on the face were measured on the photographs. The reliability of the reference distances was checked by comparing direct and indirect values of the parameters obtained from the subjects' faces and photographs. Repeated measures analysis of variance (ANOVA) and Bland-Altman analyses were used for statistical assessment. For profile measurements, the indirect values measured were statistically different from the direct values except for Sn-Sto in male subjects and Prn-Sn and Sn-Sto in female subjects. The indirect values of Prn-Sn and Sn-Sto were reliable in both sexes. The poorest results were obtained in the indirect values of the N-Sn parameter for female subjects and the Sn-Me parameter for male subjects according to the Sa-Sba reference distance. For frontal measurements, the indirect values were statistically different from the direct values in both sexes except for one in male subjects. The indirect values measured were not statistically different from the direct values for Go-Go. The indirect values of Ch-Ch were reliable in male subjects. The poorest results were obtained according to the P-P reference distance. For profile assessment, the T-Ex reference distance was reliable for Prn-Sn and Sn-Sto in both sexes. For frontal assessment, Ex-Ex and En-En reference distances were reliable for Ch-Ch in male subjects.

  6. Automatic yield-line analysis of slabs using discontinuity layout optimization

    PubMed Central

    Gilbert, Matthew; He, Linwei; Smith, Colin C.; Le, Canh V.

    2014-01-01

    The yield-line method of analysis is a long established and extremely effective means of estimating the maximum load sustainable by a slab or plate. However, although numerous attempts to automate the process of directly identifying the critical pattern of yield-lines have been made over the past few decades, to date none has proved capable of reliably analysing slabs of arbitrary geometry. Here, it is demonstrated that the discontinuity layout optimization (DLO) procedure can successfully be applied to such problems. The procedure involves discretization of the problem using nodes inter-connected by potential yield-line discontinuities, with the critical layout of these then identified using linear programming. The procedure is applied to various benchmark problems, demonstrating that highly accurate solutions can be obtained, and showing that DLO provides a truly systematic means of directly and reliably automatically identifying yield-line patterns. Finally, since the critical yield-line patterns for many problems are found to be quite complex in form, a means of automatically simplifying these is presented. PMID:25104905

  7. Assessment of the Maximal Split-Half Coefficient to Estimate Reliability

    ERIC Educational Resources Information Center

    Thompson, Barry L.; Green, Samuel B.; Yang, Yanyun

    2010-01-01

    The maximal split-half coefficient is computed by calculating all possible split-half reliability estimates for a scale and then choosing the maximal value as the reliability estimate. Osburn compared the maximal split-half coefficient with 10 other internal consistency estimates of reliability and concluded that it yielded the most consistently…

  8. Simulating the exoplanet yield of a space-based mid-infrared interferometer based on Kepler statistics

    NASA Astrophysics Data System (ADS)

    Kammerer, Jens; Quanz, Sascha P.

    2018-01-01

    Aims: We predict the exoplanet yield of a space-based mid-infrared nulling interferometer using Monte Carlo simulations. We quantify the number and properties of detectable exoplanets and identify those target stars that have the highest or most complete detection rate. We investigate how changes in the underlying technical assumptions and uncertainties in the underlying planet population impact the scientific return. Methods: We simulated 2000 exoplanetary systems, based on planet occurrence statistics from Kepler with randomly orientated orbits and uniformly distributed albedos around each of 326 nearby (d< 20 pc) stars. Assuming thermal equilibrium and blackbody emission, together with the limiting spatial resolution and sensitivity of our simulated instrument in the three specific bands 5.6, 10.0, and 15.0 μm, we quantified the number of detectable exoplanets as a function of their radii and equilibrium temperatures. Results: Approximately exoplanets, with radii 0.5 REarth ≤ Rp ≤ 6 REarth, were detected in at least one band and half were detected in all three bands during 0.52 years of mission time assuming throughputs 3.5 times worse than those for the James Webb Space Telescope and 40% overheads. Accounting for stellar leakage and (unknown) exozodiacal light, the discovery phase of the mission very likely requires 2-3 years in total. The uncertainties in planet yield are dominated by uncertainties in the underlying planet population, but the distribution of the Bond albedos also has a significant impact. Roughly 50% of the detected planets orbit M stars, which also have the highest planet yield per star; the other 50% orbit FGK stars, which show a higher completeness in the detectability. Roughly 85 planets could be habitable (0.5 REarth ≤ Rp ≤ 1.75 REarth and 200 K ≤ Teq ≤ 450 K) and are prime targets for spectroscopic observations in a second mission phase. Comparing these results to those of a large optical/near-infrared telescope, we find
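
    The thermal-equilibrium, blackbody assumption mentioned above corresponds to the standard equilibrium-temperature relation T_eq = T_* · sqrt(R_*/(2a)) · (1 - A_B)^(1/4); the sketch evaluates it for an arbitrary Sun-like example, not for any target in the simulated sample.

```python
# Equilibrium-temperature estimate under the thermal-equilibrium, blackbody
# assumptions used in the yield simulation above. Values are an arbitrary example.
import numpy as np

def equilibrium_temperature(t_star_k, r_star_rsun, a_au, bond_albedo):
    r_star_au = r_star_rsun * 0.00465047          # solar radius in AU
    return t_star_k * np.sqrt(r_star_au / (2.0 * a_au)) * (1.0 - bond_albedo) ** 0.25

print(equilibrium_temperature(5778.0, 1.0, 1.0, 0.3))   # ~255 K
```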

  9. Calculating stage duration statistics in multistage diseases.

    PubMed

    Komarova, Natalia L; Thalhauser, Craig J

    2011-01-01

    Many human diseases are characterized by multiple stages of progression. While the typical sequence of disease progression can be identified, there may be large individual variations among patients. Identifying mean stage durations and their variations is critical for statistical hypothesis testing needed to determine if treatment is having a significant effect on the progression, or if a new therapy is showing a delay of progression through a multistage disease. In this paper we focus on two methods for extracting stage duration statistics from longitudinal datasets: an extension of the linear regression technique, and a counting algorithm. Both are non-iterative, non-parametric and computationally cheap methods, which makes them invaluable tools for studying the epidemiology of diseases, with a goal of identifying different patterns of progression by using bioinformatics methodologies. Here we show that the regression method performs well for calculating the mean stage durations under a wide variety of assumptions, however, its generalization to variance calculations fails under realistic assumptions about the data collection procedure. On the other hand, the counting method yields reliable estimations for both means and variances of stage durations. Applications to Alzheimer disease progression are discussed.

  10. Weather-based forecasts of California crop yields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobell, D B; Cahill, K N; Field, C B

    2005-09-26

    Crop yield forecasts provide useful information to a range of users. Yields for several crops in California are currently forecast based on field surveys and farmer interviews, while for many crops official forecasts do not exist. As broad-scale crop yields are largely dependent on weather, measurements from existing meteorological stations have the potential to provide a reliable, timely, and cost-effective means to anticipate crop yields. We developed weather-based models of state-wide yields for 12 major California crops (wine grapes, lettuce, almonds, strawberries, table grapes, hay, oranges, cotton, tomatoes, walnuts, avocados, and pistachios), and tested their accuracy using cross-validation over the 1980-2003 period. Many crops were forecast with high accuracy, as judged by the percent of yield variation explained by the forecast, the number of yields with correctly predicted direction of yield change, or the number of yields with correctly predicted extreme yields. The most successfully modeled crop was almonds, with 81% of yield variance captured by the forecast. Predictions for most crops relied on weather measurements well before harvest time, allowing for lead times that were longer than existing procedures in many cases.
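
    The forecast-skill test described above amounts to cross-validating a weather-yield regression; the sketch below does this with leave-one-year-out cross-validation on synthetic predictors and yields, so the R^2 it prints is illustrative rather than a reproduction of the reported accuracies.

```python
# Leave-one-year-out cross-validation of a simple weather-based yield regression.
# The weather predictors and yields below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n_years = 24                                               # e.g. 1980-2003
weather = rng.normal(size=(n_years, 3))                    # seasonal temp/precip predictors
yields = 5.0 + weather @ np.array([0.4, -0.2, 0.3]) + rng.normal(0, 0.3, n_years)

preds = np.empty(n_years)
for i in range(n_years):
    train = np.arange(n_years) != i                        # hold out one year
    X = np.column_stack([np.ones(train.sum()), weather[train]])
    beta, *_ = np.linalg.lstsq(X, yields[train], rcond=None)
    preds[i] = np.concatenate(([1.0], weather[i])) @ beta

ss_res = np.sum((yields - preds) ** 2)
ss_tot = np.sum((yields - yields.mean()) ** 2)
print(f"cross-validated R^2 = {1 - ss_res / ss_tot:.2f}")  # share of yield variance explained
```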

  11. Lactation persistency as a component trait of the selection index and increase in reliability by using single nucleotide polymorphism in net merit defined as the first five lactation milk yields and herd life.

    PubMed

    Togashi, K; Hagiya, K; Osawa, T; Nakanishi, T; Yamazaki, T; Nagamine, Y; Lin, C Y; Matsumoto, S; Aihara, M; Hayasaka, K

    2012-08-01

    We first sought to clarify the effects of discounted rate, survival rate, and lactation persistency as a component trait of the selection index on net merit, defined as the first five lactation milk yields and herd life (HL) weighted by 1 and 0.389 (currently used in Japan), respectively, in units of genetic standard deviation. Survival rate increased the relative economic importance of later lactation traits and the first five lactation milk yields during the first 120 months from the start of the breeding scheme. In contrast, reliabilities of the estimated breeding value (EBV) in later lactation traits are lower than those of earlier lactation traits. We then sought to clarify the effects of applying single nucleotide polymorphism (SNP) on net merit to improve the reliability of EBV of later lactation traits to maximize their increased economic importance due to the increase in survival rate. Net merit, selection accuracy, and HL increased by adding lactation persistency to the selection index whose component traits were only milk yields. Lactation persistency of the second and (especially) third parities contributed to increasing HL while maintaining the first five lactation milk yields compared with the selection index whose only component traits were milk yields. A selection index comprising the first three lactation milk yields and persistency accounted for 99.4% of net merit derived from a selection index whose components were identical to those for net merit. We consider that the selection index comprising the first three lactation milk yields and persistency is a practical method for increasing lifetime milk yield in the absence of data regarding HL. Applying SNP to the second- and third-lactation traits and HL increased net merit and HL by maximizing the increased economic importance of later lactation traits, reducing the effect of first-lactation milk yield on HL (genetic correlation (rG) = -0.006), and by augmenting the effects of the second- and third

  12. Electronics reliability and measurement technology

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S. (Editor)

    1987-01-01

    A summary is presented of the Electronics Reliability and Measurement Technology Workshop. The meeting examined the U.S. electronics industry with particular focus on reliability and state-of-the-art technology. A general consensus of the approximately 75 attendees was that "the U.S. electronics industries are facing a crisis that may threaten their existence". The workshop had specific objectives to discuss mechanisms to improve areas such as reliability, yield, and performance while reducing failure rates, delivery times, and cost. The findings of the workshop addressed various aspects of the industry from wafers to parts to assemblies. Key problem areas that were singled out for attention are identified, and action items necessary to accomplish their resolution are recommended.

  13. Electric propulsion reliability: Statistical analysis of on-orbit anomalies and comparative analysis of electric versus chemical propulsion failure rates

    NASA Astrophysics Data System (ADS)

    Saleh, Joseph Homer; Geng, Fan; Ku, Michelle; Walker, Mitchell L. R.

    2017-10-01

    With a few hundred spacecraft launched to date with electric propulsion (EP), it is possible to conduct an epidemiological study of EP's on orbit reliability. The first objective of the present work was to undertake such a study and analyze EP's track record of on orbit anomalies and failures by different covariates. The second objective was to provide a comparative analysis of EP's failure rates with those of chemical propulsion. Satellite operators, manufacturers, and insurers will make reliability- and risk-informed decisions regarding the adoption and promotion of EP on board spacecraft. This work provides evidence-based support for such decisions. After a thorough data collection, 162 EP-equipped satellites launched between January 1997 and December 2015 were included in our dataset for analysis. Several statistical analyses were conducted, at the aggregate level and then with the data stratified by severity of the anomaly, by orbit type, and by EP technology. Mean Time To Anomaly (MTTA) and the distribution of the time to (minor/major) anomaly were investigated, as well as anomaly rates. The important findings in this work include the following: (1) Post-2005, EP's reliability has outperformed that of chemical propulsion; (2) Hall thrusters have robustly outperformed chemical propulsion, and they maintain a small but shrinking reliability advantage over gridded ion engines. Other results were also provided, for example the differentials in MTTA of minor and major anomalies for gridded ion engines and Hall thrusters. It was shown that: (3) Hall thrusters exhibit minor anomalies very early on orbit, which might be indicative of infant anomalies, and thus would benefit from better ground testing and acceptance procedures; (4) Strong evidence exists that EP anomalies (onset and likelihood) and orbit type are dependent, a dependence likely mediated by either the space environment or differences in thrusters duty cycles; (5) Gridded ion thrusters exhibit both

  14. A new electronic meter for measuring herbage yield

    Treesearch

    Donald L. Neal; Lee R. Neal

    1965-01-01

    A new electronic instrument, called the Heterodyne Vegetation Meter, was built and tested to measure herbage yield and utilization. The instrument proved to be reliable and rapid. Further testing will be conducted.

  15. Reliability analysis of a sensitive and independent stabilometry parameter set

    PubMed Central

    Nagymáté, Gergely; Orlovits, Zsanett

    2018-01-01

    Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature or not in every stance type used in stabilometry assessments, for example, single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54–0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals. PMID:29664938
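
    For readers unfamiliar with how SEM and MDC follow from an ICC, the minimal sketch below applies the common formulas SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM to hypothetical CoP path-length values; the numbers and the assumed ICC are illustrative only, not the study's data.

    ```python
    import numpy as np

    # Minimal sketch (not the study's code): test-retest SEM and MDC95 from an ICC,
    # using SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM.
    def sem_mdc(scores_test, scores_retest, icc):
        pooled = np.concatenate([scores_test, scores_retest])
        sd = np.std(pooled, ddof=1)
        sem = sd * np.sqrt(1.0 - icc)
        mdc95 = 1.96 * np.sqrt(2.0) * sem
        return sem, mdc95

    # Hypothetical CoP path-length measurements (mm) from two sessions.
    test = np.array([412.0, 388.0, 455.0, 401.0, 430.0])
    retest = np.array([420.0, 395.0, 447.0, 410.0, 441.0])
    print(sem_mdc(test, retest, icc=0.75))
    ```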

  16. Reliability analysis of a sensitive and independent stabilometry parameter set.

    PubMed

    Nagymáté, Gergely; Orlovits, Zsanett; Kiss, Rita M

    2018-01-01

    Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature or not in every stance type used in stabilometry assessments, for example, single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54-0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals.

  17. Linkage design effect on the reliability of surface-micromachined microengines driving a load

    NASA Astrophysics Data System (ADS)

    Tanner, Danelle M.; Peterson, Kenneth A.; Irwin, Lloyd W.; Tangyunyong, Paiboon; Miller, William M.; Eaton, William P.; Smith, Norman F.; Rodgers, M. Steven

    1998-09-01

    The reliability of microengines is a function of the design of the mechanical linkage used to connect the electrostatic actuator to the drive. We have completed a series of reliability stress tests on surface micromachined microengines driving an inertial load. In these experiments, we used microengines that had pin mechanisms with guides connecting the drive arms to the electrostatic actuators. Comparing this data to previous results using flexure linkages revealed that the pin linkage design was less reliable. The devices were stressed to failure at eight frequencies, both above and below the measured resonance frequency of the microengine. Significant amounts of wear debris were observed both around the hub and pin joint of the drive gear. Additionally, wear tracks were observed in the area where the moving shuttle rubbed against the guides of the pin linkage. At each frequency, we analyzed the statistical data yielding a lifetime (t50) for median cycles to failure and (sigma) , the shape parameter of the distribution. A model was developed to describe the failure data based on fundamental wear mechanisms and forces exhibited in mechanical resonant systems. The comparison to the model will be discussed.

  18. Software Reliability, Measurement, and Testing. Volume 2. Guidebook for Software Reliability Measurement and Testing

    DTIC Science & Technology

    1992-04-01

    contractor's existing data collection, analysis, and corrective action system shall be utilized, with modification only as necessary to meet the... either from test or from analysis of field data. The procedures of MIL-STD-756B assume that the reliability of a... to generate sufficient data to report a statistically valid reliability figure for a class of software. Casual data gathering accumulates data more

  19. Validity and reliability of Persian version of Listening Styles Profile-Revised (LSP- R) in Iranian students.

    PubMed

    Fatehi, Zahra; Baradaran, Hamid Reza; Asadpour, Mohamad; Rezaeian, Mohsen

    2017-01-01

    Background: Individuals' listening styles differ based on their characters, professions, and situations. This study aimed to assess the validity and reliability of the Listening Styles Profile-Revised (LSP-R) in Iranian students. Methods: After translation into Persian, the LSP-R was administered to a sample of 240 medical and nursing Persian-speaking students in Iran. Statistical analysis was performed to test the reliability and validity of the LSP-R. Results: The study revealed high internal consistency and good test-retest reliability for the Persian version of the questionnaire. The Cronbach's alpha coefficient was 0.72 and the intra-class correlation coefficient was 0.87. The means for the content validity index and the content validity ratio (CVR) were 0.90 and 0.83, respectively. Exploratory factor analysis (EFA) yielded a four-factor solution that accounted for 60.8% of the observed variance. The majority of medical students (73%) as well as the majority of nursing students (70%) stated that their listening styles were task-oriented. Conclusion: In general, the study findings suggest that the Persian version of the LSP-R is a valid and reliable instrument for assessing listening styles in the studied sample.

  20. Field design factors affecting the precision of ryegrass forage yield estimation

    USDA-ARS?s Scientific Manuscript database

    Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision and accuracy of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to ...

  1. Universal Recurrence Time Statistics of Characteristic Earthquakes

    NASA Astrophysics Data System (ADS)

    Goltz, C.; Turcotte, D. L.; Abaimov, S.; Nadeau, R. M.

    2006-12-01

    Characteristic earthquakes are defined to occur quasi-periodically on major faults. Do recurrence time statistics of such earthquakes follow a particular statistical distribution? If so, which one? The answer is fundamental and has important implications for hazard assessment. The problem cannot be solved by comparing the goodness of statistical fits as the available sequences are too short. The Parkfield sequence of M ≍ 6 earthquakes, one of the most extensive reliable data sets available, has grown to merely seven events with the last earthquake in 2004, for example. Recently, however, advances in seismological monitoring and improved processing methods have unveiled so-called micro-repeaters, micro-earthquakes which recur exactly in the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Micro-repeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. Due to their recent discovery, however, available sequences contain less than 20 events at present. In this paper we present results for the analysis of recurrence times for several micro-repeater sequences from Parkfield and adjacent regions. To improve the statistical significance of our findings, we combine several sequences into one by rescaling the individual sets by their respective mean recurrence intervals and Weibull exponents. This novel approach of rescaled combination yields the most extensive data set possible. We find that the resulting statistics can be fitted well by an exponential distribution, confirming the universal applicability of the Weibull distribution to characteristic earthquakes. A similar result is obtained from rescaled combination, however, with regard to the lognormal distribution.
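
    The rescaled-combination step described above can be sketched as follows: each recurrence-interval sequence is divided by its own mean interval, the rescaled intervals are pooled, and a two-parameter Weibull distribution is fitted. The synthetic sequences below and the use of scipy's weibull_min are assumptions for illustration; the original analysis also rescales by the Weibull exponents, which is omitted here.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical recurrence-interval sequences (years) for three micro-repeaters.
    sequences = [rng.weibull(1.8, size=15) * 2.3,
                 rng.weibull(1.8, size=12) * 4.1,
                 rng.weibull(1.8, size=18) * 1.7]

    # Rescale each sequence by its own mean recurrence interval, then pool.
    pooled = np.concatenate([seq / seq.mean() for seq in sequences])

    # Fit a two-parameter Weibull (location fixed at zero).
    shape, loc, scale = stats.weibull_min.fit(pooled, floc=0)
    print(f"Weibull shape (beta) = {shape:.2f}, scale = {scale:.2f}")
    ```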

  2. A Statistical Method for Syntactic Dialectometry

    ERIC Educational Resources Information Center

    Sanders, Nathan C.

    2010-01-01

    This dissertation establishes the utility and reliability of a statistical distance measure for syntactic dialectometry, expanding dialectometry's methods to include syntax as well as phonology and the lexicon. It establishes the measure's reliability by comparing its results to those of dialectology and phonological dialectometry on Swedish…

  3. Reliability approach to rotating-component design. [fatigue life and stress concentration

    NASA Technical Reports Server (NTRS)

    Kececioglu, D. B.; Lalli, V. R.

    1975-01-01

    A probabilistic methodology for designing rotating mechanical components using reliability to relate stress to strength is explained. The experimental test machines and data obtained for steel to verify this methodology are described. A sample mechanical rotating component design problem is solved by comparing a deterministic design method with the new design-by-reliability approach. The new method shows that a smaller size and weight can be obtained for a specified rotating shaft life and reliability, and uses the statistical distortion-energy theory with statistical fatigue diagrams for optimum shaft design. Statistical methods are presented for (1) determining strength distributions for steel experimentally, (2) determining a failure theory for stress variations in a rotating shaft subjected to reversed bending and steady torque, and (3) relating strength to stress by reliability.
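
    A compact way to see how reliability relates stress to strength is the classical stress-strength interference result for independent normal variables, R = Phi((mu_strength - mu_stress) / sqrt(sigma_strength^2 + sigma_stress^2)). The shaft numbers below are hypothetical, and this sketch is not the paper's distortion-energy procedure.

    ```python
    from math import sqrt
    from statistics import NormalDist

    # Illustrative stress-strength interference calculation (not the paper's data):
    # for independent, normally distributed strength S and stress s,
    # reliability R = P(S > s) = Phi((mu_S - mu_s) / sqrt(sigma_S^2 + sigma_s^2)).
    def reliability(mu_strength, sd_strength, mu_stress, sd_stress):
        z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
        return NormalDist().cdf(z)

    # Hypothetical shaft example: strength 620 +/- 40 MPa, stress 480 +/- 55 MPa.
    print(f"R = {reliability(620, 40, 480, 55):.4f}")
    ```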

  4. Global Agriculture Yields and Conflict under Future Climate

    NASA Astrophysics Data System (ADS)

    Rising, J.; Cane, M. A.

    2013-12-01

    Aspects of climate have been shown to correlate significantly with conflict. We investigate a possible pathway for these effects through changes in agriculture yields, as predicted by field crop models (FAO's AquaCrop and DSSAT). Using satellite and station weather data, and surveyed data for soil and management, we simulate major crop yields across all countries between 1961 and 2008, and compare these to FAO and USDA reported yields. Correlations vary by country and by crop, from approximately .8 to -.5. Some of this range in crop model performance is explained by crop varieties, data quality, and other natural, economic, and political features. We also quantify the ability of AquaCrop and DSSAT to simulate yields under past cycles of ENSO as a proxy for their performance under changes in climate. We then describe two statistical models which relate crop yields to conflict events from the UCDP/PRIO Armed Conflict dataset. The first relates several preceding years of predicted yields of the major grain in each country to any conflict involving that country. The second uses the GREG ethnic group maps to identify differences in predicted yields between neighboring regions. By using variation in predicted yields to explain conflict, rather than actual yields, we can identify the exogenous effects of weather on conflict. Finally, we apply precipitation and temperature time-series under IPCC's A1B scenario to the statistical models. This allows us to estimate the scale of the impact of future yields on future conflict.
    Figure captions: Centroids of the major growing regions for each country's primary crop, based on USDA FAS consumption. Correlations between simulated yields and reported yields, for AquaCrop and DSSAT, under the assumption that no irrigation, fertilization, or pest control is used; reported yields are the average of FAO yields and USDA FAS yields, where both are available.

  5. Improving the reliability of female fertility breeding values using type and milk yield traits that predict energy status in Australian Holstein cattle.

    PubMed

    González-Recio, O; Haile-Mariam, M; Pryce, J E

    2016-01-01

    The objectives of this study were (1) to propose changing the selection criteria trait for evaluating fertility in Australia from calving interval to conception rate at d 42 after the beginning of the mating season and (2) to use type traits as early fertility predictors, to increase the reliability of estimated breeding values for fertility. The breeding goal in Australia is conception within 6 wk of the start of the mating season. Currently, the Australian model to predict fertility breeding values (expressed as a linear transformation of calving interval) is a multitrait model that includes calving interval (CVI), lactation length (LL), calving to first service (CFS), first nonreturn rate (FNRR), and conception rate. However, CVI has a lower genetic correlation with the breeding goal (conception within 6 wk of the start of the mating season) than conception rate. Milk yield, type, and fertility data from 164,318 cows sired by 4,766 bulls were used. Principal component analysis and genetic correlation estimates between type and fertility traits were used to select type traits that could subsequently be used in a multitrait analysis. Angularity, foot angle, and pin set were chosen as type traits to include in an index with the traits that are included in the multitrait fertility model: CVI, LL, CFS, FNRR, and conception rate at d 42 (CR42). An index with these 8 traits is expected to achieve an average bull first proof reliability of 0.60 on the breeding objective (conception within 6 wk of the start of the mating season) compared with reliabilities of 0.39 and 0.45 for CR42 only or the current 5-trait Australian model. Subsequently, we used the first eigenvector of a principal component analysis with udder texture, bone quality, angularity, and body condition score to calculate an energy status indicator trait. The inclusion of the energy status indicator trait composite in a multitrait index with CVI, LL, CFS, FNRR, and CR42 achieved a 12-point increase in

  6. An overview of the mathematical and statistical analysis component of RICIS

    NASA Technical Reports Server (NTRS)

    Hallum, Cecil R.

    1987-01-01

    Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.

  7. Statistical optimization of polysaccharide production by submerged cultivation of Lingzhi or Reishi medicinal mushroom, Ganoderma lucidum (W.Curt.: Fr.) P. Karst. MTCC 1039 (Aphyllophoromycetideae).

    PubMed

    Baskar, Gurunathan; Sathya, Shree Rajesh K Lakshmi Jai; Jinnah, Riswana Begum; Sahadevan, Renganathan

    2011-01-01

    Response surface methodology was employed to optimize the concentration of four important cultivation media components such as cottonseed oil cake, glucose, NH4Cl, and MgSO4 for maximum medicinal polysaccharide yield by Lingzhi or Reishi medicinal mushroom, Ganoderma lucidum MTCC 1039 in submerged culture. The second-order polynomial model describing the relationship between media components and polysaccharide yield was fitted in coded units of the variables. The higher value of the coefficient of determination (R2 = 0.953) justified an excellent correlation between media components and polysaccharide yield, and the model fitted well with high statistical reliability and significance. The predicted optimum concentration of the media components was 3.0% cottonseed oil cake, 3.0% glucose, 0.15% NH4Cl, and 0.045% MgSO4, with the maximum predicted polysaccharide yield of 819.76 mg/L. The experimental polysaccharide yield at the predicted optimum media components was 854.29 mg/L, which was 4.22% higher than the predicted yield.

  8. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidian distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.

  9. Highly reliable oxide VCSELs for datacom applications

    NASA Astrophysics Data System (ADS)

    Aeby, Ian; Collins, Doug; Gibson, Brian; Helms, Christopher J.; Hou, Hong Q.; Lou, Wenlin; Bossert, David J.; Wang, Charlie X.

    2003-06-01

    In this paper we describe the processes and procedures that have been developed to ensure high reliability for Emcore's 850 nm oxide-confined GaAs VCSELs. Evidence from on-going accelerated life testing and other reliability studies that confirm that this process yields reliable products will be discussed. We will present data and analysis techniques used to determine the activation energy and acceleration factors for the dominant wear-out failure mechanisms for our devices as well as our estimated MTTF of greater than 2 million use hours. We conclude with a summary of internal verification and field return rate validation data.

  10. Validity and Reliability of Accelerometers in Patients With COPD: A SYSTEMATIC REVIEW.

    PubMed

    Gore, Shweta; Blackwood, Jennifer; Guyette, Mary; Alsalaheen, Bara

    2018-05-01

    Reduced physical activity is associated with poor prognosis in chronic obstructive pulmonary disease (COPD). Accelerometers have greatly improved quantification of physical activity by providing information on step counts, body positions, energy expenditure, and magnitude of force. The purpose of this systematic review was to compare the validity and reliability of accelerometers used in patients with COPD. An electronic database search of MEDLINE and CINAHL was performed. Study quality was assessed with the Strengthening the Reporting of Observational Studies in Epidemiology checklist while methodological quality was assessed using the modified Quality Appraisal Tool for Reliability Studies. The search yielded 5392 studies; 25 met inclusion criteria. The SenseWear Pro armband reported high criterion validity under controlled conditions (r = 0.75-0.93) and high reliability (ICC = 0.84-0.86) for step counts. The DynaPort MiniMod demonstrated highest concurrent validity for step count using both video and manual methods. Validity of the SenseWear Pro armband varied between studies especially in free-living conditions, slower walking speeds, and with addition of weights during gait. A high degree of variability was found in the outcomes used and statistical analyses performed between studies, indicating a need for further studies to measure reliability and validity of accelerometers in COPD. The SenseWear Pro armband is the most commonly used accelerometer in COPD, but measurement properties are limited by gait speed variability and assistive device use. DynaPort MiniMod and Stepwatch accelerometers demonstrated high validity in patients with COPD but lack reliability data.

  11. An Exercise in Exploring Big Data for Producing Reliable Statistical Information.

    PubMed

    Rey-Del-Castillo, Pilar; Cardeñosa, Jesús

    2016-06-01

    The availability of copious data about many human, social, and economic phenomena is considered an opportunity for the production of official statistics. National statistical organizations and other institutions are more and more involved in new projects for developing what is sometimes seen as a possible change of paradigm in the way statistical figures are produced. Nevertheless, there are hardly any systems in production using Big Data sources. Arguments of confidentiality, data ownership, representativeness, and others make it a difficult task to get results in the short term. Using Call Detail Records from Ivory Coast as an illustration, this article shows some of the issues that must be dealt with when producing statistical indicators from Big Data sources. A proposal of a graphical method to evaluate one specific aspect of the quality of the computed figures is also presented, demonstrating that the visual insight provided improves the results obtained using other traditional procedures.

  12. Calculating system reliability with SRFYDO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.

  13. Reliability Considerations for the Operation of Large Accelerator User Facilities

    DOE PAGES

    Willeke, F. J.

    2016-01-29

    The lecture provides an overview of considerations relevant for achieving highly reliable operation of accelerator based user facilities. The article starts with an overview of statistical reliability formalism which is followed by high reliability design considerations with examples. Finally, the article closes with operational aspects of high reliability such as preventive maintenance and spares inventory.

  14. [Reliability and reproducibility of the Fitzpatrick phototype scale for skin sensitivity to ultraviolet light].

    PubMed

    Sánchez, Guillermo; Nova, John; Arias, Nilsa; Peña, Bibiana

    2008-12-01

    The Fitzpatrick phototype scale has been used to determine skin sensitivity to ultraviolet light. The reliability of this scale in estimating sensitivity permits risk evaluation of skin cancer based on phototype. Reliability and changes in intra- and inter-observer concordance were determined for the Fitzpatrick phototype scale after the assessment methods for establishing the phototype were standardized. An analytical study of intra- and inter-observer concordance was performed. The Fitzpatrick phototype scale was standardized using focus group methodology. To determine intra- and inter-observer agreement, the weighted kappa statistical method was applied. The standardization effect was measured using the equal kappa contrast hypothesis and the Wald test for dependent measurements. The phototype scale was applied to 155 patients over 15 years of age who were assessed four times by two independent observers. The sample was drawn from patients of the Centro Dermatológico Federico Lleras Acosta. During the pre-standardization phase, the baseline and six-week inter-observer weighted kappa were 0.31 and 0.40, respectively. The intra-observer kappa values for observers A and B were 0.47 and 0.51, respectively. After the standardization process, the baseline and six-week inter-observer weighted kappa values were 0.77 and 0.82, respectively. Intra-observer kappa coefficients for observers A and B were 0.78 and 0.82. Statistically significant differences were found between coefficients before and after standardization (p<0.001) in all comparisons. Following a standardization exercise, the Fitzpatrick phototype scale yielded reliable, reproducible and consistent results.

  15. PCA as a practical indicator of OPLS-DA model reliability.

    PubMed

    Worley, Bradley; Powers, Robert

    Principal Component Analysis (PCA) and Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) are powerful statistical modeling tools that provide insights into separations between experimental groups based on high-dimensional spectral measurements from NMR, MS or other analytical instrumentation. However, when used without validation, these tools may lead investigators to statistically unreliable conclusions. This danger is especially real for Partial Least Squares (PLS) and OPLS, which aggressively force separations between experimental groups. As a result, OPLS-DA is often used as an alternative method when PCA fails to expose group separation, but this practice is highly dangerous. Without rigorous validation, OPLS-DA can easily yield statistically unreliable group separation. A Monte Carlo analysis of PCA group separations and OPLS-DA cross-validation metrics was performed on NMR datasets with statistically significant separations in scores-space. A linearly increasing amount of Gaussian noise was added to each data matrix followed by the construction and validation of PCA and OPLS-DA models. With increasing added noise, the PCA scores-space distance between groups rapidly decreased and the OPLS-DA cross-validation statistics simultaneously deteriorated. A decrease in correlation between the estimated loadings (added noise) and the true (original) loadings was also observed. While the validity of the OPLS-DA model diminished with increasing added noise, the group separation in scores-space remained basically unaffected. Supported by the results of Monte Carlo analyses of PCA group separations and OPLS-DA cross-validation metrics, we provide practical guidelines and cross-validatory recommendations for reliable inference from PCA and OPLS-DA models.
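
    A minimal version of the Monte Carlo experiment described above can be reproduced with synthetic data: two offset groups are generated, increasing Gaussian noise is added, and the distance between group centroids in the first two PCA score dimensions is tracked. The dimensions, offsets, and noise levels below are invented for illustration and are not the NMR datasets used in the study.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Synthetic sketch of the noise-addition Monte Carlo idea (not the study's data).
    rng = np.random.default_rng(1)
    n, p = 20, 50
    group_a = rng.normal(0.0, 1.0, size=(n, p)) + 1.5   # class A, offset up
    group_b = rng.normal(0.0, 1.0, size=(n, p)) - 1.5   # class B, offset down
    X = np.vstack([group_a, group_b])
    labels = np.array([0] * n + [1] * n)

    for noise_sd in [0.0, 2.0, 5.0, 10.0]:
        X_noisy = X + rng.normal(0.0, noise_sd, size=X.shape)
        scores = PCA(n_components=2).fit_transform(X_noisy)
        d = np.linalg.norm(scores[labels == 0].mean(axis=0) -
                           scores[labels == 1].mean(axis=0))
        print(f"noise sd = {noise_sd:4.1f}  scores-space centroid distance = {d:.2f}")
    ```

    As the added noise grows, the leading principal components stop aligning with the group-difference direction and the scores-space separation collapses, which is the behaviour the abstract describes for PCA.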

  16. Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.

    PubMed

    Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W

    2018-05-18

    Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.

  17. A new statistical methodology predicting chip failure probability considering electromigration

    NASA Astrophysics Data System (ADS)

    Sun, Ted

    In this research thesis, we present a new approach to analyzing chip reliability subject to electromigration (EM); the fundamental causes of EM and the EM phenomena that occur in different materials are also presented. This new approach utilizes the statistical nature of EM failure in order to assess overall EM risk. It includes within-die temperature variations from the chip's temperature map, extracted by an Electronic Design Automation (EDA) tool, to estimate the failure probability of a design. Both the power estimation and the thermal analysis are performed in the EDA flow. We first used the traditional EM approach to analyze the design, which involves 6 metal and 5 via layers, with a single temperature across the entire chip. Next, we used the same traditional approach but with a realistic temperature map. The traditional EM analysis approach, the approach coupled with a temperature map, and the comparison between the results with and without the temperature map are presented in this research. A comparison between these two results confirms that using a temperature map yields a less pessimistic estimation of the chip's EM risk. Finally, we employed the statistical methodology we developed, considering a temperature map and different use-condition voltages and frequencies, to estimate the overall failure probability of the chip. The statistical model considers scaling through the traditional Black equation and four major conditions. The statistical result comparisons are within our expectations. The results of this statistical analysis confirm that the chip-level failure probability is higher i) at higher use-condition frequencies for all use-condition voltages, and ii) when a single temperature instead of a temperature map across the chip is considered. In this thesis, I start with an overall review of current design types, common flows, and the necessary verification and reliability checking steps used in the IC design industry
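
    The Black equation mentioned above has the form MTTF = A * J^(-n) * exp(Ea / (kT)), so a per-segment lifetime estimate depends strongly on local temperature, which is why a temperature map matters. The toy sketch below illustrates that sensitivity; all parameter values (A, n, Ea, current density) are hypothetical, not those used in the thesis.

    ```python
    import math

    # Illustrative use of Black's equation for electromigration MTTF:
    #   MTTF = A * J**(-n) * exp(Ea / (k * T))
    # Parameter values are hypothetical placeholders.
    K_BOLTZMANN_EV = 8.617e-5  # eV/K

    def black_mttf(current_density, temperature_k, a_const=1e10, n=2.0, ea_ev=0.9):
        return a_const * current_density**(-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * temperature_k))

    # Even a coarse temperature map changes per-segment MTTF substantially.
    for temp in (358.0, 378.0, 398.0):   # 85 C, 105 C, 125 C
        ratio = black_mttf(2e6, temp) / black_mttf(2e6, 358.0)
        print(f"T = {temp:.0f} K  MTTF ratio vs 358 K = {ratio:.2f}")
    ```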

  18. Yield: it's now an entitlement

    NASA Astrophysics Data System (ADS)

    George, Bill

    1994-09-01

    Only a few years ago, the primary method of cost reduction and productivity improvement in the semiconductor industry was increasing manufacturing yields throughout the process. Many of the remarkable reliability improvements realized over the past decade have come about as a result of actions that were originally taken primarily to improve device yields. Obviously, the practice of productivity improvement through yield enhancement is limited to the attainment of 100% yield, at which point some other mechanism must be employed. Traditionally, new products have been introduced to manufacturing at a point of relative immaturity, and semiconductor producers have relied on the traditional `learning curve' method of yield improvement to attain profitable levels of manufacturing yield. Recently, results of a survey of several fabs by a group of University of California at Berkeley researchers in the Competitive Semiconductor Manufacturing Program indicate that most factories learn at about the same rate after startup, in terms of both line yield and defectivity. If this is indeed generally true, then the most competitive factory is the one that starts with the highest yield, and it is difficult to displace a leader once its lead has been established. The two observations made above carry enormous implications for the semiconductor development or manufacturing professional. First, one must achieve very high yields in order to even play the game. Second, the achievement of competitive yields over time in the life of a factory is determined even before the factory is opened, in the planning and development phase. Third, and perhaps most uncomfortable for those of us who have relied on yield improvement as a cost driver, the winners of the nineties will find new levers to drive costs down, having already gotten the benefit of very high yield. This paper looks at the question of how the winners will achieve the critical measures of success, high initial yield and utilization

  19. Software For Computing Reliability Of Other Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen; Antczak, Thomas M.; Lyu, Michael

    1995-01-01

    Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.

  20. Best Linear Unbiased Prediction (BLUP) for regional yield trials: a comparison to additive main effects and multiplicative interaction (AMMI) analysis.

    PubMed

    Piepho, H P

    1994-11-01

    Multilocation trials are often used to analyse the adaptability of genotypes in different environments and to find for each environment the genotype that is best adapted; i.e. that is highest yielding in that environment. For this purpose, it is of interest to obtain a reliable estimate of the mean yield of a cultivar in a given environment. This article compares two different statistical estimation procedures for this task: the Additive Main Effects and Multiplicative Interaction (AMMI) analysis and Best Linear Unbiased Prediction (BLUP). A modification of a cross-validation procedure commonly used with AMMI is suggested for trials that are laid out as a randomized complete block design. The use of these procedures is exemplified using five faba bean datasets from German registration trials. BLUP was found to outperform AMMI in four of five faba bean datasets.
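
    To make the contrast with AMMI concrete, the sketch below shows the shrinkage behaviour that characterizes BLUP in the simplest one-way random-effects setting, with variance components assumed known. The cultivar data are simulated; this is not the model or the faba bean data analysed in the article.

    ```python
    import numpy as np

    # Toy illustration of why BLUP "shrinks" cultivar means toward the grand mean:
    # in a one-way random-effects model with r replicates per cultivar,
    #   BLUP_i = grand_mean + h * (cultivar_mean_i - grand_mean),
    #   h = var_g / (var_g + var_e / r).
    rng = np.random.default_rng(2)
    n_cultivars, r = 8, 3
    var_g, var_e = 4.0, 9.0   # assumed known genetic and residual variances

    true_effects = rng.normal(0.0, np.sqrt(var_g), n_cultivars)
    yields = 50.0 + true_effects[:, None] + rng.normal(0.0, np.sqrt(var_e), (n_cultivars, r))

    cult_means = yields.mean(axis=1)
    grand_mean = yields.mean()
    shrinkage = var_g / (var_g + var_e / r)
    blup = grand_mean + shrinkage * (cult_means - grand_mean)

    for raw, b in zip(cult_means, blup):
        print(f"raw mean = {raw:6.2f}   BLUP = {b:6.2f}")
    ```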

  1. Evaluation of the Williams-type model for barley yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1981-01-01

    The Williams-type yield model is based on multiple regression analysis of historical time series data at CRD level pooled to regional level (groups of similar CRDs). Basic variables considered in the analysis include USDA yield, monthly mean temperature, monthly precipitation, soil texture and topographic information, and variables derived from these. Technologic trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-1979) demonstrate that biases are small and performance based on root mean square appears to be acceptable for the intended AgRISTARS large area applications. The model is objective, adequate, timely, simple, and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.

  2. Methods for estimating selected low-flow statistics and development of annual flow-duration statistics for Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Kula, Stephanie P.

    2013-01-01

    This report presents the results of a study to develop methods for estimating selected low-flow statistics and for determining annual flow-duration statistics for Ohio streams. Regression techniques were used to develop equations for estimating 10-year recurrence-interval (10-percent annual-nonexceedance probability) low-flow yields, in cubic feet per second per square mile, with averaging periods of 1, 7, 30, and 90-day(s), and for estimating the yield corresponding to the long-term 80-percent duration flow. These equations, which estimate low-flow yields as a function of a streamflow-variability index, are based on previously published low-flow statistics for 79 long-term continuous-record streamgages with at least 10 years of data collected through water year 1997. When applied to the calibration dataset, average absolute percent errors for the regression equations ranged from 15.8 to 42.0 percent. The regression results have been incorporated into the U.S. Geological Survey (USGS) StreamStats application for Ohio (http://water.usgs.gov/osw/streamstats/ohio.html) in the form of a yield grid to facilitate estimation of the corresponding streamflow statistics in cubic feet per second. Logistic-regression equations also were developed and incorporated into the USGS StreamStats application for Ohio for selected low-flow statistics to help identify occurrences of zero-valued statistics. Quantiles of daily and 7-day mean streamflows were determined for annual and annual-seasonal (September–November) periods for each complete climatic year of streamflow-gaging station record for 110 selected streamflow-gaging stations with 20 or more years of record. The quantiles determined for each climatic year were the 99-, 98-, 95-, 90-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 25-, 20-, 10-, 5-, 2-, and 1-percent exceedance streamflows. Selected exceedance percentiles of the annual-exceedance percentiles were subsequently computed and tabulated to help facilitate consideration of the

  3. Probability interpretations of intraclass reliabilities.

    PubMed

    Ellis, Jules L

    2013-11-20

    Research where many organizations are rated by different samples of individuals such as clients, patients, or employees frequently uses reliabilities computed from intraclass correlations. Consumers of statistical information, such as patients and policy makers, may not have sufficient background for deciding which levels of reliability are acceptable. It is shown that the reliability is related to various probabilities that may be easier to understand, for example, the proportion of organizations that will be classed significantly above (or below) the mean and the probability that an organization is classed correctly given that it is classed significantly above (or below) the mean. One can view these probabilities as the amount of information of the classification and the correctness of the classification. These probabilities have an inverse relationship: given a reliability, one can 'buy' correctness at the cost of informativeness and conversely. This article discusses how this can be used to make judgments about the required level of reliabilities. Copyright © 2013 John Wiley & Sons, Ltd.
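
    A Monte Carlo sketch can convey the flavour of these probability interpretations. The operationalization below (true organization effects with variance equal to the reliability, unit-variance observed scores, and a one-sided z-test against the overall mean) is my assumption for illustration and is not necessarily the formulation used in the article.

    ```python
    import numpy as np

    # Monte Carlo sketch: informativeness = P(classed significantly above the mean),
    # correctness = P(true effect really above the mean | classed above).
    rng = np.random.default_rng(3)
    n = 200_000

    for rho in (0.6, 0.8, 0.9):            # rho plays the role of the reliability
        truth = rng.normal(0.0, np.sqrt(rho), n)                 # true effects
        observed = truth + rng.normal(0.0, np.sqrt(1.0 - rho), n)  # observed scores
        flagged = observed > 1.645 * np.sqrt(1.0 - rho)          # one-sided z-test
        p_flagged = flagged.mean()
        p_correct = (truth[flagged] > 0.0).mean()
        print(f"rho = {rho:.1f}  P(flagged) = {p_flagged:.3f}  "
              f"P(true > 0 | flagged) = {p_correct:.3f}")
    ```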

  4. Computation of the Molenaar Sijtsma Statistic

    NASA Astrophysics Data System (ADS)

    Andries van der Ark, L.

    The Molenaar Sijtsma statistic is an estimate of the reliability of a test score. In some special cases, computation of the Molenaar Sijtsma statistic requires provisional measures. These provisional measures have not been fully described in the literature, and we show that they have not been implemented in the software. We describe the required provisional measures so as to allow computation of the Molenaar Sijtsma statistic for all data sets.

  5. Spatial variability effects on precision and power of forage yield estimation

    USDA-ARS?s Scientific Manuscript database

    Spatial analyses of yield trials are important, as they adjust cultivar means for spatial variation and improve the statistical precision of yield estimation. While the relative efficiency of spatial analysis has been frequently reported in several yield trials, its application on long-term forage y...

  6. Statistical Considerations in Choosing a Test Reliability Coefficient. ACT Research Report Series, 2012 (10)

    ERIC Educational Resources Information Center

    Woodruff, David; Wu, Yi-Fang

    2012-01-01

    The purpose of this paper is to illustrate alpha's robustness and usefulness, using actual and simulated educational test data. The sampling properties of alpha are compared with the sampling properties of several other reliability coefficients: Guttman's lambda-2, lambda-4, and lambda-6; test-retest reliability;…
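
    For reference, coefficient alpha is simply k/(k - 1) * (1 - sum of item variances / variance of total scores). A minimal sketch on an invented item-response matrix:

    ```python
    import numpy as np

    # Coefficient alpha on a small item-response matrix (rows = examinees, cols = items).
    # The scores below are invented, purely for illustration.
    def cronbach_alpha(items):
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    scores = [[1, 1, 0, 1],
              [1, 0, 0, 1],
              [0, 0, 0, 0],
              [1, 1, 1, 1],
              [1, 0, 1, 1]]
    print(f"alpha = {cronbach_alpha(scores):.3f}")
    ```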

  7. The alfalfa yield gap: A review of the evidence

    USDA-ARS?s Scientific Manuscript database

    Knowledge of feasibly attainable crop yields is needed for many purposes, from field-scale management to national policy decisions. For alfalfa (Medicago sativa L.), the most widely used estimates of yield in the US are whole-farm reports from the National Agriculture Statistics Service, which are b...

  8. Computing Reliabilities Of Ceramic Components Subject To Fracture

    NASA Technical Reports Server (NTRS)

    Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.

    1992-01-01

    CARES calculates fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. Program uses results from commercial structural-analysis program (MSC/NASTRAN or ANSYS) to evaluate reliability of component in presence of inherent surface- and/or volume-type flaws. Computes measure of reliability by use of finite-element mathematical model applicable to multiple materials in sense model made function of statistical characterizations of many ceramic materials. Reliability analysis uses element stress, temperature, area, and volume outputs, obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
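
    A drastically simplified sketch of the kind of volume-flaw Weibull calculation such a code performs (two-parameter Weibull, uniaxial element stresses, no surface-flaw or multiaxial treatment) is shown below; the element stresses, volumes, and material parameters are hypothetical and the sketch is not CARES itself.

    ```python
    import numpy as np

    # Simplified volume-flaw fast-fracture estimate: element survival probabilities
    # multiply, so P_f = 1 - exp(-sum_i V_i * (sigma_i / sigma_0)**m).
    # sigma_0 is taken as the characteristic strength for unit volume (hypothetical).
    def failure_probability(stresses_mpa, volumes_mm3, m=10.0, sigma0=400.0):
        stresses = np.asarray(stresses_mpa, dtype=float)
        volumes = np.asarray(volumes_mm3, dtype=float)
        risk = np.sum(volumes * (np.clip(stresses, 0.0, None) / sigma0) ** m)
        return 1.0 - np.exp(-risk)

    elem_stress = [250.0, 310.0, 180.0, 275.0]   # element principal stresses (MPa)
    elem_volume = [2.0, 1.5, 3.0, 2.5]           # element volumes (mm^3)
    print(f"P_f = {failure_probability(elem_stress, elem_volume):.4f}")
    ```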

  9. Assessment of NDE Reliability Data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Chang, F. H.; Couchman, J. C.; Lemon, G. H.; Packman, P. F.

    1976-01-01

    Twenty sets of relevant Nondestructive Evaluation (NDE) reliability data have been identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations has been formulated. A model to grade the quality and validity of the data sets has been developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, have been formulated for each NDE method. A comprehensive computer program has been written to calculate the probability of flaw detection at several confidence levels by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. Probability of detection curves at 95 and 50 percent confidence levels have been plotted for individual sets of relevant data as well as for several sets of merged data with common sets of NDE parameters.
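
    The binomial probability-of-detection calculation described above can be illustrated with generic Clopper-Pearson lower confidence bounds; the detection counts below are hypothetical, and this sketch does not reproduce the report's data-pooling criteria.

    ```python
    from scipy.stats import beta

    # Lower one-sided confidence bound on probability of detection (POD) from
    # binomial data, via the Clopper-Pearson (exact) form.
    def pod_lower_bound(detected, total, confidence):
        alpha = 1.0 - confidence
        if detected == 0:
            return 0.0
        return beta.ppf(alpha, detected, total - detected + 1)

    # Hypothetical example: 28 of 29 flaws of a given size were detected.
    for conf in (0.50, 0.95):
        print(f"{conf:.0%} confidence: POD lower bound = "
              f"{pod_lower_bound(28, 29, conf):.3f}")
    ```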

  10. The kappa statistic in rehabilitation research: an examination.

    PubMed

    Tooth, Leigh R; Ottenbacher, Kenneth J

    2004-08-01

    The number and sophistication of statistical procedures reported in medical rehabilitation research is increasing. Application of the principles and methods associated with evidence-based practice has contributed to the need for rehabilitation practitioners to understand quantitative methods in published articles. Outcomes measurement and determination of reliability are areas that have experienced rapid change during the past decade. In this study, distinctions between reliability and agreement are examined. Information is presented on analytical approaches for addressing reliability and agreement with the focus on the application of the kappa statistic. The following assumptions are discussed: (1) kappa should be used with data measured on a categorical scale, (2) the patients or objects categorized should be independent, and (3) the observers or raters must make their measurement decisions and judgments independently. Several issues related to using kappa in measurement studies are described, including use of weighted kappa, methods of reporting kappa, the effect of bias and prevalence on kappa, and sample size and power requirements for kappa. The kappa statistic is useful for assessing agreement among raters, and it is being used more frequently in rehabilitation research. Correct interpretation of the kappa statistic depends on meeting the required assumptions and accurate reporting.
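
    As a small illustration of the unweighted versus weighted kappa distinction discussed here, the sketch below scores two hypothetical raters on an ordinal four-point scale using scikit-learn's implementation; the ratings are invented.

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Two raters assigning ordinal categories 1-4 to the same 12 cases (invented data).
    rater_a = [1, 2, 2, 3, 4, 1, 2, 3, 3, 4, 2, 1]
    rater_b = [1, 2, 3, 3, 4, 1, 2, 2, 3, 3, 2, 2]

    print("unweighted kappa:", round(cohen_kappa_score(rater_a, rater_b), 3))
    print("linear-weighted kappa:",
          round(cohen_kappa_score(rater_a, rater_b, weights="linear"), 3))
    ```

    Weighted kappa credits near-misses on an ordinal scale, which is why it is usually preferred over the unweighted statistic for graded ratings.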

  11. Covariance Matrix Evaluations for Independent Mass Fission Yields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terranova, N., E-mail: nicholas.terranova@unibo.it; Serot, O.; Archier, P.

    2015-01-15

    Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility to generate more reliable and complete uncertainty information on independent mass fission yields. Mass yields covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describe the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squared method through the CONRAD code. Preliminary results on the mass yields variance-covariance matrix will be presented and discussed from physical grounds in the case of the 235U(nth, f) and 239Pu(nth, f) reactions.

  12. Statistical Analysis on the Mechanical Properties of Magnesium Alloys

    PubMed Central

    Liu, Ruoyu; Jiang, Xianquan; Zhang, Hongju; Zhang, Dingfei; Wang, Jingfeng; Pan, Fusheng

    2017-01-01

    Knowledge of statistical characteristics of mechanical properties is very important for the practical application of structural materials. Unfortunately, the scatter characteristics of magnesium alloys for mechanical performance remain poorly understood until now. In this study, the mechanical reliability of magnesium alloys is systematically estimated using Weibull statistical analysis. Interestingly, the Weibull modulus, m, of strength for magnesium alloys is as high as that for aluminum and steels, confirming the very high reliability of magnesium alloys. The high predictability in the tensile strength of magnesium alloys represents the capability of preventing catastrophic premature failure during service, which is essential for safety and reliability assessment. PMID:29113116
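
    The Weibull modulus m reported in analyses of this kind is commonly estimated from a linearized fit of ln(-ln(1 - F)) against ln(strength) using median-rank plotting positions. The sketch below uses invented strength values, not the alloy data from the study.

    ```python
    import numpy as np

    # Weibull modulus from strength data via the linearized fit
    #   ln(-ln(1 - F)) = m * ln(sigma) - m * ln(sigma0),
    # with median-rank plotting positions F_i = (i - 0.3) / (n + 0.4).
    strengths = np.sort(np.array([232., 245., 251., 258., 263., 270., 274., 281., 290., 301.]))
    n = strengths.size
    ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)

    x = np.log(strengths)
    y = np.log(-np.log(1.0 - ranks))
    m, intercept = np.polyfit(x, y, 1)
    sigma0 = np.exp(-intercept / m)
    print(f"Weibull modulus m = {m:.1f}, characteristic strength = {sigma0:.0f} MPa")
    ```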

  13. Reliable and valid tools for measuring surgeons' teaching performance: residents' vs. self evaluation.

    PubMed

    Boerebach, Benjamin C M; Arah, Onyebuchi A; Busch, Olivier R C; Lombarts, Kiki M J M H

    2012-01-01

    In surgical education, there is a need for educational performance evaluation tools that yield reliable and valid data. This paper describes the development and validation of robust evaluation tools that provide surgeons with insight into their clinical teaching performance. We investigated (1) the reliability and validity of 2 tools for evaluating the teaching performance of attending surgeons in residency training programs, and (2) whether surgeons' self evaluation correlated with the residents' evaluation of those surgeons. We surveyed 343 surgeons and 320 residents as part of a multicenter prospective cohort study of faculty teaching performance in residency training programs. The reliability and validity of the SETQ (System for Evaluation Teaching Qualities) tools were studied using standard psychometric techniques. We then estimated the correlations between residents' and surgeons' evaluations. The response rate was 87% among surgeons and 84% among residents, yielding 2625 residents' evaluations and 302 self evaluations. The SETQ tools yielded reliable and valid data on 5 domains of surgical teaching performance, namely, learning climate, professional attitude towards residents, communication of goals, evaluation of residents, and feedback. The correlations between surgeons' self and residents' evaluations were low, with coefficients ranging from 0.03 for evaluation of residents to 0.18 for communication of goals. The SETQ tools for the evaluation of surgeons' teaching performance appear to yield reliable and valid data. The lack of strong correlations between surgeons' self and residents' evaluations suggest the need for using external feedback sources in informed self evaluation of surgeons. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  14. An Innovative Excel Application to Improve Exam Reliability in Marketing Courses

    ERIC Educational Resources Information Center

    Keller, Christopher M.; Kros, John F.

    2011-01-01

    Measures of survey reliability are commonly addressed in marketing courses. One statistic of reliability is "Cronbach's alpha." This paper presents an application of survey reliability as a reflexive application of multiple-choice exam validation. The application provides an interactive decision support system that incorporates survey item…

  15. Developing Confidence Limits For Reliability Of Software

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1991-01-01

    Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.

  16. Investigation of improving MEMS-type VOA reliability

    NASA Astrophysics Data System (ADS)

    Hong, Seok K.; Lee, Yeong G.; Park, Moo Y.

    2003-12-01

    MEMS technologies have been applied to a lot of areas, such as optical communications, gyroscopes, bio-medical components, and so on. In terms of the applications in the optical communication field, MEMS technologies are essential, especially in multi-dimensional optical switches and Variable Optical Attenuators (VOAs). This paper describes the process for the development of MEMS-type VOAs with good optical performance and improved reliability. Generally, MEMS VOAs have been fabricated by silicon micro-machining processes, precise fibre alignment, and sophisticated packaging processes. Because they are composed of many structures with various materials, it is difficult to make the devices reliable. We have developed MEMS-type VOAs with many failure mode considerations (FMEA: Failure Mode Effect Analysis) in the initial design step, predicted critical failure factors and revised the design accordingly, and confirmed the reliability by preliminary testing. The predicted failure factors were moisture, the bonding strength of the wire between the MEMS chip and the TO-CAN, and instability of the supplied signals. Statistical quality control tools (ANOVA, t-tests, and so on) were used to control these potential failure factors and produce optimum manufacturing conditions. To sum up, we have successfully developed reliable MEMS-type VOAs with good optical performance by controlling potential failure factors and using statistical quality control tools. As a result, the developed VOAs passed international reliability standards (Telcordia GR-1221-CORE).

  17. Investigation of improving MEMS-type VOA reliability

    NASA Astrophysics Data System (ADS)

    Hong, Seok K.; Lee, Yeong G.; Park, Moo Y.

    2004-01-01

    MEMS technologies have been applied to a lot of areas, such as optical communications, gyroscopes, bio-medical components, and so on. In terms of the applications in the optical communication field, MEMS technologies are essential, especially in multi-dimensional optical switches and Variable Optical Attenuators (VOAs). This paper describes the process for the development of MEMS-type VOAs with good optical performance and improved reliability. Generally, MEMS VOAs have been fabricated by silicon micro-machining processes, precise fibre alignment, and sophisticated packaging processes. Because they are composed of many structures with various materials, it is difficult to make the devices reliable. We have developed MEMS-type VOAs with many failure mode considerations (FMEA: Failure Mode Effect Analysis) in the initial design step, predicted critical failure factors and revised the design accordingly, and confirmed the reliability by preliminary testing. The predicted failure factors were moisture, the bonding strength of the wire between the MEMS chip and the TO-CAN, and instability of the supplied signals. Statistical quality control tools (ANOVA, t-tests, and so on) were used to control these potential failure factors and produce optimum manufacturing conditions. To sum up, we have successfully developed reliable MEMS-type VOAs with good optical performance by controlling potential failure factors and using statistical quality control tools. As a result, the developed VOAs passed international reliability standards (Telcordia GR-1221-CORE).

  18. Development of LACIE CCEA-1 weather/wheat yield models. [regression analysis

    NASA Technical Reports Server (NTRS)

    Strommen, N. D.; Sakamoto, C. M.; Leduc, S. K.; Umberger, D. E. (Principal Investigator)

    1979-01-01

    The advantages and disadvantages of the causal (phenological, dynamic, physiological), statistical regression, and analog approaches to modeling grain yield are examined. Given LACIE's primary goal of estimating wheat production for the large areas of eight major wheat-growing regions, the statistical regression approach of correlating historical yield and climate data offered the Center for Climatic and Environmental Assessment the greatest potential return within the constraints of time and data sources. The basic equation for the first-generation wheat-yield model is given. Topics discussed include truncation, trend variable, selection of weather variables, episodic events, strata selection, operational data flow, weighting, and model results.

  19. Advanced reliability methods for structural evaluation

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.; Wu, Y.-T.

    1985-01-01

    Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
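
    The response-surface strategy above can be sketched in a few lines: run the expensive routine at a handful of perturbed input points, fit an explicit polynomial to the k results, and then work with the cheap explicit form. The sketch below is only an illustration under assumed inputs (the routine, the design points, and the exceedance threshold are all invented), and the final fast-probability-integration step is replaced here by plain sampling of the surrogate.

    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_routine(x1, x2):
        # Stand-in for a costly structural response computation (assumed form).
        return 3.0 + 0.8 * x1 - 0.5 * x2 + 0.2 * x1 * x2

    # k = 9 perturbed runs on a small grid around the variable means
    x1_pts = np.repeat([-1.0, 0.0, 1.0], 3)
    x2_pts = np.tile([-1.0, 0.0, 1.0], 3)
    y_pts = np.array([expensive_routine(a, b) for a, b in zip(x1_pts, x2_pts)])

    # Fit a quadratic response surface Y ~ p(x1, x2) by least squares
    design = np.column_stack([np.ones_like(x1_pts), x1_pts, x2_pts,
                              x1_pts**2, x2_pts**2, x1_pts * x2_pts])
    coef, *_ = np.linalg.lstsq(design, y_pts, rcond=None)

    # The surrogate is explicit and cheap, so low probabilities can now be
    # estimated quickly (a formal FPI/FORM step would replace this sampling).
    x1_s = rng.normal(0.0, 1.0, 500_000)
    x2_s = rng.normal(0.0, 1.0, 500_000)
    y_s = (coef[0] + coef[1] * x1_s + coef[2] * x2_s
           + coef[3] * x1_s**2 + coef[4] * x2_s**2 + coef[5] * x1_s * x2_s)
    print("P(Y > 5.5) ~", np.mean(y_s > 5.5))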

  20. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  1. Diverse Data Sets Can Yield Reliable Information through Mechanistic Modeling: Salicylic Acid Clearance.

    PubMed

    Raymond, G M; Bassingthwaighte, J B

    This is a practical example of a powerful research strategy: putting together data from studies covering a diversity of conditions can yield a scientifically sound grasp of the phenomenon when the individual observations failed to provide definitive understanding. The rationale is that defining a realistic, quantitative, explanatory hypothesis for the whole set of studies brings about a "consilience" of the often competing hypotheses considered for individual data sets. An internally consistent conjecture linking multiple data sets simultaneously provides stronger evidence on the characteristics of a system than does analysis of individual data sets limited to narrow ranges of conditions. Our example examines three very different data sets on the clearance of salicylic acid from humans: a high-concentration set from aspirin overdoses; a set with medium concentrations from a research study on the influences of the route of administration and of sex on the clearance kinetics; and a set on low-dose aspirin for cardiovascular health. Three models were tested: (1) a first-order reaction, (2) a Michaelis-Menten (M-M) approach, and (3) an enzyme kinetic model with forward and backward reactions. The reaction rates found from model 1 were distinctly different for the three data sets, having no commonality. The M-M model 2 fitted each of the three data sets but gave a reliable estimate of the Michaelis constant only for the medium-level data (Km = 24±5.4 mg/L); analyzing the three data sets together with model 2 gave Km = 18±2.6 mg/L. (Estimating parameters using larger numbers of data points in an optimization increases the degrees of freedom, constraining the range of the estimates.) Using the enzyme kinetic model (3) increased the number of free parameters but nevertheless improved the goodness of fit to the combined data sets, giving tighter constraints and a lower estimated Km = 14.6±2.9 mg/L, demonstrating that fitting diverse data sets with a single model
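
    As a rough illustration of what fitting the M-M model jointly to pooled data involves, the minimal sketch below estimates Vmax and Km for a one-compartment Michaelis-Menten elimination model from three concentration-time series at once. The data, starting values, and units are invented placeholders, not the study's data, and the paper itself fits richer models than this.

    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import least_squares

    def mm_decay(c, t, vmax, km):
        # dC/dt = -Vmax * C / (Km + C)   (one-compartment assumption)
        return -vmax * c / (km + c)

    # Three hypothetical data sets at high, medium and low concentrations (mg/L)
    data_sets = [
        (np.array([0., 4., 8., 16., 24.]), np.array([400., 330., 262., 140., 55.])),
        (np.array([0., 2., 4., 8., 12.]),  np.array([40., 33., 27., 17., 10.])),
        (np.array([0., 1., 2., 4., 6.]),   np.array([4.0, 3.3, 2.7, 1.8, 1.2])),
    ]

    def residuals(theta):
        vmax, km = theta
        res = []
        for t, c_obs in data_sets:
            c_fit = odeint(mm_decay, c_obs[0], t, args=(vmax, km)).ravel()
            # Relative residuals so high- and low-dose sets carry comparable weight
            res.append((c_fit - c_obs) / c_obs)
        return np.concatenate(res)

    fit = least_squares(residuals, x0=[20.0, 20.0], bounds=([0.1, 0.1], [1e3, 1e3]))
    vmax_hat, km_hat = fit.x
    print(f"Vmax ~ {vmax_hat:.1f} mg/L/h, Km ~ {km_hat:.1f} mg/L")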

  2. Quantifying yield gaps in wheat production in Russia

    NASA Astrophysics Data System (ADS)

    Schierhorn, Florian; Faramarzi, Monireh; Prishchepov, Alexander V.; Koch, Friedrich J.; Müller, Daniel

    2014-08-01

    Crop yields must increase substantially to meet the increasing demands for agricultural products. Crop yield increases are particularly important for Russia because low crop yields prevail across Russia’s widespread and fertile land resources. However, reliable data are lacking regarding the spatial distribution of potential yields in Russia, which can be used to determine yield gaps. We used a crop growth model to determine the yield potentials and yield gaps of winter and spring wheat at the provincial level across European Russia. We modeled the annual yield potentials from 1995 to 2006 with optimal nitrogen supplies for both rainfed and irrigated conditions. Overall, the results suggest yield gaps of 1.51-2.10 t ha-1, or 44-52% of the yield potential under rainfed conditions. Under irrigated conditions, yield gaps of 3.14-3.30 t ha-1, or 62-63% of the yield potential, were observed. However, recurring droughts cause large fluctuations in yield potentials under rainfed conditions, even when the nitrogen supply is optimal, particularly in the highly fertile black soil areas of southern European Russia. The highest yield gaps (up to 4 t ha-1) under irrigated conditions were detected in the steppe areas in southeastern European Russia along the border of Kazakhstan. Improving the nutrient and water supply and using crop breeds that are adapted to the frequent drought conditions are important for reducing yield gaps in European Russia. Our regional assessment helps inform policy and agricultural investors and prioritize research that aims to increase crop production in this important region for global agricultural markets.

  3. Reliability Centered Maintenance - Methodologies

    NASA Technical Reports Server (NTRS)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  4. Evaluation of the CEAS trend and monthly weather data models for soybean yields in Iowa, Illinois, and Indiana

    NASA Technical Reports Server (NTRS)

    French, V. (Principal Investigator)

    1982-01-01

    The CEAS models evaluated use historic trend and meteorological and agroclimatic variables to forecast soybean yields in Iowa, Illinois, and Indiana. Indicators of yield reliability and current measures of modeled yield reliability were obtained from bootstrap tests on the end of season models. Indicators of yield reliability show that the state models are consistently better than the crop reporting district (CRD) models. One CRD model is especially poor. At the state level, the bias of each model is less than one half quintal/hectare. The standard deviation is between one and two quintals/hectare. The models are adequate in terms of coverage and are to a certain extent consistent with scientific knowledge. Timely yield estimates can be made during the growing season using truncated models. The models are easy to understand and use and are not costly to operate. Other than the specification of values used to determine evapotranspiration, the models are objective. Because the method of variable selection used in the model development is adequately documented, no evaluation can be made of the objectivity and cost of redevelopment of the model.

  5. Heterogeneous variances in multi-environment yield trials for corn hybrids

    USDA-ARS?s Scientific Manuscript database

    Recent developments in statistics and computing have enabled much greater levels of complexity in statistical models of multi-environment yield trial data. One particular feature of interest to breeders is simultaneously modeling heterogeneity of variances among environments and cultivars. Our obj...

  6. Climate Variability and Sugarcane Yield in Louisiana.

    NASA Astrophysics Data System (ADS)

    Greenland, David

    2005-11-01

    This paper seeks to understand the role that climate variability has on annual yield of sugarcane in Louisiana. Unique features of sugarcane growth in Louisiana and nonclimatic, yield-influencing factors make this goal an interesting and challenging one. Several methods of seeking and establishing the relations between yield and climate variables are employed. First, yield climate relations were investigated at a single research station where crop variety and growing conditions could be held constant and yield relations could be established between a predominant older crop variety and a newer one. Interviews with crop experts and a literature survey were used to identify potential climatic factors that control yield. A statistical analysis was performed using statewide yield data from the American Sugar Cane League from 1963 to 2002 and a climate database. Yield values for later years were adjusted downward to form an adjusted yield dataset. The climate database was principally constructed from daily and monthly values of maximum and minimum temperature and daily and monthly total precipitation for six cooperative weather-reporting stations representative of the area of sugarcane production. The influence of 74 different, though not independent, climate-related variables on sugarcane yield was investigated. The fact that a climate signal exists is demonstrated by comparing mean values of the climate variables corresponding to the upper and lower third of adjusted yield values. Most of these mean-value differences show an intuitively plausible difference between the high- and low-yield years. The difference between means of the climate variables for years corresponding to the upper and lower third of annual yield values for 13 of the variables is statistically significant at or above the 90% level. A correlation matrix was used to identify the variables that had the largest influence on annual yield. Four variables [called here critical climatic variables (CCV

  7. Plateletpheresis efficiency and mathematical correction of software-derived platelet yield prediction: A linear regression and ROC modeling approach.

    PubMed

    Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David

    2017-10-01

    Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed cell processor accuracy of predicted platelet (PLT) yields with the goal of a better prediction of DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software-predicted yield and actual PLT yield were statistically evaluated. Software prediction was optimized by linear regression analysis and its optimal cut-off to obtain a DP assessed by receiver operating characteristic (ROC) curve modeling. Three hundred and two plateletpheresis procedures were performed; on 271 (89.7%) occasions donors were men and on 31 (10.3%) women. Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486, P < .001). Means of software-derived values differed significantly from actual PLT yield, 4.72 × 10¹¹ vs. 6.12 × 10¹¹, respectively (P < .001). The following equation was developed to adjust these values: actual PLT yield = 0.221 + (1.254 × theoretical PLT yield). The ROC curve model showed an optimal apheresis device software prediction cut-off of 4.65 × 10¹¹ to obtain a DP, with a sensitivity of 82.2%, specificity of 93.3%, and an area under the curve (AUC) of 0.909. Trima Accel v6.0 software consistently underestimated PLT yields. A simple correction derived from linear regression analysis accurately corrected this underestimation, and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
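
    A small sketch of the correction and cut-off logic reported above: the regression coefficients (0.221, 1.254) and the 4.65 × 10¹¹ cut-off come from the abstract, while the example predictions, the actual yields, and the double-product threshold of 6.0 × 10¹¹ are invented for illustration.

    import numpy as np

    def corrected_yield(software_prediction_e11):
        # actual PLT yield = 0.221 + 1.254 * software-predicted yield (units of 1e11 PLT)
        return 0.221 + 1.254 * software_prediction_e11

    # Hypothetical device predictions and actual collected yields (x 1e11 platelets)
    predicted = np.array([3.2, 4.1, 4.7, 5.0, 5.6, 6.3])
    actual    = np.array([4.3, 5.3, 6.1, 6.6, 7.2, 8.1])

    double_product_threshold = 6.0   # assumed definition of a double product
    cutoff = 4.65                    # software-prediction cut-off from the ROC analysis

    is_dp = actual >= double_product_threshold
    call_dp = predicted >= cutoff

    sensitivity = np.mean(call_dp[is_dp])
    specificity = np.mean(~call_dp[~is_dp])
    print("corrected yields:", corrected_yield(predicted).round(2))
    print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")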

  8. Further Examination of the Reliability of the Modified Rathus Assertiveness Schedule.

    ERIC Educational Resources Information Center

    Del Greco, Linda; And Others

    1986-01-01

    Examined the reliability of the 30-item Modified Rathus Assertiveness Schedule (MRAS) using the test-retest method over a three-week period. The MRAS yielded correlations of .74 using the Pearson product-moment and Spearman-Brown correlation coefficients. Correlations for males yielded .77 and .72. For females, correlations for both tests were .72.…

  9. Characterizing bias correction uncertainty in wheat yield predictions

    NASA Astrophysics Data System (ADS)

    Ortiz, Andrea Monica; Jones, Julie; Freckleton, Robert; Scaife, Adam

    2017-04-01

    Farming systems are under increased pressure due to current and future climate change, variability and extremes. Research on the impacts of climate change on crop production typically relies on the output of complex Global and Regional Climate Models, which are used as input to crop impact models. Yield predictions from these top-down approaches can have high uncertainty for several reasons, including diverse model construction and parameterization, future emissions scenarios, and inherent or response uncertainty. These uncertainties propagate down each step of the 'cascade of uncertainty' that flows from climate input to impact predictions, leading to yield predictions that may be too complex for their intended use in practical adaptation options. In addition to uncertainty from impact models, uncertainty can also stem from the intermediate steps that are used in impact studies to adjust climate model simulations to become more realistic when compared to observations, or to correct the spatial or temporal resolution of climate simulations, which are often not directly applicable as input into impact models. These important steps of bias correction or calibration also add uncertainty to final yield predictions, given the various approaches that exist to correct climate model simulations. In order to address how much uncertainty the choice of bias correction method can add to yield predictions, we use several evaluation runs from Regional Climate Models from the Coordinated Regional Downscaling Experiment over Europe (EURO-CORDEX) at different resolutions together with different bias correction methods (linear and variance scaling, power transformation, quantile-quantile mapping) as input to a statistical crop model for wheat, a staple European food crop. The objective of our work is to compare the resulting simulation-driven hindcasted wheat yields to climate observation-driven wheat yield hindcasts from the UK and Germany in order to determine ranges of yield
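
    One of the correction approaches listed above, empirical quantile-quantile mapping, can be sketched compactly: each simulated value is mapped through the quantile relationship between the model and the observations over a calibration period. The gamma-distributed series below are synthetic stand-ins for observed and simulated precipitation, not EURO-CORDEX output.

    import numpy as np

    rng = np.random.default_rng(1)
    obs = rng.gamma(shape=2.0, scale=3.0, size=3000)       # observed daily precipitation
    sim_hist = rng.gamma(shape=2.0, scale=4.5, size=3000)  # biased model, calibration period
    sim_fut = rng.gamma(shape=2.0, scale=4.5, size=1000)   # simulated values to be corrected

    def quantile_map(values, sim_ref, obs_ref, n_quantiles=100):
        # Map each value through the empirical sim -> obs quantile relationship
        q = np.linspace(0.0, 1.0, n_quantiles)
        sim_q = np.quantile(sim_ref, q)
        obs_q = np.quantile(obs_ref, q)
        return np.interp(values, sim_q, obs_q)

    corrected = quantile_map(sim_fut, sim_hist, obs)
    print("raw mean %.2f  corrected mean %.2f  observed mean %.2f"
          % (sim_fut.mean(), corrected.mean(), obs.mean()))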

  10. Reliability analysis of composite structures

    NASA Technical Reports Server (NTRS)

    Kan, Han-Pin

    1992-01-01

    A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters are then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, fabrication and assembly processes. The influence of structural geometry and mode of failure are also considerations in the evaluation. Example problems are given to illustrate various levels of analytical complexity.
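
    The core numerical-integration step can be illustrated with a simple stress-strength formulation: once scatter in the applied load and in the strength has been fitted to probability distributions, reliability is the integral of P(load ≤ s) weighted by the strength density. The distributions and parameters below are assumptions for illustration, not values from the report.

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    load = stats.norm(loc=300.0, scale=40.0)         # applied load, MPa (assumed)
    strength = stats.lognorm(s=0.08, scale=450.0)    # composite strength, MPa (assumed)

    def integrand(s):
        # P(load <= s) weighted by the strength density at s
        return load.cdf(s) * strength.pdf(s)

    reliability, _ = quad(integrand, 0.0, 1500.0)
    print(f"reliability = {reliability:.6f}, P(failure) = {1.0 - reliability:.2e}")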

  11. System and Software Reliability (C103)

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores

    2003-01-01

    Within the last decade better reliability models (hardware, software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. For the new models for the software component of the last decade, there is a great need to bring them into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool with better models would greatly add value in assessing GSFC projects.

  12. Claims about the Reliability of Student Evaluations of Instruction: The Ecological Fallacy Rides Again

    ERIC Educational Resources Information Center

    Morley, Donald D.

    2012-01-01

    The vast majority of the research on student evaluation of instruction has assessed the reliability of groups of courses and yielded either a single reliability coefficient for the entire group, or grouped reliability coefficients for each student evaluation of teaching (SET) item. This manuscript argues that these practices constitute a form of…

  13. Compound estimation procedures in reliability

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1990-01-01

    At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability, even when few or no failures have been recorded. Point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating the reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages have lower risk than the corresponding estimators conditioned only on the most recent design failure data. Several models were explored and preliminary models involving bivariate Poisson distribution and the

  14. Patients and medical statistics. Interest, confidence, and ability.

    PubMed

    Woloshin, Steven; Schwartz, Lisa M; Welch, H Gilbert

    2005-11-01

    People are increasingly presented with medical statistics. There are no existing measures to assess their level of interest or confidence in using medical statistics. To develop 2 new measures, the STAT-interest and STAT-confidence scales, and assess their reliability and validity. Survey with retest after approximately 2 weeks. Two hundred and twenty-four people were recruited from advertisements in local newspapers, an outpatient clinic waiting area, and a hospital open house. We developed and revised 5 items on interest in medical statistics and 3 on confidence understanding statistics. Study participants were mostly college graduates (52%); 25% had a high school education or less. The mean age was 53 (range 20 to 84) years. Most paid attention to medical statistics (6% paid no attention). The mean (SD) STAT-interest score was 68 (17) and ranged from 15 to 100. Confidence in using statistics was also high: the mean (SD) STAT-confidence score was 65 (19) and ranged from 11 to 100. STAT-interest and STAT-confidence scores were moderately correlated (r=.36, P<.001). Both scales demonstrated good test-retest repeatability (r=.60, .62, respectively), internal consistency reliability (Cronbach's alpha=0.70 and 0.78), and usability (individual item nonresponse ranged from 0% to 1.3%). Scale scores correlated only weakly with scores on a medical data interpretation test (r=.15 and .26, respectively). The STAT-interest and STAT-confidence scales are usable and reliable. Interest and confidence were only weakly related to the ability to actually use data.

  15. A Priori Estimation of Organic Reaction Yields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emami, Fateme S.; Vahid, Amir; Wylie, Elizabeth K.

    2015-07-21

    A thermodynamically guided calculation of free energies of substrate and product molecules allows for the estimation of the yields of organic reactions. The non-ideality of the system and the solvent effects are taken into account through the activity coefficients calculated at the molecular level by perturbed-chain statistical associating fluid theory (PC-SAFT). The model is iteratively trained using a diverse set of reactions with yields that have been reported previously. This trained model can then estimate a priori the yields of reactions not included in the training set with an accuracy of ca. ±15 %. This ability has the potential to translate into significant economic savings through the selection and then execution of only those reactions that can proceed in good yields.

  16. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
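
    A minimal sketch of the estimation idea, assuming synthetic failure data: the objective is the Kolmogorov-Smirnov distance between the empirical distribution function and the candidate three-parameter Weibull CDF, minimized with Powell's method; the penalty handling of infeasible parameter sets is an implementation assumption, not part of the cited procedure.

    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(2)
    failures = stats.weibull_min.rvs(c=2.5, loc=50.0, scale=300.0, size=60,
                                     random_state=rng)   # synthetic failure data

    def ks_statistic(params):
        shape, loc, scale = params
        if shape <= 0 or scale <= 0 or loc >= failures.min():
            return 1.0e6  # penalize infeasible parameter sets
        x = np.sort(failures)
        n = len(x)
        cdf_vals = stats.weibull_min.cdf(x, shape, loc, scale)
        # EDF is a step function; the KS statistic is the largest gap to the CDF
        ecdf_hi = np.arange(1, n + 1) / n
        ecdf_lo = np.arange(0, n) / n
        return max(np.max(ecdf_hi - cdf_vals), np.max(cdf_vals - ecdf_lo))

    x0 = np.array([1.5, 0.0, failures.mean()])
    res = optimize.minimize(ks_statistic, x0, method="Powell")
    print("shape, location, scale =", np.round(res.x, 2), " KS =", round(res.fun, 4))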

  17. A Monte Carlo Simulation Study of the Reliability of Intraindividual Variability

    PubMed Central

    Estabrook, Ryne; Grimm, Kevin J.; Bowles, Ryan P.

    2012-01-01

    Recent research has seen intraindividual variability (IIV) become a useful technique to incorporate trial-to-trial variability into many types of psychological studies. IIV as measured by individual standard deviations (ISDs) has shown unique prediction to several types of positive and negative outcomes (Ram, Rabbitt, Stollery, & Nesselroade, 2005). One unanswered question regarding measuring intraindividual variability is its reliability and the conditions under which optimal reliability is achieved. Monte Carlo simulation studies were conducted to determine the reliability of the ISD compared to the intraindividual mean. The results indicate that ISDs generally have poor reliability and are sensitive to insufficient measurement occasions, poor test reliability, and unfavorable amounts and distributions of variability in the population. Secondary analysis of psychological data shows that use of individual standard deviations in unfavorable conditions leads to a marked reduction in statistical power, although careful adherence to underlying statistical assumptions allows their use as a basic research tool. PMID:22268793
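
    A compact version of such a simulation: give each simulated person a true mean and a true within-person standard deviation, draw a limited number of occasions twice from the same true parameters, and correlate the resulting individual statistics across the two parallel runs. All settings below (sample sizes, distributions) are illustrative assumptions, not the conditions studied in the article.

    import numpy as np

    rng = np.random.default_rng(3)
    n_persons, n_occasions = 200, 10

    # True person-level parameters: a mean and a within-person SD for each person
    true_mean = rng.normal(50.0, 10.0, n_persons)
    true_sd = rng.gamma(4.0, 1.5, n_persons)

    def simulate_scores():
        # Each person observed on n_occasions trials with their own mean and SD
        return rng.normal(true_mean[:, None], true_sd[:, None], (n_persons, n_occasions))

    # Two parallel "administrations" generated from identical true parameters
    run_a, run_b = simulate_scores(), simulate_scores()

    isd_a, isd_b = run_a.std(axis=1, ddof=1), run_b.std(axis=1, ddof=1)
    mean_a, mean_b = run_a.mean(axis=1), run_b.mean(axis=1)

    # Parallel-forms reliability: correlation of each statistic across the two runs
    rel_isd = np.corrcoef(isd_a, isd_b)[0, 1]
    rel_mean = np.corrcoef(mean_a, mean_b)[0, 1]
    print(f"reliability of ISD ~ {rel_isd:.2f}, of the individual mean ~ {rel_mean:.2f}")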

  18. Crop Yield Predictions - High Resolution Statistical Model for Intra-season Forecasts Applied to Corn in the US

    NASA Astrophysics Data System (ADS)

    Cai, Y.

    2017-12-01

    Accurately forecasting crop yields has broad implications for economic trading, food production monitoring, and global food security. However, the variation of environmental variables presents challenges to modeling yields accurately, especially when the lack of highly accurate measurements creates difficulties in creating models that can succeed across space and time. In 2016, we developed a sequence of machine-learning based models forecasting end-of-season corn yields for the US at both the county and national levels. We combined machine learning algorithms in a hierarchical way, and used an understanding of physiological processes in temporal feature selection, to achieve high precision in our intra-season forecasts, including in very anomalous seasons. During the live run, we predicted the national corn yield within 1.40% of the final USDA number as early as August. In the backtesting of the 2000-2015 period, our model predicts national yield within 2.69% of the actual yield on average already by mid-August. At the county level, our model predicts 77% of the variation in final yield using data through the beginning of August and improves to 80% by the beginning of October, with the percentage of counties predicted within 10% of the average yield increasing from 68% to 73%. Further, the lowest errors are in the most significant producing regions, resulting in very high precision national-level forecasts. In addition, we identify the changes of important variables throughout the season, specifically early-season land surface temperature, and mid-season land surface temperature and vegetation index. For the 2017 season, we feed 2016 data to the training set, together with additional geospatial data sources, aiming to make the current model even more precise. We will show how our 2017 US corn yield forecasts converge in time, which factors affect the yield the most, as well as present our plans for 2018 model adjustments.

  19. Midwest Agriculture: A comparison of AVHRR NDVI3g data and crop yields in Corn Belt region of the United States from 1982 to 2014

    NASA Astrophysics Data System (ADS)

    Glennie, E.; Anyamba, A.; Eastman, R.

    2016-12-01

    A time series of Advanced Very High Resolution Radiometer (AVHRR) derived normalized difference vegetation index (NDVI) images was compared to National Agricultural Statistics Service (NASS) corn yield data in the Corn Belt of the United States from 1982 to 2014. The relationship between NDVI and crop yields under El Nino, neutral, and La Nina conditions was used to assess 1) the reliability of using NDVI as an indicator of crop productivity, and 2) the response of the Corn Belt to El Nino/ Southern Oscillation (ENSO) teleconnection effects. First, bi-monthly NDVI data were combined into monthly data using the maximum value compositing technique to reduce cloud contamination and other effects. The most representative seasonal curve of NDVI values over the course of the study period was extracted to define the growing season in the region - May to October. Standardized NDVI anomalies were calculated and averaged to produce a growing season NDVI metrics to represent each Agricultural Statistics Division (ASD) for each year in the study period. The corn yields were detrended in order to remove effects of technological advancements on crop productivity (use of genetically modified seeds, fertilizer, herbicides). Correlation (R) values between the NDVI anomalies and detrended corn yields varied across the Corn Belt, with a maximum of 0.81 and mean of 0.49. While corn is the dominant crop in the region, some inconsistencies between corn yield and NDVI may be accounted for by an increase in soy yield for a given year due to crop rotation practices. The 10 El Nino events and 9 La Nina events that occurred between 1982 and 2014 are not reflected in a consistent manner in NDVI or corn yield data. However, composites of NDVI and crop yields for all El Nino events indicate there is a tendency for higher than normal NDVI and increased corn yields. Conversely, the composite crop yield image for La Nina events shows a slight decrease in productivity.
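
    The processing chain described above (maximum-value compositing, growing-season standardized anomalies, detrended yields, correlation) can be outlined in a few steps; the arrays below are synthetic placeholders rather than AVHRR NDVI3g or NASS data.

    import numpy as np

    rng = np.random.default_rng(4)
    n_years = 33                                               # 1982-2014
    ndvi_bimonthly = rng.uniform(0.2, 0.9, (n_years, 12, 2))   # year x month x two composites

    # 1) Maximum-value compositing: keep the larger of the two values per month
    ndvi_monthly = ndvi_bimonthly.max(axis=2)

    # 2) Growing-season (May-October) mean, then a standardized anomaly per year
    season = ndvi_monthly[:, 4:10].mean(axis=1)
    ndvi_anom = (season - season.mean()) / season.std(ddof=1)

    # 3) Detrend corn yields with a linear fit to remove the technology trend
    years = np.arange(n_years)
    corn_yield = 80.0 + 1.8 * years + 6.0 * ndvi_anom + rng.normal(0.0, 4.0, n_years)
    trend = np.polyval(np.polyfit(years, corn_yield, 1), years)
    yield_anom = corn_yield - trend

    # 4) Correlation between the NDVI anomaly and the detrended yield
    r = np.corrcoef(ndvi_anom, yield_anom)[0, 1]
    print(f"r = {r:.2f}")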

  20. A statistical approach to instrument calibration

    Treesearch

    Robert R. Ziemer; David Strauss

    1978-01-01

    Summary - It has been found that two instruments will yield different numerical values when used to measure identical points. A statistical approach is presented that can be used to approximate the error associated with the calibration of instruments. Included are standard statistical tests that can be used to determine if a number of successive calibrations of the...

  1. Yield gap mapping as a support tool for risk management in agriculture

    NASA Astrophysics Data System (ADS)

    Lahlou, Ouiam; Imani, Yasmina; Slimani, Imane; Van Wart, Justin; Yang, Haishun

    2016-04-01

    The increasing frequency and magnitude of droughts in Morocco and the mounting losses from extended droughts in the agricultural sector have emphasized the need to develop reliable and timely tools to manage drought and to mitigate the resulting catastrophic damage. In 2011, Morocco launched a cereals multi-risk insurance with drought as the most threatening and most frequent hazard in the country. However, in order to assess the gap and to implement the most suitable compensation, it is essential to quantify the potential yield in each area. In collaboration with the University of Nebraska-Lincoln, a study is being carried out in Morocco that aims to determine the yield potentials and the yield gaps in the different agro-climatic zones of the country. It fits into the larger project Global Yield Gap and Water Productivity Atlas: http://www.yieldgap.org/. The yield gap (Yg) is the difference between the crop yield potential (Yp), or the water-limited yield potential (Yw), and the actual yields reached by farmers. World Food Studies (WOFOST), a mechanistic crop simulation model, has been used for this purpose. Prior to the simulations, reliable information on actual yields, weather, crop management and soils was collected in 7 Moroccan buffer zones, each considered within a circle of 100 km around a weather station, homogeneously spread across the country and where cereals are widely grown. The model calibration was also carried out using WOFOST default variety data. The map-based results represent a robust tool, not only for organizing drought insurance, but also for agricultural management and agricultural risk management. Moreover, accurate and geospatially granular estimates of Yg and Yw will make it possible to focus on the regions with the largest unexploited yield gaps and the greatest potential to close them, and consequently to improve food security in the country.

  2. Evaluation of Thompson-type trend and monthly weather data models for corn yields in Iowa, Illinois, and Indiana

    NASA Technical Reports Server (NTRS)

    French, V. (Principal Investigator)

    1982-01-01

    An evaluation was made of Thompson-Type models which use trend terms (as a surrogate for technology), meteorological variables based on monthly average temperature, and total precipitation to forecast and estimate corn yields in Iowa, Illinois, and Indiana. Pooled and unpooled Thompson-type models were compared. Neither was found to be consistently superior to the other. Yield reliability indicators show that the models are of limited use for large area yield estimation. The models are objective and consistent with scientific knowledge. Timely yield forecasts and estimates can be made during the growing season by using normals or long range weather forecasts. The models are not costly to operate and are easy to use and understand. The model standard errors of prediction do not provide a useful current measure of modeled yield reliability.

  3. What Your Yield Says about You

    ERIC Educational Resources Information Center

    Hoover, Eric

    2009-01-01

    The recession has turned Americans into numbers addicts. Seemingly endless supplies of statistics--stock prices, retail sales, and the gross domestic product--offer various views about the health of the nation's economy. Higher education has its own economic indicators. Among the most important is "yield," the percentage of admitted students who…

  4. A Remote Sensing-Derived Corn Yield Assessment Model

    NASA Astrophysics Data System (ADS)

    Shrestha, Ranjay Man

    Agricultural studies and food security have become critical research topics due to continuous growth in human population and simultaneous shrinkage in agricultural land. In spite of modern technological advancements to improve agricultural productivity, more studies on crop yield assessment and food productivity are still necessary to fulfill the constantly increasing food demands. Besides human activities, natural disasters such as flood and drought, along with rapid climate change, also inflict an adverse effect on food productivity. Understanding the impact of these disasters on crop yield and making early impact estimations could help planning for any national or international food crisis. Similarly, the United States Department of Agriculture (USDA) Risk Management Agency (RMA) insurance management utilizes appropriately estimated crop yield and damage assessment information to sustain farmers' practice through timely and proper compensation. Through the County Agricultural Production Survey (CAPS), the USDA National Agricultural Statistics Service (NASS) uses traditional methods of field interviews and farmer-reported survey data to perform annual crop condition monitoring and production estimation at the regional and state levels. As these manual approaches to yield estimation are highly inefficient and produce very limited samples to represent the entire area, NASS requires supplemental spatial data that provide continuous and timely information on crop production and annual yield. Compared to traditional methods, remote sensing data and products offer wider spatial extent, more accurate location information, higher temporal resolution and data distribution, and lower data cost--thus providing a complementary option for estimation of crop yield information. Remote sensing derived vegetation indices such as the Normalized Difference Vegetation Index (NDVI) provide measurable statistics of potential crop growth based on the spectral reflectance and could

  5. Reliability analysis of component of affination centrifugal 1 machine by using reliability engineering

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Ginting, E.; Darnello, T.

    2017-12-01

    In a company that produces refined sugar, the production floor has not reached the required level of critical machine availability because the machines often suffer damage (breakdowns). This results in sudden losses of production time and production opportunities. The problem can be addressed with the Reliability Engineering method, in which a statistical approach to historical failure data is used to identify the pattern of the failure distribution. The method can provide the reliability, failure rate, and availability level of a machine over the scheduled maintenance time interval. The distribution test on the time-to-failure (MTTF) data shows that the flexible hose component follows a lognormal distribution, while the teflon cone lifting component follows a Weibull distribution. The distribution test on the mean-time-to-repair (MTTR) data shows that the flexible hose component follows an exponential distribution, while the teflon cone lifting component follows a Weibull distribution. On the actual replacement schedule of every 720 hours, the flexible hose component has a reliability of 0.2451 and an availability of 0.9960, while on the actual replacement schedule of every 1944 hours, the critical teflon cone lifting component has a reliability of 0.4083 and an availability of 0.9927.
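
    The reported quantities follow from two standard calculations: reliability at the replacement interval from the fitted time-to-failure distribution, R(t) = P(T > t), and steady-state availability from MTTF/(MTTF + MTTR). The sketch below uses assumed distribution parameters, so its numbers will not reproduce the values above.

    from scipy import stats

    # Assumed lognormal time-to-failure model for the flexible hose (hours)
    ttf = stats.lognorm(s=0.9, scale=500.0)
    # Assumed exponential time-to-repair model (hours)
    ttr = stats.expon(scale=3.0)

    interval = 720.0                           # replacement schedule, hours
    reliability = ttf.sf(interval)             # R(t) = P(T > t)
    mttf, mttr = ttf.mean(), ttr.mean()
    availability = mttf / (mttf + mttr)        # steady-state availability

    print(f"R({interval:.0f} h) = {reliability:.4f}, availability = {availability:.4f}")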

  6. The Future of Statistical Software. Proceedings of a Forum--Panel on Guidelines for Statistical Software (Washington, D.C., February 22, 1991).

    ERIC Educational Resources Information Center

    National Academy of Sciences - National Research Council, Washington, DC.

    The Panel on Guidelines for Statistical Software was organized in 1990 to document, assess, and prioritize problem areas regarding quality and reliability of statistical software; present prototype guidelines in high priority areas; and make recommendations for further research and discussion. This document provides the following papers presented…

  7. Interval Estimation of Revision Effect on Scale Reliability via Covariance Structure Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2009-01-01

    A didactic discussion of a procedure for interval estimation of change in scale reliability due to revision is provided, which is developed within the framework of covariance structure modeling. The method yields ranges of plausible values for the population gain or loss in reliability of unidimensional composites, which results from deletion or…

  8. Direct Statistical Simulation of Astrophysical and Geophysical Flows

    NASA Astrophysics Data System (ADS)

    Marston, B.; Tobias, S.

    2011-12-01

    Astrophysical and geophysical flows are amenable to direct statistical simulation (DSS), the calculation of statistical properties that does not rely upon accumulation by direct numerical simulation (DNS) (Tobias and Marston, 2011). Anisotropic and inhomogeneous flows, such as those found in the atmospheres of planets, in rotating stars, and in disks, provide the starting point for an expansion in fluctuations about the mean flow, leading to a hierarchy of equations of motion for the equal-time cumulants. The method is described for a general set of evolution equations, and then illustrated for two specific cases: (i) A barotropic jet on a rotating sphere (Marston, Conover, and Schneider, 2008); and (ii) A model of a stellar tachocline driven by relaxation to an underlying flow with shear (Cally 2001) for which a joint instability arises from the combination of shearing forces and magnetic stress. The reliability of DSS is assessed by comparing statistics so obtained against those accumulated from DNS, the traditional approach. The simplest non-trivial closure, CE2, sets the third and higher cumulants to zero yet yields qualitatively accurate low-order statistics for both systems. Physically CE2 retains only the eddy-mean flow interaction, and drops the eddy-eddy interaction. Quantitatively accurate zonal means are found for barotropic jet for long and short (but not intermediate) relaxation times, and for Cally problem in the case of strong shearing and large magnetic fields. Deficiencies in CE2 can be repaired at the CE3 level, that is by retaining the third cumulant (Marston 2011). We conclude by discussing possible extensions of the method both in terms of computational methods and the range of astrophysical and geophysical problems that are of interest.

  9. Measuring the food and built environments in urban centres: reliability and validity of the EURO-PREVOB Community Questionnaire.

    PubMed

    Pomerleau, J; Knai, C; Foster, C; Rutter, H; Darmon, N; Derflerova Brazdova, Z; Hadziomeragic, A F; Pekcan, G; Pudule, I; Robertson, A; Brunner, E; Suhrcke, M; Gabrijelcic Blenkus, M; Lhotska, L; Maiani, G; Mistura, L; Lobstein, T; Martin, B W; Elinder, L S; Logstrup, S; Racioppi, F; McKee, M

    2013-03-01

    The authors designed an instrument to objectively measure aspects of the built and food environments in urban areas, the EURO-PREVOB Community Questionnaire, within the EU-funded project 'Tackling the social and economic determinants of nutrition and physical activity for the prevention of obesity across Europe' (EURO-PREVOB). This paper describes its development, reliability, validity, feasibility and relevance to public health and obesity research. The Community Questionnaire is designed to measure key aspects of the food and built environments in urban areas of varying levels of affluence or deprivation, within different countries. The questionnaire assesses (1) the food environment and (2) the built environment. Pilot tests of the EURO-PREVOB Community Questionnaire were conducted in five to 10 purposively sampled urban areas of different socio-economic status in each of Ankara, Brno, Marseille, Riga, and Sarajevo. Inter-rater reliability was compared between two pairs of fieldworkers in each city centre using three methods: inter-observer agreement (IOA), kappa statistics, and intraclass correlation coefficients (ICCs). Data were collected successfully in all five cities. Overall reliability of the EURO-PREVOB Community Questionnaire was excellent (IOA > 0.87, ICCs > 0.91, and kappa statistics > 0.7). However, assessment of certain aspects of the quality of the built environment yielded slightly lower IOA coefficients than the quantitative aspects. The EURO-PREVOB Community Questionnaire was found to be a reliable and practical observational tool for measuring differences in community-level data on environmental factors that can impact on dietary intake and physical activity. The next step is to evaluate its predictive power by collecting behavioural and anthropometric data relevant to obesity and its determinants. Copyright © 2013 The Royal Society for Public Health. Published by Elsevier Ltd

  10. Reliability of a Measure of Institutional Discrimination against Minorities

    DTIC Science & Technology

    1979-12-01

    samples are presented. The first is based upon classical statistical theory and the second derives from a series of computer-generated Monte Carlo... Institutional racism and sexism. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1978. Hays, W. L. and Winkler, R. L. Statistics: probability, inference... statistical measure of the e of institutional discrimination are discussed. Two methods of dealing with the problem of reliability of the measure in small

  11. Compression of freestanding gold nanostructures: from stochastic yield to predictable flow

    NASA Astrophysics Data System (ADS)

    Mook, W. M.; Niederberger, C.; Bechelany, M.; Philippe, L.; Michler, J.

    2010-02-01

    Characterizing the mechanical response of isolated nanostructures is vitally important to fields such as microelectromechanical systems (MEMS) where the behaviour of nanoscale contacts can in large part determine system reliability and lifetime. To address this challenge directly, single crystal gold nanodots are compressed inside a high resolution scanning electron microscope (SEM) using a nanoindenter equipped with a flat punch tip. These structures load elastically, and then yield in a stochastic manner, at loads ranging from 16 to 110 µN, which is up to five times higher than the load necessary for flow after yield. Yielding is immediately followed by displacement bursts equivalent to 1-50% of the initial height, depending on the yield point. During the largest displacement bursts, strain energy within the structure is released while new surface area is created in the form of localized slip bands, which are evident in both the SEM movies and still-images. A first order estimate of the apparent energy release rate, in terms of fracture mechanics concepts, for bursts representing 5-50% of the structure's initial height is on the order of 10-100 J m-2, which is approximately two orders of magnitude lower than bulk values. Once this initial strain burst during yielding has occurred, the structures flow in a ductile way. The implications of this behaviour, which is analogous to a brittle to ductile transition, are discussed with respect to mechanical reliability at the micro- and nanoscales.

  12. Students' attitudes towards learning statistics

    NASA Astrophysics Data System (ADS)

    Ghulami, Hassan Rahnaward; Hamid, Mohd Rashid Ab; Zakaria, Roslinazairimah

    2015-05-01

    A positive attitude towards learning is vital in order to master the core content of the subject matter under study. This is no exception in learning a statistics course, especially at the university level. Therefore, this study investigates students' attitudes towards learning statistics. Six variables or constructs were identified: affect, cognitive competence, value, difficulty, interest, and effort. The instrument used for the study is a questionnaire that was adopted and adapted from the reliable Survey of Attitudes Towards Statistics (SATS©) instrument. The study was conducted with engineering undergraduate students at a university on the East Coast of Malaysia. The respondents consisted of students who were taking the applied statistics course in different faculties. The results are analysed descriptively and contribute to a descriptive understanding of students' attitudes towards the teaching and learning process of statistics.

  13. Increased Reliability for Single-Case Research Results: Is the Bootstrap the Answer?

    ERIC Educational Resources Information Center

    Parker, Richard I.

    2006-01-01

    There is need for objective and reliable single-case research (SCR) results in the movement toward evidence-based interventions (EBI), for inclusion in meta-analyses, and for funding accountability in clinical contexts. Yet SCR deals with data that often do not conform to parametric data assumptions and that yield results of low reliability. A…

  14. Statistical modeling of yield and variance instability in conventional and organic cropping systems

    USDA-ARS?s Scientific Manuscript database

    Cropping systems research was undertaken to address declining crop diversity and verify competitiveness of alternatives to the predominant conventional cropping system in the northern Corn Belt. To understand and capitalize on temporal yield variability within corn and soybean fields, we quantified ...

  15. Statistical Evaluation of VIIRS Ocean Color Products

    NASA Astrophysics Data System (ADS)

    Mikelsons, K.; Wang, M.; Jiang, L.

    2016-02-01

    Evaluation and validation of satellite-derived ocean color products is a complicated task, which often relies on precise in-situ measurements for satellite data quality assessment. However, in-situ measurements are available in comparatively few locations, are expensive, and do not cover all times. In the open ocean, variability occurs over longer spatial and temporal scales, and the water conditions are generally more stable. We use this fact to perform extensive statistical evaluations of the consistency of ocean color retrievals, based on comparisons of data retrieved at different times and corresponding to various retrieval parameters. We have used the NOAA Multi-Sensor Level-1 to Level-2 (MSL12) ocean color data processing system for ocean color product data derived from the Visible Infrared Imaging Radiometer Suite (VIIRS). We show the results for the statistical dependence of normalized water-leaving radiance spectra on various parameters of the retrieval geometry, such as solar- and sensor-zenith angles, as well as physical variables, such as wind speed, air pressure, ozone amount, water vapor, etc. In most cases, the results show consistent retrievals within the relevant range of retrieval parameters, demonstrating good performance of MSL12 in the open ocean. The results also yield the upper bounds of solar- and sensor-zenith angles for reliable ocean color retrievals, and show a slight increase of VIIRS-derived normalized water-leaving radiances with wind speed and water vapor concentration.

  16. Topics in Measurement: Reliability and Validity.

    ERIC Educational Resources Information Center

    Dick, Walter; Hagerty, Nancy

    This text was developed on an autoinstructional basis to familiarize the reader with the various interpretations of reliability and validity, their measurement and evaluation, and factors influencing their measurement. The text enables those with prior knowledge of statistics to increase their understanding of variance and correlation. Review…

  17. Reliability assessments in qualitative health promotion research.

    PubMed

    Cook, Kay E

    2012-03-01

    This article contributes to the debate about the use of reliability assessments in qualitative research in general, and health promotion research in particular. In this article, I examine the use of reliability assessments in qualitative health promotion research in response to health promotion researchers' commonly held misconception that reliability assessments improve the rigor of qualitative research. All qualitative articles published in the journal Health Promotion International from 2003 to 2009 employing reliability assessments were examined. In total, 31.3% (20/64) articles employed some form of reliability assessment. The use of reliability assessments increased over the study period, ranging from <20% in 2003/2004 to 50% and above in 2008/2009, while at the same time the total number of qualitative articles decreased. The articles were then classified into four types of reliability assessments, including the verification of thematic codes, the use of inter-rater reliability statistics, congruence in team coding and congruence in coding across sites. The merits of each type were discussed, with the subsequent discussion focusing on the deductive nature of reliable thematic coding, the limited depth of immediately verifiable data and the usefulness of such studies to health promotion and the advancement of the qualitative paradigm.

  18. Global Crop Yields, Climatic Trends and Technology Enhancement

    NASA Astrophysics Data System (ADS)

    Najafi, E.; Devineni, N.; Khanbilvardi, R.; Kogan, F.

    2016-12-01

    During the last decades, global agricultural production has soared, and technology enhancement is still making a positive contribution to yield growth. However, continuing population growth, water crises, deforestation and climate change threaten global food security. Attempts to predict future food availability around the world can be informed, in part, by the impact of changes to date. A new multilevel model for yield prediction at the country scale using climate covariates and a technology trend is presented in this paper. The structural relationships between average yield and climate attributes as well as trends are estimated simultaneously. All countries are modeled in a single multilevel model with partial pooling and/or clustering to automatically group countries and reduce estimation uncertainties. The El Niño Southern Oscillation (ENSO), the Palmer Drought Severity Index (PDSI), geopotential height (GPH), historical CO2 levels and a time trend, as a relatively reliable approximation of technology, are used as predictors to estimate annual agricultural crop yields for each country from 1961 to 2007. Results show that these indicators can explain the variability in historical crop yields for most of the countries and that the model performs well under out-of-sample verification.
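
    A multilevel model with partial pooling of this general kind can be sketched with a mixed-effects regression: country-specific intercepts and trends are treated as random effects while climate covariates enter as fixed effects. The data frame, column names and coefficients below are invented for illustration and do not correspond to the paper's specification.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    rows = []
    for c in [f"country_{i}" for i in range(20)]:
        slope = rng.normal(0.05, 0.02)                 # country-specific technology trend
        for year in range(1961, 2008):
            enso = rng.normal(0.0, 1.0)
            pdsi = rng.normal(0.0, 2.0)
            y = (2.0 + slope * (year - 1961) - 0.1 * enso + 0.05 * pdsi
                 + rng.normal(0.0, 0.3))
            rows.append(dict(country=c, year=year - 1961, enso=enso, pdsi=pdsi,
                             crop_yield=y))
    data = pd.DataFrame(rows)

    # Random intercept and random trend per country; fixed effects for climate
    model = smf.mixedlm("crop_yield ~ year + enso + pdsi", data,
                        groups="country", re_formula="~year")
    result = model.fit()
    print(result.summary())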

  19. Accuracy assessment: The statistical approach to performance evaluation in LACIE. [Great Plains corridor, United States

    NASA Technical Reports Server (NTRS)

    Houston, A. G.; Feiveson, A. H.; Chhikara, R. S.; Hsu, E. M. (Principal Investigator)

    1979-01-01

    A statistical methodology was developed to check the accuracy of the products of the experimental operations throughout crop growth and to determine whether the procedures are adequate to accomplish the desired accuracy and reliability goals. It has allowed the identification and isolation of key problems in wheat area yield estimation, some of which have been corrected and some of which remain to be resolved. The major unresolved problem in accuracy assessment is that of precisely estimating the bias of the LACIE production estimator. Topics covered include: (1) evaluation techniques; (2) variance and bias estimation for the wheat production estimate; (3) the 90/90 evaluation; (4) comparison of the LACIE estimate with reference standards; and (5) first and second order error source investigations.

  20. Relating design and environmental variables to reliability

    NASA Astrophysics Data System (ADS)

    Kolarik, William J.; Landers, Thomas L.

    The combination of space application and nuclear power source demands high-reliability hardware. The possibilities of failure, either an inability to provide power or a catastrophic accident, must be minimized. Nuclear power experience on the ground has led to highly sophisticated probabilistic risk assessment procedures, most of which require quantitative information to adequately assess such risks. In the area of hardware risk analysis, reliability information plays a key role. One of the lessons learned from the Three Mile Island experience is that thorough analyses of critical components are essential. Nuclear-grade equipment shows some reliability advantages over commercial equipment; however, no statistically significant difference has been found. A recent study pertaining to spacecraft electronics reliability examined some 2500 malfunctions on more than 300 aircraft. The study classified the equipment failures into seven general categories. Design deficiencies and lack of environmental protection accounted for about half of all failures. Within each class, limited reliability modeling was performed using a Weibull failure model.

  1. MEMS reliability: The challenge and the promise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, W.M.; Tanner, D.M.; Miller, S.L.

    1998-05-01

    MicroElectroMechanical Systems (MEMS) that think, sense, act and communicate will open up a broad new array of cost effective solutions only if they prove to be sufficiently reliable. A valid reliability assessment of MEMS has three prerequisites: (1) statistical significance; (2) a technique for accelerating fundamental failure mechanisms; and (3) valid physical models to allow prediction of failures during actual use. These already exist for the microelectronics portion of such integrated systems. The challenge lies in the less well understood micromachine portions and their synergistic effects with microelectronics. This paper presents a methodology addressing these prerequisites and a description of the underlying physics of reliability for micromachines.

  2. Movement-related beta oscillations show high intra-individual reliability.

    PubMed

    Espenhahn, Svenja; de Berker, Archy O; van Wijk, Bernadette C M; Rossiter, Holly E; Ward, Nick S

    2017-02-15

    Oscillatory activity in the beta frequency range (15-30 Hz) recorded from human sensorimotor cortex is of increasing interest as a putative biomarker of motor system function and dysfunction. Despite its increasing use in basic and clinical research, surprisingly little is known about the test-retest reliability of spectral power and peak frequency measures of beta oscillatory signals from sensorimotor cortex. Establishing that these beta measures are stable over time in healthy populations is a necessary precursor to their use in the clinic. Here, we used scalp electroencephalography (EEG) to evaluate intra-individual reliability of beta-band oscillations over six sessions, focusing on changes in beta activity during movement (Movement-Related Beta Desynchronization, MRBD) and after movement termination (Post-Movement Beta Rebound, PMBR). Subjects performed visually-cued unimanual wrist flexion and extension. We assessed Intraclass Correlation Coefficients (ICC) and between-session correlations for spectral power and peak frequency measures of movement-related and resting beta activity. Movement-related and resting beta power from both sensorimotor cortices was highly reliable across sessions. Resting beta power yielded the highest reliability (average ICC=0.903), followed by MRBD (average ICC=0.886) and PMBR (average ICC=0.663). Notably, peak frequency measures yielded lower ICC values compared to the assessment of spectral power, particularly for movement-related beta activity (ICC=0.386-0.402). Our data highlight that power measures of movement-related beta oscillations are highly reliable, while corresponding peak frequency measures show greater intra-individual variability across sessions. Importantly, our finding that beta power estimates show high intra-individual reliability over time serves to validate the notion that these measures reflect meaningful individual differences that can be utilised in basic research and clinical studies.
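
    The abstract does not specify which ICC form was used, so the sketch below computes a one-way random-effects ICC(1,1) on made-up subjects-by-sessions data purely to illustrate how such a between-session reliability coefficient is obtained.

        # Hedged sketch: one-way random-effects ICC(1,1) for a subjects-by-sessions
        # matrix of (made-up) beta-power estimates. The published study may have
        # used a different ICC form; this only illustrates the general computation.
        import numpy as np

        def icc_oneway(x):
            """x: 2-D array, rows = subjects, columns = repeated sessions."""
            n, k = x.shape
            grand_mean = x.mean()
            row_means = x.mean(axis=1)
            ms_between = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
            ms_within = np.sum((x - row_means[:, None]) ** 2) / (n * (k - 1))
            return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

        rng = np.random.default_rng(0)
        subject_effect = rng.normal(0, 2.0, size=(20, 1))          # stable individual differences
        power = 10 + subject_effect + rng.normal(0, 0.8, (20, 6))  # 20 subjects x 6 sessions
        print(f"ICC(1,1) = {icc_oneway(power):.3f}")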

  3. Increasing the reliability of the fluid/crystallized difference score from the Kaufman Adolescent and Adult Intelligence Test with reliable component analysis.

    PubMed

    Caruso, J C

    2001-06-01

    The unreliability of difference scores is a well-documented phenomenon in the social sciences and has led researchers and practitioners to interpret differences cautiously, if at all. In the case of the Kaufman Adolescent and Adult Intelligence Test (KAIT), the unreliability of the difference between the Fluid IQ and the Crystallized IQ is due to the high correlation between the two scales. The consequences of the lack of precision with which differences are identified are wide confidence intervals and low-powered significance tests (i.e., large differences are required to be declared statistically significant). Reliable component analysis (RCA) was performed on the subtests of the KAIT in order to address these problems. RCA is a new data reduction technique that results in uncorrelated component scores with maximum proportions of reliable variance. Results indicate that the scores defined by RCA have discriminant and convergent validity (with respect to the equally weighted scores) and that differences between the scores, derived from a single testing session, were more reliable than differences derived from equal weighting for each age group (11-14 years, 15-34 years, 35-85+ years). This reliability advantage results in narrower confidence intervals around difference scores and smaller differences required for statistical significance.

  4. Measurement and Reliability of Response Inhibition

    PubMed Central

    Congdon, Eliza; Mumford, Jeanette A.; Cohen, Jessica R.; Galvan, Adriana; Canli, Turhan; Poldrack, Russell A.

    2012-01-01

    Response inhibition plays a critical role in adaptive functioning and can be assessed with the Stop-signal task, which requires participants to suppress prepotent motor responses. Evidence suggests that this ability to inhibit a prepotent motor response (reflected as Stop-signal reaction time (SSRT)) is a quantitative and heritable measure of interindividual variation in brain function. Although attention has been given to the optimal method of SSRT estimation, and initial evidence exists in support of its reliability, there is still variability in how Stop-signal task data are treated across samples. In order to examine this issue, we pooled data across three separate studies and examined the influence of multiple SSRT calculation methods and outlier calling on reliability (using Intra-class correlation). Our results suggest that an approach which uses the average of all available sessions, all trials of each session, and excludes outliers based on predetermined lenient criteria yields reliable SSRT estimates, while not excluding too many participants. Our findings further support the reliability of SSRT, which is commonly used as an index of inhibitory control, and provide support for its continued use as a neurocognitive phenotype. PMID:22363308

  5. Constructing Confidence Intervals for Reliability Coefficients Using Central and Noncentral Distributions.

    ERIC Educational Resources Information Center

    Weber, Deborah A.

    Greater understanding and use of confidence intervals is central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…

  6. Videotape Reliability: A Method of Evaluation of a Clinical Performance Examination.

    ERIC Educational Resources Information Center

    Liu, Philip; And Others

    1980-01-01

    A method of statistically analyzing clinical performance examinations for reliability and the application of this method in determining the reliability of two examinations of skill in administering anesthesia are described. Videotaped performances for the Spinal Anesthesia Skill Examination and the Anesthesia Setup and Machine Checkout Examination…

  7. The Z → cc-bar → γγ*, Z → bb-bar → γγ* triangle diagrams and the Z → γψ, Z → γΥ decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Achasov, N. N., E-mail: achasov@math.nsc.ru

    2011-03-15

    The approach to the Z → γψ and Z → γΥ decay study is presented in detail, based on the sum rules for the Z → cc-bar → γγ* and Z → bb-bar → γγ* amplitudes and their derivatives. The branching ratios of the Z → γψ and Z → γΥ decays are calculated for different hypotheses on saturation of the sum rules. The lower bounds Σ_ψ BR(Z → γψ) = 1.95 × 10^-7 and Σ_Υ BR(Z → γΥ) = 7.23 × 10^-7 are found. Deviations from the lower bounds are discussed, including the possibility of BR(Z → γJ/ψ(1S)) ≈ BR(Z → γΥ(1S)) ≈ 10^-6, which could probably be measured at the LHC. The angular distributions in the Z → γψ and Z → γΥ decays are also calculated.

  8. Yield of illicit indoor cannabis cultivation in the Netherlands.

    PubMed

    Toonen, Marcel; Ribot, Simon; Thissen, Jac

    2006-09-01

    To obtain a reliable estimate of the yield of illicit indoor cannabis cultivation in The Netherlands, cannabis plants confiscated by the police were used to determine the yield of dried female flower buds. The developmental stage of the flower buds of the seized plants was described on a scale from 1 to 10, where the value of 10 indicates a fully developed flower bud ready for harvesting. Using eight additional characteristics describing the grow room and cultivation parameters, regression analysis with subset selection was carried out to develop two models for the yield of indoor cannabis cultivation. The median Dutch illicit grow room consists of 259 cannabis plants, has a plant density of 15 plants/m2, and 510 W of growth lamps per m2. For the median Dutch grow room, the predicted yield of female flower buds at the harvestable developmental stage (stage 10) was 33.7 g/plant or 505 g/m2.

  9. Reliability analysis of structural ceramics subjected to biaxial flexure

    NASA Technical Reports Server (NTRS)

    Chao, Luen-Yuan; Shetty, Dinesh K.

    1991-01-01

    The reliability of alumina disks subjected to biaxial flexure is predicted on the basis of statistical fracture theory using a critical strain energy release rate fracture criterion. Results on a sintered silicon nitride are consistent with reliability predictions based on pore-initiated penny-shaped cracks with preferred orientation normal to the maximum principal stress. Assumptions with regard to flaw types and their orientations in each ceramic can be justified by fractography. It is shown that there are no universal guidelines for selecting fracture criteria or assuming flaw orientations in reliability analyses.

  10. Field reliability of Ricor microcoolers

    NASA Astrophysics Data System (ADS)

    Pundak, N.; Porat, Z.; Barak, M.; Zur, Y.; Pasternak, G.

    2009-05-01

    Over the past 25 years Ricor has fielded in excess of 50,000 Stirling cryocoolers, of which approximately 30,000 units are of the micro integral rotary-driven type. The statistical population of the fielded units is counted in the hundreds or thousands per application category. In contrast to MTTF values gathered and presented based on standard reliability demonstration tests, where the failure of the weakest component dictates the end of product life, in the case of field reliability, where design and workmanship failures are counted and considered, the values are usually reported as the number of failures per million hours of operation. These values are important and relevant to the prediction of service capabilities and planning.

  11. Reliability evaluation methodology for NASA applications

    NASA Technical Reports Server (NTRS)

    Taneja, Vidya S.

    1992-01-01

    Liquid rocket engine technology has been characterized by the development of complex systems containing a large number of subsystems, components, and parts. The trend toward even larger and more complex systems is continuing. Liquid rocket engineers have been focusing mainly on performance-driven designs to increase the payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been considered as one of the system parameters like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware of the fact that the reliability of the system increases during development, but no serious attempts have been made to quantify reliability. As a result, a method to quantify reliability during design and development is needed. This includes the application of probabilistic models which utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing combined with Bayesian statistical analysis.

  12. Statistical summaries of fatigue data for design purposes

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.

    1983-01-01

    Two methods are discussed for constructing a design curve on the safe side of fatigue data. Both the tolerance interval and equivalent prediction interval (EPI) concepts provide such a curve while accounting for both the distribution of the estimators in small samples and the data scatter. The EPI is also useful as a mechanism for providing the necessary statistics on S-N data for a full reliability analysis which includes uncertainty in all fatigue design factors. Examples of statistical analyses of the general strain-life relationship are presented. The tolerance limit and EPI techniques for defining a design curve are demonstrated. Examples using WASPALOY B and RQC-100 data demonstrate that a reliability model could be constructed by considering the fatigue strength and fatigue ductility coefficients as two independent random variables. A technique given for establishing the fatigue strength for high-cycle lives relies on an extrapolation technique and also accounts for “runners.” A reliability model or design value can be specified.
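
    The tolerance-limit idea described above, placing a design value below the sample mean by k standard deviations, can be sketched as follows; the one-sided tolerance factor k is computed from the noncentral t distribution, the log-life data are made-up, and this is not the paper's exact procedure.

        # Hedged sketch: a lower one-sided tolerance bound mean - k*s, where k is
        # the exact tolerance factor based on the noncentral t distribution
        # (cover proportion p of the population with confidence gamma).
        # The log-life data below are made-up.
        import numpy as np
        from scipy.stats import nct, norm

        def one_sided_tolerance_factor(n, p=0.95, gamma=0.95):
            delta = norm.ppf(p) * np.sqrt(n)                  # noncentrality parameter
            return nct.ppf(gamma, df=n - 1, nc=delta) / np.sqrt(n)

        log_life = np.log10([1.2e5, 2.1e5, 1.7e5, 3.0e5, 2.4e5, 1.9e5, 2.8e5])  # hypothetical cycles
        k = one_sided_tolerance_factor(len(log_life))
        design_log_life = log_life.mean() - k * log_life.std(ddof=1)
        print(f"k = {k:.2f}, design life ~ {10**design_log_life:.3g} cycles")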

  13. Binge Eating Disorder: Reliability and Validity of a New Diagnostic Category.

    ERIC Educational Resources Information Center

    Brody, Michelle L.; And Others

    1994-01-01

    Examined reliability and validity of binge eating disorder (BED), proposed for inclusion in Diagnostic and Statistical Manual of Mental Disorders (DSM), fourth edition. Interrater reliability of BED diagnosis compared favorably with that of most diagnoses in DSM revised third edition. Study comparing obese individuals with and without BED and…

  14. Comparison of CEAS and Williams-type models for spring wheat yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1982-01-01

    The CEAS and Williams-type yield models are both based on multiple regression analysis of historical time series data at CRD level. The CEAS model develops a separate relation for each CRD; the Williams-type model pools CRD data to regional level (groups of similar CRDs). Basic variables considered in the analyses are USDA yield, monthly mean temperature, monthly precipitation, and variables derived from these. The Williams-type model also used soil texture and topographic information. Technological trend is represented in both by piecewise linear functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test of each model (1970-1979) demonstrate that the models are very similar in performance in all respects. Both models are about equally objective, adequate, timely, simple, and inexpensive. Both consider scientific knowledge on a broad scale but not in detail. Neither provides a good current measure of modeled yield reliability. The CEAS model is considered very slightly preferable for AgRISTARS applications.

  15. Statistically Valid Planting Trials

    Treesearch

    C. B. Briscoe

    1961-01-01

    More than 100 million tree seedlings are planted each year in Latin America, and at least ten times that many should be planted. Rational control and development of a program of such magnitude require establishing and interpreting carefully planned trial plantings which will yield statistically valid answers to real and important questions. Unfortunately, many...

  16. NDE reliability and probability of detection (POD) evolution and paradigm shift

    NASA Astrophysics Data System (ADS)

    Singh, Surendra

    2014-02-01

    The subject of NDE reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs including the important one nicknamed "Have Cracks - Will Travel", or in short "Have Cracks", by Lockheed Georgia Company for the US Air Force during 1974-1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, starting from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework of Berens and Hovey in 1981 for POD estimation, to MIL-STD HDBK 1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and applying Bayesian statistics. Each of these developments had one objective, i.e., improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. Therefore, it is essential to have reliable detection and sizing of large flaws in components. Currently, POD is used for studying the reliability and capability of NDE methods, though POD data offer no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantifying the human factors. Furthermore, reliability and POD have often been reported as if alike in meaning, but POD is not NDE reliability. POD is a subset of reliability, which consists of six phases: 1) sample selection using DOE, 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME) including Gage Repeatability & Reproducibility (Gage R&R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring and fitting data to a model, and data analysis, and 6) POD estimation. This paper provides an overview of all major POD milestones for the last several decades and discusses the rationale for using Integrated

  17. NDE reliability and probability of detection (POD) evolution and paradigm shift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Surendra

    2014-02-18

    The subject of NDE reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs including the important one nicknamed “Have Cracks – Will Travel”, or in short “Have Cracks”, by Lockheed Georgia Company for the US Air Force during 1974–1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, starting from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework of Berens and Hovey in 1981 for POD estimation, to MIL-STD HDBK 1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and applying Bayesian statistics. Each of these developments had one objective, i.e., improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. Therefore, it is essential to have reliable detection and sizing of large flaws in components. Currently, POD is used for studying the reliability and capability of NDE methods, though POD data offer no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantifying the human factors. Furthermore, reliability and POD have often been reported as if alike in meaning, but POD is not NDE reliability. POD is a subset of reliability, which consists of six phases: 1) sample selection using DOE, 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME) including Gage Repeatability and Reproducibility (Gage R and R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring and fitting data to a model, and data analysis, and 6) POD estimation. This paper provides an overview of all major POD milestones for the last several decades and discusses the rationale
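
    The POD estimation phase named in the two records above is commonly carried out, in the spirit of MIL-HDBK-1823A, by fitting a hit/miss model of detection probability versus flaw size. The sketch below uses a plain logistic regression on made-up inspection data and solves for a90, the flaw size detected with 90% probability; it is an illustration of the general technique, not the handbook's exact procedure.

        # Hedged sketch: a hit/miss POD curve fit with logistic regression on
        # log flaw size, then solved for a90 (POD = 0.90). Data are made-up.
        import numpy as np
        import statsmodels.api as sm

        size_mm = np.array([0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1.2, 1.5, 2.0, 2.5, 3.0, 4.0])
        hit     = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1,   1,   1  ])

        X = sm.add_constant(np.log(size_mm))
        fit = sm.Logit(hit, X).fit(disp=0)
        b0, b1 = fit.params

        a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)    # size where POD = 0.90
        print(f"a90 ~ {a90:.2f} mm")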

  18. Quantifying the potential for reservoirs to secure future surface water yields in the world’s largest river basins

    NASA Astrophysics Data System (ADS)

    Liu, Lu; Parkinson, Simon; Gidden, Matthew; Byers, Edward; Satoh, Yusuke; Riahi, Keywan; Forman, Barton

    2018-04-01

    Surface water reservoirs provide us with reliable water supply, hydropower generation, flood control and recreation services. Yet reservoirs also cause flow fragmentation in rivers and lead to flooding of upstream areas, thereby displacing existing land-use activities and ecosystems. Anticipated population growth and development coupled with climate change in many regions of the globe suggests a critical need to assess the potential for future reservoir capacity to help balance rising water demands with long-term water availability. Here, we assess the potential of large-scale reservoirs to provide reliable surface water yields while also considering environmental flows within 235 of the world’s largest river basins. Maps of existing cropland and habitat conservation zones are integrated with spatially-explicit population and urbanization projections from the Shared Socioeconomic Pathways to identify regions unsuitable for increasing water supply by exploiting new reservoir storage. Results show that even when maximizing the global reservoir storage to its potential limit (∼4.3–4.8 times the current capacity), firm yields would only increase by about 50% over current levels. However, there exist large disparities across different basins. The majority of river basins in North America are found to gain relatively little firm yield by increasing storage capacity, whereas basins in Southeast Asia display greater potential for expansion as well as proportional gains in firm yield under multiple uncertainties. Parts of Europe, the United States and South America show relatively low reliability of maintaining current firm yields under future climate change, whereas most of Asia and higher latitude regions display comparatively high reliability. Findings from this study highlight the importance of incorporating different factors, including human development, land-use activities, and climate change, over a time span of multiple decades and across a range of different

  19. Reliability of abstracting performance measures: results of the cardiac rehabilitation referral and reliability (CR3) project.

    PubMed

    Thomas, Randal J; Chiu, Jensen S; Goff, David C; King, Marjorie; Lahr, Brian; Lichtman, Steven W; Lui, Karen; Pack, Quinn R; Shahriary, Melanie

    2014-01-01

    Assessment of the reliability of performance measure (PM) abstraction is an important step in PM validation. Reliability has not been previously assessed for abstracting PMs for the referral of patients to cardiac rehabilitation (CR) and secondary prevention (SP) programs. To help validate these PMs, we carried out a multicenter assessment of their reliability. Hospitals and clinical practices from around the United States were invited to participate in the Cardiac Rehabilitation Referral Reliability (CR3) Project. Twenty-nine hospitals and 23 outpatient centers expressed interest in participating. Seven hospitals and 6 outpatient centers met participation criteria and submitted completed data. Site coordinators identified 35 patients whose charts were reviewed by 2 site abstractors twice, 1 week apart. Percent agreement and the Cohen κ statistic were used to describe intra- and interabstractor reliability for patient eligibility for CR/SP, patient exceptions for CR/SP referral, and documented referral to CR/SP. Results were obtained from within-site data, as well as from pooled data of all inpatient and all outpatient sites. We found that intra-abstractor reliability reflected excellent repeatability (≥ 90% agreement; κ ≥ 0.75) for ratings of CR/SP eligibility, exceptions, and referral, both from pooled and site-specific analyses of inpatient and outpatient data. Similarly, the interabstractor agreement from pooled analysis ranged from good to excellent for the 3 items, although with slightly lower measures of reliability. Abstraction of PMs for CR/SP referral has high reliability, supporting the use of these PMs in quality improvement initiatives aimed at increasing CR/SP delivery to patients with cardiovascular disease.
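
    The percent agreement and Cohen κ statistics used in the record above can be computed as in the following sketch; the two abstractors' yes/no referral ratings are made-up and serve only to show the calculation.

        # Hedged sketch: percent agreement and Cohen's kappa for two abstractors
        # rating the same charts (made-up yes/no referral ratings).
        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        abstractor_1 = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0])
        abstractor_2 = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1])

        percent_agreement = np.mean(abstractor_1 == abstractor_2)
        kappa = cohen_kappa_score(abstractor_1, abstractor_2)
        print(f"agreement = {percent_agreement:.0%}, kappa = {kappa:.2f}")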

  20. Assessing disease stress and modeling yield losses in alfalfa

    NASA Astrophysics Data System (ADS)

    Guan, Jie

    Alfalfa is the most important forage crop in the U.S. and worldwide. Fungal foliar diseases are believed to cause significant yield losses in alfalfa, yet little quantitative information exists regarding the amount of crop loss. Different fungicides and application frequencies were used as tools to generate a range of foliar disease intensities in Ames and Nashua, IA. Visual disease assessments (disease incidence, disease severity, and percentage defoliation) were obtained weekly for each alfalfa growth cycle (two to three growing cycles per season). Remote sensing assessments were performed using a hand-held, multispectral radiometer to measure the amount and quality of sunlight reflected from alfalfa canopies. Factors such as incident radiation, sun angle, sensor height, and leaf wetness were all found to significantly affect the percentage of sunlight reflected from alfalfa canopies. The precision of the visual and remote sensing assessment methods was quantified. Precision was defined as the intra-rater repeatability and inter-rater reliability of the assessment methods. F-tests, slopes, intercepts, and coefficients of determination (R2) were used to compare assessment methods for precision. Results showed that among the three visual disease assessment methods (disease incidence, disease severity, and percentage defoliation), percentage defoliation had the highest intra-rater repeatability and inter-rater reliability. The remote sensing assessment method had better precision than the percentage defoliation assessment method, based upon higher intra-rater repeatability and inter-rater reliability. Significant linear relationships between canopy reflectance (810 nm), percentage defoliation and yield were detected using linear regression, and percentage reflectance (810 nm) assessments were found to have a stronger relationship with yield than percentage defoliation assessments. There were also significant linear relationships between percentage defoliation, dry

  1. The optimal duration of frequency-volume charts related to compliance and reliability.

    PubMed

    van Haarst, Ernst P; Bosch, J L H Ruud

    2014-03-01

    To assess frequency-volume charts (FVCs) for the yield of additional recorded days and the ideal duration of recording related to compliance and reliability. Of 500 consecutive urologic outpatients willing to complete a 7-day FVC, 378 FVCs were evaluable. During seven consecutive days every voiding time and volume were recorded. Missed entries were indicated with a coded letter, thereby assessing the true frequency and compliance. Reliability is the agreement of the day-to-day FVC parameters with the 7-day FVC pattern. Single-day reliability was assessed and used in the Spearman-Brown formula. FVCs of 228 males and 150 females were evaluated. Mean age was 55.2 years (standard deviation [SD]: 16.2 years), and mean 24-hr urine production was 1,856 ml (SD: 828 ml). The percentage of patients with complete FVCs decreased from 78% on day 2 to 58% on day 7, and dropped below 70% after 4 days. Single-day reliability was r = 0.63 for nocturnal urine production, r = 0.72 for 24-hr urine production, and r = 0.80 for mean voided volume. At 5 days, reliability of 90% was achieved for all parameters. With each additional day, FVCs showed a decrease in compliance and an increase in reliability. At day 3, reliability of 80% was achieved for all FVC parameters, but compliance dropped to 73%. Beyond 5 days, the yield of additional recorded days was limited. We advocate an FVC duration of 3 days, but the duration may be shortened or extended depending on the goal of the FVC. © 2013 Wiley Periodicals, Inc.
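
    The Spearman-Brown step mentioned above can be made concrete: projecting a single-day reliability r1 to a k-day chart via r_k = k·r1 / (1 + (k-1)·r1). The sketch below uses the single-day values reported in the abstract and approximately reproduces the stated 80% (3-day) and 90% (5-day) figures; it is an illustration, not the authors' code.

        # Sketch of the Spearman-Brown projection: reliability of a k-day chart
        # from the single-day reliability r1, r_k = k*r1 / (1 + (k-1)*r1).
        def spearman_brown(r1, k):
            return k * r1 / (1 + (k - 1) * r1)

        single_day = {"nocturnal urine production": 0.63,
                      "24-hr urine production": 0.72,
                      "mean voided volume": 0.80}

        for name, r1 in single_day.items():
            proj = ", ".join(f"{k}d: {spearman_brown(r1, k):.2f}" for k in (3, 5, 7))
            print(f"{name}: {proj}")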

  2. Reliability and One-Year Stability of the PIN3 Neighborhood Environmental Audit in Urban and Rural Neighborhoods.

    PubMed

    Porter, Anna K; Wen, Fang; Herring, Amy H; Rodríguez, Daniel A; Messer, Lynne C; Laraia, Barbara A; Evenson, Kelly R

    2018-06-01

    Reliable and stable environmental audit instruments are needed to successfully identify the physical and social attributes that may influence physical activity. This study described the reliability and stability of the PIN3 environmental audit instrument in both urban and rural neighborhoods. Four randomly sampled road segments in and around a one-quarter mile buffer of participants' residences from the Pregnancy, Infection, and Nutrition (PIN3) study were rated twice, approximately 2 weeks apart. One year later, 253 of the year 1 sampled roads were re-audited. The instrument included 43 measures that resulted in 73 item scores for calculation of percent overall agreement, kappa statistics, and log-linear models. For same-day reliability, 81% of items had moderate to outstanding kappa statistics (kappas ≥ 0.4). Two-week reliability was slightly lower, with 77% of items having moderate to outstanding agreement using kappa statistics. One-year stability had 68% of items showing moderate to outstanding agreement using kappa statistics. The reliability of the audit measures was largely consistent when comparing urban to rural locations, with only 8% of items exhibiting significant differences (α < 0.05) by urbanicity. The PIN3 instrument is a reliable and stable audit tool for studies assessing neighborhood attributes in urban and rural environments.

  3. Characterizing reliability in a product/process design-assurance program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerscher, W.J. III; Booker, J.M.; Bement, T.R.

    1997-10-01

    Over the years many advancing techniques in the area of reliability engineering have surfaced in the military sphere of influence, and one of these techniques is Reliability Growth Testing (RGT). Private industry has reviewed RGT as part of the solution to its reliability concerns, but many practical considerations have slowed its implementation. Its objective is to demonstrate the reliability requirement of a new product with a specified confidence. This paper speaks directly to that objective but discusses a somewhat different approach to achieving it. Rather than conducting testing as a continuum and developing statistical confidence bands around the results, this Bayesian updating approach starts with a reliability estimate characterized by large uncertainty and then proceeds to reduce the uncertainty by folding in fresh information in a Bayesian framework.
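
    A minimal sketch of the Bayesian-updating idea described above follows, assuming pass/fail demonstration data and a conjugate Beta prior; the paper's actual prior elicitation and evidence model are not reproduced, and the test counts are invented.

        # Hedged sketch: Bayesian updating of a pass/fail reliability estimate.
        # Start from a diffuse Beta prior (large uncertainty), fold in test
        # results, and read off the posterior mean and a lower credible bound.
        from scipy.stats import beta

        a, b = 1.0, 1.0                      # diffuse Beta(1,1) prior on reliability
        for successes, trials in [(18, 20), (29, 30), (50, 50)]:   # hypothetical test stages
            a += successes
            b += trials - successes
            post_mean = a / (a + b)
            lower_90 = beta.ppf(0.10, a, b)  # 90% lower credible bound
            print(f"after {trials} trials: mean={post_mean:.3f}, 90% lower bound={lower_90:.3f}")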

  4. An Investigation of the Impact of Guessing on Coefficient α and Reliability

    PubMed Central

    2014-01-01

    Guessing is known to influence the test reliability of multiple-choice tests. Although there are many studies that have examined the impact of guessing, they used rather restrictive assumptions (e.g., parallel test assumptions, homogeneous inter-item correlations, homogeneous item difficulty, and homogeneous guessing levels across items) to evaluate the relation between guessing and test reliability. Based on the item response theory (IRT) framework, this study investigated the extent of the impact of guessing on reliability under more realistic conditions where item difficulty, item discrimination, and guessing levels actually vary across items with three different test lengths (TL). By accommodating multiple item characteristics simultaneously, this study also focused on examining interaction effects between guessing and other variables entered in the simulation to be more realistic. The simulation of the more realistic conditions and calculations of reliability and classical test theory (CTT) item statistics were facilitated by expressing CTT item statistics, coefficient α, and reliability in terms of IRT model parameters. In addition to the general negative impact of guessing on reliability, results showed interaction effects between TL and guessing and between guessing and test difficulty.
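
    The link between guessing and the item response model referred to above can be illustrated with the three-parameter logistic (3PL) item response function, in which the guessing parameter c sets the lower asymptote of the probability of a correct response; the parameter values below are arbitrary and not taken from the study.

        # Hedged sketch: the three-parameter logistic (3PL) item response function,
        # P(theta) = c + (1 - c) / (1 + exp(-a*(theta - b))), where c is the
        # guessing (lower-asymptote) parameter. Parameter values are arbitrary.
        import numpy as np

        def p_correct(theta, a, b, c):
            return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

        theta = np.linspace(-3, 3, 7)
        for c in (0.0, 0.2, 0.25):                  # no guessing vs. typical multiple-choice guessing
            probs = p_correct(theta, a=1.2, b=0.0, c=c)
            print(f"c={c:.2f}:", np.round(probs, 2))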

  5. Statistics 101 for Radiologists.

    PubMed

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.

  6. Predicting red meat yields in carcasses from beef-type and calf-fed Holstein steers using the United States Department of Agriculture calculated yield grade.

    PubMed

    Lawrence, T E; Elam, N A; Miller, M F; Brooks, J C; Hilton, G G; VanOverbeke, D L; McKeith, F K; Killefer, J; Montgomery, T H; Allen, D M; Griffin, D B; Delmore, R J; Nichols, W T; Streeter, M N; Yates, D A; Hutcheson, J P

    2010-06-01

    Analyses were conducted to evaluate the ability of the USDA yield grade equation to detect differences in subprimal yield of beef-type steers and calf-fed Holstein steers that had been fed zilpaterol hydrochloride (ZH; Intervet Inc., Millsboro, DE) as well as those that had not been fed ZH. Beef-type steer (n = 801) and calf-fed Holstein steer (n = 235) carcasses were fabricated into subprimal cuts and trim. Simple correlations between calculated yield grades and total red meat yields ranged from -0.56 to -0.62 for beef-type steers. Reliable correlations from calf-fed Holstein steers were unobtainable; the probability of a type I error met or exceeded 0.39. Linear models were developed for the beef-type steers to predict total red meat yield based on calculated USDA yield grade within each ZH duration. At an average calculated USDA yield grade of 2.9, beef-type steer carcasses that had not been fed ZH had an estimated 69.4% red meat yield, whereas those fed ZH had an estimated 70.7% red meat yield. These results indicate that feeding ZH increased red meat yield by 1.3% at a constant calculated yield grade. However, these data also suggest that the calculated USDA yield grade score is a poor and variable estimator (adjusted R(2) of 0.31 to 0.38) of total red meat yield of beef-type steer carcasses, regardless of ZH feeding. Moreover, no relationship existed (adjusted R(2) of 0.00 to 0.01) for calf-fed Holstein steer carcasses, suggesting the USDA yield grade is not a valid estimate of calf-fed Holstein red meat yield.

  7. How to Characterize the Reliability of Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    NASA Technical Reports Server (NTRS)

    Liu, David (Donhang)

    2015-01-01

    The reliability of an MLCC device is the product of a time-dependent part and a time-independent part: 1) the time-dependent part is a statistical distribution; 2) the time-independent part is the reliability at t=0, the initial reliability. Initial reliability depends only on how a BME MLCC is designed and processed. Similar to the way the minimum dielectric thickness ensured the long-term reliability of a PME MLCC, the initial reliability also ensures the long-term reliability of a BME MLCC. This presentation shows new discoveries regarding commonalities and differences between PME and BME capacitor technologies.

  8. How to Characterize the Reliability of Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2015-01-01

    The reliability of an MLCC device is the product of a time-dependent part and a time-independent part: 1) the time-dependent part is a statistical distribution; 2) the time-independent part is the reliability at t=0, the initial reliability. Initial reliability depends only on how a BME MLCC is designed and processed. Similar to the way the minimum dielectric thickness ensured the long-term reliability of a PME MLCC, the initial reliability also ensures the long-term reliability of a BME MLCC. This presentation shows new discoveries regarding commonalities and differences between PME and BME capacitor technologies.

  9. Are photographic records reliable for orthodontic screening?

    PubMed

    Mandall, N A

    2002-06-01

    The aim of the study was to evaluate the reliability of a panel of orthodontists for accepting new patient referrals based on clinical photographs. Eight orthodontists from Greater Manchester, Lancashire, Chester, and Derbyshire observed clinical photographs of 40 consecutive new patients attending the orthodontic department, Hope Hospital, Salford. They recorded whether or not they would accept the patient, as a new patient referral, in their department. Each consultant was asked to take into account factors, such as oral hygiene, dental development, and severity of the malocclusion. Kappa statistic for multiple-rater agreement and kappa statistic for intra-observer reliability were calculated. Inter-observer panel agreement for accepting new patient referrals based on photographic information was low (multiple rater kappa score 0.37). Intra-examiner agreement was better (kappa range 0.34-0.90). Clinician agreement for screening and accepting orthodontic referrals based on clinical photographs is comparable to that previously reported for other clinical decision making.

  10. Sediment yield estimation in mountain catchments of the Camastra reservoir, southern Italy: a comparison among different empirical methods

    NASA Astrophysics Data System (ADS)

    Lazzari, Maurizio; Danese, Maria; Gioia, Dario; Piccarreta, Marco

    2013-04-01

    Sedimentary budget estimation is an important topic for both the scientific community and society, because it is crucial for understanding both the dynamics of orogenic belts and many practical problems, such as soil conservation and sediment accumulation in reservoirs. Estimates of sediment yield or denudation rates in southern-central Italy are generally obtained by simple empirical relationships based on statistical regression between geomorphic parameters of the drainage network and the measured suspended sediment yield at the outlet of several drainage basins, or through the use of models based on the sediment delivery ratio or on soil loss equations. In this work, we perform a study of catchment dynamics and an estimation of sediment yield for several mountain catchments of the central-western sector of the Basilicata region, southern Italy. Sediment yield estimation has been obtained through both an indirect estimation of suspended sediment yield based on the Tu index (mean annual suspended sediment yield, Ciccacci et al., 1980) and the application of the RUSLE (Renard et al., 1997) and the USPED (Mitasova et al., 1996) empirical methods. The preliminary results indicate a reliable difference between the RUSLE and USPED methods and the estimation based on the Tu index; a critical analysis of the results has been carried out, also considering the present-day spatial distribution of erosion, transport and depositional processes in relation to the maps obtained from the application of those different empirical methods. The studied catchments drain an artificial reservoir (i.e. the Camastra dam), where a detailed evaluation of the amount of historical sediment storage has been collected. Sediment yield estimates obtained by means of the empirical methods have been compared and checked with historical data of sediment accumulation measured in the artificial reservoir of the Camastra dam. The validation of such estimations of sediment yield at the scale of large catchments
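
    The RUSLE estimate referred to above is the product of rainfall erosivity, soil erodibility, slope length/steepness, cover management, and support practice factors, A = R·K·LS·C·P. The sketch below only shows that product; every factor value is an invented placeholder, not a value from the Camastra catchments, and in practice the factors require site-specific calibration as in the study.

        # Hedged sketch: the RUSLE soil-loss product A = R * K * LS * C * P
        # (t/ha/yr with metric factor units). All factor values below are
        # invented placeholders.
        def rusle(R, K, LS, C, P):
            """R: rainfall erosivity, K: soil erodibility, LS: slope length/steepness,
            C: cover management, P: support practice."""
            return R * K * LS * C * P

        A = rusle(R=1800.0, K=0.030, LS=4.5, C=0.12, P=1.0)
        print(f"estimated soil loss ~ {A:.1f} t/ha/yr")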

  11. Statistical methods and neural network approaches for classification of data from multiple sources

    NASA Technical Reports Server (NTRS)

    Benediktsson, Jon Atli; Swain, Philip H.

    1990-01-01

    Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A problem with using conventional multivariate statistical approaches for classification of data of multiple types is in general that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution free since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of the problem involving how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.

  12. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation

    PubMed Central

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B.; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promises in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant

  13. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation.

    PubMed

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promises in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant
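
    The two records above derive partial correlations from a precision matrix estimated with CLIME. The sketch below substitutes a plain inverse of the sample covariance for CLIME, which is only sensible when the number of nodes is far smaller than the number of time points, purely to show how partial correlations are read off a precision matrix Ω as ρ_ij = -Ω_ij / sqrt(Ω_ii·Ω_jj); the time series are simulated.

        # Hedged sketch: partial correlation from a precision matrix,
        # rho_ij = -Omega_ij / sqrt(Omega_ii * Omega_jj). The paper estimates
        # Omega with CLIME; here a plain covariance inverse is used instead.
        import numpy as np

        def partial_correlation(ts):
            """ts: array of shape (time points, nodes)."""
            omega = np.linalg.inv(np.cov(ts, rowvar=False))
            d = np.sqrt(np.diag(omega))
            pcorr = -omega / np.outer(d, d)
            np.fill_diagonal(pcorr, 1.0)
            return pcorr

        rng = np.random.default_rng(1)
        ts = rng.normal(size=(200, 10))          # hypothetical 200 time points x 10 nodes
        print(np.round(partial_correlation(ts)[:3, :3], 2))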

  14. Multisite Reliability of Cognitive BOLD Data

    PubMed Central

    Brown, Gregory G.; Mathalon, Daniel H.; Stern, Hal; Ford, Judith; Mueller, Bryon; Greve, Douglas N.; McCarthy, Gregory; Voyvodic, Jim; Glover, Gary; Diaz, Michele; Yetter, Elizabeth; Burak Ozyurt, I.; Jorgensen, Kasper W.; Wible, Cynthia G.; Turner, Jessica A.; Thompson, Wesley K.; Potkin, Steven G.

    2010-01-01

    Investigators perform multi-site functional magnetic resonance imaging studies to increase statistical power, to enhance generalizability, and to improve the likelihood of sampling relevant subgroups. Yet undesired site variation in imaging methods could off-set these potential advantages. We used variance components analysis to investigate sources of variation in the blood oxygen level dependent (BOLD) signal across four 3T magnets in voxelwise and region of interest (ROI) analyses. Eighteen participants traveled to four magnet sites to complete eight runs of a working memory task involving emotional or neutral distraction. Person variance was more than 10 times larger than site variance for five of six ROIs studied. Person-by-site interactions, however, contributed sizable unwanted variance to the total. Averaging over runs increased between-site reliability, with many voxels showing good to excellent between-site reliability when eight runs were averaged and regions of interest showing fair to good reliability. Between-site reliability depended on the specific functional contrast analyzed in addition to the number of runs averaged. Although median effect size was correlated with between-site reliability, dissociations were observed for many voxels. Brain regions where the pooled effect size was large but between-site reliability was poor were associated with reduced individual differences. Brain regions where the pooled effect size was small but between-site reliability was excellent were associated with a balance of participants who displayed consistently positive or consistently negative BOLD responses. Although between-site reliability of BOLD data can be good to excellent, acquiring highly reliable data requires robust activation paradigms, ongoing quality assurance, and careful experimental control. PMID:20932915

  15. Scale for positive aspects of caregiving experience: development, reliability, and factor structure.

    PubMed

    Kate, N; Grover, S; Kulhara, P; Nehra, R

    2012-06-01

    OBJECTIVE. To develop an instrument (Scale for Positive Aspects of Caregiving Experience [SPACE]) that evaluates positive caregiving experience and assess its psychometric properties. METHODS. Available scales which assess some aspects of positive caregiving experience were reviewed and a 50-item questionnaire with a 5-point rating was constructed. In all, 203 primary caregivers of patients with severe mental disorders were asked to complete the questionnaire. Internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity were evaluated. Principal component factor analysis was run to assess the factorial validity of the scale. RESULTS. The scale developed as part of the study was found to have good internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity. Principal component factor analysis yielded a 4-factor structure, which also had good test-retest reliability and cross-language reliability. There was a strong correlation between the 4 factors obtained. CONCLUSION. The SPACE developed as part of this study has good psychometric properties.

  16. WHO Study on the reliability and validity of the alcohol and drug use disorder instruments: overview of methods and results.

    PubMed

    Ustün, B; Compton, W; Mager, D; Babor, T; Baiyewu, O; Chatterji, S; Cottler, L; Göğüş, A; Mavreas, V; Peters, L; Pull, C; Saunders, J; Smeets, R; Stipec, M R; Vrasti, R; Hasin, D; Room, R; Van den Brink, W; Regier, D; Blaine, J; Grant, B F; Sartorius, N

    1997-09-25

    The WHO Study on the reliability and validity of the alcohol and drug use disorder instruments is an international study which has taken place in centres in ten countries, aiming to test the reliability and validity of three diagnostic instruments for alcohol and drug use disorders: the Composite International Diagnostic Interview (CIDI), the Schedules for Clinical Assessment in Neuropsychiatry (SCAN) and a special version of the Alcohol Use Disorder and Associated Disabilities Interview Schedule-alcohol/drug-revised (AUDADIS-ADR). The purpose of the reliability and validity (R&V) study is to further develop the alcohol and drug sections of these instruments so that a range of substance-related diagnoses can be made in a systematic, consistent, and reliable way. The study focuses on the new criteria proposed in the tenth revision of the International Classification of Diseases (ICD-10) and the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) for the dependence, harmful use and abuse categories for alcohol and psychoactive substance use disorders. A systematic study including a scientifically rigorous measure of reliability (i.e. 1-week test-retest reliability) and validity (i.e. comparison between clinical and non-clinical measures) has been undertaken. Results have yielded useful information on the reliability and validity of these instruments at the diagnosis, criteria and question level. Overall, the diagnostic concordance coefficients (kappa, κ) were very good for dependence disorders (0.7-0.9), but were somewhat lower for the abuse and harmful use categories. The comparisons among instruments and independent clinical evaluations and debriefing interviews gave important information about possible sources of unreliability, and provided useful clues on the applicability and consistency of nosological concepts across cultures.

  17. Linking Field and Satellite Observations to Reveal Differences in Single vs. Double-Cropped Soybean Yields in Central Brazil

    NASA Astrophysics Data System (ADS)

    Jeffries, G. R.; Cohn, A.

    2016-12-01

    Soy-corn double cropping (DC) has been widely adopted in Central Brazil alongside single-cropped (SC) soybean production. DC involves different cropping calendars and soy varieties, and may be associated with different crop yield patterns and volatility than SC. Study of the performance of the region's agriculture in a changing climate depends on tracking differences in the productivity of SC vs. DC, but has been limited by crop yield data that conflate the two systems. We predicted SC and DC yields across Central Brazil, drawing on field observations and remotely sensed data. We first modeled field yield estimates as a function of remotely sensed DC status and vegetation index (VI) metrics, and other management and biophysical factors. We then used the estimated statistical model to predict SC and DC soybean yields at each 500 m grid cell of Central Brazil for harvest years 2001-2015. The yield estimation model was constructed using 1) a repeated cross-sectional survey of soybean yields and management factors for years 2007-2015, 2) a custom agricultural land cover classification dataset which assimilates earlier datasets for the region, and 3) 500 m 8-day MODIS image composites used to calculate the wide dynamic range vegetation index (WDRVI) and derivative metrics such as the area under the curve for WDRVI values in critical crop development periods. A statistical yield estimation model, which primarily entails WDRVI metrics, DC status, and spatial fixed effects, was developed on a subset of the yield dataset. Model validation was conducted by predicting previously withheld yield records and then assessing error and goodness-of-fit for the predicted values with metrics including root mean squared error (RMSE), mean squared error (MSE), and R2. We found a statistical yield estimation model which incorporates WDRVI and DC status to be an effective way to estimate crop yields over the region. Statistical properties of the resulting gridded yield dataset may be valuable for understanding
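
    The WDRVI metric named above down-weights the near-infrared band to avoid saturation over dense canopies; a common form is WDRVI = (α·NIR - Red) / (α·NIR + Red) with α around 0.1-0.2. The reflectance values in the sketch below are invented, and the study's actual α is not stated here.

        # Hedged sketch: the wide dynamic range vegetation index,
        # WDRVI = (alpha*NIR - Red) / (alpha*NIR + Red), with alpha typically
        # 0.1-0.2 to reduce saturation over dense soy canopies. Reflectances
        # below are invented.
        import numpy as np

        def wdrvi(nir, red, alpha=0.2):
            return (alpha * nir - red) / (alpha * nir + red)

        nir = np.array([0.45, 0.55, 0.62])   # hypothetical MODIS band-2 reflectance
        red = np.array([0.08, 0.05, 0.04])   # hypothetical MODIS band-1 reflectance
        print(np.round(wdrvi(nir, red), 3))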

  18. Annual Corn Yield Estimation through Multi-temporal MODIS Data

    NASA Astrophysics Data System (ADS)

    Shao, Y.; Zheng, B.; Campbell, J. B.

    2013-12-01

    This research employed 13 years of Moderate Resolution Imaging Spectroradiometer (MODIS) data to estimate annual corn yield for the Midwest of the United States. The overall objective of this study was to examine whether annual corn yield could be accurately predicted using MODIS time-series NDVI (Normalized Difference Vegetation Index) and ancillary data such as monthly precipitation and temperature. MODIS-NDVI 16-day composite images were acquired from the USGS EROS Data Center for calendar years 2000 to 2012. For the same time period, county-level corn yield statistics were obtained from the National Agricultural Statistics Service (NASS). The monthly precipitation and temperature measures were derived from Precipitation-Elevation Regressions on Independent Slopes Model (PRISM) climate data. A cropland mask was derived using the 2006 National Land Cover Database. For each county and within the cropland mask, the MODIS-NDVI time-series data and PRISM climate data were spatially averaged at their respective time steps. We developed a random forest predictive model with the MODIS-NDVI and climate data as predictors and corn yield as the response. To assess the model accuracy, we used twelve years of data for training and the remaining year as a hold-out testing set. The training and testing procedures were repeated 13 times. The R2 ranged from 0.72 to 0.83 for the testing years. It was also found that the inclusion of climate data did not improve the model's predictive performance; MODIS-NDVI time-series data alone might provide sufficient information for county-level corn yield prediction.
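
    The train-on-twelve-years, test-on-one-year scheme described above amounts to leave-one-year-out validation of a random forest. The sketch below shows that loop under the assumption of a hypothetical county-by-year table with NDVI composite columns; file name, column names, and hyperparameters are illustrative, not the study's.

        # Hedged sketch: county-level corn yield prediction with a random forest
        # and leave-one-year-out validation. All input names are hypothetical.
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import r2_score

        df = pd.read_csv("county_ndvi_yield.csv")        # hypothetical table: one row per county-year
        feature_cols = [c for c in df.columns if c.startswith("ndvi_")]  # 16-day NDVI composites

        for test_year in sorted(df["year"].unique()):
            train, test = df[df["year"] != test_year], df[df["year"] == test_year]
            rf = RandomForestRegressor(n_estimators=500, random_state=0)
            rf.fit(train[feature_cols], train["yield_bu_ac"])
            r2 = r2_score(test["yield_bu_ac"], rf.predict(test[feature_cols]))
            print(test_year, round(r2, 2))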

  19. Reliability reporting across studies using the Buss Durkee Hostility Inventory.

    PubMed

    Vassar, Matt; Hale, William

    2009-01-01

    Empirical research on anger and hostility has pervaded the academic literature for more than 50 years. Accurate measurement of anger/hostility and subsequent interpretation of results requires that the instruments yield strong psychometric properties. For consistent measurement, reliability estimates must be calculated with each administration, because changes in sample characteristics may alter the scale's ability to generate reliable scores. Therefore, the present study was designed to address reliability reporting practices for a widely used anger assessment, the Buss Durkee Hostility Inventory (BDHI). Of the 250 published articles reviewed, 11.2% calculated and presented reliability estimates for the data at hand, 6.8% cited estimates from a previous study, and 77.1% made no mention of score reliability. Mean alpha estimates of scores for BDHI subscales generally fell below acceptable standards. Additionally, no detectable pattern was found between reporting practices and publication year or journal prestige. Areas for future research are also discussed.

  20. The Information Function for the One-Parameter Logistic Model: Is it Reliability?

    ERIC Educational Resources Information Center

    Doran, Harold C.

    2005-01-01

    The information function is an important statistic in item response theory (IRT) applications. Although the information function is often described as the IRT version of reliability, it differs from the classical notion of reliability from a critical perspective: replication. This article first explores the information function for the…

  1. Factors that Affect Operational Reliability of Turbojet Engines

    NASA Technical Reports Server (NTRS)

    1956-01-01

    The problem of improving operational reliability of turbojet engines is studied in a series of papers. Failure statistics for this engine are presented, the theory and experimental evidence on how engine failures occur are described, and the methods available for avoiding failure in operation are discussed. The individual papers of the series are Objectives, Failure Statistics, Foreign-Object Damage, Compressor Blades, Combustor Assembly, Nozzle Diaphragms, Turbine Buckets, Turbine Disks, Rolling Contact Bearings, Engine Fuel Controls, and Summary Discussion.

  2. Further empirical data on the psychoeducational profile-revised (PEP-R): reliability and validation with the Vineland adaptive behavior scales.

    PubMed

    Villa, Susanna; Micheli, Enrico; Villa, Laura; Pastore, Valentina; Crippa, Alessandro; Molteni, Massimo

    2010-03-01

    The PEP-R (psychoeducational profile revised) is an instrument that has been used in many countries to assess abilities and formulate treatment programs for children with autism and related developmental disorders. To provide further information on the PEP-R's psychometric properties, a large sample (N = 137) of children under the age of 12 years presenting Autistic Disorder symptoms, including low-functioning individuals, was examined. Results yielded data of interest, especially in terms of Cronbach's alpha, interrater reliability, and validation with the Vineland Adaptive Behavior Scales. These findings help complete the instrument's statistical description and augment its usefulness, not only in designing treatment programs for these individuals, but also as an instrument for verifying the efficacy of intervention.

  3. REANALYSIS OF F-STATISTIC GRAVITATIONAL-WAVE SEARCHES WITH THE HIGHER CRITICISM STATISTIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, M. F.; Melatos, A.; Delaigle, A.

    2013-04-01

    We propose a new method of gravitational-wave detection using a modified form of higher criticism, a statistical technique introduced by Donoho and Jin. Higher criticism is designed to detect a group of sparse, weak sources, none of which are strong enough to be reliably estimated or detected individually. We apply higher criticism as a second-pass method to synthetic F-statistic and C-statistic data for a monochromatic periodic source in a binary system and quantify the improvement relative to the first-pass methods. We find that higher criticism on C-statistic data is more sensitive by ~6% than the C-statistic alone under optimal conditions (i.e., binary orbit known exactly) and the relative advantage increases as the error in the orbital parameters increases. Higher criticism is robust even when the source is not monochromatic (e.g., phase-wandering in an accreting system). Applying higher criticism to a phase-wandering source over multiple time intervals gives a ≳30% increase in detectability with few assumptions about the frequency evolution. By contrast, in all-sky searches for unknown periodic sources, which are dominated by the brightest source, second-pass higher criticism does not provide any benefits over a first-pass search.

  4. Leveraging Code Comments to Improve Software Reliability

    ERIC Educational Resources Information Center

    Tan, Lin

    2009-01-01

    Commenting source code has long been a common practice in software development. This thesis, consisting of three pieces of work, made novel use of the code comments written in natural language to improve software reliability. Our solution combines Natural Language Processing (NLP), Machine Learning, Statistics, and Program Analysis techniques to…

  5. A Comparison of Reliability and Construct Validity between the Original and Revised Versions of the Rosenberg Self-Esteem Scale

    PubMed Central

    Nahathai, Wongpakaran

    2012-01-01

    Objective The Rosenberg Self-Esteem Scale (RSES) is a widely used instrument that has been tested for reliability and validity in many settings; however, some negative-worded items appear to have caused it to reveal low reliability in a number of studies. In this study, we revised the one negative item that had produced the worst outcome in terms of the structure of the scale in previous studies, then re-analyzed the new version for its reliability and construct validity, comparing it to the original version with respect to fit indices. Methods In total, 851 students from Chiang Mai University (mean age: 19.51±1.7, 57% of whom were female) participated in this study. Of these, 664 students completed the Thai version of the original RSES - containing five positively worded and five negatively worded items, while 187 students used the revised version containing six positively worded and four negatively worded items. Confirmatory factor analysis was applied, using a uni-dimensional model with method effects and a correlated uniqueness approach. Results The revised version showed the same level of reliability (good) as the original, but yielded a better model fit. The revised RSES demonstrated excellent fit statistics, with χ2=29.19 (df=19, n=187, p=0.063), GFI=0.970, TFI=0.969, NFI=0.964, CFI=0.987, SRMR=0.040 and RMSEA=0.054. Conclusion The revised version of the Thai RSES demonstrated an equivalent level of reliability but a better construct validity when compared to the original. PMID:22396685

  6. A comparison of reliability and construct validity between the original and revised versions of the Rosenberg Self-Esteem Scale.

    PubMed

    Wongpakaran, Tinakon; Wongpakaran, Nahathai

    2012-03-01

    The Rosenberg Self-Esteem Scale (RSES) is a widely used instrument that has been tested for reliability and validity in many settings; however, some negative-worded items appear to have caused it to reveal low reliability in a number of studies. In this study, we revised the one negative item that had produced the worst outcome in terms of the structure of the scale in previous studies, then re-analyzed the new version for its reliability and construct validity, comparing it to the original version with respect to fit indices. In total, 851 students from Chiang Mai University (mean age: 19.51±1.7, 57% of whom were female) participated in this study. Of these, 664 students completed the Thai version of the original RSES - containing five positively worded and five negatively worded items, while 187 students used the revised version containing six positively worded and four negatively worded items. Confirmatory factor analysis was applied, using a uni-dimensional model with method effects and a correlated uniqueness approach. The revised version showed the same level of reliability (good) as the original, but yielded a better model fit. The revised RSES demonstrated excellent fit statistics, with χ²=29.19 (df=19, n=187, p=0.063), GFI=0.970, TFI=0.969, NFI=0.964, CFI=0.987, SRMR=0.040 and RMSEA=0.054. The revised version of the Thai RSES demonstrated an equivalent level of reliability but a better construct validity when compared to the original.

  7. Effects of Hydrological Parameters on Palm Oil Fresh Fruit Bunch Yield

    NASA Astrophysics Data System (ADS)

    Nda, M.; Adnan, M. S.; Suhadak, M. A.; Zakaria, M. S.; Lopa, R. T.

    2018-04-01

    Climate change effects and variability have been studied by many researchers in diverse geophysical fields. Malaysia produces a large volume of palm oil, and the effects of climate change on hydrological parameters (rainfall and temperature) could have adverse effects on palm oil fresh fruit bunch (FFB) production, with implications for both local and international markets. It is important to understand the effects of climate change on crop yield in order to adopt new cultivation techniques and guarantee food security globally. Against this background, the paper's objective is to investigate the effects of rainfall and temperature patterns on crop yield (FFB) over a five-year period (2013-2017) in Batu Pahat District. The Mann-Kendall rank technique (trend test) and statistical analyses (correlation and regression) were applied to the dataset used for the study. The results reveal that there is variability in rainfall and temperature from one month to another, and the statistical analysis reveals that the hydrological parameters have an insignificant effect on crop yield.
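
    A rough sketch of the Mann-Kendall trend test applied to a monthly rainfall series (the values below are hypothetical; the normal approximation shown ignores tied observations):

        import numpy as np
        from scipy.stats import norm

        def mann_kendall(x):
            x = np.asarray(x, dtype=float)
            n = len(x)
            # S statistic: sum of signs of all pairwise later-minus-earlier differences
            s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0
            z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
            return s, 2 * (1 - norm.cdf(abs(z)))        # S and two-sided p-value

        monthly_rainfall_mm = [210, 195, 240, 180, 205, 230, 170, 220, 190, 250, 160, 215]
        print(mann_kendall(monthly_rainfall_mm))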

  8. The (un)reliability of item-level semantic priming effects.

    PubMed

    Heyman, Tom; Bruninx, Anke; Hutchison, Keith A; Storms, Gert

    2018-04-05

    Many researchers have tried to predict semantic priming effects using a myriad of variables (e.g., prime-target associative strength or co-occurrence frequency). The idea is that relatedness varies across prime-target pairs, which should be reflected in the size of the priming effect (e.g., cat should prime dog more than animal does). However, it is only insightful to predict item-level priming effects if they can be measured reliably. Thus, in the present study we examined the split-half and test-retest reliabilities of item-level priming effects under conditions that should discourage the use of strategies. The resulting priming effects proved extremely unreliable, and reanalyses of three published priming datasets revealed similar cases of low reliability. These results imply that previous attempts to predict semantic priming were unlikely to be successful. However, one study with an unusually large sample size yielded more favorable reliability estimates, suggesting that big data, in terms of items and participants, should be the future for semantic priming research.
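
    A minimal sketch of one way to estimate split-half reliability of item-level priming effects (not the authors' analysis; the trial-level data layout is assumed):

        # Effects are computed separately from odd and even trials, correlated across
        # items, and corrected with the Spearman-Brown formula.
        import numpy as np
        import pandas as pd

        trials = pd.read_csv("priming_trials.csv")      # columns: item, trial, related (0/1), rt

        def item_effects(d):
            m = d.groupby(["item", "related"])["rt"].mean().unstack()
            return m[0] - m[1]                          # unrelated RT minus related RT

        odd = item_effects(trials[trials["trial"] % 2 == 1])
        even = item_effects(trials[trials["trial"] % 2 == 0])
        r = np.corrcoef(odd, even)[0, 1]
        print("split-half r:", r, "Spearman-Brown corrected:", 2 * r / (1 + r))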

  9. Reliability-based structural optimization: A proposed analytical-experimental study

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Nikolaidis, Efstratios

    1993-01-01

    An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.

  10. Similar Estimates of Temperature Impacts on Global Wheat Yield by Three Independent Methods

    NASA Technical Reports Server (NTRS)

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; et al.

    2016-01-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  11. Similar estimates of temperature impacts on global wheat yield by three independent methods

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; Rosenzweig, Cynthia; Aggarwal, Pramod K.; Alderman, Phillip D.; Anothai, Jakarat; Basso, Bruno; Biernath, Christian; Cammarano, Davide; Challinor, Andy; Deryng, Delphine; Sanctis, Giacomo De; Doltra, Jordi; Fereres, Elias; Folberth, Christian; Garcia-Vila, Margarita; Gayler, Sebastian; Hoogenboom, Gerrit; Hunt, Leslie A.; Izaurralde, Roberto C.; Jabloun, Mohamed; Jones, Curtis D.; Kersebaum, Kurt C.; Kimball, Bruce A.; Koehler, Ann-Kristin; Kumar, Soora Naresh; Nendel, Claas; O'Leary, Garry J.; Olesen, Jørgen E.; Ottman, Michael J.; Palosuo, Taru; Prasad, P. V. Vara; Priesack, Eckart; Pugh, Thomas A. M.; Reynolds, Matthew; Rezaei, Ehsan E.; Rötter, Reimund P.; Schmid, Erwin; Semenov, Mikhail A.; Shcherbak, Iurii; Stehfest, Elke; Stöckle, Claudio O.; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Thorburn, Peter; Waha, Katharina; Wall, Gerard W.; Wang, Enli; White, Jeffrey W.; Wolf, Joost; Zhao, Zhigan; Zhu, Yan

    2016-12-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  12. [Reliability and validity of Parkinson's disease sleep scale-Chinese version in the south west of China].

    PubMed

    Zhang, J H; Peng, R; Du, Y; Mou, Y; Li, N N; Cheng, L

    2016-11-08

    Objective: To evaluate the reliability and validity of the Parkinson's disease sleep scale-Chinese version (CPDSS) through a study of a large PD population in southwest China, and to explore the prevalence and characteristics of sleep disorders in Parkinson's disease (PD) patients from southwest China. Methods: A total of 544 PD patients and 220 control subjects were enrolled in our study. Demographic data, CPDSS, ESS, PDQ39, HAMD and H-Y stage were assessed in all subjects. Statistical description, Cronbach's alpha coefficient, intra-class correlation coefficient (ICC), Spearman rank correlation coefficient and Mann-Whitney U test were used for statistical analyses. Results: The Cronbach's alpha coefficient for CPDSS was 0.79, the ICC of the total scale was 0.94 and the ICC of each item ranged from 0.73 to 0.97. The factor analysis yielded a five-factor solution, which explained 63.4% of the total variance. Total and individual item scores of CPDSS in PD patients were lower than those in healthy controls. 69.3% of PD patients had a sleep disorder, while the prevalence in the control group was only 29.6%. A negative correlation was found between CPDSS and ESS. Daytime sleepiness was the most common factor (35.9%) leading to sleep disorders. The sleep disorders of PD patients in Southwest China were significantly related to the course of disease, the severity of disease, the quality of life, depression, cognitive level and motor symptoms. Conclusion: CPDSS has good feasibility, reliability and validity in the PD population from southwest China. CPDSS is considered an effective tool for the assessment of sleep disorders in PD patients.
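
    A small sketch of the Cronbach's alpha computation reported above, for a patients-by-items score matrix (the file name and layout are assumed):

        import pandas as pd

        items = pd.read_csv("cpdss_item_scores.csv")    # rows: patients, columns: scale items
        k = items.shape[1]
        alpha = k / (k - 1) * (1 - items.var(ddof=1).sum()
                               / items.sum(axis=1).var(ddof=1))
        print("Cronbach's alpha:", round(alpha, 2))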

  13. LD-SPatt: large deviations statistics for patterns on Markov chains.

    PubMed

    Nuel, G

    2004-01-01

    Statistics on Markov chains are widely used for the study of patterns in biological sequences. Statistics on these models can be computed through several approaches. Approaches based on the central limit theorem (CLT), which produce Gaussian approximations, are among the most popular. Unfortunately, in order to find a pattern of interest, these methods have to deal with tail-distribution events where the CLT approximation is especially poor. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then, we present applications of these results, focusing on numerical issues. LD-SPatt is the name of GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability and show that the large deviations approximations are more reliable than the Gaussian approximations in absolute values as well as in terms of ranking, and are at least as reliable as compound Poisson approximations. We finally discuss some further possible improvements and applications of this new method.

  14. Numerical solutions for patterns statistics on Markov chains.

    PubMed

    Nuel, Gregory

    2006-01-01

    We propose here a review of the methods available to compute pattern statistics on text generated by a Markov source. Theoretical, but also numerical aspects are detailed for a wide range of techniques (exact, Gaussian, large deviations, binomial and compound Poisson). The SPatt package (Statistics for Pattern, free software available at http://stat.genopole.cnrs.fr/spatt) implementing all these methods is then used to compare all these approaches in terms of computational time and reliability in the most complete pattern statistics benchmark available at the present time.

  15. Climate variation explains a third of global crop yield variability

    PubMed Central

    Ray, Deepak K.; Gerber, James S.; MacDonald, Graham K.; West, Paul C.

    2015-01-01

    Many studies have examined the role of mean climate change in agriculture, but an understanding of the influence of inter-annual climate variations on crop yields in different regions remains elusive. We use detailed crop statistics time series for ~13,500 political units to examine how recent climate variability led to variations in maize, rice, wheat and soybean crop yields worldwide. While some areas show no significant influence of climate variability, in substantial areas of the global breadbaskets, >60% of the yield variability can be explained by climate variability. Globally, climate variability accounts for roughly a third (~32–39%) of the observed yield variability. Our study uniquely illustrates spatial patterns in the relationship between climate variability and crop yield variability, highlighting where variations in temperature, precipitation or their interaction explain yield variability. We discuss key drivers for the observed variations to target further research and policy interventions geared towards buffering future crop production from climate variability. PMID:25609225

  16. Commercially sterilized mussel meats (Mytilus chilensis): a study on process yield.

    PubMed

    Almonacid, S; Bustamante, J; Simpson, R; Urtubia, A; Pinto, M; Teixeira, A

    2012-06-01

    The processing steps most responsible for yield loss in the manufacture of canned mussel meats are the thermal treatments of precooking to remove meats from shells, and thermal processing (retorting) to render the final canned product commercially sterile for long-term shelf stability. The objective of this study was to investigate and evaluate the impact of different combinations of process variables on the ultimate drained weight in the final mussel product (Mytilus chilensis), while verifying that any differences found were statistically and economically significant. The process variables selected for this study were precooking time, brine salt concentration, and retort temperature. Results indicated 2 combinations of process variables producing the widest difference in final drained weight, designated the best combination and the worst combination, with 35% and 29% yield, respectively. The significance of this difference was determined by employing a bootstrap methodology, which assumes an empirical distribution of statistical error. A difference of nearly 6 percentage points in total yield was found, which represents a 20% increase in annual sales from the same quantity of raw material. In addition to the increase in yield, the conditions for the best process included a retort process time 65% shorter than that for the worst process. This difference in yield could have a significant economic impact, important to the mussel canning industry. © 2012 Institute of Food Technologists®
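
    A hedged sketch of this kind of bootstrap comparison: resample batch-level yields from the best and worst process settings and form a confidence interval for the difference in mean yield (the batch values below are made up, not the study's data):

        import numpy as np

        rng = np.random.default_rng(0)
        best = np.array([0.36, 0.34, 0.35, 0.37, 0.33, 0.35])    # drained-weight fractions
        worst = np.array([0.30, 0.28, 0.29, 0.31, 0.27, 0.29])

        diffs = [rng.choice(best, best.size).mean() - rng.choice(worst, worst.size).mean()
                 for _ in range(10_000)]
        print("95% CI for the yield difference:", np.percentile(diffs, [2.5, 97.5]))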

  17. Statistical evaluation of forecasts

    NASA Astrophysics Data System (ADS)

    Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn

    2014-08-01

    Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which intend to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.

  18. Student and Professor Gender Effects in Introductory Business Statistics

    ERIC Educational Resources Information Center

    Haley, M. Ryan; Johnson, Marianne F.; Kuennen, Eric W.

    2007-01-01

    Studies have yielded highly mixed results as to differences in male and female student performance in statistics courses; the role that professors play in these differences is even less clear. In this paper, we consider the impact of professor and student gender on student performance in an introductory business statistics course taught by…

  19. Principle of maximum entropy for reliability analysis in the design of machine components

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.

  20. Measurements of Γ(Z⁰ → bb̄)/Γ(Z⁰ → hadrons) using the SLD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neal, H.A. Jr. II

    1995-07-01

    The quantity R_b = Γ(Z⁰ → bb̄)/Γ(Z⁰ → hadrons) is a sensitive measure of corrections to the Zbb vertex. The precision necessary to observe the top quark mass dependent corrections is close to being achieved. LEP is already observing a 1.8σ deviation from the Standard Model prediction. Knowledge of the top quark mass combined with the observation of deviations from the Standard Model prediction would indicate new physics. Models which include charged Higgs or light SUSY particles yield predictions for R_b appreciably different from the Standard Model. In this thesis two independent methods are used to measure R_b. One uses a general event tag which determines R_b from the rate at which events are tagged as Z⁰ → bb̄ in data and the estimated rates at which various flavors of events are tagged from the Monte Carlo. The second method reduces the reliance on the Monte Carlo by separately tagging each hemisphere as containing a b-decay. The rates of single-hemisphere-tagged events and both-hemisphere-tagged events are used to determine the tagging efficiency for b-quarks directly from the data, thus eliminating the main sources of systematic error present in the event tag. Both measurements take advantage of the unique environment provided by the SLAC Linear Collider (SLC) and the SLAC Large Detector (SLD). From the event tag a result of R_b = 0.230 ± 0.004 (stat.) ± 0.013 (syst.) is obtained. The higher precision hemisphere tag result obtained is R_b = 0.218 ± 0.004 (stat.) ± 0.004 (syst.) ± 0.003 (R_c).
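
    A toy illustration of the double-tag idea (not the thesis analysis): if a fraction R_b of hadronic Z decays are b-bbar events and each b hemisphere is tagged with efficiency eps, then, neglecting non-b backgrounds and hemisphere correlations, the hemisphere tag rate is f1 = eps*R_b and the event double-tag rate is f2 = eps**2*R_b, so both quantities follow from the data alone:

        f1, f2 = 0.048, 0.0105      # hypothetical single- and double-tag rates
        eps = f2 / f1               # b-tagging efficiency measured from the data
        R_b = f1 ** 2 / f2
        print(f"eps = {eps:.3f}, R_b = {R_b:.3f}")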

  1. Psychometrics Matter in Health Behavior: A Long-term Reliability Generalization Study.

    PubMed

    Pickett, Andrew C; Valdez, Danny; Barry, Adam E

    2017-09-01

    Despite numerous calls for increased understanding and reporting of reliability estimates, social science research, including the field of health behavior, has been slow to respond and adopt such practices. Therefore, we offer a brief overview of reliability and common reporting errors; we then perform analyses to examine and demonstrate the variability of reliability estimates by sample and over time. Using meta-analytic reliability generalization, we examined the variability of coefficient alpha scores for a well-designed, consistent, nationwide health study, covering a span of nearly 40 years. For each year and sample, reliability varied. Furthermore, reliability was predicted by a sample characteristic that differed among age groups within each administration. We demonstrated that reliability is influenced by the methods and individuals from which a given sample is drawn. Our work echoes previous calls that psychometric properties, particularly reliability of scores, are important and must be considered and reported before drawing statistical conclusions.

  2. Statistical Knowledge and the Over-Interpretation of Student Evaluations of Teaching

    ERIC Educational Resources Information Center

    Boysen, Guy A.

    2017-01-01

    Research shows that teachers interpret small differences in student evaluations of teaching as meaningful even when available statistical information indicates that the differences are not reliable. The current research explored the effect of statistical training on college teachers' tendency to over-interpret student evaluation differences. A…

  3. Effect of warming temperatures on US wheat yields.

    PubMed

    Tack, Jesse; Barkley, Andrew; Nalley, Lawton Lanier

    2015-06-02

    Climate change is expected to increase future temperatures, potentially resulting in reduced crop production in many key production regions. Research quantifying the complex relationship between weather variables and wheat yields is rapidly growing, and recent advances have used a variety of model specifications that differ in how temperature data are included in the statistical yield equation. A unique data set that combines Kansas wheat variety field trial outcomes for 1985-2013 with location-specific weather data is used to analyze the effect of weather on wheat yield using regression analysis. Our results indicate that the effect of temperature exposure varies across the September-May growing season. The largest drivers of yield loss are freezing temperatures in the Fall and extreme heat events in the Spring. We also find that the overall effect of warming on yields is negative, even after accounting for the benefits of reduced exposure to freezing temperatures. Our analysis indicates that there exists a tradeoff between average (mean) yield and ability to resist extreme heat across varieties. More-recently released varieties are less able to resist heat than older lines. Our results also indicate that warming effects would be partially offset by increased rainfall in the Spring. Finally, we find that the method used to construct measures of temperature exposure matters for both the predictive performance of the regression model and the forecasted warming impacts on yields.

  4. Random Forests for Global and Regional Crop Yield Predictions.

    PubMed

    Jeong, Jig Han; Resop, Jonathan P; Mueller, Nathaniel D; Fleisher, David H; Yun, Kyungdahm; Butler, Ethan E; Timlin, Dennis J; Shim, Kyo-Moon; Gerber, James S; Reddy, Vangimalla R; Kim, Soo-Hyung

    2016-01-01

    Accurate predictions of crop yield are critical for developing effective agricultural and food policies at the regional and global scales. We evaluated a machine-learning method, Random Forests (RF), for its ability to predict crop yield responses to climate and biophysical variables at global and regional scales in wheat, maize, and potato in comparison with multiple linear regressions (MLR) serving as a benchmark. We used crop yield data from various sources and regions for model training and testing: 1) gridded global wheat grain yield, 2) maize grain yield from US counties over thirty years, and 3) potato tuber and maize silage yield from the northeastern seaboard region. RF was found highly capable of predicting crop yields and outperformed MLR benchmarks in all performance statistics that were compared. For example, the root mean square errors (RMSE) ranged between 6 and 14% of the average observed yield with RF models in all test cases whereas these values ranged from 14% to 49% for MLR models. Our results show that RF is an effective and versatile machine-learning method for crop yield predictions at regional and global scales for its high accuracy and precision, ease of use, and utility in data analysis. RF may result in a loss of accuracy when predicting the extreme ends or responses beyond the boundaries of the training data.

  5. Splitting parameter yield (SPY): A program for semiautomatic analysis of shear-wave splitting

    NASA Astrophysics Data System (ADS)

    Zaccarelli, Lucia; Bianco, Francesca; Zaccarelli, Riccardo

    2012-03-01

    SPY is a Matlab algorithm that analyzes seismic waveforms in a semiautomatic way, providing estimates of the two observables of the anisotropy: the shear-wave splitting parameters. We chose to exploit those computational processes that require less intervention by the user, gaining objectivity and reliability as a result. The algorithm joins the covariance matrix and the cross-correlation techniques, and all the computation steps are interspersed by several automatic checks intended to verify the reliability of the yields. The resulting semiautomation generates two new advantages in the field of anisotropy studies: handling a huge amount of data at the same time, and comparing different yields. From this perspective, SPY has been developed in the Matlab environment, which is widespread, versatile, and user-friendly. Our intention is to provide the scientific community with a new monitoring tool for tracking the temporal variations of the crustal stress field.

  6. Estimating variability in grain legume yields across Europe and the Americas

    NASA Astrophysics Data System (ADS)

    Cernay, Charles; Ben-Ari, Tamara; Pelzer, Elise; Meynard, Jean-Marc; Makowski, David

    2015-06-01

    Grain legume production in Europe has recently come under scrutiny. Although legume crops are often promoted to provide environmental services, European farmers tend to turn to non-legume crops. It is assumed that high variability in legume yields explains this aversion, but so far this hypothesis has not been tested. Here, we estimate the variability of major grain legume and non-legume yields in Europe and the Americas from yield time series over 1961-2013. Results show that grain legume yields are significantly more variable than non-legume yields in Europe. These differences are smaller in the Americas. Our results are robust to the choice of statistical method. In all regions, crops with high yield variability are allocated to less than 1% of cultivated areas. Although the expansion of grain legumes in Europe may be hindered by high yield variability, some species display risk levels compatible with the development of specialized supply chains.
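
    One simple variability measure used in comparisons of this kind is the coefficient of variation of yield anomalies around a fitted trend; a sketch with a synthetic yield series (the values are simulated, not the study's data):

        import numpy as np

        years = np.arange(1961, 2014)
        yields = 1.5 + 0.02 * (years - 1961) + np.random.default_rng(1).normal(0, 0.2, years.size)

        trend = np.polyval(np.polyfit(years, yields, 1), years)
        cv = np.std(yields - trend, ddof=1) / trend.mean()
        print("detrended coefficient of variation:", round(cv, 3))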

  7. Ceramic component reliability with the restructured NASA/CARES computer program

    NASA Technical Reports Server (NTRS)

    Powers, Lynn M.; Starlinger, Alois; Gyekenyesi, John P.

    1992-01-01

    The Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design program for the statistical fast-fracture reliability of monolithic ceramic components is enhanced to include the use of a neutral data base, two-dimensional modeling, and variable problem size. The data base allows for the efficient transfer of element stresses, temperatures, and volumes/areas from the finite element output to the reliability analysis program. Elements are divided to ensure a direct correspondence between the subelements and the Gaussian integration points. Two-dimensional modeling is accomplished by assessing the volume flaw reliability with shell elements. To demonstrate the improvements in the algorithm, example problems are selected from a round-robin conducted by WELFEP (WEakest Link failure probability prediction by Finite Element Postprocessors).

  8. Reliable Digit Span: A Systematic Review and Cross-Validation Study

    ERIC Educational Resources Information Center

    Schroeder, Ryan W.; Twumasi-Ankrah, Philip; Baade, Lyle E.; Marshall, Paul S.

    2012-01-01

    Reliable Digit Span (RDS) is a heavily researched symptom validity test with a recent literature review yielding more than 20 studies ranging in dates from 1994 to 2011. Unfortunately, limitations within some of the research minimize clinical generalizability. This systematic review and cross-validation study was conducted to address these…

  9. Statistical analysis of water-quality data containing multiple detection limits: S-language software for regression on order statistics

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2005-01-01

    Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
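
    A greatly simplified, single-detection-limit sketch of the regression-on-order-statistics idea (the published ROS procedure handles multiple limits; the concentrations below are hypothetical): detected values are regressed on normal quantiles and the fit imputes the censored tail before summary statistics are computed.

        import numpy as np
        from scipy import stats

        detected = np.array([0.8, 1.2, 1.9, 2.4, 3.1, 4.7])     # measured concentrations
        n_censored = 4                                           # results reported as "<0.5"
        n = detected.size + n_censored

        pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)          # plotting positions
        q = stats.norm.ppf(pp)
        slope, intercept, *_ = stats.linregress(q[n_censored:], np.log(np.sort(detected)))

        imputed = np.exp(intercept + slope * q[:n_censored])     # fill in values below the limit
        print("ROS estimate of the mean:", np.concatenate([imputed, detected]).mean())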

  10. Societal Statistics by virtue of the Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2012-09-01

    The Drake equation, first proposed by Frank D. Drake in 1961, is the foundational equation of SETI. It yields an estimate of the number N of extraterrestrial communicating civilizations in the Galaxy given by the product N=Ns×fp×ne×fl×fi×fc×fL, where: Ns is the number of stars in the Milky Way Galaxy; fp is the fraction of stars that have planetary systems; ne is the number of planets in a given system that are ecologically suitable for life; fl is the fraction of otherwise suitable planets on which life actually arises; fi is the fraction of inhabited planets on which an intelligent form of life evolves; fc is the fraction of planets inhabited by intelligent beings on which a communicative technical civilization develops; and fL is the fraction of planetary lifetime graced by a technical civilization. The first three terms may be called "the astrophysical terms" in the Drake equation since their numerical value is provided by astrophysical considerations. The fourth term, fl, may be called "the origin-of-life term" and entails biology. The last three terms may be called "the societal terms" inasmuch as their respective numerical values are provided by anthropology, telecommunication science and "futuristic science", respectively. In this paper, we seek to provide a statistical estimate of the three societal terms in the Drake equation basing our calculations on the Statistical Drake Equation first proposed by this author at the 2008 IAC. In that paper the author extended the simple 7-factor product so as to embody Statistics. He proved that, no matter which probability distribution may be assigned to each factor, if the number of factors tends to infinity, then the random variable N follows the lognormal distribution (central limit theorem of Statistics). This author also proved at the 2009 IAC that the Dole (1964) [7] equation, yielding the number of Habitable Planets for Man in the Galaxy, has the same mathematical structure as the Drake equation. So the
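
    A quick numerical illustration of the lognormal result cited above: the product of several independent positive random factors has an approximately normal logarithm, whatever the individual factor distributions (the factor ranges here are arbitrary, not the paper's values).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        factors = [rng.uniform(0.1, 10.0, 100_000) for _ in range(7)]
        N = np.prod(factors, axis=0)

        log_N = np.log(N)
        print("skew and excess kurtosis of log N:",
              round(float(stats.skew(log_N)), 3), round(float(stats.kurtosis(log_N)), 3))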

  11. Final Report: Studies in Structural, Stochastic and Statistical Reliability for Communication Networks and Engineered Systems

    DTIC Science & Technology

    to do so, and (5) three distinct versions of the problem of estimating component reliability from system failure-time data are treated, each resulting in consistent estimators with asymptotically normal distributions.

  12. Space station software reliability analysis based on failures observed during testing at the multisystem integration facility

    NASA Technical Reports Server (NTRS)

    Tamayo, Tak Chai

    1987-01-01

    Quality of software not only is vital to the successful operation of the space station, it is also an important factor in establishing testing requirements, time needed for software verification and integration, as well as launching schedules for the space station. Defense of management decisions can be greatly strengthened by combining engineering judgments with statistical analysis. Unlike hardware, software has the characteristics of no wearout and costly redundancies, thus making traditional statistical analysis unsuitable for evaluating the reliability of software. A statistical model was developed to provide a representation of the number as well as the types of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria to terminate testing based on reliability objectives and methods to estimate the expected number of fixes required are also presented.

  13. Midwest agriculture and ENSO: A comparison of AVHRR NDVI3g data and crop yields in the United States Corn Belt from 1982 to 2014

    NASA Astrophysics Data System (ADS)

    Glennie, Erin; Anyamba, Assaf

    2018-06-01

    A time series of Advanced Very High Resolution Radiometer (AVHRR) derived normalized difference vegetation index (NDVI) data were compared to National Agricultural Statistics Service (NASS) corn yield data in the United States Corn Belt from 1982 to 2014. The main objectives of the comparison were to assess 1) the consistency of regional Corn Belt responses to El Niño/Southern Oscillation (ENSO) teleconnection signals, and 2) the reliability of using NDVI as an indicator of crop yield. Regional NDVI values were used to model a seasonal curve and to define the growing season - May to October. Seasonal conditions in each county were represented by NDVI and land surface temperature (LST) composites, and corn yield was represented by average annual bushels produced per acre. Correlation analysis between the NDVI, LST, corn yield, and equatorial Pacific sea surface temperature anomalies revealed patterns in land surface dynamics and corn yield, as well as typical impacts of ENSO episodes. It was observed from the study that growing seasons coincident with La Niña events were consistently warmer, but El Niño events did not consistently impact NDVI, temperature, or corn yield data. Moreover, the El Niño and La Niña composite images suggest that impacts vary spatially across the Corn Belt. While corn is the dominant crop in the region, some inconsistencies between corn yield and NDVI may be attributed to soy crops and other background interference. The overall correlation between the total growing season NDVI anomaly and detrended corn yield was 0.61(p = 0.00013), though the strength of the relationship varies across the Corn Belt.

  14. Inter-rater reliability of the Sødring Motor Evaluation of Stroke patients (SMES).

    PubMed

    Halsaa, K E; Sødring, K M; Bjelland, E; Finsrud, K; Bautz-Holter, E

    1999-12-01

    The Sødring Motor Evaluation of Stroke patients is an instrument for physiotherapists to evaluate motor function and activities in stroke patients. The rating reflects quality as well as quantity of the patient's unassisted performance within three domains: leg, arm and gross function. The inter-rater reliability of the method was studied in a sample of 30 patients admitted to a stroke rehabilitation unit. Three therapists were involved in the study; two therapists assessed the same patient on two consecutive days in a balanced design. Cohen's weighted kappa and McNemar's test of symmetry were used as measures of item reliability, and the intraclass correlation coefficient was used to express the reliability of the sumscores. For 24 out of 32 items the weighted kappa statistic was excellent (0.75-0.98), while 7 items had a kappa statistic within the range 0.53-0.74 (fair to good). The reliability of one item was poor (0.13). The intraclass correlation coefficient for the three sumscores was 0.97, 0.91 and 0.97. We conclude that the Sødring Motor Evaluation of Stroke patients is a reliable measure of motor function in stroke patients undergoing rehabilitation.
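
    A small sketch of the item-level agreement statistic used above, Cohen's weighted kappa for two raters scoring the same patients on an ordinal item (the ratings below are made up):

        from sklearn.metrics import cohen_kappa_score

        rater_a = [2, 3, 1, 4, 4, 2, 3, 1, 0, 3]
        rater_b = [2, 3, 2, 4, 3, 2, 3, 1, 0, 4]
        print("weighted kappa:", cohen_kappa_score(rater_a, rater_b, weights="linear"))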

  15. The Americleft Speech Project: A Training and Reliability Study.

    PubMed

    Chapman, Kathy L; Baylis, Adriane; Trost-Cardamone, Judith; Cordero, Kelly Nett; Dixon, Angela; Dobbelsteyn, Cindy; Thurmes, Anna; Wilson, Kristina; Harding-Bell, Anne; Sweeney, Triona; Stoddard, Gregory; Sell, Debbie

    2016-01-01

    To describe the results of two reliability studies and to assess the effect of training on interrater reliability scores. The first study (1) examined interrater and intrarater reliability scores (weighted and unweighted kappas) and (2) compared interrater reliability scores before and after training on the use of the Cleft Audit Protocol for Speech-Augmented (CAPS-A) with British English-speaking children. The second study examined interrater and intrarater reliability on a modified version of the CAPS-A (CAPS-A Americleft Modification) with American and Canadian English-speaking children. Finally, comparisons were made between the interrater and intrarater reliability scores obtained for Study 1 and Study 2. The participants were speech-language pathologists from the Americleft Speech Project. In Study 1, interrater reliability scores improved for 6 of the 13 parameters following training on the CAPS-A protocol. Comparison of the reliability results for the two studies indicated lower scores for Study 2 compared with Study 1. However, this appeared to be an artifact of the kappa statistic that occurred due to insufficient variability in the reliability samples for Study 2. When percent agreement scores were also calculated, the ratings appeared similar across Study 1 and Study 2. The findings of this study suggested that improvements in interrater reliability could be obtained following a program of systematic training. However, improvements were not uniform across all parameters. Acceptable levels of reliability were achieved for those parameters most important for evaluation of velopharyngeal function.

  16. Statistical Analysis Tools for Learning in Engineering Laboratories.

    ERIC Educational Resources Information Center

    Maher, Carolyn A.

    1990-01-01

    Described are engineering programs that have used automated data acquisition systems to implement data collection and analyze experiments. Applications include a biochemical engineering laboratory, heat transfer performance, engineering materials testing, mechanical system reliability, statistical control laboratory, thermo-fluid laboratory, and a…

  17. A reliability evaluation methodology for memory chips for space applications when sample size is small

    NASA Technical Reports Server (NTRS)

    Chen, Y.; Nguyen, D.; Guertin, S.; Berstein, J.; White, M.; Menke, R.; Kayali, S.

    2003-01-01

    This paper presents a reliability evaluation methodology to obtain the statistical reliability information of memory chips for space applications when the test sample size needs to be kept small because of the high cost of radiation-hardened memories.

  18. The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences.

    PubMed

    Hedge, Craig; Powell, Georgina; Sumner, Petroc

    2018-06-01

    Individual differences in cognitive paradigms are increasingly employed to relate cognition to brain structure, chemistry, and function. However, such efforts are often unfruitful, even with the most well established tasks. Here we offer an explanation for failures in the application of robust cognitive paradigms to the study of individual differences. Experimental effects become well established - and thus those tasks become popular - when between-subject variability is low. However, low between-subject variability causes low reliability for individual differences, destroying replicable correlations with other factors and potentially undermining published conclusions drawn from correlational relationships. Though these statistical issues have a long history in psychology, they are widely overlooked in cognitive psychology and neuroscience today. In three studies, we assessed test-retest reliability of seven classic tasks: Eriksen Flanker, Stroop, stop-signal, go/no-go, Posner cueing, Navon, and Spatial-Numerical Association of Response Code (SNARC). Reliabilities ranged from 0 to .82, being surprisingly low for most tasks given their common use. As we predicted, this emerged from low variance between individuals rather than high measurement variance. In other words, the very reason such tasks produce robust and easily replicable experimental effects - low between-participant variability - makes their use as correlational tools problematic. We demonstrate that taking such reliability estimates into account has the potential to qualitatively change theoretical conclusions. The implications of our findings are that well-established approaches in experimental psychology and neuropsychology may not directly translate to the study of individual differences in brain structure, chemistry, and function, and alternative metrics may be required.

  19. Yield of bedrock wells in the Nashoba terrane, central and eastern Massachusetts

    USGS Publications Warehouse

    DeSimone, Leslie A.; Barbaro, Jeffrey R.

    2012-01-01

    The yield of bedrock wells in the fractured-bedrock aquifers of the Nashoba terrane and surrounding area, central and eastern Massachusetts, was investigated with analyses of existing data. Reported well yield was compiled for 7,287 wells from Massachusetts Department of Environmental Protection and U.S. Geological Survey databases. Yield of these wells ranged from 0.04 to 625 gallons per minute. In a comparison with data from 103 supply wells, yield and specific capacity from aquifer tests were well correlated, indicating that reported well yield was a reasonable measure of aquifer characteristics in the study area. Statistically significant relations were determined between well yield and a number of cultural and hydrogeologic factors. Cultural variables included intended water use, well depth, year of construction, and method of yield measurement. Bedrock geology, topography, surficial geology, and proximity to surface waters were statistically significant hydrogeologic factors. Yield of wells was higher in areas of granites, mafic intrusive rocks, and amphibolites than in areas of schists and gneisses or pelitic rocks; higher in valleys and low-slope areas than on hills, ridges, or high slopes; higher in areas overlain by stratified glacial deposits than in areas overlain by till; and higher in close proximity to streams, ponds, and wetlands than at greater distances from these surface-water features. Proximity to mapped faults and to lineaments from aerial photographs also were related to well yield by some measures in three quadrangles in the study area. Although the statistical significance of these relations was high, their predictive power was low, and these relations explained little of the variability in the well-yield data. Similar results were determined from a multivariate regression analysis. Multivariate regression models for the Nashoba terrane and for a three-quadrangle subarea included, as significant variables, many of the cultural and

  20. ``Models'' CAVEAT EMPTOR!!!: ``Toy Models Too-Often Yield Toy-Results''!!!: Statistics, Polls, Politics, Economics, Elections!!!: GRAPH/Network-Physics: ``Equal-Distribution for All'' TRUMP-ED BEC ``Winner-Take-All'' ``Doctor Livingston I Presume?''

    NASA Astrophysics Data System (ADS)

    Preibus-Norquist, R. N. C.-Grover; Bush-Romney, G. W.-Willard-Mitt; Dimon, J. P.; Adelson-Koch, Sheldon-Charles-David-Sheldon; Krugman-Axelrod, Paul-David; Siegel, Edward Carl-Ludwig; D. N. C./O. F. P./''47''%/50% Collaboration; R. N. C./G. O. P./''53''%/49% Collaboration; Nyt/Wp/Cnn/Msnbc/Pbs/Npr/Ft Collaboration; Ftn/Fnc/Fox/Wsj/Fbn Collaboration; Lb/Jpmc/Bs/Boa/Ml/Wamu/S&P/Fitch/Moodys/Nmis Collaboration

    2013-03-01

    ``Models''? CAVEAT EMPTOR!!!: ``Toy Models Too-Often Yield Toy-Results''!!!: Goldenfeld[``The Role of Models in Physics'', in Lects.on Phase-Transitions & R.-G.(92)-p.32-33!!!]: statistics(Silver{[NYTimes; Bensinger, ``Math-Geerks Clearly-Defeated Pundits'', LATimes, (11/9/12)])}, polls, politics, economics, elections!!!: GRAPH/network/net/...-PHYSICS Barabasi-Albert[RMP (02)] (r,t)-space VERSUS(???) [Where's the Inverse/ Dual/Integral-Transform???] (Benjamin)Franklin(1795)-Fourier(1795; 1897;1822)-Laplace(1850)-Mellin (1902) Brillouin(1922)-...(k,)-space, {Hubbard [The World According to Wavelets,Peters (96)-p.14!!!/p.246: refs.-F2!!!]},and then (2) Albert-Barabasi[]Bose-Einstein quantum-statistics(BEQS) Bose-Einstein CONDENSATION (BEC) versus Bianconi[pvt.-comm.; arXiv:cond-mat/0204506; ...] -Barabasi [???] Fermi-Dirac

  1. Linking crop yield anomalies to large-scale atmospheric circulation in Europe.

    PubMed

    Ceglar, Andrej; Turco, Marco; Toreti, Andrea; Doblas-Reyes, Francisco J

    2017-06-15

    Understanding the effects of climate variability and extremes on crop growth and development represents a necessary step to assess the resilience of agricultural systems to changing climate conditions. This study investigates the links between the large-scale atmospheric circulation and crop yields in Europe, providing the basis to develop seasonal crop yield forecasting and thus enabling a more effective and dynamic adaptation to climate variability and change. Four dominant modes of large-scale atmospheric variability have been used: North Atlantic Oscillation, Eastern Atlantic, Scandinavian and Eastern Atlantic-Western Russia patterns. Large-scale atmospheric circulation explains on average 43% of inter-annual winter wheat yield variability, ranging between 20% and 70% across countries. As for grain maize, the average explained variability is 38%, ranging between 20% and 58%. Spatially, the skill of the developed statistical models strongly depends on the large-scale atmospheric variability impact on weather at the regional level, especially during the most sensitive growth stages of flowering and grain filling. Our results also suggest that preceding atmospheric conditions might provide an important source of predictability especially for maize yields in south-eastern Europe. Since the seasonal predictability of large-scale atmospheric patterns is generally higher than the one of surface weather variables (e.g. precipitation) in Europe, seasonal crop yield prediction could benefit from the integration of derived statistical models exploiting the dynamical seasonal forecast of large-scale atmospheric circulation.

  2. Yield modeling of acoustic charge transport transversal filters

    NASA Technical Reports Server (NTRS)

    Kenney, J. S.; May, G. S.; Hunt, W. D.

    1995-01-01

    This paper presents a yield model for acoustic charge transport transversal filters. This model differs from previous IC yield models in that it does not assume that individual failures of the nondestructive sensing taps necessarily cause a device failure. A redundancy in the number of taps included in the design is explained. Poisson statistics are used to describe the tap failures, weighted over a uniform defect density distribution. A representative design example is presented. The minimum number of taps needed to realize the filter is calculated, and tap weights for various numbers of redundant taps are calculated. The critical area for device failure is calculated for each level of redundancy. Yield is predicted for a range of defect densities and redundancies. To verify the model, a Monte Carlo simulation is performed on an equivalent circuit model of the device. The results of the yield model are then compared to the Monte Carlo simulation. Better than 95% agreement was obtained for the Poisson model with redundant taps ranging from 30% to 150% over the minimum.
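
    A hedged sketch of this kind of yield calculation: tap failures follow a Poisson law with mean proportional to the defect density and the per-tap critical area, the device survives if at most r taps fail, and the yield is averaged over a uniform defect-density distribution (all numbers are illustrative, not the paper's design values):

        import numpy as np
        from scipy.stats import poisson

        a_tap = 1e-4                                  # critical area per tap, cm^2 (assumed)
        n_taps, r = 400, 40                           # taps in the design, redundant taps tolerated
        densities = np.linspace(0.1, 2.0, 200)        # defects per cm^2, uniform range (assumed)

        yield_per_density = poisson.cdf(r, densities * a_tap * n_taps)
        print("predicted yield:", yield_per_density.mean())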

  3. A Comparison of Machine Learning Approaches for Corn Yield Estimation

    NASA Astrophysics Data System (ADS)

    Kim, N.; Lee, Y. W.

    2017-12-01

    Machine learning is an efficient empirical method for classification and prediction, and it offers another approach to crop yield estimation. The objective of this study is to estimate corn yield in the Midwestern United States by employing machine learning approaches such as the support vector machine (SVM), random forest (RF), and deep neural networks (DNN), and to perform a comprehensive comparison of their results. We constructed the database using satellite images from MODIS, climate data from the PRISM climate group, and GLDAS soil moisture data. In addition, to examine the seasonal sensitivities of corn yields, two period groups were set up: May to September (MJJAS) and July and August (JA). Overall, the DNN showed the highest accuracy in terms of the correlation coefficient for both period groups. The differences between our predictions and USDA yield statistics were about 10-11%.
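    For orientation, the comparison workflow described above (train SVM, RF, and neural-network regressors, then score them by correlation coefficient) can be sketched as follows. The features are synthetic stand-ins for the MODIS/PRISM/GLDAS predictors, and the model settings are arbitrary assumptions, not those of the study.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # Synthetic stand-in features (vegetation indices, climate, soil moisture) and a
    # yield signal with some nonlinearity plus noise.
    X = rng.normal(size=(500, 8))
    y = 2.0 * X[:, 0] - 1.0 * X[:, 1] ** 2 + 0.5 * X[:, 2] * X[:, 3] + 0.3 * rng.normal(size=500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "SVM": SVR(C=10.0),
        "RF": RandomForestRegressor(n_estimators=300, random_state=0),
        "DNN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        r = np.corrcoef(y_te, model.predict(X_te))[0, 1]
        print(f"{name}: correlation coefficient = {r:.3f}")
    ```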

  4. Reliability of high-power QCW arrays

    NASA Astrophysics Data System (ADS)

    Feeler, Ryan; Junghans, Jeremy; Remley, Jennifer; Schnurbusch, Don; Stephens, Ed

    2010-02-01

    Northrop Grumman Cutting Edge Optronics has developed a family of arrays for high-power QCW operation. These arrays are built using CTE-matched heat sinks and hard solder in order to maximize the reliability of the devices. A summary of a recent life test is presented in order to quantify the reliability of QCW arrays and associated laser gain modules. A statistical analysis of the raw lifetime data is presented in order to summarize the data in a way that is useful for laser system designers. The life tests demonstrate the high level of reliability of these arrays across a number of operating regimes. For single-bar arrays, an MTTF of 19.8 billion shots is predicted. For four-bar samples, an MTTF of 14.6 billion shots is predicted. In addition, data representing a large pump source are analyzed and shown to have an expected lifetime of 13.5 billion shots. This corresponds to an expected operational lifetime of greater than ten thousand hours at repetition rates below 370 Hz.
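    As a quick plausibility check, an MTTF quoted in shots converts to operating hours once a repetition rate is fixed. The shot counts are taken from the abstract; the conversion below (assuming shots accumulate continuously at the stated repetition rate) is ours.

    ```python
    # Convert an MTTF given in shots to operating hours at a given repetition rate.
    def mttf_hours(mttf_shots, rep_rate_hz):
        return mttf_shots / rep_rate_hz / 3600.0

    for label, shots in [("single-bar array", 19.8e9),
                         ("four-bar array", 14.6e9),
                         ("large pump source", 13.5e9)]:
        print(f"{label}: {mttf_hours(shots, 370):,.0f} h at 370 Hz")
    # e.g. 13.5e9 shots / 370 Hz is roughly 10,100 h, consistent with the
    # "greater than ten thousand hours" statement; lower repetition rates only increase the hours.
    ```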

  5. Agreement, the F-Measure, and Reliability in Information Retrieval

    PubMed Central

    Hripcsak, George; Rothschild, Adam S.

    2005-01-01

    Information retrieval studies that involve searching the Internet or marking phrases usually lack a well-defined number of negative cases. This prevents the use of traditional interrater reliability metrics like the κ statistic to assess the quality of expert-generated gold standards. Such studies often quantify system performance as precision, recall, and F-measure, or as agreement. It can be shown that the average F-measure among pairs of experts is numerically identical to the average positive specific agreement among experts and that κ approaches these measures as the number of negative cases grows large. Positive specific agreement—or the equivalent F-measure—may be an appropriate way to quantify interrater reliability and therefore to assess the reliability of a gold standard in these studies. PMID:15684123
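    The equivalence stated above is easy to verify numerically: for a pair of raters, the F-measure computed by treating one rater as the gold standard equals the positive specific agreement, and κ approaches this value as the number of negative cases grows. The toy counts below are invented for illustration.

    ```python
    # Toy 2x2 counts between two raters (invented): both positive / only rater A / only rater B.
    tp, fp, fn = 40, 10, 15

    def f_measure(tp, fp, fn):
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    def positive_specific_agreement(tp, fp, fn):
        return 2 * tp / (2 * tp + fp + fn)

    def kappa(tp, fp, fn, tn):
        n = tp + fp + fn + tn
        po = (tp + tn) / n
        pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
        return (po - pe) / (1 - pe)

    print(f_measure(tp, fp, fn))                    # 0.7619...
    print(positive_specific_agreement(tp, fp, fn))  # identical: 0.7619...
    for tn in (10, 1_000, 1_000_000):               # kappa approaches PSA as negatives grow
        print(tn, round(kappa(tp, fp, fn, tn), 4))
    ```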

  6. Reliability studies of diagnostic methods in Indian traditional Ayurveda medicine: An overview

    PubMed Central

    Kurande, Vrinda Hitendra; Waagepetersen, Rasmus; Toft, Egon; Prasad, Ramjee

    2013-01-01

    Recently, a need to develop supportive new scientific evidence for contemporary Ayurveda has emerged. One of the research objectives is an assessment of the reliability of diagnoses and treatment. Reliability is a quantitative measure of consistency. It is a crucial issue in classification (such as prakriti classification), method development (pulse diagnosis), quality assurance for diagnosis and treatment, and the conduct of clinical studies. Several reliability studies have been conducted in Western medicine. The investigation of the reliability of traditional Chinese, Japanese and Sasang medicine diagnoses is in the formative stage. However, reliability studies in Ayurveda are in the preliminary stage. In this paper, examples are provided to illustrate relevant concepts of reliability studies of diagnostic methods and their implications for practice, education, and training. An introduction to reliability estimates and to different study designs and statistical analyses is given for future studies in Ayurveda. PMID:23930037

  7. Reliable bonding using indium-based solders

    NASA Astrophysics Data System (ADS)

    Cheong, Jongpil; Goyal, Abhijat; Tadigadapa, Srinivas; Rahn, Christopher

    2004-01-01

    Low temperature bonding techniques with high bond strengths and reliability are required for the fabrication and packaging of MEMS devices. Indium and indium-tin based bonding processes are explored for the fabrication of a flextensional MEMS actuator, which requires the integration of lead zirconate titanate (PZT) substrate with a silicon micromachined structure at low temperatures. The developed technique can be used either for wafer or chip level bonding. The lithographic steps used for the patterning and delineation of the seed layer limit the resolution of this technique. Using this technique, reliable bonds were achieved at a temperature of 200°C. The bonds yielded an average tensile strength of 5.41 MPa and 7.38 MPa for samples using indium and indium-tin alloy solders as the intermediate bonding layers respectively. The bonds (with line width of 100 microns) showed hermetic sealing capability of better than 10-11 mbar-l/s when tested using a commercial helium leak tester.

  9. Reliability of Computerized Neurocognitive Tests for Concussion Assessment: A Meta-Analysis.

    PubMed

    Farnsworth, James L; Dargo, Lucas; Ragan, Brian G; Kang, Minsoo

    2017-09-01

      Although widely used, computerized neurocognitive tests (CNTs) have been criticized because of low reliability and poor sensitivity. A systematic review was published summarizing the reliability of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) scores; however, this was limited to a single CNT. Expansion of the previous review to include additional CNTs and a meta-analysis is needed. Therefore, our purpose was to analyze reliability data for CNTs using meta-analysis and examine moderating factors that may influence reliability.   A systematic literature search (key terms: reliability, computerized neurocognitive test, concussion) of electronic databases (MEDLINE, PubMed, Google Scholar, and SPORTDiscus) was conducted to identify relevant studies.   Studies were included if they met all of the following criteria: used a test-retest design, involved at least 1 CNT, provided sufficient statistical data to allow for effect-size calculation, and were published in English.   Two independent reviewers investigated each article to assess inclusion criteria. Eighteen studies involving 2674 participants were retained. Intraclass correlation coefficients were extracted to calculate effect sizes and determine overall reliability. The Fisher Z transformation adjusted for sampling error associated with averaging correlations. Moderator analyses were conducted to evaluate the effects of the length of the test-retest interval, intraclass correlation coefficient model selection, participant demographics, and study design on reliability. Heterogeneity was evaluated using the Cochran Q statistic.   The proportion of acceptable outcomes was greatest for the Axon Sports CogState Test (75%) and lowest for the ImPACT (25%). Moderator analyses indicated that the type of intraclass correlation coefficient model used significantly influenced effect-size estimates, accounting for 17% of the variation in reliability.   The Axon Sports CogState Test, which
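    The pooling step described above, averaging reliability coefficients after a Fisher Z transformation, can be sketched in a few lines. The ICC values and sample sizes below are invented and do not come from the reviewed studies.

    ```python
    import math

    # Hypothetical test-retest ICCs and sample sizes from individual studies.
    studies = [(0.74, 60), (0.55, 120), (0.68, 45), (0.81, 200)]

    def fisher_z(r):
        return 0.5 * math.log((1 + r) / (1 - r))

    def inverse_fisher_z(z):
        return math.tanh(z)

    # Weight each transformed coefficient by (n - 3), the usual inverse-variance weight.
    weights = [n - 3 for _, n in studies]
    z_bar = sum(fisher_z(r) * w for (r, _), w in zip(studies, weights)) / sum(weights)
    pooled_icc = inverse_fisher_z(z_bar)
    print(f"pooled reliability estimate: {pooled_icc:.3f}")
    ```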

  10. Assessing attitudes towards statistics among medical students: psychometric properties of the Serbian version of the Survey of Attitudes Towards Statistics (SATS).

    PubMed

    Stanisavljevic, Dejana; Trajkovic, Goran; Marinkovic, Jelena; Bukumiric, Zoran; Cirkovic, Andja; Milic, Natasa

    2014-01-01

    Medical statistics has become important and relevant for future doctors, enabling them to practice evidence-based medicine. Recent studies report that students' attitudes towards statistics play an important role in their statistics achievements. The aim of the study was to test the psychometric properties of the Serbian version of the Survey of Attitudes Towards Statistics (SATS) in order to acquire a valid instrument to measure attitudes inside the Serbian educational context. The validation study was performed on a cohort of 417 medical students who were enrolled in an obligatory introductory statistics course. The SATS adaptation was based on an internationally accepted methodology for translation and cultural adaptation. Psychometric properties of the Serbian version of the SATS were analyzed through the examination of factorial structure and internal consistency. Most medical students held positive attitudes towards statistics. The average total SATS score was above neutral (4.3±0.8), and varied from 1.9 to 6.2. Confirmatory factor analysis validated the six-factor structure of the questionnaire (Affect, Cognitive Competence, Value, Difficulty, Interest and Effort). Values for the fit indices TLI (0.940) and CFI (0.961) were above the cut-off of ≥0.90. The RMSEA value of 0.064 (0.051-0.078) was below the suggested value of ≤0.08. Cronbach's alpha for the entire scale was 0.90, indicating scale reliability. In a multivariate regression model, self-rating of ability in mathematics and current grade point average were significantly associated with the total SATS score after adjusting for age and gender. The present study provided evidence of appropriate metric properties of the Serbian version of the SATS. Confirmatory factor analysis validated the six-factor structure of the scale. The SATS may be a reliable and valid instrument for identifying medical students' attitudes towards statistics in the Serbian educational context.

  11. Observation of the rare decay $B^+ \to K^+\pi^0\pi^0$ and measurement of the quasi-two-body contributions $B^+ \to K^*(892)^+\pi^0$, $B^+ \to f_0(980)K^+$, and $B^+ \to \chi_{c0}K^+$

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lees, J. P.; Poireau, V.; Tisserand, V.

    We report an analysis of charmless hadronic decays of charged B mesons to the final state $K^+\pi^0\pi^0$, using a data sample of $(470.9\pm2.8)\times10^6$ $B\bar{B}$ events collected with the BABAR detector at the $\Upsilon(4S)$ resonance. We observe an excess of signal events, with a significance above 10 standard deviations including systematic uncertainties, and measure the branching fraction and CP asymmetry to be $\mathcal{B}(B^+ \to K^+\pi^0\pi^0) = (16.2\pm1.2\pm1.5)\times10^{-6}$ and $A_{CP}(B^+ \to K^+\pi^0\pi^0) = -0.06\pm0.06\pm0.04$, where the uncertainties are statistical and systematic, respectively. Additionally, we study the contributions of the $B^+ \to K^*(892)^+\pi^0$, $B^+ \to f_0(980)K^+$, and $B^+ \to \chi_{c0}K^+$ quasi-two-body decays. We report the world's best measurements of the branching fraction and CP asymmetry of the $B^+ \to K^+\pi^0\pi^0$ and $B^+ \to K^*(892)^+\pi^0$ channels.

  12. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.

  13. The Infeasibility of Experimental Quantification of Life-Critical Software Reliability

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Finelli, George B.

    1991-01-01

    This paper affirms that quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultra-reliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multi-version software experiments support this affirmation.

  14. The Americleft Speech Project: A Training and Reliability Study

    PubMed Central

    Chapman, Kathy L.; Baylis, Adriane; Trost-Cardamone, Judith; Cordero, Kelly Nett; Dixon, Angela; Dobbelsteyn, Cindy; Thurmes, Anna; Wilson, Kristina; Harding-Bell, Anne; Sweeney, Triona; Stoddard, Gregory; Sell, Debbie

    2017-01-01

    Objective To describe the results of two reliability studies and to assess the effect of training on interrater reliability scores. Design The first study (1) examined interrater and intrarater reliability scores (weighted and unweighted kappas) and (2) compared interrater reliability scores before and after training on the use of the Cleft Audit Protocol for Speech–Augmented (CAPS-A) with British English-speaking children. The second study examined interrater and intrarater reliability on a modified version of the CAPS-A (CAPS-A Americleft Modification) with American and Canadian English-speaking children. Finally, comparisons were made between the interrater and intrarater reliability scores obtained for Study 1 and Study 2. Participants The participants were speech-language pathologists from the Americleft Speech Project. Results In Study 1, interrater reliability scores improved for 6 of the 13 parameters following training on the CAPS-A protocol. Comparison of the reliability results for the two studies indicated lower scores for Study 2 compared with Study 1. However, this appeared to be an artifact of the kappa statistic that occurred due to insufficient variability in the reliability samples for Study 2. When percent agreement scores were also calculated, the ratings appeared similar across Study 1 and Study 2. Conclusion The findings of this study suggested that improvements in interrater reliability could be obtained following a program of systematic training. However, improvements were not uniform across all parameters. Acceptable levels of reliability were achieved for those parameters most important for evaluation of velopharyngeal function. PMID:25531738

  15. A Bayesian approach to reliability and confidence

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1989-01-01

    The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted, along with the development of the Extended Orbiter Duration-Weakest Link study, which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component, and hence by a simple extension for a system of components, in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the most general case must be handled numerically by computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
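    As a loose illustration of this style of analysis (not the closed form derived in the study, and ignoring time-varying failure rates), a uniform Beta(1,1) prior on a component's per-demand reliability updates to a Beta posterior from pass/fail data, and a system-level credible bound follows by Monte Carlo over independent components. All test counts below are invented.

    ```python
    import random

    random.seed(0)

    def posterior(successes, failures, a0=1.0, b0=1.0):
        """Uniform Beta(1,1) prior updated with pass/fail data gives a Beta posterior."""
        return a0 + successes, b0 + failures

    # Hypothetical component test results (successes, failures); three components in series.
    components = [(98, 2), (45, 0), (200, 5)]
    posteriors = [posterior(s, f) for s, f in components]

    # Monte Carlo over the posteriors: for an independent series system, the system
    # reliability is the product of the component reliabilities.
    draws = 100_000
    samples = []
    for _ in range(draws):
        r = 1.0
        for a, b in posteriors:
            r *= random.betavariate(a, b)
        samples.append(r)
    samples.sort()

    print("posterior mean system reliability:", sum(samples) / draws)
    print("5th-percentile lower credible bound:", samples[int(0.05 * draws)])
    ```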

  16. Modeling, implementation, and validation of arterial travel time reliability : [summary].

    DOT National Transportation Integrated Search

    2013-11-01

    Travel time reliability (TTR) has been proposed as a better measure of a facility's performance than a statistical measure like peak hour demand. TTR is based on more information about average traffic flows and longer time periods, thus inc...

  17. How Reliable is Bayesian Model Averaging Under Noisy Data? Statistical Assessment and Implications for Robust Model Selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang

    2014-05-01

    Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted with their posterior probability to be the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. This rigorous procedure does, however, not yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may suffer a lack of robustness related to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new statistical concept to account for measurement noise as source of uncertainty for the weights in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term into the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit in performance of the model set which is imposed by measurement noise. As an extension to the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide, whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length. We illustrate
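    The brute-force perturbation idea can be sketched compactly: repeatedly add random measurement-error realizations to the calibration data, recompute the posterior model weights, and report the induced spread ("weighting variance"). The two conceptual models, the synthetic data, and the error level below are invented stand-ins, not the authors' setup.

    ```python
    import math
    import random

    random.seed(1)

    # Two hypothetical competing conceptual models of a signal, plus synthetic observations.
    def model_a(x):  # linear conceptual model
        return 2.0 * x

    def model_b(x):  # competing quadratic conceptual model
        return 0.4 * x ** 2 + 1.0

    xs = [float(i) for i in range(10)]
    truth = [2.1 * x + random.gauss(0, 0.3) for x in xs]   # synthetic data-generating process
    sigma_noise = 0.5                                       # assumed measurement-error std

    def posterior_weights(obs, sigma):
        """Gaussian likelihoods with equal priors give normalized posterior model weights."""
        def log_like(model):
            return sum(-0.5 * ((y - model(x)) / sigma) ** 2 for x, y in zip(xs, obs))
        la, lb = log_like(model_a), log_like(model_b)
        m = max(la, lb)
        wa, wb = math.exp(la - m), math.exp(lb - m)
        return wa / (wa + wb), wb / (wa + wb)

    # Perturb the observations with fresh error realizations and track how much
    # the weight of model A moves around.
    weights_a = []
    for _ in range(500):
        perturbed = [y + random.gauss(0, sigma_noise) for y in truth]
        weights_a.append(posterior_weights(perturbed, sigma_noise)[0])

    mean_w = sum(weights_a) / len(weights_a)
    var_w = sum((w - mean_w) ** 2 for w in weights_a) / len(weights_a)
    print(f"mean weight of model A: {mean_w:.3f}, weighting variance: {var_w:.4f}")
    ```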

  18. Network-based statistical comparison of citation topology of bibliographic databases

    PubMed Central

    Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko

    2014-01-01

    Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions on their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or a scientific evaluation guideline for governments and research agencies. PMID:25263231

  19. [Reliability and validity of the modified Perceived Health Competence Scale (PHCS) Japanese version].

    PubMed

    Togari, Taisuke; Yamazaki, Yoshihiko; Koide, Syotaro; Miyata, Ayako

    2006-01-01

    In community and workplace health plans, the Perceived Health Competence Scale (PHCS) is employed as an index of health competency. The purpose of this research was to examine the reliability and validity of a modified Japanese PHCS. Interviews were sought with 3,000 randomly selected Japanese individuals using a two-step stratified method. Valid PHCS responses were obtained from 1,910 individuals, yielding a 63.7% response rate. Reliability was assessed using Cronbach's alpha coefficient (henceforth, alpha) to evaluate internal consistency, and by employing item-total correlation and alpha coefficient analyses to assess the effect of removal of variables from the model. To examine content validity, we assessed the correlation between the PHCS score and four respondent attribute characteristics, that is, sex, age, the presence of chronic disease, and the existence of chronic disease at age 18. The correlation between PHCS score and commonly employed healthy lifestyle indices was examined to assess construct validity. General linear model statistical analysis was employed. The modified Japanese PHCS demonstrated a satisfactory alpha coefficient of 0.869. Moreover, reliability was confirmed by item-total correlation and alpha coefficient analyses after removal of variables from the model. Differences in PHCS scores were seen between individuals 60 years and older, and younger individuals. Those with current chronic disease, or who had had a chronic disease at age 18, tended to have lower PHCS scores. After controlling for the presence of current or age 18 chronic disease, age, and sex, significant correlations were seen between PHCS scores and tobacco use, dietary habits, and exercise, but not alcohol use or frequency of medical consultation. This study supports the reliability and validity, and hence supports the use, of the modified Japanese PHCS. Future longitudinal research is needed to evaluate the predictive power of modified Japanese PHCS scores, to examine
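    The two internal-consistency checks mentioned above, Cronbach's alpha and corrected item-total correlations, are sketched below on a small invented item-response matrix rather than the actual PHCS data.

    ```python
    import statistics as st  # statistics.correlation requires Python 3.10+

    # Invented responses: rows are respondents, columns are scale items (e.g. Likert 1-5).
    data = [
        [4, 5, 4, 3], [3, 3, 4, 2], [5, 5, 5, 4], [2, 3, 2, 2],
        [4, 4, 3, 3], [5, 4, 5, 5], [3, 2, 3, 3], [4, 4, 4, 4],
    ]
    n_items = len(data[0])

    def cronbach_alpha(rows):
        k = len(rows[0])
        item_vars = [st.pvariance([r[j] for r in rows]) for j in range(k)]
        total_var = st.pvariance([sum(r) for r in rows])
        return k / (k - 1) * (1 - sum(item_vars) / total_var)

    def item_total_correlation(rows, j):
        """Correlation of item j with the sum of the remaining items (corrected item-total)."""
        item = [r[j] for r in rows]
        rest = [sum(r) - r[j] for r in rows]
        return st.correlation(item, rest)

    print(f"Cronbach's alpha: {cronbach_alpha(data):.3f}")
    for j in range(n_items):
        print(f"item {j + 1} corrected item-total r: {item_total_correlation(data, j):.3f}")
    ```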

  20. The Attenuation of Correlation Coefficients: A Statistical Literacy Issue

    ERIC Educational Resources Information Center

    Trafimow, David

    2016-01-01

    Much of the science reported in the media depends on correlation coefficients. But the size of correlation coefficients depends, in part, on the reliability with which the correlated variables are measured. Understanding this is a statistical literacy issue.

  1. Benefits of seasonal forecasts of crop yields

    NASA Astrophysics Data System (ADS)

    Sakurai, G.; Okada, M.; Nishimori, M.; Yokozawa, M.

    2017-12-01

    Major factors behind recent fluctuations in food prices include increased biofuel production and oil price fluctuations. In addition, several extreme climate events that reduced worldwide food production coincided with upward spikes in food prices. The stabilization of crop yields is one of the most important tasks to stabilize food prices and thereby enhance food security. Recent development of technologies related to crop modeling and seasonal weather forecasting has made it possible to forecast future crop yields for maize and soybean. However, the effective use of these technologies remains limited. Here we present the potential benefits of seasonal crop-yield forecasts on a global scale for choice of planting day. For this purpose, we used a model (PRYSBI-2) that can well replicate past crop yields both for maize and soybean. This model system uses a Bayesian statistical approach to estimate the parameters of a basic process-based model of crop growth. The spatial variability of model parameters was considered by estimating the posterior distribution of the parameters from historical yield data by using the Markov-chain Monte Carlo (MCMC) method with a resolution of 1.125° × 1.125°. The posterior distributions of model parameters were estimated for each spatial grid with 30 000 MCMC steps of 10 chains each. By using this model and the estimated parameter distributions, we were able to estimate not only crop yield but also levels of associated uncertainty. We found that the global average crop yield increased about 30% as the result of the optimal selection of planting day and that the seasonal forecast of crop yield had a large benefit in and near the eastern part of Brazil and India for maize and the northern area of China for soybean. In these countries, the effects of El Niño and Indian Ocean dipole are large. The results highlight the importance of developing a system to forecast global crop yields.

  2. Modelling crop yield in Iberia under drought conditions

    NASA Astrophysics Data System (ADS)

    Ribeiro, Andreia; Páscoa, Patrícia; Russo, Ana; Gouveia, Célia

    2017-04-01

    Improved assessment of cereal yield and crop loss under drought conditions is essential to meet increasing economic demands. The growing frequency and severity of extreme drought conditions in the Iberian Peninsula (IP) has likely been responsible for negative impacts on agriculture, namely crop yield losses. Therefore, continuous monitoring of vegetation activity and reliable estimation of drought impacts are crucial contributions to agricultural drought management and the development of suitable information tools. This work aims to assess the influence of drought conditions on agricultural yields over the IP, considering cereal yields from mainly rainfed agriculture for the provinces with the highest productivity. The main target is to develop a strategy to model drought risk to agriculture for wheat yield at the province level. To achieve this goal, a combined assessment was made using a drought indicator (Standardized Precipitation Evapotranspiration Index, SPEI) to evaluate drought conditions together with a widely used vegetation index (Normalized Difference Vegetation Index, NDVI) to monitor vegetation activity. A correlation analysis between detrended wheat yield and SPEI was performed to assess the vegetation response to each time scale of drought occurrence and to identify the moment of the vegetative cycle when crop yields are most vulnerable to drought conditions. The SPEI time scales and months, together with the NDVI months, best related to wheat yield were chosen to perform a multivariate regression analysis to simulate crop yield. Model results are satisfactory and highlight the usefulness of such analysis in the framework of developing a drought risk model for crop yields. From an operational point of view, the results aim to contribute to an improved understanding of crop yield management under dry conditions, particularly adding substantial information on the advantages of combining

  3. Confirmatory Factor Analysis of the Malay Version of the Confusion, Hubbub and Order Scale (CHAOS-6) among Myocardial Infarction Survivors in a Malaysian Cardiac Healthcare Facility.

    PubMed

    Ganasegeran, Kurubaran; Selvaraj, Kamaraj; Rashid, Abdul

    2017-08-01

    The six-item Confusion, Hubbub and Order Scale (CHAOS-6) has been validated as a reliable tool to measure levels of household disorder. We aimed to investigate the goodness of fit and reliability of a new Malay version of the CHAOS-6. The original English version of the CHAOS-6 underwent forward-backward translation into the Malay language. The finalised Malay version was administered to 105 myocardial infarction survivors in a Malaysian cardiac health facility. We performed confirmatory factor analyses (CFAs) using structural equation modelling. A path diagram and fit statistics were produced to determine the Malay version's validity. Composite reliability was tested to determine the scale's reliability. All 105 myocardial infarction survivors participated in the study. The CFA yielded a six-item, one-factor model with excellent fit statistics. Composite reliability for the single-factor CHAOS-6 was 0.65, confirming that the scale is reliable for Malay speakers. The Malay version of the CHAOS-6 was reliable and showed the best fit statistics for our study sample. We thus offer a simple, brief, validated, reliable and novel instrument to measure chaos, the Skala Kecelaruan, Keriuhan & Tertib Terubahsuai (CHAOS-6), for the Malaysian population.

  4. Spatial Correlations in Natural Scenes Modulate Response Reliability in Mouse Visual Cortex

    PubMed Central

    Rikhye, Rajeev V.

    2015-01-01

    Intrinsic neuronal variability significantly limits information encoding in the primary visual cortex (V1). Certain stimuli can suppress this intertrial variability to increase the reliability of neuronal responses. In particular, responses to natural scenes, which have broadband spatiotemporal statistics, are more reliable than responses to stimuli such as gratings. However, very little is known about which stimulus statistics modulate reliable coding and how this occurs at the neural ensemble level. Here, we sought to elucidate the role that spatial correlations in natural scenes play in reliable coding. We developed a novel noise-masking method to systematically alter spatial correlations in natural movies, without altering their edge structure. Using high-speed two-photon calcium imaging in vivo, we found that responses in mouse V1 were much less reliable at both the single neuron and population level when spatial correlations were removed from the image. This change in reliability was due to a reorganization of between-neuron correlations. Strongly correlated neurons formed ensembles that reliably and accurately encoded visual stimuli, whereas reducing spatial correlations reduced the activation of these ensembles, leading to an unreliable code. Together with an ensemble-specific normalization model, these results suggest that the coordinated activation of specific subsets of neurons underlies the reliable coding of natural scenes. SIGNIFICANCE STATEMENT The natural environment is rich with information. To process this information with high fidelity, V1 neurons have to be robust to noise and, consequentially, must generate responses that are reliable from trial to trial. While several studies have hinted that both stimulus attributes and population coding may reduce noise, the details remain unclear. Specifically, what features of natural scenes are important and how do they modulate reliability? This study is the first to investigate the role of spatial

  5. Hyperspectral sensing to detect the impact of herbicide drift on cotton growth and yield

    NASA Astrophysics Data System (ADS)

    Suarez, L. A.; Apan, A.; Werth, J.

    2016-10-01

    Yield loss in crops is often associated with plant disease or external factors such as environment, water supply and nutrient availability. Improper agricultural practices can also introduce risks into the equation. Herbicide drift can be a combination of improper practices and environmental conditions which can create a potential yield loss. As traditional assessment of plant damage is often imprecise and time consuming, the ability of remote and proximal sensing techniques to monitor various bio-chemical alterations in the plant may offer a faster, non-destructive and reliable approach to predict yield loss caused by herbicide drift. This paper examines the prediction capabilities of partial least squares regression (PLS-R) models for estimating yield. Models were constructed with hyperspectral data of a cotton crop sprayed with three simulated doses of the phenoxy herbicide 2,4-D at three different growth stages. Fibre quality, photosynthesis, conductance, and two main hormones, indole acetic acid (IAA) and abscisic acid (ABA) were also analysed. Except for fibre quality and ABA, Spearman correlations have shown that these variables were highly affected by the chemical. Four PLS-R models for predicting yield were developed according to four timings of data collection: 2, 7, 14 and 28 days after the exposure (DAE). As indicated by the model performance, the analysis revealed that 7 DAE was the best time for data collection purposes (RMSEP = 2.6 and R2 = 0.88), followed by 28 DAE (RMSEP = 3.2 and R2 = 0.84). In summary, the results of this study show that it is possible to accurately predict yield after a simulated herbicide drift of 2,4-D on a cotton crop, through the analysis of hyperspectral data, thereby providing a reliable, effective and non-destructive alternative based on the internal response of the cotton leaves.
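    A minimal sketch of fitting a PLS regression from spectral bands to yield, in the spirit of the models described above: it uses scikit-learn's PLSRegression on synthetic reflectance-like data, not the study's hyperspectral measurements, and the latent structure and noise levels are assumptions.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)

    # Synthetic stand-in for hyperspectral data: 60 plots x 200 spectral bands, with yield
    # driven by a couple of latent "physiological" factors plus noise.
    n_plots, n_bands = 60, 200
    latent = rng.normal(size=(n_plots, 2))
    loadings = rng.normal(size=(2, n_bands))
    X = latent @ loadings + 0.1 * rng.normal(size=(n_plots, n_bands))
    y = 3.0 * latent[:, 0] - 1.5 * latent[:, 1] + 0.3 * rng.normal(size=n_plots)

    pls = PLSRegression(n_components=2)
    y_cv = cross_val_predict(pls, X, y, cv=5).ravel()   # cross-validated predictions

    ss_res = np.sum((y - y_cv) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    print("cross-validated R^2:", 1 - ss_res / ss_tot)
    print("RMSEP:", np.sqrt(ss_res / len(y)))
    ```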

  6. Vulnerability of Agriculture to Climate Change as Revealed by Relationships between Simulated Crop Yield and Climate Change Indices

    NASA Astrophysics Data System (ADS)

    King, A. W.; Absar, S. M.; Nair, S.; Preston, B. L.

    2012-12-01

    The vulnerability of agriculture is among the leading concerns surrounding climate change. Agricultural production is influenced by drought and other extremes in weather and climate. In regions of subsistence farming, worst case reductions in yield lead to malnutrition and famine. Reduced surplus contributes to poverty in agrarian economies. In more economically diverse and industrialized regions, variations in agricultural yield can influence the regional economy through market mechanisms. The latter grows in importance as agriculture increasingly services the energy market in addition to markets for food and fiber. Agriculture is historically a highly adaptive enterprise and will respond to future changes in climate with a variety of adaptive mechanisms. Nonetheless, the risk, if not expectation, of increases in climate extremes and hazards exceeding historical experience motivates scientifically based anticipatory assessment of the vulnerability of agriculture to climate change. We investigate the sensitivity component of that vulnerability using EPIC, a well established field-scale model of cropping systems that includes the simulation of economic yield. The core of our analysis is the relationship between simulated yield and various indices of climate change, including the CCI/CLIVAR/JCOM ETCCDI indices, calculated from weather inputs to the model. We complement this core with analysis using the DSSAT cropping system model and exploration of relationships between historical yield statistics and climate indices calculated from weather records. Our analyses are for sites in the Southeast/Gulf Coast region of the United States. We do find "tight" monotonic relationships between annual yield and climate for some indices, especially those associated with available water. More commonly, however, we find an increase in the variability of yield as the index value becomes more extreme. Our findings contribute to understanding the sensitivity of crop yield as part of

  7. Prediction of kharif rice yield at Kharagpur using disaggregated extended range rainfall forecasts

    NASA Astrophysics Data System (ADS)

    Dhekale, B. S.; Nageswararao, M. M.; Nair, Archana; Mohanty, U. C.; Swain, D. K.; Singh, K. K.; Arunbabu, T.

    2017-08-01

    The Extended Range Forecasts System (ERFS) has been generating monthly and seasonal forecasts on a real-time basis throughout the year over India since 2009. India is one of the major rice producers and consumers in South Asia; more than 50% of the Indian population depends on rice as a staple food. Rice is mainly grown in the kharif season, which contributes 84% of the country's total annual rice production. Rice cultivation in India is largely rainfed, so the reliability of the rainfall forecast plays a crucial role in planning the kharif rice crop. In the present study, an attempt has been made to test the reliability of seasonal and sub-seasonal ERFS summer monsoon rainfall forecasts for kharif rice yield prediction at Kharagpur, West Bengal, using the CERES-Rice (DSSAT v4.5) model. The ERFS forecasts are produced as monthly and seasonal mean values and are converted into daily sequences with stochastic weather generators for use with crop growth models. The daily sequences are generated from the ERFS seasonal (June-September) and sub-seasonal (July-September, August-September, and September) summer monsoon rainfall forecasts, which are used as input to the CERES-Rice crop simulation model for yield prediction in hindcast (1985-2008) and real-time (2009-2015) modes. The yield simulated using India Meteorological Department (IMD) observed daily rainfall data is considered the baseline yield for evaluating the performance of yields predicted using the ERFS forecasts. The findings revealed that stochastic disaggregation can be used to disaggregate the monthly/seasonal ERFS forecasts into daily sequences. The year-to-year variability in rice yield at Kharagpur is efficiently predicted by using the ERFS forecast products in hindcast as well as in real time, and significant enhancement in prediction skill is noticed as the season advances, due to the incorporation of observed weather data which reduces uncertainty of

  8. Spatial cue reliability drives frequency tuning in the barn Owl's midbrain

    PubMed Central

    Cazettes, Fanny; Fischer, Brian J; Pena, Jose L

    2014-01-01

    The robust representation of the environment from unreliable sensory cues is vital for the efficient function of the brain. However, how the neural processing captures the most reliable cues is unknown. The interaural time difference (ITD) is the primary cue to localize sound in horizontal space. ITD is encoded in the firing rate of neurons that detect interaural phase difference (IPD). Due to the filtering effect of the head, IPD for a given location varies depending on the environmental context. We found that, in barn owls, at each location there is a frequency range where the head filtering yields the most reliable IPDs across contexts. Remarkably, the frequency tuning of space-specific neurons in the owl's midbrain varies with their preferred sound location, matching the range that carries the most reliable IPD. Thus, frequency tuning in the owl's space-specific neurons reflects a higher-order feature of the code that captures cue reliability. DOI: http://dx.doi.org/10.7554/eLife.04854.001 PMID:25531067

  9. Statistical analysis and yield management in LED design through TCAD device simulation

    NASA Astrophysics Data System (ADS)

    Létay, Gergö; Ng, Wei-Choon; Schneider, Lutz; Bregy, Adrian; Pfeiffer, Michael

    2007-02-01

    This paper illustrates how technology computer-aided design (TCAD), which is nowadays an essential part of CMOS technology, can be applied to LED development and manufacturing. In the first part, the essential electrical and optical models inherent to LED modeling are reviewed. The second part of the work describes a methodology to improve the efficiency of the simulation procedure by using the concept of process compact models (PCMs). The last part demonstrates the capabilities of PCMs using the example of a blue InGaN LED. In particular, a parameter screening is performed to find the most important parameters, an optimization task incorporating the robustness of the design is carried out, and finally the impact of manufacturing tolerances on yield is investigated. It is indicated how the concept of PCMs can contribute to efficient, design-for-manufacturing (DFM)-aware development.

  10. Can bias correction and statistical downscaling methods improve the skill of seasonal precipitation forecasts?

    NASA Astrophysics Data System (ADS)

    Manzanas, R.; Lucero, A.; Weisheimer, A.; Gutiérrez, J. M.

    2018-02-01

    Statistical downscaling methods are popular post-processing tools which are widely used in many sectors to adapt the coarse-resolution biased outputs from global climate simulations to the regional-to-local scale typically required by users. They range from simple and pragmatic Bias Correction (BC) methods, which directly adjust the model outputs of interest (e.g. precipitation) according to the available local observations, to more complex Perfect Prognosis (PP) ones, which indirectly derive local predictions (e.g. precipitation) from appropriate upper-air large-scale model variables (predictors). Statistical downscaling methods have been extensively used and critically assessed in climate change applications; however, their advantages and limitations in seasonal forecasting are not well understood yet. In particular, a key problem in this context is whether they serve to improve the forecast quality/skill of raw model outputs beyond the adjustment of their systematic biases. In this paper we analyze this issue by applying two state-of-the-art BC and two PP methods to downscale precipitation from a multimodel seasonal hindcast in a challenging tropical region, the Philippines. To properly assess the potential added value beyond the reduction of model biases, we consider two validation scores which are not sensitive to changes in the mean (correlation and reliability categories). Our results show that, whereas BC methods maintain or worsen the skill of the raw model forecasts, PP methods can yield significant skill improvement (worsening) in cases for which the large-scale predictor variables considered are better (worse) predicted by the model than precipitation. For instance, PP methods are found to increase (decrease) model reliability in nearly 40% of the stations considered in boreal summer (autumn). Therefore, the choice of a convenient downscaling approach (either BC or PP) depends on the region and the season.

  11. Airborne and ground-based remote sensing for the estimation of evapotranspiration and yield of bean, potato, and sugar beet crops

    NASA Astrophysics Data System (ADS)

    Jayanthi, Harikishan

    The focus of this research was two-fold: (1) extend the reflectance-based crop coefficient approach to non-grain (potato and sugar beet) and vegetable (bean) crops, and (2) develop vegetation index (VI)-yield statistical models for potato and sugar beet crops using high-resolution aerial multispectral imagery. Extensive crop biophysical sampling (leaf area index and aboveground dry biomass sampling) and canopy reflectance measurements formed the backbone of the development of canopy reflectance-based crop coefficients for bean, potato, and sugar beet crops in this study. Reflectance-based crop coefficient equations were developed for the study crops cultivated in Kimberly, Idaho, and subsequently used in water availability simulations of the plant root zone during the 1998 and 1999 seasons. The simulated soil water profiles were compared with independent measurements of actual soil water profiles in the crop root zone in selected fields. It is concluded that the canopy reflectance-based crop coefficient technique can be successfully extended to non-grain crops as well. While traditional basal crop coefficients generally assume uniform growth across a region, the reflectance-based crop coefficients represent the actual crop growth pattern (under less than ideal water availability conditions) in individual fields. The literature on crop canopy interactions with sunlight states that there is a definite correspondence between leaf area index progression during the season and the final yield. In the case of crops like potato and sugar beet, the yield is influenced not only by how early and how quickly the crop establishes its canopy but also by how long the plant stands on the ground in a healthy state. The integrated area under the crop growth curve has shown excellent correlations with hand-dug samples of potato and sugar beet crops in this research. Soil-adjusted vegetation index-yield models were developed and validated using multispectral aerial imagery. Estimated yield images were

  12. Monte Carlo Approach for Reliability Estimations in Generalizability Studies.

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.

    A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…
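    The simulation idea can be sketched without SAS: simulate a persons-crossed-with-items design from a probabilistic response model, estimate the variance components from the usual mean squares, and average the resulting generalizability coefficient over replications. The Rasch-like response model and the design sizes below are simplifying assumptions, not the paper's specification.

    ```python
    import math
    import random

    random.seed(42)

    n_persons, n_items, n_reps = 200, 20, 50

    def simulate_scores():
        """Persons-crossed-with-items design with a Rasch-like response probability."""
        abilities = [random.gauss(0, 1) for _ in range(n_persons)]
        difficulties = [random.gauss(0, 1) for _ in range(n_items)]
        return [[1 if random.random() < 1 / (1 + math.exp(-(a - b))) else 0
                 for b in difficulties] for a in abilities]

    def g_coefficient(scores):
        """Estimate E(rho^2) = var_p / (var_p + var_res / n_items) from mean squares."""
        grand = sum(map(sum, scores)) / (n_persons * n_items)
        person_means = [sum(row) / n_items for row in scores]
        item_means = [sum(row[j] for row in scores) / n_persons for j in range(n_items)]
        ss_p = n_items * sum((m - grand) ** 2 for m in person_means)
        ss_i = n_persons * sum((m - grand) ** 2 for m in item_means)
        ss_tot = sum((x - grand) ** 2 for row in scores for x in row)
        ss_res = ss_tot - ss_p - ss_i
        ms_p = ss_p / (n_persons - 1)
        ms_res = ss_res / ((n_persons - 1) * (n_items - 1))
        var_p = max((ms_p - ms_res) / n_items, 0.0)
        return var_p / (var_p + ms_res / n_items)

    estimates = [g_coefficient(simulate_scores()) for _ in range(n_reps)]
    print("mean G coefficient over replications:", sum(estimates) / n_reps)
    ```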

  13. The reliability of the Glasgow Coma Scale: a systematic review.

    PubMed

    Reith, Florence C M; Van den Brande, Ruben; Synnot, Anneliese; Gruen, Russell; Maas, Andrew I R

    2016-01-01

    The Glasgow Coma Scale (GCS) provides a structured method for assessment of the level of consciousness. Its derived sum score is applied in research and adopted in intensive care unit scoring systems. Controversy exists on the reliability of the GCS. The aim of this systematic review was to summarize evidence on the reliability of the GCS. A literature search was undertaken in MEDLINE, EMBASE and CINAHL. Observational studies that assessed the reliability of the GCS, expressed by a statistical measure, were included. Methodological quality was evaluated with the consensus-based standards for the selection of health measurement instruments checklist and its influence on results considered. Reliability estimates were synthesized narratively. We identified 52 relevant studies that showed significant heterogeneity in the type of reliability estimates used, patients studied, setting and characteristics of observers. Methodological quality was good (n = 7), fair (n = 18) or poor (n = 27). In good quality studies, kappa values were ≥0.6 in 85%, and all intraclass correlation coefficients indicated excellent reliability. Poor quality studies showed lower reliability estimates. Reliability for the GCS components was higher than for the sum score. Factors that may influence reliability include education and training, the level of consciousness and type of stimuli used. Only 13% of studies were of good quality and inconsistency in reported reliability estimates was found. Although the reliability was adequate in good quality studies, further improvement is desirable. From a methodological perspective, the quality of reliability studies needs to be improved. From a clinical perspective, a renewed focus on training/education and standardization of assessment is required.

  14. Principles of Statistics: What the Sports Medicine Professional Needs to Know.

    PubMed

    Riemann, Bryan L; Lininger, Monica R

    2018-07-01

    Understanding the results and statistics reported in original research remains a large challenge for many sports medicine practitioners and, in turn, may be among one of the biggest barriers to integrating research into sports medicine practice. The purpose of this article is to provide minimal essentials a sports medicine practitioner needs to know about interpreting statistics and research results to facilitate the incorporation of the latest evidence into practice. Topics covered include the difference between statistical significance and clinical meaningfulness; effect sizes and confidence intervals; reliability statistics, including the minimal detectable difference and minimal important difference; and statistical power. Copyright © 2018 Elsevier Inc. All rights reserved.
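    Two of the reliability statistics the article covers, the standard error of measurement (SEM) and the minimal detectable change, follow directly from a reliability coefficient and the between-subject standard deviation. The numbers below are invented for a worked illustration.

    ```python
    import math

    # Invented example values: test-retest reliability (e.g. an ICC) and the
    # between-subject standard deviation of the measure.
    icc = 0.85
    sd = 6.0          # score units

    sem = sd * math.sqrt(1 - icc)        # standard error of measurement
    mdc95 = 1.96 * math.sqrt(2) * sem    # minimal detectable change at 95% confidence

    print(f"SEM = {sem:.2f} score units")
    print(f"MDC95 = {mdc95:.2f} score units")
    # A change smaller than MDC95 cannot be distinguished from measurement error
    # with 95% confidence.
    ```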

  15. Surface flaw reliability analysis of ceramic components with the SCARE finite element postprocessor program

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John P.; Nemeth, Noel N.

    1987-01-01

    The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program on statistical fast fracture reliability analysis with quadratic elements for volume distributed imperfections is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface flaw induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose, finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for poly-axial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
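    The two-parameter Weibull model combined with the principle of independent action can be illustrated with a simple element-wise surface-flaw calculation; a full SCARE-style analysis integrates over the finite-element stress field, whereas the stresses, areas, Weibull modulus, and characteristic strength below are invented.

    ```python
    import math

    # Two-parameter Weibull parameters (invented illustration values).
    m = 10.0            # Weibull modulus
    sigma_0 = 400.0     # characteristic strength, MPa, per unit reference area

    # Principal stresses (MPa) and surface areas (mm^2) for a few finite elements.
    elements = [
        ((120.0, 80.0, 0.0), 2.0),
        ((250.0, 30.0, 0.0), 1.5),
        ((90.0, 60.0, 10.0), 3.0),
    ]

    def element_survival(principal_stresses, area):
        """Principle of independent action: each tensile principal stress contributes
        an independent Weibull risk of rupture, scaled by the element area."""
        risk = sum((max(s, 0.0) / sigma_0) ** m for s in principal_stresses)
        return math.exp(-area * risk)

    survival = 1.0
    for stresses, area in elements:
        survival *= element_survival(stresses, area)

    print(f"component survival probability: {survival:.6f}")
    print(f"fast-fracture failure probability: {1 - survival:.6f}")
    ```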

  16. Applied Behavior Analysis and Statistical Process Control?

    ERIC Educational Resources Information Center

    Hopkins, B. L.

    1995-01-01

    Incorporating statistical process control (SPC) methods into applied behavior analysis is discussed. It is claimed that SPC methods would likely reduce applied behavior analysts' intimate contacts with problems and would likely yield poor treatment and research decisions. Cases and data presented by Pfadt and Wheeler (1995) are cited as examples.…

  17. Statistical Analysis Experiment for Freshman Chemistry Lab.

    ERIC Educational Resources Information Center

    Salzsieder, John C.

    1995-01-01

    Describes a laboratory experiment dissolving zinc from galvanized nails in which data can be gathered very quickly for statistical analysis. The data have sufficient significant figures and the experiment yields a nice distribution of random errors. Freshman students can gain an appreciation of the relationships between random error, number of…

  18. Statistics Education Research in Malaysia and the Philippines: A Comparative Analysis

    ERIC Educational Resources Information Center

    Reston, Enriqueta; Krishnan, Saras; Idris, Noraini

    2014-01-01

    This paper presents a comparative analysis of statistics education research in Malaysia and the Philippines by modes of dissemination, research areas, and trends. An electronic search for published research papers in the area of statistics education from 2000-2012 yielded 20 for Malaysia and 19 for the Philippines. Analysis of these papers showed…

  19. [How reliable is the monitoring for doping?].

    PubMed

    Hüsler, J

    1990-12-01

    The reliability of doping control, that is, of the chemical analysis of urine samples in the accredited laboratories and of the resulting decisions, is discussed using probabilistic and statistical methods. Basically, we evaluated and estimated the positive predictive value, which is the probability that a urine sample contains prohibited doping substances given a positive test decision. Since there are no statistical data or evidence for some important quantities related to the predictive value, an exact evaluation is not possible; only conservative lower bounds can be given. We found that the predictive value is at least 90% or 95% with respect to the analysis and decision based on the A sample only, and at least 99% with respect to both the A and B samples. A more realistic assessment, though without sufficient statistical confidence, suggests that the true predictive value is significantly larger than these lower estimates.
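    The lower bound on the positive predictive value is a direct Bayes' rule calculation. The sensitivity, specificity, and prevalence values below are illustrative assumptions, not figures from the article.

    ```python
    def positive_predictive_value(sensitivity, specificity, prevalence):
        """P(substance present | positive test) via Bayes' rule."""
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Illustrative assumptions: a fairly sensitive assay, very high specificity,
    # and a range of plausible prevalences of doping among tested athletes.
    for prevalence in (0.01, 0.05, 0.20):
        ppv = positive_predictive_value(sensitivity=0.90, specificity=0.999, prevalence=prevalence)
        print(f"prevalence {prevalence:.0%}: PPV after one positive sample = {ppv:.3f}")
    # A confirmatory B-sample analysis would multiply the evidence and push the PPV
    # well above 99%, in line with the article's conservative bounds.
    ```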

  20. Calculated fission-fragment yield systematics in the region 74 ≤ Z ≤ 94 and 90 ≤ N ≤ 150

    DOE PAGES

    Möller, Peter; Randrup, Jørgen

    2015-04-01

    aspects of our results in terms of “nuclear-chart” plots showing calculated degrees of asymmetry versus N and Z. Conclusions: Experimental data in this region are rare: only ten or so yield distributions have been measured, some with very limited statistics. We agree with several measurements with higher statistics. Regions where there might be differences between our calculated results and measurements lie near the calculated transition line between symmetric and asymmetric fission. To draw more definite conclusions about the accuracy of the present implementation of the Brownian shape-motion approach in this region, experimental data with reliable statistics for a fair number of suitably located additional nuclides are clearly needed. Because the nuclear potential-energy structure in this region is so different from that in the actinide region, additional experimental data, together with fission-theory studies that incorporate additional dynamical aspects, should provide much new insight.

  1. Factors related to well yield in the fractured-bedrock aquifer of New Hampshire

    USGS Publications Warehouse

    Moore, Richard Bridge; Schwartz, Gregory E.; Clark, Stewart F.; Walsh, Gregory J.; Degnan, James R.

    2002-01-01

    The New Hampshire Bedrock Aquifer Assessment was designed to provide information that can be used by communities, industry, professional consultants, and other interests to evaluate the ground-water development potential of the fractured-bedrock aquifer in the State. The assessment was done at statewide, regional, and well-field scales to identify relations that potentially could increase the success in locating high-yield water supplies in the fractured-bedrock aquifer. Statewide, data were collected for well construction and yield information, bedrock lithology, surficial geology, lineaments, topography, and various derivatives of these basic data sets. Regionally, geologic, fracture, and lineament data were collected for the Pinardville and Windham quadrangles in New Hampshire. The regional scale of the study examined the degree to which predictive well-yield relations, developed as part of the statewide reconnaissance investigation, could be improved by use of quadrangle-scale geologic mapping. Beginning in 1984, water-well contractors in the State were required to report detailed information on newly constructed wells to the New Hampshire Department of Environmental Services (NHDES). The reports contain basic data on well construction, including six characteristics used in this study: well yield, well depth, well use, method of construction, date drilled, and depth to bedrock (or length of casing). The NHDES has determined accurate georeferenced locations for more than 20,000 wells reported since 1984. The availability of this large data set provided an opportunity for a statistical analysis of bedrock-well yields. Well yields in the database ranged from zero to greater than 500 gallons per minute (gal/min). Multivariate regression was used as the primary statistical method of analysis because it is the most efficient tool for predicting a single variable with many potentially independent variables. The dependent variable that was explored in this study was the

  2. Genetic correlations between the cumulative pseudo-survival rate, milk yield, and somatic cell score during lactation in Holstein cattle in Japan using a random regression model.

    PubMed

    Sasaki, O; Aihara, M; Nishiura, A; Takeda, H

    2017-09-01

    Trends in genetic correlations between longevity, milk yield, and somatic cell score (SCS) during lactation in cows are difficult to trace. In this study, changes in the genetic correlations between milk yield, SCS, and cumulative pseudo-survival rate (PSR) during lactation were examined, and the effect of milk yield and SCS information on the reliability of the estimated breeding value (EBV) of PSR was determined. Test-day milk yield, SCS, and PSR records were obtained for Holstein cows in Japan from 2004 to 2013. A random subset of the data was used for the analysis (825 herds, 205,383 cows). This data set was randomly divided into 5 subsets (162-168 herds, 83,389-95,854 cows), and genetic parameters were estimated in each subset independently. Data were analyzed using multiple-trait random regression animal models including either the residual effect for the whole lactation period (H0), the residual effects for 5 lactation stages (H5), or both of these residual effects (HD). Milk yield heritability increased until 310 to 351 d in milk (DIM), and SCS heritability increased until 330 to 344 DIM. Heritability estimates for PSR increased with DIM from 0.00 to 0.05. The genetic correlation between milk yield and SCS became increasingly negative, falling below -0.60 at 455 DIM. The genetic correlation between milk yield and PSR increased until 342 to 355 DIM (0.53-0.57). The genetic correlation between SCS and PSR was -0.82 to -0.83 at around 180 DIM, and weakened to -0.65 to -0.71 at 455 DIM. The reliability of the EBV of PSR for sires with 30 or more recorded daughters was 0.17 to 0.45 when the effects of correlated traits were ignored. The maximum reliability of the EBV was observed at 257 (H0) or 322 (HD) DIM. When the correlations of PSR with milk yield and SCS were considered, the reliabilities of the PSR estimates increased to 0.31-0.76. The genetic parameter estimates of H5 were the same as those for HD. The rank correlation coefficients of the EBV of PSR between H0 and H5 or HD were

  3. Interhemispheric Inhibition Measurement Reliability in Stroke: A Pilot Study

    PubMed Central

    Cassidy, Jessica M.; Chu, Haitao; Chen, Mo; Kimberley, Teresa J.; Carey, James R.

    2016-01-01

    Objective Reliable transcranial magnetic stimulation (TMS) measures for probing corticomotor excitability are important when assessing the physiological effects of non-invasive brain stimulation. The primary objective of this study was to examine test-retest reliability of an interhemispheric inhibition (IHI) index measurement in stroke. Materials and Methods Ten subjects with chronic stroke (≥ 6 months) completed two IHI testing sessions per week for three weeks (six testing sessions total). A single investigator measured IHI in the contra- to-ipsilesional primary motor cortex direction and in the opposite direction using bilateral paired-pulse TMS. Weekly sessions were separated by 24 hours with a 1-week washout period separating testing weeks. To determine if motor-evoked potential (MEP) quantification method affected measurement reliability, IHI indices computed from both MEP amplitude and area responses were found. Reliability was assessed with two-way, mixed intraclass correlation coefficients (ICC(3,k)). Standard error of measurement and minimal detectable difference statistics were also determined. Results With the exception of the initial testing week, IHI indices measured in the contra-to-ipsilesional hemisphere direction demonstrated moderate to excellent reliability (ICC = 0.725 – 0.913). Ipsi-to-contralesional IHI indices depicted poor or invalid reliability estimates throughout the three-week testing duration (ICC= −1.153 – 0.105). The overlap of ICC 95% confidence intervals suggested that IHI indices using MEP amplitude vs. area measures did not differ with respect to reliability. Conclusions IHI indices demonstrated varying magnitudes of reliability irrespective of MEP quantification method. Several strategies for improving IHI index measurement reliability are discussed. PMID:27333364
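
    A compact sketch of the ICC(3,k), standard error of measurement, and minimal detectable difference computations named above, applied to a made-up subjects-by-sessions matrix (the data and the two-session layout are assumptions for illustration):

    ```python
    import numpy as np

    def icc_3k(Y):
        """ICC(3,k): two-way mixed effects, consistency, average of k measurements.
        Y is an (n_subjects x k_sessions) array."""
        n, k = Y.shape
        grand = Y.mean()
        ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()
        ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()
        ss_total = ((Y - grand) ** 2).sum()
        ms_rows = ss_rows / (n - 1)
        ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / ms_rows

    # Illustrative IHI indices: 10 subjects x 2 sessions (all values made up).
    rng = np.random.default_rng(1)
    truth = rng.uniform(0.2, 0.9, 10)
    Y = np.column_stack([truth + rng.normal(0, 0.05, 10) for _ in range(2)])

    icc = icc_3k(Y)
    sem = Y.std(ddof=1) * np.sqrt(1 - icc)   # standard error of measurement
    mdd = 1.96 * np.sqrt(2) * sem            # minimal detectable difference
    print(round(icc, 3), round(sem, 3), round(mdd, 3))
    ```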

  4. Small numbers, disclosure risk, security, and reliability issues in Web-based data query systems.

    PubMed

    Rudolph, Barbara A; Shah, Gulzar H; Love, Denise

    2006-01-01

    This article describes the process for developing consensus guidelines and tools for releasing public health data via the Web and highlights approaches leading agencies have taken to balance disclosure risk with public dissemination of reliable health statistics. An agency's choice of statistical methods for improving the reliability of released data for Web-based query systems is based upon a number of factors, including query system design (dynamic analysis vs preaggregated data and tables), population size, cell size, data use, and how data will be supplied to users. The article also describes those efforts that are necessary to reduce the risk of disclosure of an individual's protected health information.

  5. Development of a European Ensemble System for Seasonal Prediction: Application to crop yield

    NASA Astrophysics Data System (ADS)

    Terres, J. M.; Cantelaube, P.

    2003-04-01

    Western European agriculture is highly intensive, and weather is the main source of uncertainty for crop yield assessment and for crop management. In the current system, at the time when a crop yield forecast is issued, the weather conditions leading up to harvest time are unknown and are therefore a major source of uncertainty. The use of seasonal weather forecasts would bring additional information for the remaining crop season and has valuable benefits for improving the management of agricultural markets and environmentally sustainable farm practices. An innovative method for supplying seasonal forecast information to crop simulation models has been developed in the frame of the EU-funded research project DEMETER. It consists of running a crop model on each individual member of the seasonal hindcasts to derive a probability distribution of crop yield. Preliminary results for the cumulative probability function of wheat yield provide information on both the yield anomaly and the reliability of the forecast. Based on the spread of the probability distribution, the end-user can directly quantify the benefits and risks of taking weather-sensitive decisions.
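
    Running a crop model on every ensemble member and summarising the resulting yields as an empirical distribution can be sketched as follows; the ensemble values and the climatological threshold are made-up assumptions:

    ```python
    import numpy as np

    # Hypothetical wheat yields (t/ha) simulated from each seasonal-forecast ensemble member.
    ensemble_yields = np.array([5.1, 4.7, 5.6, 4.2, 5.0, 4.9, 5.4, 3.9, 5.2, 4.8])

    def exceedance_probability(yields, threshold):
        """P(yield >= threshold) estimated from the ensemble."""
        return np.mean(yields >= threshold)

    climatological_mean = 4.9
    print("P(yield below normal):", 1 - exceedance_probability(ensemble_yields, climatological_mean))
    # The spread of the ensemble (e.g. its interquartile range) indicates forecast reliability.
    print("IQR:", np.percentile(ensemble_yields, 75) - np.percentile(ensemble_yields, 25))
    ```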

  6. Statistical crystallography of surface micelle spacing

    NASA Technical Reports Server (NTRS)

    Noever, David A.

    1992-01-01

    The aggregation of the recently reported surface micelles of block polyelectrolytes is analyzed using techniques of statistical crystallography. A polygonal lattice (Voronoi mosaic) connects center-to-center points, yielding statistical agreement with crystallographic predictions; Aboav-Weaire's law and Lewis's law are verified. This protocol supplements the standard analysis of surface micelles leading to aggregation number determination and, when compared to numerical simulations, allows further insight into the random partitioning of surface films. In particular, agreement with Lewis's law has been linked to the geometric packing requirements of filling two-dimensional space which compete with (or balance) physical forces such as interfacial tension, electrostatic repulsion, and van der Waals attraction.
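
    A rough illustration of the Voronoi-based protocol: build the mosaic over the micelle centres, then tabulate mean cell area against the number of sides, which should be roughly linear if Lewis's law holds (the random points below stand in for real centre coordinates):

    ```python
    import numpy as np
    from scipy.spatial import Voronoi

    # Illustrative micelle centre coordinates (random points stand in for observed centres).
    rng = np.random.default_rng(2)
    points = rng.uniform(0, 1, size=(400, 2))
    vor = Voronoi(points)

    sides, areas = [], []
    for region_index in vor.point_region:
        region = vor.regions[region_index]
        if -1 in region or len(region) == 0:   # skip unbounded cells at the border
            continue
        poly = vor.vertices[region]
        # Shoelace formula for the polygon area.
        x, y = poly[:, 0], poly[:, 1]
        area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
        sides.append(len(region))
        areas.append(area)

    # Lewis's law: mean cell area grows linearly with the number of sides n.
    for n in sorted(set(sides)):
        mask = np.array(sides) == n
        print(n, np.mean(np.array(areas)[mask]))
    ```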

  7. The role of climatic variables in winter cereal yields: a retrospective analysis.

    PubMed

    Luo, Qunying; Wen, Li

    2015-02-01

    This study examined the effects of observed climate including [CO2] on winter cereal [winter wheat (Triticum aestivum), barley (Hordeum vulgare) and oat (Avena sativa)] yields by adopting robust statistical analysis/modelling approaches (i.e. autoregressive fractionally integrated moving average, generalised additive model) based on long time series of historical climate data and cereal yield data at three locations (Moree, Dubbo and Wagga Wagga) in New South Wales, Australia. Research results show that (1) growing season rainfall was significantly, positively and non-linearly correlated with crop yield at all locations considered; (2) [CO2] was significantly, positively and non-linearly correlated with crop yields in all cases except wheat and barley yields at Wagga Wagga; (3) growing season maximum temperature was significantly, negatively and non-linearly correlated with crop yields at Dubbo and Moree (except for barley); and (4) radiation was only significantly correlated with oat yield at Wagga Wagga. This information will help to identify appropriate management adaptation options in dealing with the risks and taking advantage of the opportunities of climate change.

  8. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, of system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. System reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing the tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
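
    The Bayesian updating step can be sketched with a conjugate gamma prior on an exponential failure rate; this is not DATMAN's actual menu of distributions, and all numbers are assumed for illustration:

    ```python
    import numpy as np

    # Conjugate gamma-exponential update: prior Gamma(alpha, beta) on the failure rate lambda.
    # Observing n failures over a cumulative operating time T gives Gamma(alpha + n, beta + T).
    alpha_prior, beta_prior = 2.0, 1000.0        # illustrative prior (hours)
    n_failures, operating_hours = 3, 5200.0      # hypothetical new field data

    alpha_post = alpha_prior + n_failures
    beta_post = beta_prior + operating_hours

    lam_mean = alpha_post / beta_post            # posterior mean failure rate (per hour)
    mission_time = 500.0
    reliability = np.exp(-lam_mean * mission_time)   # R(t) evaluated at the posterior mean rate
    print(f"posterior mean failure rate: {lam_mean:.2e}/h, R({mission_time:.0f} h) = {reliability:.3f}")
    ```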

  9. The impact exploration of agricultural drought on winter wheat yield in the North China Plain

    NASA Astrophysics Data System (ADS)

    Yang, Jianhua; Wu, Jianjun; Han, Xinyi; Zhou, Hongkui

    2017-04-01

    Drought is one of the most serious agro-climatic disasters in the North China Plain and has a great influence on winter wheat yield. Global warming exacerbates the drought trend of this region, so it is important to study the effect of drought on winter wheat yield. In order to assess drought-induced winter wheat yield losses, the SPEI (standardized precipitation evapotranspiration index), a widely used drought index, was selected to quantify drought from 1981 to 2013. Additionally, the EPIC (Environmental Policy Integrated Climate) crop model was used to simulate winter wheat yield at 47 stations in this region from 1981 to 2013. We analyzed the relationship between winter wheat yield and the SPEI at different time scales in each month during the growing season. The trends of the SPEI and the trends of winter wheat yield at the 47 stations over the past 32 years were compared with each other. To further quantify the effect of drought on winter wheat yield, we defined years in which the SPEI varied from -0.5 to 0.5 as normal years, calculated the average winter wheat yield of the normal years as a reference yield, and then calculated the reduction ratios of winter wheat in severe drought years based on the yields mentioned above. As a reference, we compared the results with the reduction ratios calculated from the statistical yield data. The results showed that the 9- to 12-month-scale SPEI in April, May and June had a high correlation with winter wheat yield. The trends of the SPEI and the trends of winter wheat yield over the past 32 years showed a positive correlation (p<0.01) and similar spatial distributions. The proportion of stations with the same change trend between the SPEI and winter wheat yield was 70%, indicating that drought was the main factor leading to a decline in winter wheat yield in this region. The reduction ratios based on the simulated yield and the reduction ratios calculated from the statistical yield data have a high positive
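
    The reduction-ratio calculation described above (reference yield from SPEI-defined normal years, losses in severe drought years) can be sketched as below; the station records and the -1.5 severe-drought threshold are illustrative assumptions:

    ```python
    import pandas as pd

    # Hypothetical station records: year, SPEI (e.g. 12-month scale in May), simulated yield (kg/ha).
    df = pd.DataFrame({
        "year":  range(1981, 1991),
        "spei":  [0.2, -0.8, 0.4, -1.6, 0.1, 0.6, -0.3, -1.9, 0.0, 0.5],
        "yield": [4200, 3600, 4300, 2900, 4100, 4400, 4000, 2700, 4150, 4350],
    })

    normal = df[df["spei"].between(-0.5, 0.5)]       # "normal" years as defined in the abstract
    reference_yield = normal["yield"].mean()

    severe = df[df["spei"] <= -1.5]                  # severe drought years (threshold assumed)
    reduction_ratio = (reference_yield - severe["yield"]) / reference_yield
    print(reduction_ratio.round(3).tolist())
    ```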

  10. The Reliability of a Novel Mobile 3-dimensional Wound Measurement Device.

    PubMed

    Anghel, Ersilia L; Kumar, Anagha; Bigham, Thomas E; Maselli, Kathryn M; Steinberg, John S; Evans, Karen K; Kim, Paul J; Attinger, Christopher E

    2016-11-01

    Objective assessment of wound dimensions is essential for tracking progression and determining treatment effectiveness. A reliability study was designed to establish intrarater and interrater reliability of a novel mobile 3-dimensional wound measurement (3DWM) device. Forty-five wounds were assessed by 2 raters using a 3DWM device to obtain length, width, area, depth, and volume measurements. Wounds were also measured manually, using a disposable ruler and digital planimetry. The intraclass correlation coefficient (ICC) was used to establish intrarater and interrater reliability. High levels of intrarater and interrater agreement were observed for area, length, and width; ICC = 0.998, 0.977, 0.955 and 0.999, 0.997, 0.995, respectively. Moderate levels of intrarater (ICC = 0.888) and interrater (ICC = 0.696) agreement were observed for volume. Lastly, depth yielded an intrarater ICC of 0.360 and an interrater ICC of 0.649. Measures from the 3DWM device were highly correlated with those obtained from scaled photography for length, width, and area (ρ = 0.997, 0.988, 0.997, P < 0.001). The 3DWM device yielded correlations of ρ = 0.990, 0.987, 0.996 with P < 0.001 for length, width, and area when compared to manual measurements. The 3DWM device was found to be highly reliable for measuring wound areas for a range of wound sizes and types as compared to manual measurement and digital planimetry. The depth and therefore volume measurement using the 3DWM device was found to have a lower ICC, but volume ICC alone was moderate. Overall, this device offers a mobile option for objective wound measurement in the clinical setting.

  11. A Unified Statistical Rain-Attenuation Model for Communication Link Fade Predictions and Optimal Stochastic Fade Control Design Using a Location-Dependent Rain-Statistic Database

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1990-01-01

    A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics database is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.

  12. Identifying reliable independent components via split-half comparisons

    PubMed Central

    Groppe, David M.; Makeig, Scott; Kutas, Marta

    2011-01-01

    Independent component analysis (ICA) is a family of unsupervised learning algorithms that have proven useful for the analysis of the electroencephalogram (EEG) and magnetoencephalogram (MEG). ICA decomposes an EEG/MEG data set into a basis of maximally temporally independent components (ICs) that are learned from the data. As with any statistic, a concern with using ICA is the degree to which the estimated ICs are reliable. An IC may not be reliable if ICA was trained on insufficient data, if ICA training was stopped prematurely or at a local minimum (for some algorithms), or if multiple global minima were present. Consequently, evidence of ICA reliability is critical for the credibility of ICA results. In this paper, we present a new algorithm for assessing the reliability of ICs based on applying ICA separately to split-halves of a data set. This algorithm improves upon existing methods in that it considers both IC scalp topographies and activations, uses a probabilistically interpretable threshold for accepting ICs as reliable, and requires applying ICA only three times per data set. As evidence of the method’s validity, we show that the method can perform comparably to more time intensive bootstrap resampling and depends in a reasonable manner on the amount of training data. Finally, using the method we illustrate the importance of checking the reliability of ICs by demonstrating that IC reliability is dramatically increased by removing the mean EEG at each channel for each epoch of data rather than the mean EEG in a prestimulus baseline. PMID:19162199
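
    A highly simplified sketch of the split-half idea: run ICA separately on the two halves of a recording and match components across halves by the correlation of their mixing ("scalp-map") columns. Real EEG/MEG pipelines, and the paper's own algorithm with its probabilistic acceptance threshold, involve more than this; the synthetic sources and channel count below are assumptions:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Synthetic "recording": 3 sources mixed into 8 channels, plus a little noise.
    rng = np.random.default_rng(3)
    sources = np.vstack([np.sin(np.linspace(0, 60, 20000)),
                         np.sign(np.sin(np.linspace(0, 37, 20000))),
                         rng.laplace(size=20000)])
    mixing = rng.normal(size=(8, 3))
    data = mixing @ sources + 0.05 * rng.normal(size=(8, 20000))

    halves = [data[:, :10000], data[:, 10000:]]
    maps = []
    for half in halves:
        ica = FastICA(n_components=3, random_state=0)
        ica.fit(half.T)              # scikit-learn expects (samples x features)
        maps.append(ica.mixing_)     # (channels x components), i.e. the "scalp maps"

    # Best absolute correlation of each first-half map with any second-half map.
    corr = np.corrcoef(maps[0].T, maps[1].T)[:3, 3:]
    print(np.max(np.abs(corr), axis=1).round(3))   # values near 1 suggest reproducible ICs
    ```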

  13. Report on Wind Turbine Subsystem Reliability - A Survey of Various Databases (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, S.

    2013-07-01

    The wind industry has been challenged by premature subsystem/component failures. Various reliability data collection efforts have demonstrated their value in supporting wind turbine reliability and availability research and development and industrial activities. However, most information on these data collection efforts is scattered and not in a centralized place. With the objective of obtaining updated reliability statistics of wind turbines and/or subsystems so as to benefit future wind reliability and availability activities, this report was put together based on a survey of various reliability databases that are accessible directly or indirectly by NREL. For each database, whenever feasible, a brief description summarizing database population, life span, and data collected is given along with its features and status. Selected results generated from the database and deemed beneficial to the industry are then highlighted. This report concludes with several observations obtained throughout the survey and several reliability data collection opportunities for the future.

  14. Accuracy and reliability of the Pfeffer Questionnaire for the Brazilian elderly population

    PubMed Central

    Dutra, Marina Carneiro; Ribeiro, Raynan dos Santos; Pinheiro, Sarah Brandão; de Melo, Gislane Ferreira; Carvalho, Gustavo de Azevedo

    2015-01-01

    The aging population calls for instruments to assess functional and cognitive impairment in the elderly, aiming to prevent conditions that affect functional abilities. Objective To verify the accuracy and reliability of the Pfeffer Questionnaire (FAQ) for the Brazilian elderly population and to evaluate the reliability and reproducibility of the translated version. Methods The Brazilian version of the FAQ was applied to 110 elderly people divided into two groups. Both groups were assessed by two blinded investigators at baseline and again after 15 days. In order to verify the accuracy and reliability of the instrument, sensitivity and specificity for the presence or absence of functional and cognitive decline were calculated for various cut-off points, and the ROC curve was computed. Intra- and inter-examiner reliability were assessed using the intraclass correlation coefficient (ICC) and Bland-Altman plots. Results For the occurrence of cognitive decline, the ROC curve yielded an area under the curve of 0.909 (95%CI of 0.845 to 0.972), sensitivity of 75.68% (95%CI of 93.52% to 100%) and specificity of 97.26%. For the occurrence of functional decline, the ROC curve yielded an area under the curve of 0.851 (95%CI of 64.52% to 87.33%) and specificity of 80.36% (95%CI of 69.95% to 90.76%). The ICC was excellent, with all values exceeding 0.75. On the Bland-Altman plots, intra-examiner agreement was good (p>0.05), with differences consistently close to 0. A systematic difference was found for inter-examiner agreement. Conclusion The Pfeffer Questionnaire is applicable to the Brazilian elderly population and showed reliability and reproducibility comparable to the original test. PMID:29213959
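
    The ROC-curve step (area under the curve plus sensitivity/specificity at candidate cut-offs) can be sketched with scikit-learn; the FAQ scores and decline labels below are made up for illustration:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    # Illustrative data: FAQ scores and a binary indicator of cognitive decline (made up).
    scores = np.array([2, 5, 1, 9, 12, 3, 7, 15, 0, 11, 4, 18, 6, 13, 2, 10])
    decline = np.array([0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1])

    print("AUC:", roc_auc_score(decline, scores))
    fpr, tpr, thresholds = roc_curve(decline, scores)
    for t, se, sp in zip(thresholds, tpr, 1 - fpr):
        print(f"cut-off {t}: sensitivity {se:.2f}, specificity {sp:.2f}")
    ```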

  15. General motor function assessment scale--reliability of a Norwegian version.

    PubMed

    Langhammer, Birgitta; Lindmark, Birgitta

    2014-01-01

    The General Motor Function assessment scale (GMF) measures activity-related dependence, pain and insecurity among older people in frail health. The aim of the present study was to translate the GMF into a Norwegian version (N-GMF) and establish its reliability and clinical feasibility. The GMF was translated using a forward and backward process and tested on a convenience sample of 30 frail elderly people. The intra-rater reliability tests were performed by three physiotherapists, and the inter-rater reliability test was done by the same three plus nine independent colleagues. The statistical analyses were performed with a pairwise analysis for intra- and inter-rater reliability, using Cronbach's α, Percentage Agreement (PA), Svensson's rank-transformable method and Cohen's κ. The Cronbach's α coefficients for the different subscales of the N-GMF were 0.68 for Dependency, 0.73 for Pain and 0.75 for Insecurity. Intra-rater reliability: The variation in the PA for the total score was 40-70% in Dependence, 30-40% in Pain and 30-60% in Insecurity. The Relative Rank Variant (RV) indicated a modest individual bias and an augmented rank-order agreement coefficient ra of 0.96, 0.96 and 0.99, respectively. The variation in the κ statistics was 0.27-0.62 for Dependence, 0.17-0.35 for Pain and 0.13-0.47 for Insecurity. Inter-rater reliability: The PA between different testers in Dependence, Pain and Insecurity was 74%, 89% and 74%, respectively. The augmented rank-order agreement coefficients were: for Dependence r(a) = 0.97; for Pain, r(a) = 0.99; and for Insecurity, r(a) = 0.99. The N-GMF is a fairly reliable instrument for use with frail elderly people, with intra-rater and inter-rater reliability moderate in Dependence and slight to fair in Pain and Insecurity. Its clinical usefulness was stressed with regard to its main focus, the frail elderly, and to communication within a multidisciplinary team. Implications for Rehabilitation The Norwegian
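
    Cronbach's α, used above for each subscale, can be computed directly from an items matrix; the ratings below are made-up values, not N-GMF data:

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents x n_items) array of item scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Illustrative subscale scores (e.g. a handful of Dependency items, made-up ratings).
    ratings = np.array([[1, 2, 1, 2],
                        [3, 3, 2, 3],
                        [2, 2, 2, 1],
                        [4, 3, 4, 4],
                        [1, 1, 2, 1],
                        [3, 4, 3, 3]])
    print(round(cronbach_alpha(ratings), 2))
    ```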

  16. Identification of saline soils with multi-year remote sensing of crop yields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobell, D; Ortiz-Monasterio, I; Gurrola, F C

    2006-10-17

    Soil salinity is an important constraint to agricultural sustainability, but accurate information on its variation across agricultural regions or its impact on regional crop productivity remains sparse. We evaluated the relationships between remotely sensed wheat yields and salinity in an irrigation district in the Colorado River Delta Region. The goals of this study were to (1) document the relative importance of salinity as a constraint to regional wheat production and (2) develop techniques to accurately identify saline fields. Estimates of wheat yield from six years of Landsat data agreed well with ground-based records on individual fields (R² = 0.65). Salinity measurements on 122 randomly selected fields revealed that average 0-60 cm salinity levels > 4 dS m⁻¹ reduced wheat yields, but the relative scarcity of such fields resulted in less than 1% regional yield loss attributable to salinity. Moreover, low yield was not a reliable indicator of high salinity, because many other factors contributed to yield variability in individual years. However, temporal analysis of yield images showed that a significant fraction of fields exhibited consistently low yields over the six-year period. A subsequent survey of 60 additional fields, half of which were consistently low yielding, revealed that this targeted subset had significantly higher salinity at 30-60 cm depth than the control group (p = 0.02). These results suggest that high subsurface salinity is associated with consistently low yields in this region, and that multi-year yield maps derived from remote sensing therefore provide an opportunity to map salinity across agricultural regions.

  17. Use of remote sensing, geographic information systems, and spatial statistics to assess spatio-temporal population dynamics of Heterodera glycines and soybean yield quantity and quality

    NASA Astrophysics Data System (ADS)

    Moreira, Antonio Jose De Araujo

    Soybean, Glycine max (L.) Merr., is an important source of oil and protein worldwide, and soybean cyst nematode (SCN), Heterodera glycines, is among the most important yield-limiting factors in soybean production worldwide. Early detection of SCN is difficult because soybean plants infected by SCN often do not exhibit visible symptoms. It was hypothesized, however, that reflectance data obtained by remote sensing from soybean canopies may be used to detect plant stress caused by SCN infection. Moreover, reflectance measurements may be related to soybean growth and yield. Two field experiments were conducted from 2000 to 2002 to study the relationships among reflectance data, quantity and quality of soybean yield, and SCN population densities. The best relationships between reflectance and the quantity of soybean grain yield occurred when reflectance data were obtained late August to early September. Similarly, reflectance was best related to seed oil and seed protein content and seed size when measured during late August/early September. Grain quality-reflectance relationships varied spatially and temporally. Reflectance measured early or late in the season had the best relationships with SCN population densities measured at planting. Soil properties likely affected reflectance measurements obtained at the beginning of the season and somehow may have been related to SCN population densities at planting. Reflectance data obtained at the end of the growing season likely was affected by early senescence of SCN-infected soybeans. Spatio-temporal aspects of SCN population densities in both experiments were assessed using spatial statistics and regression analyses. In the 2000 and 2001 growing seasons, spring-to-fall changes in SCN population densities were best related to SCN population densities at planting for both experiments. However, within-season changes in SCN population densities were best related to SCN population densities at harvest for both experiments in

  18. Interrater and intrarater reliability of FDI criteria applied to photographs of posterior tooth-colored restorations.

    PubMed

    Kim, Dohyun; Ahn, So-Yeon; Kim, Junyoung; Park, Sung-Ho

    2017-07-01

    Since 2007, the FDI World Dental Federation (FDI) criteria have been used for the clinical evaluation of dental restorations. However, the reliability of the FDI criteria has not been sufficiently addressed. The purpose of this study was to assess and compare the interrater and intrarater reliability of the FDI criteria by evaluating posterior tooth-colored restorations photographically. A total of 160 clinical photographs of posterior tooth-colored restorations were evaluated independently by 5 raters with 9 of the FDI criteria suitable for photographic evaluation. The raters recorded the score of each restoration by using 5 grades, and the score was dichotomized into the clinical evaluation scores. After 1 month, 2 of the raters reevaluated the same set of 160 photographs in random order. To estimate the interrater reliability among the 5 raters, the proportion of agreement was calculated, and the Fleiss multirater kappa statistic was used. For the intrarater reliability, the proportion of agreement was calculated, and the Cohen standard kappa statistic was used for each of the 2 raters. The interrater proportion of agreement was 0.41 to 0.57, and the kappa value was 0.09 to 0.39. Overall, the intrarater reliability was higher than the interrater reliability, and rater 1 demonstrated higher intrarater reliability than rater 2. The proportion of agreement and kappa values increased when the 5 scores were dichotomized. The reliability was relatively lower for the esthetic properties compared with the functional or biological properties. Within the limitations of this study, the FDI criteria presented slight to fair interrater reliability and fair to excellent intrarater reliability in the photographic evaluation of posterior tooth-colored restorations. The reliability was improved by simplifying the evaluation scores. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
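
    Intrarater agreement with Cohen's kappa on dichotomised scores can be sketched as below; the two score vectors are hypothetical, and Fleiss' kappa for the five raters would instead use a ratings-count table (e.g. statsmodels.stats.inter_rater.fleiss_kappa):

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical dichotomised scores (clinically acceptable = 1) from one rater's two sessions.
    session_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1]
    session_2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1]

    print("intrarater kappa:", round(cohen_kappa_score(session_1, session_2), 2))
    ```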

  19. Comparing Interrater reliability between eye examination and eye self-examination 1

    PubMed Central

    de Lima, Maria Alzete; Pagliuca, Lorita Marlena Freitag; do Nascimento, Jennara Cândido; Caetano, Joselany Áfio

    2017-01-01

    Objective: to compare interrater reliability concerning two eye assessment methods. Method: quasi-experimental study conducted with 324 college students, including eye self-examination and eye assessment performed by the researchers in a public university. The kappa coefficient was used to verify agreement. Results: reliability coefficients between raters ranged from 0.85 to 0.95, with statistical significance at 0.05. The exams to check near acuity and peripheral vision presented a reasonable kappa >0.2. The remaining coefficients were higher, ranging from very to totally reliable. Conclusion: comparatively, the results of both methods were similar. The virtual manual on eye self-examination can be used to screen for eye conditions. PMID:29069269

  20. Health measurement using the ICF: Test-retest reliability study of ICF codes and qualifiers in geriatric care

    PubMed Central

    Okochi, Jiro; Utsunomiya, Sakiko; Takahashi, Tai

    2005-01-01

    Background The International Classification of Functioning, Disability and Health (ICF) was published by the World Health Organization (WHO) to standardize descriptions of health and disability. Little is known about the reliability and clinical relevance of measurements using the ICF and its qualifiers. This study examines the test-retest reliability of ICF codes and the rate of immeasurability in long-term care settings for the elderly to evaluate the clinical applicability of the ICF and its qualifiers, and of the ICF checklist. Methods Reliability of 85 body function (BF) items and 152 activity and participation (AP) items of the ICF was studied using a test-retest procedure with a sample of 742 elderly persons from 59 institutional and at-home care service centers. Test-retest reliability was estimated using the weighted kappa statistic. The clinical relevance of the ICF was estimated by calculating the immeasurability rate. The effect of the measurement settings and evaluators' experience was analyzed by stratification of these variables. The properties of each item were evaluated using both the kappa statistic and the immeasurability rate to assess the clinical applicability of WHO's ICF checklist in the elderly care setting. Results The medians of the weighted kappa statistics for the 85 BF and 152 AP items were 0.46 and 0.55, respectively. The reproducibility statistics improved when the measurements were performed by experienced evaluators. Some chapters, such as genitourinary and reproductive functions in the BF domain and major life areas in the AP domain, contained more items with low test-retest reliability and more items rated as immeasurable than the other chapters. Some items in the ICF checklist were rated as unreliable and immeasurable. Conclusion The reliability of the ICF codes when measured with the current ICF qualifiers is relatively low. The increase in reliability with evaluators' experience suggests that proper education will have positive

  1. Integrated model for predicting rice yield with climate change

    NASA Astrophysics Data System (ADS)

    Park, Jin-Ki; Das, Amrita; Park, Jong-Hwa

    2018-04-01

    Rice is the chief agricultural product and one of the primary food sources. For this reason, it is of pivotal importance for the worldwide economy and development. Therefore, forecasting yield is vital in a decision-support system both for farmers and for the planning and management of the country's economy. However, crop yield, which depends on the soil-bio-atmospheric system, is difficult to represent in statistical language. This paper describes a novel approach for predicting rice yield using an artificial neural network, spatial interpolation, remote sensing and GIS methods. Herein, the variation in the yield is attributed to climatic parameters and crop health, and the normalized difference vegetation index from MODIS is used as an indicator of plant health and growth. Due importance was given to scaling up the input parameters using spatial interpolation and GIS and to minimising the sources of error in every step of the modelling. The low percentage error (2.91) and high correlation (0.76) signify the robust performance of the proposed model. This simple but effective approach is then used to estimate the influence of climate change on South Korean rice production. Under the RCP8.5 scenario, the projected rise in temperature may increase rice yield throughout South Korea.
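
    A stripped-down stand-in for the neural-network yield model: a small multilayer perceptron regressing yield on temperature, precipitation, and NDVI. The predictor set, network size, and all data are assumptions; the actual study also relies on spatial interpolation and GIS preprocessing:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Hypothetical records: growing-season mean temperature (deg C), precipitation (mm),
    # and peak NDVI; the target is rice yield (t/ha). Values are synthetic.
    rng = np.random.default_rng(4)
    n = 300
    temp = rng.uniform(20, 28, n)
    precip = rng.uniform(500, 1400, n)
    ndvi = rng.uniform(0.4, 0.9, n)
    yield_t_ha = 2 + 0.08 * (temp - 20) + 0.001 * precip + 4 * ndvi + rng.normal(0, 0.3, n)

    X = np.column_stack([temp, precip, ndvi])
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
    model.fit(X, yield_t_ha)
    print("correlation:", np.corrcoef(model.predict(X), yield_t_ha)[0, 1].round(2))
    ```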

  2. Assessing Attitudes towards Statistics among Medical Students: Psychometric Properties of the Serbian Version of the Survey of Attitudes Towards Statistics (SATS)

    PubMed Central

    Stanisavljevic, Dejana; Trajkovic, Goran; Marinkovic, Jelena; Bukumiric, Zoran; Cirkovic, Andja; Milic, Natasa

    2014-01-01

    Background Medical statistics has become important and relevant for future doctors, enabling them to practice evidence-based medicine. Recent studies report that students' attitudes towards statistics play an important role in their statistics achievement. The aim of the study was to test the psychometric properties of the Serbian version of the Survey of Attitudes Towards Statistics (SATS) in order to acquire a valid instrument to measure attitudes inside the Serbian educational context. Methods The validation study was performed on a cohort of 417 medical students who were enrolled in an obligatory introductory statistics course. The SATS adaptation was based on an internationally accepted methodology for translation and cultural adaptation. Psychometric properties of the Serbian version of the SATS were analyzed through the examination of factorial structure and internal consistency. Results Most medical students held positive attitudes towards statistics. The average total SATS score was above neutral (4.3±0.8), and varied from 1.9 to 6.2. Confirmatory factor analysis validated the six-factor structure of the questionnaire (Affect, Cognitive Competence, Value, Difficulty, Interest and Effort). Values for the fit indices TLI (0.940) and CFI (0.961) were above the cut-off of ≥0.90. The RMSEA value of 0.064 (0.051–0.078) was below the suggested value of ≤0.08. Cronbach's alpha for the entire scale was 0.90, indicating scale reliability. In a multivariate regression model, self-rating of ability in mathematics and current grade point average were significantly associated with the total SATS score after adjusting for age and gender. Conclusion The present study provided evidence for the appropriate metric properties of the Serbian version of the SATS. Confirmatory factor analysis validated the six-factor structure of the scale. The SATS might be a reliable and valid instrument for identifying medical students' attitudes towards statistics in the Serbian

  3. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) The MSC/Nastran code was the deterministic analysis tool, (2) The fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.

  4. Small sample estimation of the reliability function for technical products

    NASA Astrophysics Data System (ADS)

    Lyamets, L. L.; Yakimenko, I. V.; Kanishchev, O. A.; Bliznyuk, O. A.

    2017-12-01

    It is demonstrated that, in the absence of large statistical samples obtained from failure testing of complex technical products, statistical estimation of the reliability function of the initial elements can be made by the method of moments. A formal description of the method of moments is given, and its advantages in the analysis of small censored samples are discussed. A modified algorithm is proposed for implementing the method of moments using only the moments at which failures of the initial elements occur.
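
    For the simplest (uncensored, exponential) case, the method of moments reduces to matching the sample mean to 1/λ; the censoring-aware modification described in the paper is more involved. The failure times below are hypothetical:

    ```python
    import numpy as np

    failure_times = np.array([420.0, 510.0, 730.0, 650.0, 480.0])   # hypothetical hours to failure

    mean_life = failure_times.mean()          # first sample moment
    lam_hat = 1.0 / mean_life                 # matches E[X] = 1/lambda for the exponential model

    def reliability(t):
        """Estimated R(t) = P(lifetime > t) under the fitted exponential model."""
        return np.exp(-lam_hat * t)

    print(round(reliability(300.0), 3))
    ```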

  5. The Reliability of Neurons

    PubMed Central

    Bullock, Theodore Holmes

    1970-01-01

    The prevalent probabilistic view is virtually untestable; it remains a plausible belief. The cases usually cited cannot be taken as evidence for it. Several grounds for this conclusion are developed. Three issues are distinguished in an attempt to clarify a murky debate: (a) the utility of probabilistic methods in data reduction, (b) the value of models that assume indeterminacy, and (c) the validity of the inference that the nervous system is largely indeterministic at the neuronal level. No exception is taken to the first two; the second is a private heuristic question. The third is the issue to which the assertion in the first two sentences is addressed. Of the two kinds of uncertainty, statistical mechanical (= practical unpredictability) as in a gas, and Heisenbergian indeterminacy, the first certainly exists, the second is moot at the neuronal level. It would contribute to discussion to recognize that neurons perform with a degree of reliability. Although unreliability is difficult to establish, to say nothing of measure, evidence that some neurons have a high degree of reliability, in both connections and activity, is increasing greatly. An example is given from sternarchine electric fish. PMID:5462670

  6. Establishing Inter- and Intrarater Reliability for High-Stakes Testing Using Simulation.

    PubMed

    Kardong-Edgren, Suzan; Oermann, Marilyn H; Rizzolo, Mary Anne; Odom-Maryon, Tamara

    This article reports a standardized training method developed to establish the inter- and intrarater reliability of a group of raters for high-stakes testing. Simulation is used increasingly for high-stakes testing, but with little research into the development of inter- and intrarater reliability for raters. Eleven raters were trained using a standardized methodology. Raters scored 28 student videos over a six-week period. Raters then rescored all videos over a two-day period to establish both intra- and interrater reliability. One rater demonstrated poor intrarater reliability; a second rater failed all students. Kappa statistics improved from the moderate to the substantial agreement range with the exclusion of the two outlier raters' scores. There may be faculty who, for different reasons, should not be included in high-stakes testing evaluations. All faculty are content experts, but not all are expert evaluators.

  7. Statistics of contractive cracking patterns. [frozen soil-water rheology

    NASA Technical Reports Server (NTRS)

    Noever, David A.

    1991-01-01

    The statistics of contractive soil-cracking patterns are analyzed using statistical crystallography. An underlying hierarchy of order is found to span four orders of magnitude in characteristic pattern length. Strict mathematical requirements determine the two-dimensional (2D) topology, such that random partitioning of space yields a predictable statistical geometry for polygons. For all lengths, Aboav's and Lewis's laws are verified; this result is consistent both with the need to fill 2D space and, most significantly, with energy carried not by the patterns' interior but by the boundaries. Together, this suggests a common mechanism of formation for both micro- and macro-freezing patterns.

  8. The revised Generalized Expectancy for Success Scale: a validity and reliability study.

    PubMed

    Hale, W D; Fiedler, L R; Cochran, C D

    1992-07-01

    The Generalized Expectancy for Success Scale (GESS; Fibel & Hale, 1978) was revised and assessed for reliability and validity. The revised version was administered to 199 college students along with other conceptually related measures, including the Rosenberg Self-Esteem Scale, the Life Orientation Test, and Rotter's Internal-External Locus of Control Scale. One subsample of students also completed the Eysenck Personality Inventory, while another subsample performed a criterion-related task that involved risk taking. Item analysis yielded 25 items with correlations of .45 or higher with the total score. Results indicated high internal consistency and test-retest reliability.
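
    The item-analysis step (retaining items whose corrected item-total correlation reaches .45) can be sketched as follows, with simulated responses standing in for the real questionnaire data:

    ```python
    import numpy as np

    # Simulated responses: four items of varying quality from 120 hypothetical respondents.
    rng = np.random.default_rng(5)
    latent = rng.normal(size=120)
    items = np.column_stack([latent + rng.normal(0, s, 120) for s in (0.6, 0.8, 1.0, 2.5)])

    for j in range(items.shape[1]):
        rest = items.sum(axis=1) - items[:, j]            # total score excluding item j
        r = np.corrcoef(items[:, j], rest)[0, 1]          # corrected item-total correlation
        print(f"item {j + 1}: r = {r:.2f} -> {'keep' if r >= 0.45 else 'drop'}")
    ```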

  9. Online incidental statistical learning of audiovisual word sequences in adults: a registered report.

    PubMed

    Kuppuraj, Sengottuvel; Duta, Mihaela; Thompson, Paul; Bishop, Dorothy

    2018-02-01

    Statistical learning has been proposed as a key mechanism in language learning. Our main goal was to examine whether adults are capable of simultaneously extracting statistical dependencies in a task where stimuli include a range of structures amenable to statistical learning within a single paradigm. We devised an online statistical learning task using real word auditory-picture sequences that vary in two dimensions: (i) predictability and (ii) adjacency of dependent elements. This task was followed by an offline recall task to probe learning of each sequence type. We registered three hypotheses with specific predictions. First, adults would extract regular patterns from continuous stream (effect of grammaticality). Second, within grammatical conditions, they would show differential speeding up for each condition as a factor of statistical complexity of the condition and exposure. Third, our novel approach to measure online statistical learning would be reliable in showing individual differences in statistical learning ability. Further, we explored the relation between statistical learning and a measure of verbal short-term memory (STM). Forty-two participants were tested and retested after an interval of at least 3 days on our novel statistical learning task. We analysed the reaction time data using a novel regression discontinuity approach. Consistent with prediction, participants showed a grammaticality effect, agreeing with the predicted order of difficulty for learning different statistical structures. Furthermore, a learning index from the task showed acceptable test-retest reliability ( r  = 0.67). However, STM did not correlate with statistical learning. We discuss the findings noting the benefits of online measures in tracking the learning process.

  10. Online incidental statistical learning of audiovisual word sequences in adults: a registered report

    PubMed Central

    Duta, Mihaela; Thompson, Paul

    2018-01-01

    Statistical learning has been proposed as a key mechanism in language learning. Our main goal was to examine whether adults are capable of simultaneously extracting statistical dependencies in a task where stimuli include a range of structures amenable to statistical learning within a single paradigm. We devised an online statistical learning task using real word auditory–picture sequences that vary in two dimensions: (i) predictability and (ii) adjacency of dependent elements. This task was followed by an offline recall task to probe learning of each sequence type. We registered three hypotheses with specific predictions. First, adults would extract regular patterns from continuous stream (effect of grammaticality). Second, within grammatical conditions, they would show differential speeding up for each condition as a factor of statistical complexity of the condition and exposure. Third, our novel approach to measure online statistical learning would be reliable in showing individual differences in statistical learning ability. Further, we explored the relation between statistical learning and a measure of verbal short-term memory (STM). Forty-two participants were tested and retested after an interval of at least 3 days on our novel statistical learning task. We analysed the reaction time data using a novel regression discontinuity approach. Consistent with prediction, participants showed a grammaticality effect, agreeing with the predicted order of difficulty for learning different statistical structures. Furthermore, a learning index from the task showed acceptable test–retest reliability (r = 0.67). However, STM did not correlate with statistical learning. We discuss the findings noting the benefits of online measures in tracking the learning process. PMID:29515876

  11. Ion induced electron emission statistics under Agm- cluster bombardment of Ag

    NASA Astrophysics Data System (ADS)

    Breuers, A.; Penning, R.; Wucher, A.

    2018-05-01

    The electron emission from a polycrystalline silver surface under bombardment with Agm- cluster ions (m = 1, 2, 3) is investigated in terms of ion-induced kinetic excitation. The electron yield γ is determined directly by a current measurement method on the one hand and implicitly by analysis of the electron emission statistics on the other. Successful measurements of the electron emission spectra enable a deeper understanding of the ion-induced kinetic electron emission process, with particular emphasis on the effect of the projectile cluster size on the yield as well as on the emission statistics. The results allow a quantitative comparison with computer simulations performed for silver atoms and clusters impinging onto a silver surface.

  12. Reliability of Leg and Vertical Stiffness During High Speed Treadmill Running.

    PubMed

    Pappas, Panagiotis; Dallas, Giorgos; Paradisis, Giorgos

    2017-04-01

    In research, the accurate and reliable measurement of leg and vertical stiffness could contribute to valid interpretations. The current study aimed at determining the intraparticipant variability (i.e., intraday and interday reliability) of leg and vertical stiffness, as well as related parameters, during high-speed treadmill running, using the "sine-wave" method. Thirty-one males ran on a treadmill at 6.67 m·s⁻¹, and the contact and flight times were measured. To determine the intraday reliability, three 10-s running bouts with 10-min recovery were performed. In addition, to examine the interday reliability, three 10-s running bouts on 3 separate days with 48-h interbout intervals were performed. The reliability statistics included repeated-measures analysis of variance, average intertrial correlations, intraclass correlation coefficients (ICCs), Cronbach's α reliability coefficient, and the coefficient of variation (CV%). Both intraday and interday reliability were high for leg and vertical stiffness (ICC > 0.939 and CV < 4.3%), as well as for related variables (ICC > 0.934 and CV < 3.9%). It was thus inferred that the measurements of leg and vertical stiffness, as well as the related parameters obtained using the "sine-wave" method during treadmill running at 6.67 m·s⁻¹, were highly reliable, both within and across days.

  13. Reliability of the ADI-R for the Single Case-Part II: Clinical versus Statistical Significance

    ERIC Educational Resources Information Center

    Cicchetti, Domenic V.; Lord, Catherine; Koenig, Kathy; Klin, Ami; Volkmar, Fred R.

    2014-01-01

    In an earlier investigation, the authors assessed the reliability of the ADI-R when multiple clinicians evaluated a single case, here a female 3 year old toddler suspected of having an autism spectrum disorder (Cicchetti et al. in "J Autism Dev Disord" 38:764-770, 2008). Applying the clinical criteria of Cicchetti and Sparrow ("Am J…

  14. [The reliability of a questionnaire regarding Colombian children's physical activity].

    PubMed

    Herazo-Beltrán, Aliz Y; Domínguez-Anaya, Regina

    2012-10-01

    Reporting the test-retest reliability and internal consistency of the Physical Activity Questionnaire for school children (PAQ-C). This was a descriptive study of 100 school-aged children aged 9 to 11 years attending a school in Cartagena, Colombia. The sample was randomly selected. The PAQ-C was administered twice, one week apart, after the informed consent forms had been signed by the children's parents and school officials. Cronbach's alpha coefficient was used for assessing internal consistency and an intra-class correlation coefficient for test-retest reliability. SPSS (version 17.0) was used for statistical analysis. The questionnaire had an internal consistency of 0.73 on the first measurement and 0.78 on the second; the intra-class correlation coefficient was 0.60. There were differences between boys and girls on both measurements. The PAQ-C had acceptable internal consistency and test-retest reliability, thereby making it useful for measuring children's self-reported physical activity and a valuable tool for population studies in Colombia.

  15. Spectrally-Based Assessment of Crop Seasonal Performance and Yield

    NASA Astrophysics Data System (ADS)

    Kancheva, Rumiana; Borisova, Denitsa; Georgiev, Georgy

    The rapid advances of space technologies concern almost all scientific areas, from aeronautics to medicine, and a wide range of application fields, from communications to crop yield prediction. Agricultural monitoring is among the priorities of remote sensing observations for obtaining timely information on crop development. Monitoring agricultural fields during the growing season plays an important role in crop health assessment and stress detection, provided that reliable data are obtained. The application of hyperspectral data to precision farming, associated with plant growth and phenology monitoring, physiological state assessment, and yield prediction, is spreading successfully. In this paper, we investigated various spectral-biophysical relationships derived from in-situ reflectance measurements. The performance of spectral data for the assessment of agricultural crop condition and yield prediction was examined. The approach comprised the development of regression models between plant spectral and state-indicative variables such as biomass, vegetation cover fraction, leaf area index, etc., and the development of yield forecasting models from single-date (growth stage) and multitemporal (seasonal) reflectance data. Verification of the spectral predictions was performed through comparison with estimates from biophysical relationships between crop growth variables. The study was carried out for spring barley and winter wheat. Visible and near-infrared reflectance data were acquired throughout the growing season, accompanied by detailed datasets on plant phenology and canopy structural and biochemical attributes. Empirical relationships were derived relating crop agronomic variables and yield to various spectral predictors. The study findings were tested using airborne remote sensing inputs. A good correspondence was found between predicted and actual (ground-truth) estimates

  16. A new statistical distance scale for planetary nebulae

    NASA Astrophysics Data System (ADS)

    Ali, Alaa; Ismail, H. A.; Alsolami, Z.

    2015-05-01

    In the first part of the present article we discuss the consistency among different individual distance methods for Galactic planetary nebulae, while in the second part we develop a new statistical distance scale based on a calibrating sample of well-determined distances. A set of 315 planetary nebulae with individual distances was extracted from the literature. Inspection of the data set indicates that the accuracy of the distances varies among different individual methods and also among different sources where the same individual method was applied. Therefore, we derive a reliable weighted mean distance for each object by considering the influence of the distance error and the weight of each individual method. The results reveal that the individual methods discussed are consistent with each other, except for the gravity method, which produces larger distances than the other individual methods. From the initial data set, we construct a standard calibrating sample consisting of 82 objects. This sample is restricted to objects with distances determined from at least two different individual methods, except for a few objects with trusted distances determined from the trigonometric, spectroscopic, and cluster-membership methods. In addition to the well-determined distances for this sample, it has several advantages over the samples used in prior distance scales. This sample is used to recalibrate the mass-radius and radio surface brightness temperature-radius relationships. An average error of ~30% is estimated for the new distance scale. The new distance scale is compared with the most widely used statistical scales in the literature, and the results show that it is roughly similar to the majority of them, within a ~±20% difference. Furthermore, the new scale yields a weighted mean distance to the Galactic center of 7.6±1.35 kpc, which is in good agreement with the very recent measurement of Malkin (2013).
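
    A sketch of a weighted mean distance that down-weights methods with large quoted errors; the per-method weights and all numbers are assumptions, since the paper's exact weighting scheme is not reproduced here:

    ```python
    import numpy as np

    # Combining individual-method distances for one nebula (illustrative values).
    distances = np.array([1.30, 1.55, 1.42])        # kpc, from three individual methods
    errors = np.array([0.20, 0.35, 0.15])           # quoted 1-sigma errors (kpc)
    method_weights = np.array([1.0, 0.7, 1.0])      # assumed relative trust in each method

    w = method_weights / errors**2                  # inverse-variance weighting, scaled per method
    d_mean = np.sum(w * distances) / np.sum(w)
    d_err = 1.0 / np.sqrt(np.sum(w))                # formal error of the weighted mean
    print(f"{d_mean:.2f} +/- {d_err:.2f} kpc")
    ```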

  17. Simulating and Predicting Cereal Crop Yields in Ethiopia: Model Calibration and Verification

    NASA Astrophysics Data System (ADS)

    Yang, M.; Wang, G.; Ahmed, K. F.; Eggen, M.; Adugna, B.; Anagnostou, E. N.

    2017-12-01

    Agriculture in developing countries is extremely vulnerable to climate variability and change. In East Africa, most people live in rural areas with outdated agricultural techniques and infrastructure. Smallholder agriculture continues to play a key role in this area, and the rate of irrigation is among the lowest in the world. As a result, seasonal and inter-annual weather patterns play an important role in the spatiotemporal variability of crop yields. This study investigates how various climate variables (e.g., temperature, precipitation, sunshine) and agricultural practices (e.g., fertilization, irrigation, planting date) influence cereal crop yields using a process-based model (DSSAT) and statistical analysis, focusing on the Blue Nile Basin of Ethiopia. The DSSAT model is driven with meteorological forcing from the ECMWF's latest reanalysis product covering the past 35 years; the statistical model will be developed by linking the same meteorological reanalysis data with harvest data at the woreda level from the Ethiopian national dataset. Results from this study will set the stage for the development of a seasonal prediction system for weather and crop yields in Ethiopia, which will serve multiple sectors in coping with the agricultural impact of climate variability.

  18. Initial yield to depth relation for water wells drilled into crystalline bedrock - Pinardville quadrangle, New Hampshire

    USGS Publications Warehouse

    Drew, L.J.; Schuenemeyer, J.H.; Armstrong, T.R.; Sutphin, D.M.

    2001-01-01

    A model is proposed to explain the statistical relations between the mean initial water well yields from eight time increments from 1984 to 1998 for wells drilled into the crystalline bedrock aquifer system in the Pinardville area of southern New Hampshire and the type of bedrock, mean well depth, and mean well elevation. Statistical analyses show that the mean total yield of drilling increments is positively correlated with mean total well depth and mean well elevation. In addition, the mean total well yield varies with rock type from a minimum of 46.9 L/min (12.4 gpm) in the Damon Pond granite to a maximum of 74.5 L/min (19.7 gpm) in the Permian pegmatite and granite unit. Across the eight drilling increments that comprise 211 wells each, the percentages of very low-yield wells (1.9 L/min [0.5 gpm] or less) and high-yield wells (151.4 L/min [40 gpm] or more) increased, and those of intermediate-yield wells decreased. As housing development progressed during the 1984 to 1998 interval, the mean depth of the wells and their elevations increased, and the mix of percentages of the bedrock types drilled changed markedly. The proposed model uses a feed-forward mechanism to explain the interaction between the increasing mean elevation, mean well depth, and percentages of very low-yielding wells and the mean well yield. The increasing percentages of very low-yielding wells through time and the economics of the housing market may control the system that forces the mean well depths, percentages of high-yield wells, and mean well yields to increase. The reason for the increasing percentages of very low-yield wells is uncertain, but the explanation is believed to involve the complex structural geology and tectonic history of the Pinardville quadrangle.

  19. 76 FR 13018 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... statistical surveys that yield quantitative results that can be generalized to the population of study. This... information will not be used for quantitative information collections that are designed to yield reliably... generic mechanisms that are designed to yield quantitative results. Total Burden Estimate for the...

  20. 76 FR 12072 - Guidance for Agency Information Collection Activities: Proposed Collection; Comment Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-04

    ... not statistical surveys that yield quantitative results that can be generalized to the population of... information will not be used for quantitative information collections that are designed to yield reliably... generic mechanisms that are designed to yield quantitative results. No comments were received in response...

  1. Impact of High-Reliability Education on Adverse Event Reporting by Registered Nurses.

    PubMed

    McFarland, Diane M; Doucette, Jeffrey N

    Adverse event reporting is one strategy to identify risks and improve patient safety, but, historically, adverse events have been underreported by registered nurses (RNs) because of fear of retribution and blame. An educational program on high reliability was provided to examine whether education would affect RNs' willingness to report adverse events. Although the findings were not statistically significant, they demonstrated a positive impact on adverse event reporting and support the need to create a culture of high reliability.

  2. The influence of various test plans on mission reliability. [for Shuttle Spacelab payloads

    NASA Technical Reports Server (NTRS)

    Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.

    1977-01-01

    Methods have been developed for the evaluation of cost-effective vibroacoustic test plans for Shuttle Spacelab payloads. The shock and vibration environments of components have been statistically represented, and statistical decision theory has been used to evaluate the cost effectiveness of five basic test plans, with structural test options for two of the plans. Component, subassembly, and payload testing have been performed for each plan, along with calculations of optimum test levels and expected costs. The test plans have been ranked according to both expected project cost and vibroacoustic reliability. It was found that optimum costs may vary by up to $6 million, with the lowest-cost plan eliminating component testing and maintaining flight vibration reliability via subassembly tests at high acoustic levels.

  3. Quantitative metal magnetic memory reliability modeling for welded joints

    NASA Astrophysics Data System (ADS)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, the environmental magnetic field, and measurement noise make MMM data dispersive and quantitative evaluation difficult. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Welded specimens of steel Q235 were tested along longitudinal and horizontal lines with a TSC-2M-8 instrument during tensile fatigue experiments. X-ray testing was carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum Kvs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of the MMM data, the statistical behaviour of Kvs is investigated, showing that Kvs follows a Gaussian distribution; Kvs is therefore a suitable MMM parameter for establishing a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented, based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases as the residual life ratio T decreases, and that the maximal error between the predicted reliability degree R1 and the verified reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
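
    The reliability degree in a stress-strength interference model can be sketched as follows. This is the classical Gaussian interference formula, not the paper's improved variant, and all numerical values are hypothetical.

    ```python
    from math import sqrt
    from statistics import NormalDist

    def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
        """Classical stress-strength interference: R = P(strength > stress),
        assuming independent Gaussian strength and stress distributions."""
        z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
        return NormalDist().cdf(z)

    # Hypothetical strength/stress statistics (MPa), for illustration only:
    R = interference_reliability(mu_strength=400.0, sd_strength=25.0,
                                 mu_stress=320.0, sd_stress=30.0)
    print(f"reliability degree R = {R:.4f}")
    ```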

  4. Evaluating Random Error in Clinician-Administered Surveys: Theoretical Considerations and Clinical Applications of Interobserver Reliability and Agreement.

    PubMed

    Bennett, Rebecca J; Taljaard, Dunay S; Olaithe, Michelle; Brennan-Jones, Chris; Eikelboom, Robert H

    2017-09-18

    The purpose of this study is to raise awareness of interobserver concordance and the differences between interobserver reliability and agreement when evaluating the responsiveness of a clinician-administered survey and, specifically, to demonstrate the clinical implications of data type (nominal/categorical, ordinal, interval, or ratio) and statistical index selection (for example, Cohen's kappa, Krippendorff's alpha, or the intraclass correlation coefficient). In this prospective cohort study, 3 clinical audiologists, who were masked to each other's scores, administered the Practical Hearing Aid Skills Test-Revised to 18 adult owners of hearing aids. Interobserver concordance was examined using a range of reliability and agreement statistical indices. The importance of selecting appropriate statistical measures of concordance was demonstrated with a worked example, wherein the level of interobserver concordance achieved varied from "no agreement" to "almost perfect agreement" depending on the data type and statistical index selected. This study demonstrates that the methodology used to evaluate survey score concordance can influence the statistical results obtained and thus affect clinical interpretations.
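
    A small sketch of why index selection matters: for the same pair of ratings, raw percent agreement and a chance-corrected index such as Cohen's kappa can lead to very different conclusions. The ratings below are hypothetical and unrelated to the study's data.

    ```python
    import numpy as np

    def cohens_kappa(r1, r2):
        """Cohen's kappa for two raters scoring the same items on nominal categories."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        categories = np.union1d(r1, r2)
        po = np.mean(r1 == r2)                               # observed agreement
        pe = sum(np.mean(r1 == c) * np.mean(r2 == c)         # chance agreement from marginals
                 for c in categories)
        return (po - pe) / (1 - pe)

    # Hypothetical pass/fail ratings by two raters on 10 items:
    rater1 = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
    rater2 = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
    print("raw agreement :", np.mean(np.array(rater1) == np.array(rater2)))  # 0.90
    print("Cohen's kappa :", cohens_kappa(rater1, rater2))                   # 0.0, chance-corrected
    ```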

  5. Reliability Sampling Plans: A Review and Some New Results

    ERIC Educational Resources Information Center

    Isaic-Maniu, Alexandru; Voda, Viorel Gh.

    2009-01-01

    In this work we present a large area of aspects related to the problem of sampling inspection in the case of reliability. First we discuss the actual status of this domain, mentioning the newest approaches (from a technical view point) such as HALT and HASS and the statistical perspective. After a brief description of the general procedure in…

  6. Confirmatory Factor Analysis of the Malay Version of the Confusion, Hubbub and Order Scale (CHAOS-6) among Myocardial Infarction Survivors in a Malaysian Cardiac Healthcare Facility

    PubMed Central

    Ganasegeran, Kurubaran; Selvaraj, Kamaraj; Rashid, Abdul

    2017-01-01

    Background The six item Confusion, Hubbub and Order Scale (CHAOS-6) has been validated as a reliable tool to measure levels of household disorder. We aimed to investigate the goodness of fit and reliability of a new Malay version of the CHAOS-6. Methods The original English version of the CHAOS-6 underwent forward-backward translation into the Malay language. The finalised Malay version was administered to 105 myocardial infarction survivors in a Malaysian cardiac health facility. We performed confirmatory factor analyses (CFAs) using structural equation modelling. A path diagram and fit statistics were yielded to determine the Malay version’s validity. Composite reliability was tested to determine the scale’s reliability. Results All 105 myocardial infarction survivors participated in the study. The CFA yielded a six-item, one-factor model with excellent fit statistics. Composite reliability for the single factor CHAOS-6 was 0.65, confirming that the scale is reliable for Malay speakers. Conclusion The Malay version of the CHAOS-6 was reliable and showed the best fit statistics for our study sample. We thus offer a simple, brief, validated, reliable and novel instrument to measure chaos, the Skala Kecelaruan, Keriuhan & Tertib Terubahsuai (CHAOS-6), for the Malaysian population. PMID:28951688

  7. An image based method for crop yield prediction using remotely sensed and crop canopy data: the case of Paphos district, western Cyprus

    NASA Astrophysics Data System (ADS)

    Papadavid, G.; Hadjimitsis, D.

    2014-08-01

    The development of remote sensing techniques has provided the opportunity to optimize yields in agricultural practice and, moreover, to predict the forthcoming yield. Yield prediction plays a vital role in agricultural policy and provides useful data to policy makers. In this context, crop and soil parameters, along with the NDVI, which are valuable sources of information, were analysed statistically to test (a) whether Durum wheat yield can be predicted and (b) what the actual time window is for predicting yield in the district of Paphos, where Durum wheat is the main crop and supports the rural economy of the area. Fifteen plots cultivated with Durum wheat by the Agricultural Research Institute of Cyprus for research purposes in the area of interest were observed for three years to derive the necessary data. Statistical and remote sensing techniques were then applied to derive and map a model that can predict the yield of Durum wheat in this area. The semi-empirical model developed for this purpose, with a high coefficient of determination (R2 = 0.886), has shown in practice that it can predict yields well. A Student's t-test revealed no statistically significant difference between predicted and observed yield values. The model can and will be further elaborated with more parameters and applied to other crops in the near future.
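
    A minimal sketch of the kind of empirical NDVI-yield regression described above, with fit quality summarised by R². All NDVI and yield values below are invented for illustration; they are not the study's data, and the actual semi-empirical model may include additional crop and soil parameters.

    ```python
    import numpy as np

    # Hypothetical plot-level data: critical-period NDVI and Durum wheat yield (t/ha).
    ndvi = np.array([0.42, 0.48, 0.51, 0.55, 0.60, 0.63, 0.68, 0.72])
    yld  = np.array([1.9,  2.2,  2.4,  2.6,  3.0,  3.1,  3.5,  3.7])

    # Simple linear model yield = a * NDVI + b, fitted by least squares.
    a, b = np.polyfit(ndvi, yld, deg=1)
    pred = a * ndvi + b

    # Coefficient of determination R^2.
    ss_res = np.sum((yld - pred) ** 2)
    ss_tot = np.sum((yld - yld.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    print(f"yield ~ {a:.2f} * NDVI + {b:.2f},  R^2 = {r2:.3f}")
    ```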

  8. Selecting statistical model and optimum maintenance policy: a case study of hydraulic pump.

    PubMed

    Ruhi, S; Karim, M R

    2016-01-01

    A proper maintenance policy can play a vital role in the effective investigation of product reliability. Every engineered object, such as a product, plant or infrastructure, needs preventive and corrective maintenance. In this paper we look at a real case study dealing with the maintenance of hydraulic pumps used in excavators by a mining company. We obtain the data that the owner had collected and carry out an analysis and build models for pump failures. The data consist of both failure and censored lifetimes of the hydraulic pump. Different competing mixture models are applied to analyze the maintenance data. Various characteristics of the mixture models, such as the cumulative distribution function, reliability function and mean time to failure, are estimated to assess the reliability of the pump. The Akaike Information Criterion, the adjusted Anderson-Darling test statistic, the Kolmogorov-Smirnov test statistic and the root mean square error are used to select suitable models from the set of competing models. The maximum likelihood estimation method, via the EM algorithm, is applied mainly for estimating the parameters of the models and reliability-related quantities. In this study, it is found that a threefold mixture model (Weibull-Normal-Exponential) fits the hydraulic pump failure data set well. The paper also illustrates how a suitable statistical model can be applied to estimate the optimum maintenance period of a hydraulic pump at a minimum cost.
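
    The model-selection step can be sketched as follows for single (non-mixture) candidate distributions fitted by maximum likelihood and compared by AIC; fitting the paper's threefold mixture via the EM algorithm, and handling censored lifetimes, would require more machinery. The lifetimes below are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical complete (uncensored) pump lifetimes in operating hours.
    lifetimes = np.array([1200., 1850., 2100., 2400., 900., 3100., 1500., 2750., 1950., 2300.])

    def aic(loglik, n_params):
        return 2 * n_params - 2 * loglik

    candidates = {
        "exponential": (stats.expon, 1),        # scale only (location fixed at 0)
        "weibull":     (stats.weibull_min, 2),  # shape + scale
        "lognormal":   (stats.lognorm, 2),      # shape + scale
    }

    for name, (dist, k) in candidates.items():
        params = dist.fit(lifetimes, floc=0)               # fit with location fixed at zero
        loglik = np.sum(dist.logpdf(lifetimes, *params))
        print(f"{name:12s} AIC = {aic(loglik, k):7.1f}")
    ```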

  9. How Reliable Are Students' Evaluations of Teaching Quality? A Variance Components Approach

    ERIC Educational Resources Information Center

    Feistauer, Daniela; Richter, Tobias

    2017-01-01

    The inter-rater reliability of university students' evaluations of teaching quality was examined with cross-classified multilevel models. Students (N = 480) evaluated lectures and seminars over three years with a standardised evaluation questionnaire, yielding 4224 data points. The total variance of these student evaluations was separated into the…

  10. Space flight risk data collection and analysis project: Risk and reliability database

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The focus of the NASA 'Space Flight Risk Data Collection and Analysis' project was to acquire and evaluate space flight data with the express purpose of establishing a database containing measurements of specific risk assessment - reliability - availability - maintainability - supportability (RRAMS) parameters. The developed comprehensive RRAMS database will support the performance of future NASA and aerospace industry risk and reliability studies. One of the primary goals has been to acquire unprocessed information relating to the reliability and availability of launch vehicles and the subsystems and components thereof from the 45th Space Wing (formerly Eastern Space and Missile Command -ESMC) at Patrick Air Force Base. After evaluating and analyzing this information, it was encoded in terms of parameters pertinent to ascertaining reliability and availability statistics, and then assembled into an appropriate database structure.

  11. Yield gaps and yield relationships in US soybean production systems

    USDA-ARS?s Scientific Manuscript database

    The magnitude of yield gaps (YG) (potential yield – farmer yield) provides some indication of the prospects for increasing crop yield to meet the food demands of future populations. Quantile regression analysis was applied to county soybean [Glycine max (L.) Merrill] yields (1971 – 2011) from Kentuc...

  12. Assessing the reliability and validity of the Chinese Sexual Assault Symptom Scale (C-SASS): scale development and validation.

    PubMed

    Wang, Chang-Hwai; Lee, Jin-Chuan; Yuan, Yu-Hsi

    2014-01-01

    The purpose of this research is to establish and verify the psychometric and structural properties of the self-report Chinese Sexual Assault Symptom Scale (C-SASS) to assess the trauma experienced by Chinese victims of sexual assault. An earlier version of the C-SASS was constructed using a modified list of the same trauma symptoms administered to an American sample and used to develop and validate the Sexual Assault Symptom Scale II (SASS II). The rationale of this study is to revise the earlier version of the C-SASS, using a larger and more representative sample and more robust statistical analysis than in earlier research, to permit a more thorough examination of the instrument and further confirm the dimensions of sexual assault trauma in Chinese victims of rape. In this study, a sample of 418 victims from northern Taiwan was collected to confirm the reliability and validity of the C-SASS. Exploratory factor analysis yielded five common factors: Safety Fears, Self-Blame, Health Fears, Anger and Emotional Lability, and Fears About the Criminal Justice System. Further tests of the validity and composite reliability of the C-SASS were provided by the structural equation modeling (SEM). The results indicated that the C-SASS was a brief, valid, and reliable instrument for assessing sexual assault trauma among Chinese victims in Taiwan. The scale can be used to evaluate victims in sexual assault treatment centers around Taiwan, as well as to capture the characteristics of sexual assault trauma among Chinese victims.

  13. Properties of different selection signature statistics and a new strategy for combining them.

    PubMed

    Ma, Y; Ding, X; Qanbari, S; Weigend, S; Zhang, Q; Simianer, H

    2015-11-01

    Identifying signatures of recent or ongoing selection is of high relevance in livestock population genomics. From a statistical perspective, determining a proper testing procedure and combining various test statistics is challenging. On the basis of extensive simulations, we discuss in this study the statistical properties of eight different established selection signature statistics. In the considered scenario, we show that reasonable power to detect selection signatures is achieved with a high marker density (>1 SNP/kb) as obtained from sequencing, while rather small sample sizes (~15 diploid individuals) appear to be sufficient. Most selection signature statistics, such as the composite likelihood ratio and cross-population extended haplotype homozygosity, have the highest power when fixation of the selected allele is reached, while the integrated haplotype score has the highest power when selection is ongoing. We suggest a novel strategy, called the de-correlated composite of multiple signals (DCMS), to combine different statistics for detecting selection signatures while accounting for the correlation between the different selection signature statistics. When examined with simulated data, DCMS consistently has higher power than most of the single statistics and shows reliable positional resolution. We illustrate the new statistic on the established selective sweep around the lactase gene in human HapMap data, providing further evidence of its reliability. We then apply it to scan for selection signatures in two chicken samples with different skin colors. Our analysis suggests that a set of well-known genes, such as BCO2, MC1R, ASIP and TYR, were involved in the divergent selection for this trait.
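
    The general idea of a correlation-aware composite can be sketched as follows: each statistic is converted to a per-SNP p-value, scored as log((1-p)/p), and down-weighted by the sum of its absolute correlations with all statistics, so that redundant, highly correlated signals are not counted repeatedly. This is an illustrative reading of the abstract only; the exact DCMS definition should be taken from Ma et al. (2015), and all numbers below are hypothetical.

    ```python
    import numpy as np

    def dcms_sketch(pvalues, corr):
        """Correlation-weighted composite of several selection-signature statistics.

        pvalues : (n_snps, n_stats) per-SNP p-values, one column per statistic.
        corr    : (n_stats, n_stats) correlation matrix between the statistics.
        """
        p = np.clip(np.asarray(pvalues, dtype=float), 1e-12, 1 - 1e-12)
        weights = np.sum(np.abs(corr), axis=1)   # large weight = highly redundant statistic
        scores = np.log((1 - p) / p)             # large when the p-value is small
        return scores @ (1.0 / weights)          # one composite score per SNP

    # Tiny illustrative example: 4 SNPs, 3 statistics (values invented).
    pvals = np.array([[0.010, 0.030, 0.20],
                      [0.400, 0.350, 0.50],
                      [0.002, 0.010, 0.04],
                      [0.600, 0.700, 0.55]])
    corr = np.array([[1.0, 0.8, 0.2],
                     [0.8, 1.0, 0.3],
                     [0.2, 0.3, 1.0]])
    print(dcms_sketch(pvals, corr))
    ```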

  14. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    PubMed

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.

  15. Ultraflexible nanoelectronic probes form reliable, glial scar–free neural integration

    PubMed Central

    Luan, Lan; Wei, Xiaoling; Zhao, Zhengtuo; Siegel, Jennifer J.; Potnis, Ojas; Tuppen, Catherine A; Lin, Shengqing; Kazmi, Shams; Fowler, Robert A.; Holloway, Stewart; Dunn, Andrew K.; Chitwood, Raymond A.; Xie, Chong

    2017-01-01

    Implanted brain electrodes construct the only means to electrically interface with individual neurons in vivo, but their recording efficacy and biocompatibility pose limitations on scientific and clinical applications. We showed that nanoelectronic thread (NET) electrodes with subcellular dimensions, ultraflexibility, and cellular surgical footprints form reliable, glial scar–free neural integration. We demonstrated that NET electrodes reliably detected and tracked individual units for months; their impedance, noise level, single-unit recording yield, and the signal amplitude remained stable during long-term implantation. In vivo two-photon imaging and postmortem histological analysis revealed seamless, subcellular integration of NET probes with the local cellular and vasculature networks, featuring fully recovered capillaries with an intact blood-brain barrier and complete absence of chronic neuronal degradation and glial scar. PMID:28246640

  16. Equilibrium statistical-thermal models in high-energy physics

    NASA Astrophysics Data System (ADS)

    Tawfik, Abdel Nasser

    2014-05-01

    We review some recent highlights from the application of statistical-thermal models to different experimental measurements and to lattice QCD thermodynamics over the last decade. We start with a short review of the historical milestones on the path to constructing statistical-thermal models for heavy-ion physics. We find that Heinz Koppe formulated, in 1948, an almost complete recipe for statistical-thermal models. In 1950, Enrico Fermi generalized this statistical approach: he started with a general cross-section formula and inserted into it simplifying assumptions about the matrix element of the interaction process that likely reflect many features of high-energy reactions dominated by the density of final states in phase space. In 1964, Hagedorn systematically analyzed high-energy phenomena using all the tools of statistical physics and introduced the concept of a limiting temperature based on the statistical bootstrap model. It turns out that many-particle systems can quite often be studied with the help of statistical-thermal methods. The analysis of yield multiplicities in high-energy collisions gives overwhelming evidence for chemical equilibrium in the final state. The strange particles might be an exception, as they are suppressed at lower beam energies; however, their relative yields fulfill statistical equilibrium as well. We review the equilibrium statistical-thermal models for particle production, fluctuations and collective flow in heavy-ion experiments, and their reproduction of lattice QCD thermodynamics at vanishing and finite chemical potential. During the last decade, five conditions have been suggested to describe the universal behavior of the chemical freeze-out parameters. The higher-order moments of multiplicity have been discussed; they offer deep insights into particle production and critical fluctuations. Therefore, we use them to describe the freeze-out parameters

  17. 76 FR 55725 - Agency Information Collection Activities: Request for Comments for a New Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-08

    ... statistical surveys that yield quantitative results that can be generalized to the population of study. This... information will not be used for quantitative information collections that are designed to yield reliably... generic mechanisms that are designed to yield quantitative results. The FHWA received no comments in...

  18. Reliability and Validity of Wisconsin Upper Respiratory Symptom Survey, Korean Version

    PubMed Central

    Yang, Su-Young; Kang, Weechang; Yeo, Yoon; Park, Yang-Chun

    2011-01-01

    Background The Wisconsin Upper Respiratory Symptom Survey (WURSS) is a self-administered questionnaire developed in the United States to evaluate the severity of the common cold and its reliability has been validated. We developed a Korean language version of this questionnaire by using a sequential forward and backward translation approach. The purpose of this study was to validate the Korean version of the Wisconsin Upper Respiratory Symptom Survey (WURSS-K) in Korean patients with common cold. Methods This multicenter prospective study enrolled 107 participants who were diagnosed with common cold and consented to participate in the study. The WURSS-K includes 1 global illness severity item, 32 symptom-based items, 10 functional quality-of-life (QOL) items, and 1 item assessing global change. The SF-8 was used as an external comparator. Results The participants were 54 women and 53 men aged 18 to 42 years. The WURSS-K showed good reliability in 10 domains, with Cronbach’s alphas ranging from 0.67 to 0.96 (mean: 0.84). Comparison of the reliability coefficients of the WURSS-K and WURSS yielded a Pearson correlation coefficient of 0.71 (P = 0.02). Validity of the WURSS-K was evaluated by comparing it with the SF-8, which yielded a Pearson correlation coefficient of −0.267 (P < 0.001). The Guyatt’s responsiveness index of the WURSS-K ranged from 0.13 to 0.46, and the correlation coefficient with the WURSS was 0.534 (P < 0.001), indicating that there was close correlation between the WURSS-K and WURSS. Conclusions The WURSS-K is a reliable, valid, and responsive disease-specific questionnaire for assessing symptoms and QOL in Korean patients with common cold. PMID:21691034

  19. Estimates Of The Orbiter RSI Thermal Protection System Thermal Reliability

    NASA Technical Reports Server (NTRS)

    Kolodziej, P.; Rasky, D. J.

    2002-01-01

    In support of the Space Shuttle Orbiter post-flight inspection, structure temperatures are recorded at selected positions on the windward, leeward, starboard and port surfaces. Statistical analysis of this flight data and a non-dimensional load interference (NDLI) method are used to estimate the thermal reliability at positions where reusable surface insulation (RSI) is installed. In this analysis, structure temperatures that exceed the design limit define the critical failure mode. At thirty-three positions the RSI thermal reliability is greater than 0.999999 for the missions studied. This is not the overall system-level reliability of the thermal protection system installed on an Orbiter. The results from two Orbiters, OV-102 and OV-105, are in good agreement. The original RSI designs on the OV-102 Orbital Maneuvering System pods, which had low reliability, were significantly improved on OV-105. The NDLI method was also used to estimate thermal reliability from an assessment of TPS uncertainties that was completed shortly before the first Orbiter flight. Results from the flight data analysis and the pre-flight assessment agree at several positions near each other. The NDLI method is also effective for optimizing RSI designs to provide uniform thermal reliability on the acreage surface of reusable launch vehicles.

  20. Transient-evoked and distortion product otoacoustic emissions: A short-term test-retest reliability study.

    PubMed

    Keppler, Hannah; Dhooge, Ingeborg; Maes, Leen; D'haenens, Wendy; Bockstael, Annelies; Philips, Birgit; Swinnen, Freya; Vinck, Bart

    2010-02-01

    Knowledge regarding the variability of transient-evoked otoacoustic emissions (TEOAEs) and distortion product otoacoustic emissions (DPOAEs) is essential in clinical settings and improves their utility in monitoring hearing status over time. In the current study, TEOAEs and DPOAEs were measured with commercially available OAE equipment in 56 normal-hearing ears during three sessions. Reliability was analysed for the retest measurement without probe refitting, the immediate retest measurement with probe refitting, and retest measurements after one hour and one week. The highest reliability was obtained for the retest measurement without probe refitting, and reliability decreased with increasing time interval between measurements. For TEOAEs, the lowest reliability was seen at the half-octave frequency bands of 1.0 and 1.4 kHz, whereas for DPOAEs the half-octave frequency band at 8.0 kHz also had poor reliability. A higher primary tone level combination for DPOAEs yielded better reliability of DPOAE amplitudes. External environmental noise seemed to be the dominant noise source in normal-hearing subjects, decreasing the reliability of emission amplitudes especially in the low-frequency region.

  1. Examination of Anomalous World Experience: A Report on Reliability.

    PubMed

    Conerty, Joseph; Skodlar, Borut; Pienkos, Elizabeth; Zadravek, Tina; Byrom, Greg; Sass, Louis

    2017-01-01

    The EAWE (Examination of Anomalous World Experience) is a newly developed, semi-structured interview that aims to capture anomalies of subjectivity, common in schizophrenia spectrum disorders, that pertain to experiences of the lived world, including space, time, people, language, atmosphere, and certain existential attitudes. By contrast, previous empirical studies of subjective experience in schizophrenia have focused largely on disturbances in self-experience. The aim of this work was to assess the reliability of the EAWE, including internal consistency and interrater reliability. In the course of developing the EAWE, two distinct studies were conducted, one in the United States and the other in Slovenia. Thirteen patients diagnosed with schizophrenia spectrum or mood disorders were recruited for the US study. Fifteen such patients were recruited for the Slovenian study. Two live interviewers conducted the EAWE in the US. The Slovenian interviews were completed by one live interviewer with a second rater reviewing audiorecordings of the interview. Internal consistency and interrater reliability were calculated independently for each study, utilizing Cronbach's α, Spearman's ρ, and Cohen's κ. Each study yielded high internal consistency (Cronbach's α >0.82) and high interrater reliability for total EAWE scores (ρ > 0.83; average κ values were at least 0.78 for each study, with EAWE domain-specific κ not lower than 0.73). The EAWE, containing world-oriented inquiries into anomalies in subjective experience, has adequate reliability for use in a clinical or research setting. © 2017 S. Karger AG, Basel.

  2. Use of vegetation health data for estimation of aus rice yield in bangladesh.

    PubMed

    Rahman, Atiqur; Roytman, Leonid; Krakauer, Nir Y; Nizamuddin, Mohammad; Goldberg, Mitch

    2009-01-01

    Rice is a vital staple crop for Bangladesh and surrounding countries, with interannual variation in yields depending on climatic conditions. We compared Bangladesh yield of aus rice, one of the main varieties grown, from official agricultural statistics with Vegetation Health (VH) Indices [Vegetation Condition Index (VCI), Temperature Condition Index (TCI) and Vegetation Health Index (VHI)] computed from Advanced Very High Resolution Radiometer (AVHRR) data covering a period of 15 years (1991-2005). A strong correlation was found between aus rice yield and VCI and VHI during the critical period of aus rice development that occurs during March-April (weeks 8-13 of the year), several months in advance of the rice harvest. Stepwise principal component regression (PCR) was used to construct a model to predict yield as a function of critical-period VHI. The model reduced the yield prediction error variance by 62% compared with a prediction of average yield for each year. Remote sensing is a valuable tool for estimating rice yields well in advance of harvest and at a low cost.
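
    The stepwise principal component regression step can be illustrated with a minimal, non-stepwise sketch: predictors are projected onto their leading principal components and yield is regressed on the component scores. All data below are randomly generated stand-ins for the critical-period VHI and yield series, and the number of retained components is arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in data: 10 years x 6 weekly critical-period VHI values, plus yield (t/ha).
    vhi = rng.uniform(20, 60, size=(10, 6))
    yld = 1.0 + 0.02 * vhi.mean(axis=1) + rng.normal(0.0, 0.05, size=10)

    # Principal component regression: centre predictors, take leading PCs via SVD,
    # then regress yield on the PC scores.
    X = vhi - vhi.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    n_pc = 2
    scores = X @ Vt[:n_pc].T

    design = np.column_stack([np.ones(len(yld)), scores])
    coef, *_ = np.linalg.lstsq(design, yld, rcond=None)
    pred = design @ coef

    print("error variance reduction:", 1 - np.var(yld - pred) / np.var(yld))
    ```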

  3. Use of Vegetation Health Data for Estimation of Aus Rice Yield in Bangladesh

    PubMed Central

    Rahman, Atiqur; Roytman, Leonid; Krakauer, Nir Y.; Nizamuddin, Mohammad; Goldberg, Mitch

    2009-01-01

    Rice is a vital staple crop for Bangladesh and surrounding countries, with interannual variation in yields depending on climatic conditions. We compared Bangladesh yield of aus rice, one of the main varieties grown, from official agricultural statistics with Vegetation Health (VH) Indices [Vegetation Condition Index (VCI), Temperature Condition Index (TCI) and Vegetation Health Index (VHI)] computed from Advanced Very High Resolution Radiometer (AVHRR) data covering a period of 15 years (1991–2005). A strong correlation was found between aus rice yield and VCI and VHI during the critical period of aus rice development that occurs during March–April (weeks 8–13 of the year), several months in advance of the rice harvest. Stepwise principal component regression (PCR) was used to construct a model to predict yield as a function of critical-period VHI. The model reduced the yield prediction error variance by 62% compared with a prediction of average yield for each year. Remote sensing is a valuable tool for estimating rice yields well in advance of harvest and at a low cost. PMID:22574057

  4. Study on paddy rice yield estimation based on multisource data and the Grey system theory

    NASA Astrophysics Data System (ADS)

    Deng, Wensheng; Wang, Wei; Liu, Hai; Li, Chen; Ge, Yimin; Zheng, Xianghua

    2009-10-01

    Paddy rice is an important crop. In studying paddy rice yield estimation, whereas previous work has usually taken only remote sensing data or only meteorological data as influencing factors, we combine remote sensing and meteorological data to bring the monitoring result closer to reality. Although grey system theory has been used in many fields, it has rarely been applied to paddy rice yield estimation. This study introduces it to paddy rice yield estimation and builds a yield estimation model, which addresses the small-data-set problem that deterministic models cannot solve. Some regions of the Jianghan plain are selected as the study area. The data include multi-temporal remote sensing images, meteorological data and statistical data. The remote sensing data are 16-day composite MODIS images (250-m spatial resolution). The meteorological data include monthly average temperature, sunshine duration and rainfall amount. The statistical data are the long-term paddy rice yields of the study area. First, the paddy rice planting area is extracted from the multi-temporal MODIS images with the help of GIS and RS. Then, taking paddy rice yield as the reference sequence and the MODIS and meteorological data as the comparative sequences, the grey correlation coefficients are computed and the yield estimation factors are selected on the basis of grey system theory. Finally, the yield estimation model is established using these factors and the results are tested. The results indicate that the method is feasible and the conclusions are credible; it can provide a scientific method and a reference for regional paddy rice remote sensing yield estimation.
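
    The grey correlation step can be sketched with the standard grey relational coefficient and grade, using the conventional distinguishing coefficient of 0.5. The yearly yield, NDVI and rainfall series below are invented for illustration and are not the study's data.

    ```python
    import numpy as np

    def grey_relational_grade(reference, comparative, rho=0.5):
        """Grey relational grade between a reference sequence (e.g. yearly rice yield)
        and one comparative sequence (e.g. an NDVI or rainfall series), after
        min-max normalisation. rho is the distinguishing coefficient."""
        def normalise(x):
            x = np.asarray(x, dtype=float)
            return (x - x.min()) / (x.max() - x.min())

        x0, xi = normalise(reference), normalise(comparative)
        delta = np.abs(x0 - xi)
        dmin, dmax = delta.min(), delta.max()
        coeff = (dmin + rho * dmax) / (delta + rho * dmax)  # grey relational coefficients
        return coeff.mean()                                  # grade = mean coefficient

    # Hypothetical series over 8 years (illustration only):
    yield_series    = [6.1, 6.4, 6.0, 6.8, 7.0, 6.5, 7.2, 7.1]
    ndvi_series     = [0.52, 0.55, 0.50, 0.60, 0.63, 0.56, 0.66, 0.64]
    rainfall_series = [900, 1100, 850, 1000, 950, 1200, 980, 1020]

    print("NDVI grade:    ", grey_relational_grade(yield_series, ndvi_series))
    print("rainfall grade:", grey_relational_grade(yield_series, rainfall_series))
    ```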

  5. Statistical properties of Chinese phonemic networks

    NASA Astrophysics Data System (ADS)

    Yu, Shuiyuan; Liu, Haitao; Xu, Chunshan

    2011-04-01

    The study of properties of speech sound systems is of great significance in understanding the human cognitive mechanism and the working principles of speech sound systems. Some properties of speech sound systems, such as the listener-oriented feature and the talker-oriented feature, have been unveiled with the statistical study of phonemes in human languages and the research of the interrelations between human articulatory gestures and the corresponding acoustic parameters. With all the phonemes of speech sound systems treated as a coherent whole, our research, which focuses on the dynamic properties of speech sound systems in operation, investigates some statistical parameters of Chinese phoneme networks based on real text and dictionaries. The findings are as follows: phonemic networks have high connectivity degrees and short average distances; the degrees obey normal distribution and the weighted degrees obey power law distribution; vowels enjoy higher priority than consonants in the actual operation of speech sound systems; the phonemic networks have high robustness against targeted attacks and random errors. In addition, for investigating the structural properties of a speech sound system, a statistical study of dictionaries is conducted, which shows the higher frequency of shorter words and syllables and the tendency that the longer a word is, the shorter the syllables composing it are. From these structural properties and dynamic properties one can derive the following conclusion: the static structure of a speech sound system tends to promote communication efficiency and save articulation effort while the dynamic operation of this system gives preference to reliable transmission and easy recognition. In short, a speech sound system is an effective, efficient and reliable communication system optimized in many aspects.

  6. Optimizing rice yields while minimizing yield-scaled global warming potential.

    PubMed

    Pittelkow, Cameron M; Adviento-Borbe, Maria A; van Kessel, Chris; Hill, James E; Linquist, Bruce A

    2014-05-01

    To meet growing global food demand with limited land and reduced environmental impact, agricultural greenhouse gas (GHG) emissions are increasingly evaluated with respect to crop productivity, i.e., on a yield-scaled as opposed to area basis. Here, we compiled available field data on CH4 and N2O emissions from rice production systems to test the hypothesis that in response to fertilizer nitrogen (N) addition, yield-scaled global warming potential (GWP) will be minimized at N rates that maximize yields. Within each study, yield N surplus was calculated to estimate deficit or excess N application rates with respect to the optimal N rate (defined as the N rate at which maximum yield was achieved). Relationships between yield N surplus and GHG emissions were assessed using linear and nonlinear mixed-effects models. Results indicate that yields increased in response to increasing N surplus when moving from deficit to optimal N rates. At N rates contributing to a yield N surplus, N2O and yield-scaled N2O emissions increased exponentially. In contrast, CH4 emissions were not impacted by N inputs. Accordingly, yield-scaled CH4 emissions decreased with N addition. Overall, yield-scaled GWP was minimized at optimal N rates, decreasing by 21% compared to treatments without N addition. These results are unique compared to aerobic cropping systems in which N2O emissions are the primary contributor to GWP, meaning yield-scaled GWP may not necessarily decrease for aerobic crops when yields are optimized by N fertilizer addition. Balancing gains in agricultural productivity with climate change concerns, this work supports the concept that high rice yields can be achieved with minimal yield-scaled GWP through optimal N application rates. Moreover, additional improvements in N use efficiency may further reduce yield-scaled GWP, thereby strengthening the economic and environmental sustainability of rice systems. © 2013 John Wiley & Sons Ltd.
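
    Yield-scaled GWP as used above is simply area-scaled CO2-equivalent emissions divided by grain yield. A minimal sketch, using IPCC AR5 100-year GWP factors and invented emission and yield numbers (the paper's factors and data may differ):

    ```python
    # Yield-scaled GWP (sketch): CO2-equivalent emissions per unit of grain,
    # combining CH4 and N2O with 100-year GWP factors.
    GWP_CH4, GWP_N2O = 28, 265          # IPCC AR5 100-year factors (kg CO2-eq per kg gas)

    ch4_kg_ha = 150.0                   # seasonal CH4 emission, kg/ha (illustrative)
    n2o_kg_ha = 1.2                     # seasonal N2O emission, kg/ha (illustrative)
    grain_yield_kg_ha = 8000.0          # rice yield, kg/ha (illustrative)

    area_gwp = ch4_kg_ha * GWP_CH4 + n2o_kg_ha * GWP_N2O    # kg CO2-eq per ha
    yield_scaled_gwp = area_gwp / grain_yield_kg_ha          # kg CO2-eq per kg grain
    print(f"area-scaled GWP: {area_gwp:.0f} kg CO2-eq/ha; "
          f"yield-scaled GWP: {yield_scaled_gwp:.3f} kg CO2-eq/kg grain")
    ```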

  7. A framework for conducting mechanistic based reliability assessments of components operating in complex systems

    NASA Astrophysics Data System (ADS)

    Wallace, Jon Michael

    2003-10-01

    Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most unique. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank

  8. We need more replication research - A case for test-retest reliability.

    PubMed

    Leppink, Jimmie; Pérez-Fuster, Patricia

    2017-06-01

    Following debates in psychology on the importance of replication research, we have also started to see pleas for a more prominent role for replication research in medical education. To enable replication research, it is of paramount importance to carefully study the reliability of the instruments we use. Cronbach's alpha has been the most widely used estimator of reliability in the field of medical education, notably as some kind of quality label of test or questionnaire scores based on multiple items or of the reliability of assessment across exam stations. However, as this narrative review outlines, Cronbach's alpha or alternative reliability statistics may complement but not replace psychometric methods such as factor analysis. Moreover, multiple-item measurements should be preferred above single-item measurements, and when using single-item measurements, coefficients as Cronbach's alpha should not be interpreted as indicators of the reliability of a single item when that item is administered after fundamentally different activities, such as learning tasks that differ in content. Finally, if we want to follow up on recent pleas for more replication research, we have to start studying the test-retest reliability of the instruments we use.
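
    Cronbach's alpha, discussed above, can be computed directly from a respondents-by-items score matrix. A minimal sketch with invented questionnaire data, offered as an illustration only and not as a substitute for the factor-analytic checks the review recommends:

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)       # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of the sum score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 5-item questionnaire answered by 6 respondents (illustration only):
    data = np.array([[4, 5, 4, 4, 5],
                     [2, 3, 2, 3, 2],
                     [5, 5, 4, 5, 5],
                     [3, 3, 3, 2, 3],
                     [4, 4, 5, 4, 4],
                     [1, 2, 1, 2, 2]])
    print("alpha =", round(cronbach_alpha(data), 3))
    ```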

  9. Sample-averaged biexciton quantum yield measured by solution-phase photon correlation.

    PubMed

    Beyler, Andrew P; Bischof, Thomas S; Cui, Jian; Coropceanu, Igor; Harris, Daniel K; Bawendi, Moungi G

    2014-12-10

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high excitation flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. Here, we have applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals.

  10. Sample-Averaged Biexciton Quantum Yield Measured by Solution-Phase Photon Correlation

    PubMed Central

    Beyler, Andrew P.; Bischof, Thomas S.; Cui, Jian; Coropceanu, Igor; Harris, Daniel K.; Bawendi, Moungi G.

    2015-01-01

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high excitation flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. Here, we have applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals. PMID:25409496

  11. Recalibration and predictive reliability of a solute-transport model of an irrigated stream-aquifer system

    USGS Publications Warehouse

    Person, M.; Konikow, Leonard F.

    1986-01-01

    A solute-transport model of an irrigated stream-aquifer system was recalibrated because of discrepancies between prior predictions of ground-water salinity trends during 1971-1982 and the observed outcome in February 1982. The original model was calibrated with a 1-year record of data collected during 1971-1972 in an 18-km reach of the Arkansas River Valley in southeastern Colorado. The model is improved by incorporating additional hydrologic processes (salt transport through the unsaturated zone) and through reexamination of the reliability of some input data (regression relationship used to estimate salinity from specific conductance data). Extended simulations using the recalibrated model are made to investigate the usefulness of the model for predicting long-term trends of salinity and water levels within the study area. Predicted ground-water levels during 1971-1982 are in good agreement with the observed, indicating that the original 1971-1972 study period was sufficient to calibrate the flow model. However, long-term simulations using the recalibrated model based on recycling the 1971-1972 data alone yield an average ground-water salinity for 1982 that is too low by about 10%. Simulations that incorporate observed surface-water salinity variations yield better results, in that the calculated average ground-water salinity for 1982 is within 3% of the observed value. Statistical analysis of temporal salinity variations of the applied surface water indicates that at least a 4-year sampling period is needed to accurately calibrate the transport model. © 1986.

  12. Development of modelling algorithm of technological systems by statistical tests

    NASA Astrophysics Data System (ADS)

    Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.

    2018-03-01

    The paper tackles the problem of economically assessing design efficiency for various technological systems at the operation stage. The modelling algorithm for a technological system is implemented using statistical tests and, by taking the reliability index into account, allows the level of machinery technical excellence to be estimated and the efficiency of the design reliability to be judged against its performance. The economic feasibility of its application is determined on the basis of the service quality of the technological system, with further forecasting of the volumes and range of spare parts supply.

  13. Methodological difficulties of conducting agroecological studies from a statistical perspective

    USDA-ARS?s Scientific Manuscript database

    Statistical methods for analysing agroecological data might not be able to help agroecologists to solve all of the current problems concerning crop and animal husbandry, but such methods could well help agroecologists to assess, tackle, and resolve several agroecological issues in a more reliable an...

  14. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, more fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  15. Fatigue reliability of deck structures subjected to correlated crack growth

    NASA Astrophysics Data System (ADS)

    Feng, G. Q.; Garbatov, Y.; Guedes Soares, C.

    2013-12-01

    The objective of this work is to analyse the fatigue reliability of deck structures subjected to correlated crack growth. The stress intensity factors of the correlated cracks are obtained by finite element analysis, from which the geometry correction functions are derived. Monte Carlo simulations are applied to predict the statistical descriptors of the correlated cracks based on the Paris-Erdogan equation. A probabilistic model of crack growth as a function of time is used to analyse the fatigue reliability of deck structures accounting for the correlation of crack propagation. A deck structure is modelled as a series system of stiffened panels, where a stiffened panel is regarded as a parallel system composed of plates and longitudinal stiffeners. It is shown that the method developed here can be conveniently applied to the fatigue reliability assessment of structures subjected to correlated crack growth.
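
    A minimal Monte Carlo sketch of Paris-Erdogan crack growth, treating each crack independently (the paper's key point, the correlation between neighbouring cracks, is not modelled here). The geometry factor, stress range, Paris constants and scatter assumed below are all hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def grow_crack(a0, C, m, dsigma, Y=1.12, cycles=6_000_000, step=20_000):
        """Coarse explicit integration of da/dN = C * (dK)^m with a constant
        geometry factor Y, where dK = Y * dsigma * sqrt(pi * a)."""
        a = a0
        for _ in range(cycles // step):
            dK = Y * dsigma * np.sqrt(np.pi * a)   # stress-intensity range, MPa*sqrt(m)
            a += C * dK**m * step                  # crack increment over 'step' cycles
        return a

    # Monte Carlo over scattered initial crack size and Paris coefficient.
    n_samples, a_crit = 2000, 0.02                                       # critical size: 20 mm
    a0 = rng.lognormal(mean=np.log(0.5e-3), sigma=0.3, size=n_samples)   # ~0.5 mm initial cracks
    C  = rng.lognormal(mean=np.log(3.0e-12), sigma=0.2, size=n_samples)  # Paris coefficient scatter
    m, dsigma = 3.0, 80.0                                                # Paris exponent, stress range (MPa)

    a_final = np.array([grow_crack(a0[i], C[i], m, dsigma) for i in range(n_samples)])
    print("probability of exceeding critical crack size:", np.mean(a_final > a_crit))
    ```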

  16. Structural vulnerability assessment using reliability of slabs in avalanche area

    NASA Astrophysics Data System (ADS)

    Favier, Philomène; Bertrand, David; Eckert, Nicolas; Naaim, Mohamed

    2013-04-01

    Improvement of risk assessment or hazard zoning requires a better understanding of the physical vulnerability of structures. For a natural hazard such as snow avalanches, once the flow is characterized, focusing on the mechanical behaviour of the structure is a decisive step. A challenging approach is to quantify the physical vulnerability of impacted structures according to various avalanche loadings. The main objective of this presentation is to introduce the methodology and outcomes regarding the assessment of the vulnerability of reinforced concrete buildings using reliability methods. Reinforced concrete has been chosen as it is one of the usual materials used to build structures exposed to potential avalanche loadings. In avalanche blue zones, structures have to resist pressures of up to 30 kPa. Thus, by providing systematic fragility relations linked to the global failure of the structure, this method may support avalanche risk assessment. To do so, a slab was numerically designed, representing the avalanche-facing wall of a house. Different configurations of the element at stake have been treated to quantify numerical aspects of the problem, such as the boundary conditions or the mechanical behaviour of the structure. The structure is analysed according to four different limit states; semi-local and global failures are considered to describe the slab behaviour. The first state is attained when cracks appear in the tensile zone; the next two states are defined consistently with the Eurocode; and the final state is the total collapse of the structure, characterized by yield line theory. Failure probability is estimated in accordance with the reliability framework. Monte Carlo simulations were conducted to quantify the fragility under different loadings. The sensitivity of the models to the input distributions was assessed with statistical tools such as confidence intervals and Sobol indices. Conclusion and discussion of this work are established to

  17. Comparison of Hyperthermal Ground Laboratory Atomic Oxygen Erosion Yields With Those in Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Dill, Grace C.; Loftus, Ryan J.; deGroh, Kim K.; Miller, Sharon K.

    2013-01-01

    The atomic oxygen erosion yields of 26 materials (all polymers except for pyrolytic graphite) were measured in two directed hyperthermal radio frequency (RF) plasma ashers operating at 30 or 35 kHz with air. The hyperthermal asher results were compared with thermal energy asher results and low Earth orbital (LEO) results from the Materials International Space Station Experiment 2 and 7 (MISSE 2 and 7) flight experiments. The hyperthermal testing was conducted to a significant portion of the atomic oxygen fluence similar polymers were exposed to during the MISSE 2 and 7 missions. Comparison of the hyperthermal asher prediction of LEO erosion yields with thermal energy asher erosion yields indicates that except for the fluorocarbon polymers of PTFE and FEP, the hyperthermal energy ashers are a much more reliable predictor of LEO erosion yield than thermal energy asher testing, by a factor of four.

  18. The impact of Global Warming on global crop yields due to changes in pest pressure

    NASA Astrophysics Data System (ADS)

    Battisti, D. S.; Tewksbury, J. J.; Deutsch, C. A.

    2011-12-01

    A billion people currently lack reliable access to sufficient food and almost half of the calories feeding these people come from just three crops: rice, maize, wheat. Insect pests are among the largest factors affecting the yield of these three crops, but models assessing the effects of global warming on crops rarely consider changes in insect pest pressure on crop yields. We use well-established relationships between temperature and insect physiology to project climate-driven changes in pest pressure, defined as integrated population metabolism, for the three major crops. By the middle of this century, under most scenarios, insect pest pressure is projected to increase by more than 50% in temperate areas, while increases in tropical regions will be more modest. Yield relationships indicate that the largest increases in insect pest pressure are likely to occur in areas where yield is greatest, suggesting increased strain on global food markets.

  19. A comparative statistical study of long-term agroclimatic conditions affecting the growth of US winter wheat: Distributions of regional monthly average precipitation on the Great Plains and the state of Maryland and the effect of agroclimatic conditions on yield in the state of Kansas

    NASA Technical Reports Server (NTRS)

    Welker, J.

    1981-01-01

    A histogram analysis of average monthly precipitation over 30 and 84 year periods for both Maryland and Kansas was made and the results compared. A second analysis, a statistical assessment of the effect of average monthly precipitation on Kansas winter wheat yield was made. The data sets covered the three periods of 1941-1970, 1887-1970, and 1887-1921. Analyses of the limited data sets used (only the average monthly precipitation and temperature were correlated against yield) indicated that fall precipitation values, especially those of September and October, were more important to winter wheat yield than were spring values, particularly for the period 1941-1970.

  20. Field warming experiments shed light on the wheat yield response to temperature in China

    PubMed Central

    Zhao, Chuang; Piao, Shilong; Huang, Yao; Wang, Xuhui; Ciais, Philippe; Huang, Mengtian; Zeng, Zhenzhong; Peng, Shushi

    2016-01-01

    Wheat growth is sensitive to temperature, but the effect of future warming on yield is uncertain. Here, focusing on China, we compiled 46 observations of the sensitivity of wheat yield to temperature change (SY,T, yield change per °C) from field warming experiments and 102 SY,T estimates from local process-based and statistical models. The average SY,T from field warming experiments, local process-based models and statistical models is −0.7±7.8(±s.d.)% per °C, −5.7±6.5% per °C and 0.4±4.4% per °C, respectively. Moreover, SY,T differs across regions: warming experiments indicate positive SY,T values in regions where growing-season mean temperature is low and water supply is not limiting, and negative values elsewhere. Gridded crop model simulations from the Inter-Sectoral Impact Model Intercomparison Project appear to capture the spatial pattern of SY,T deduced from warming observations. These results from local manipulative experiments could be used to improve crop models in the future. PMID:27853151

  1. The Necessity of the Hippocampus for Statistical Learning

    PubMed Central

    Covington, Natalie V.; Brown-Schmidt, Sarah; Duff, Melissa C.

    2018-01-01

    Converging evidence points to a role for the hippocampus in statistical learning, but open questions about its necessity remain. Evidence for necessity comes from Schapiro and colleagues who report that a single patient with damage to hippocampus and broader medial temporal lobe cortex was unable to discriminate new from old sequences in several statistical learning tasks. The aim of the current study was to replicate these methods in a larger group of patients who have either damage localized to hippocampus or a broader medial temporal lobe damage, to ascertain the necessity of the hippocampus in statistical learning. Patients with hippocampal damage consistently showed less learning overall compared with healthy comparison participants, consistent with an emerging consensus for hippocampal contributions to statistical learning. Interestingly, lesion size did not reliably predict performance. However, patients with hippocampal damage were not uniformly at chance and demonstrated above-chance performance in some task variants. These results suggest that hippocampus is necessary for statistical learning levels achieved by most healthy comparison participants but significant hippocampal pathology alone does not abolish such learning. PMID:29308986

  2. Analysis of the Reliability and Validity of a Mentor's Assessment for Principal Internships

    ERIC Educational Resources Information Center

    Koonce, Glenn L.; Kelly, Michael D.

    2014-01-01

    In this study, researchers analyzed the reliability and validity of the mentor's assessment for principal internships at a university in the Southeast region of the United States. The results of the study indicated how trustworthy and dependable the instrument is and how effective the instrument is in the current principal preparation program.…

  3. Test-retest reliability of biodex system 4 pro for isometric ankle-eversion and -inversion measurement.

    PubMed

    Tankevicius, Gediminas; Lankaite, Doanata; Krisciunas, Aleksandras

    2013-08-01

    The lack of knowledge about isometric ankle testing indicates the need for research in this area. The objectives were to assess test-retest reliability and to determine the optimal position for isometric ankle-eversion and -inversion testing. Test-retest reliability study. Isometric ankle eversion and inversion were assessed in 3 different dynamometer foot-plate positions: 0°, 7°, and 14° of inversion. Two maximal repetitions were performed at each angle. Both limbs were tested (40 ankles in total). The test was performed 2 times with a period of 7 d between the tests. University hospital. The study was carried out on 20 healthy athletes with no history of ankle sprains. Reliability was assessed using the intraclass correlation coefficient (ICC2,1); minimal detectable change (MDC) was calculated using a 95% confidence interval. A paired t test was used to measure statistically significant changes, and P < .05 was considered statistically significant. Eversion and inversion peak torques showed high ICCs in all 3 angles (ICC values .87-.96, MDC values 3.09-6.81 Nm). Eversion peak torque was the smallest when testing at the 0° angle and gradually increased, reaching maximum values at the 14° angle. The increase of eversion peak torque was statistically significant at 7° and 14° of inversion. Inversion peak torque showed an opposite pattern: it was the smallest when measured at the 14° angle and increased at the other 2 angles; statistically significant changes were seen only between measures taken at 0° and 14°. Isometric eversion and inversion testing using the Biodex 4 Pro system is a reliable method. The authors suggest that the angle of 7° of inversion is the best for isometric eversion and inversion testing.
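
    A minimal sketch of the two statistics named above, the two-way random-effects intraclass correlation ICC(2,1) and the MDC95 derived from it, applied to a small invented subjects-by-sessions matrix of peak torques (the data and the pooled-SD shortcut for the standard error of measurement are assumptions for illustration).

# Test-retest ICC(2,1) and minimal detectable change (MDC95) from a
# subjects x sessions matrix of peak torques (values are hypothetical).
import numpy as np

def icc_2_1(data):
    n, k = data.shape                        # subjects, sessions
    grand = data.mean()
    ms_r = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # rows (subjects)
    ms_c = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # columns (sessions)
    sse = ((data - data.mean(axis=1, keepdims=True)
                 - data.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

torques = np.array([[24.1, 25.0], [30.2, 29.5], [18.7, 19.9],
                    [27.3, 26.8], [22.0, 23.4], [31.5, 30.9]])   # Nm, invented
icc = icc_2_1(torques)
sem = torques.std(ddof=1) * np.sqrt(1 - icc)    # pooled-SD approximation of SEM
mdc95 = 1.96 * np.sqrt(2) * sem
print(round(icc, 3), round(mdc95, 2))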

  4. Statistical study of the reliability of oxide-defined stripe cw lasers of (AlGa)As

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ettenberg, M.

    1979-03-01

    In this report, we describe a statistical study of the reliability of oxide-defined stripe-contact cw injection lasers of (AlGa)As. These devices have one facet coated with Al2O3 and one facet coated with an Al2O3/Si dichroic reflector; the lasers are optimized for cw low-threshold currents at room temperature, with values typically about 50 mA. Lifetests were carried out at 70 °C ambient, in the cw mode of operation with about 5 mW output. Previous lifetests showed that the degradation rate followed a 0.95-eV activation energy, so the 70 °C environment provides a degradation acceleration factor of 190 over that at room temperature. We have found that the device failures follow a log-normal distribution, characterized by a mean time before failure of 4200 h and a standard deviation of 1.3. This corresponds to a mean time to failure (MTTF) of 10^6 h at room temperature. Failure is defined here as the inability of the device to emit 1 mW of stimulated cw output at 70 °C, and assumes that optical feedback will be employed to adjust the laser current during operation. If a constant-current drive is envisioned, the failures for a 3-dB drop in light output also follow a log-normal distribution with a similar slope (standard deviation = 1.1) and a MTTF of 2000 h at 70 °C (500,000 h at room temperature). The failures were found to be mainly due to bulk gradual degradation and not facet or contact failure. Careful study of lasers before and after lifetest showed a significant increase in contact thermal resistance. However, this increase accounts for only a small portion of the nearly 70% increase in room-temperature cw threshold after failure at 70 °C. After failure at 70 °C, we also noted a degradation in the near-field and associated far-field pattern of the laser.
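
    The lifetest analysis described here, a log-normal fit to failure times plus an Arrhenius acceleration factor, can be sketched as follows; the failure times below are invented, and only the 0.95 eV activation energy and the roughly 190x acceleration from 70 °C to room temperature echo the abstract.

# Log-normal fit to accelerated lifetest data and projection to room temperature
# with an Arrhenius factor (failure times are illustrative, not the report's data).
import numpy as np

fail_hours_70C = np.array([2100., 2900., 3500., 4100., 4600., 5300., 6400., 8200.])
mu, sigma = np.log(fail_hours_70C).mean(), np.log(fail_hours_70C).std(ddof=1)

median_70C = np.exp(mu)                  # log-normal median life at 70 C
Ea, k = 0.95, 8.617e-5                   # activation energy (eV), Boltzmann constant (eV/K)
accel = np.exp(Ea / k * (1 / 295.0 - 1 / 343.15))   # ~190x from 70 C to ~22 C
print(round(median_70C), "h at 70 C ->",
      round(median_70C * accel / 1e6, 2), "million h at room temperature")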

  5. Statistical validity of using ratio variables in human kinetics research.

    PubMed

    Liu, Yuanlong; Schutz, Robert W

    2003-09-01

    The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and to examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. It was recommended that researchers compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.

  6. Estimation of reliability and dynamic property for polymeric material at high strain rate using SHPB technique and probability theory

    NASA Astrophysics Data System (ADS)

    Kim, Dong Hyeok; Lee, Ouk Sub; Kim, Hong Min; Choi, Hye Bin

    2008-11-01

    A modified Split Hopkinson Pressure Bar (SHPB) technique was used, with aluminum pressure bars and a pulse shaper, to achieve a closer impedance match between the pressure bars and specimen materials such as heat-degraded POM (polyoxymethylene) and PP (polypropylene). More distinguishable experimental signals were obtained, allowing a more accurate evaluation of the dynamic deformation behavior of the materials under a high strain rate loading condition. The pulse shaping technique is introduced to reduce non-equilibrium in the dynamic material response by modulating the incident wave during the short duration of the test; this increases the rise time of the incident pulse in the SHPB experiment. For the dynamic stress-strain curve obtained from the SHPB experiment, the Johnson-Cook model is applied as a constitutive equation, and the applicability of this constitutive equation is verified by using a probabilistic reliability estimation method. Two reliability methodologies, the FORM and the SORM, are employed. The limit state function (LSF) includes the Johnson-Cook model and the applied stresses, and it allows more statistical flexibility on the yield stress than a previously published paper. It is found that the failure probability estimated by using the SORM is more reliable than that of the FORM. It is also noted that the failure probability increases with increasing applied stress. Moreover, the sensitivity analysis shows that the Johnson-Cook parameters A and n and the applied stress affect the failure probability more severely than the other random variables.
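
    A crude Monte Carlo counterpart of the limit-state idea (not the paper's FORM/SORM procedure): failure is the event that the Johnson-Cook flow stress falls below the applied stress, and its probability is estimated by sampling hypothetical parameter distributions. All parameter values below are assumptions, and thermal softening is omitted.

# Monte Carlo estimate of P[g < 0] with g = Johnson-Cook flow stress - applied stress
# (simplified rate-dependent form; all distributions and values are hypothetical).
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
A = rng.normal(60.0, 4.0, N)               # MPa, yield-term parameter
B, n = 90.0, rng.normal(0.45, 0.03, N)     # hardening coefficient and exponent
C, eps, rate, rate0 = 0.05, 0.08, 1.0e3, 1.0

flow_stress = (A + B * eps ** n) * (1 + C * np.log(rate / rate0))
for applied in (80.0, 100.0, 120.0):       # MPa
    pf = (flow_stress < applied).mean()    # failure probability at this stress
    print(applied, round(pf, 4))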

  7. The inter and intra rater reliability of the Netball Movement Screening Tool.

    PubMed

    Reid, Duncan A; Vanweerd, Rebecca J; Larmer, Peter J; Kingstone, Rachel

    2015-05-01

    To establish the inter- and intra-rater reliability of the Netball Movement Screening Tool, for screening adolescent female netball players. Inter- and intra-rater reliability study. Forty secondary school netball players were recruited to take part in the study. Twenty subjects were screened simultaneously and independently by two raters to ascertain inter-rater agreement. Twenty subjects were scored by rater one on two occasions, separated by a week, to ascertain intra-rater agreement. Inter- and intra-rater agreement was assessed utilising the two-way mixed intraclass correlation coefficient and weighted kappa statistics. No significant demographic differences were found between the inter- and intra-rater groups of subjects. Intraclass correlation coefficients demonstrated excellent inter-rater (two-way mixed intraclass correlation coefficient 0.84, standard error of measurement 0.25) and intra-rater (two-way mixed intraclass correlation coefficient 0.96, standard error of measurement 0.13) reliability for the overall Netball Movement Screening Tool score, and substantial-to-excellent inter-rater (two-way mixed intraclass correlation coefficients 1.0-0.65) and substantial-to-excellent intra-rater (two-way mixed intraclass correlation coefficients 0.96-0.79) reliability for the component scores of the Netball Movement Screening Tool. The kappa statistic showed substantial to poor inter-rater (k=0.75-0.32) and intra-rater (k=0.77-0.27) agreement for individual tests of the NMST. The Netball Movement Screening Tool may be a reliable screening tool for adolescent netball players; however, the individual test scores have low reliability. The screening tool can be administered reliably by raters with similar levels of training in the tool but variable clinical experience. On-going research needs to be undertaken to ascertain whether the Netball Movement Screening Tool is a valid tool in ascertaining increased injury risk for netball players.

  8. Reliability, Validity, and Classification Accuracy of the DSM-5 Diagnostic Criteria for Gambling Disorder and Comparison to DSM-IV.

    PubMed

    Stinchfield, Randy; McCready, John; Turner, Nigel E; Jimenez-Murcia, Susana; Petry, Nancy M; Grant, Jon; Welte, John; Chapman, Heather; Winters, Ken C

    2016-09-01

    The DSM-5 was published in 2013 and included two substantive revisions for gambling disorder (GD). These changes are the reduction in the threshold from five to four criteria and elimination of the illegal activities criterion. The purpose of this study was twofold. First, to assess the reliability, validity and classification accuracy of the DSM-5 diagnostic criteria for GD. Second, to compare the DSM-5 and DSM-IV on reliability, validity, and classification accuracy, including an examination of the effect of the elimination of the illegal acts criterion on diagnostic accuracy. To compare DSM-5 and DSM-IV, eight datasets from three different countries (Canada, USA, and Spain; total N = 3247) were used. All datasets were based on similar research methods. Participants were recruited from outpatient gambling treatment services to represent the group with a GD and from the community to represent the group without a GD. All participants were administered a standardized measure of diagnostic criteria. The DSM-5 yielded satisfactory reliability, validity and classification accuracy. In comparing the DSM-5 to the DSM-IV, most comparisons of reliability, validity and classification accuracy showed more similarities than differences. There was evidence of modest improvements in classification accuracy for DSM-5 over DSM-IV, particularly in the reduction of false negative errors. This reduction in false negative errors was largely a function of lowering the cut score from five to four, and this revision is an improvement over DSM-IV. From a statistical standpoint, eliminating the illegal acts criterion did not make a significant impact on diagnostic accuracy. From a clinical standpoint, illegal acts can still be addressed in the context of the DSM-5 criterion of lying to others.

  9. Amplicon-based metagenomics identified candidate organisms in soils that caused yield decline in strawberry

    PubMed Central

    Xu, Xiangming; Passey, Thomas; Wei, Feng; Saville, Robert; Harrison, Richard J.

    2015-01-01

    A phenomenon of yield decline due to weak plant growth in strawberry was recently observed in non-chemo-fumigated soils, which was not associated with the soil fungal pathogen Verticillium dahliae, the main target of fumigation. Amplicon-based metagenomics was used to profile soil microbiota in order to identify microbial organisms that may have caused the yield decline. A total of 36 soil samples were obtained in 2013 and 2014 from four sites for metagenomic studies; two of the four sites had a yield-decline problem, the other two did not. More than 2000 fungal or bacterial operational taxonomy units (OTUs) were found in these samples. Relative abundance of individual OTUs was statistically compared for differences between samples from sites with or without yield decline. A total of 721 individual comparisons were statistically significant – involving 366 unique bacterial and 44 unique fungal OTUs. Based on further selection criteria, we focused on 34 bacterial and 17 fungal OTUs and found that yield decline resulted probably from one or more of the following four factors: (1) low abundance of Bacillus and Pseudomonas populations, which are well known for their ability of suppressing pathogen development and/or promoting plant growth; (2) lack of the nematophagous fungus (Paecilomyces species); (3) a high level of two non-specific fungal root rot pathogens; and (4) wet soil conditions. This study demonstrated the usefulness of an amplicon-based metagenomics approach to profile soil microbiota and to detect differential abundance in microbes. PMID:26504572

  10. Statistical Evaluation and Improvement of Methods for Combining Random and Harmonic Loads

    NASA Technical Reports Server (NTRS)

    Brown, A. M.; McGhee, D. S.

    2003-01-01

    Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This Technical Publication examines the cumulative distribution percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica to calculate the combined value corresponding to any desired percentile is then presented, along with a curve fit to this value. Another Excel macro that calculates the combination using Monte Carlo simulation is shown. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Additionally, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can substantially lower the design loading without losing any of the identified structural reliability.
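
    The Monte Carlo version of the combination is easy to sketch: sample a sine component with random phase, add Gaussian random vibration, and read off the desired percentile of the combined load. The amplitudes and the chosen percentile below are arbitrary illustrative values, not those of the Technical Publication.

# Monte Carlo combination of a sine load (random phase) and a Gaussian random
# load, reporting the load at a chosen high percentile (values are illustrative).
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
sine_amp, random_rms = 2.0, 1.0
combined = (sine_amp * np.sin(rng.uniform(0, 2 * np.pi, n))
            + rng.normal(0, random_rms, n))
print(np.percentile(combined, 99.87))   # roughly a "3-sigma equivalent" design value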

  11. Reliability of visual and instrumental color matching.

    PubMed

    Igiel, Christopher; Lehmann, Karl Martin; Ghinea, Razvan; Weyhrauch, Michael; Hangx, Ysbrand; Scheller, Herbert; Paravina, Rade D

    2017-09-01

    The aim of this investigation was to evaluate intra-rater and inter-rater reliability of visual and instrumental shade matching. Forty individuals with normal color perception participated in this study. The right maxillary central incisor of a teaching model was prepared and restored with 10 feldspathic all-ceramic crowns of different shades. A shade matching session consisted of the observer (rater) visually selecting the best match by using VITA classical A1-D4 (VC) and VITA Toothguide 3D Master (3D) shade guides and the VITA Easyshade Advance intraoral spectrophotometer (ES) to obtain both VC and 3D matches. Three shade matching sessions were held with 4 to 6 weeks between sessions. Intra-rater reliability was assessed based on the percentage of agreement for the three sessions for the same observer, whereas the inter-rater reliability was calculated as mean percentage of agreement between different observers. The Fleiss' Kappa statistical analysis was used to evaluate visual inter-rater reliability. The mean intra-rater reliability for the visual shade selection was 64(11) for VC and 48(10) for 3D. The corresponding ES values were 96(4) for both VC and 3D. The percentages of observers who matched the same shade with VC and 3D were 55(10) and 43(12), respectively, while corresponding ES values were 88(8) for VC and 92(4) for 3D. The results for visual shade matching exhibited a high to moderate level of inconsistency for both intra-rater and inter-rater comparisons. The VITA Easyshade Advance intraoral spectrophotometer exhibited significantly better reliability compared with visual shade selection. This study evaluates the ability of observers to consistently match the same shade visually and with a dental spectrophotometer in different sessions. The intra-rater and inter-rater reliability (agreement of repeated shade matching) of visual and instrumental tooth color matching strongly suggest the use of color matching instruments as a supplementary tool in

  12. Sediment yields of streams in the Umpqua River Basin, Oregon

    USGS Publications Warehouse

    Curtiss, D.A.

    1975-01-01

    This report summarizes sediment data collected at 11 sites in the Umpqua River basin from 1956 to 1973 and updates a report by C. A. Onions (1969) of estimated sediment yields in the basin from 1956-67. Onions' report points out that the suspended-sediment data, collected during the 1956-67 period, were insufficient to compute reliable sediment yields. Therefore, the U.S. Geological Survey, in cooperation with Douglas County, collected additional data from 1969 to 1973 to improve the water discharge-sediment discharge relationships at these sites. These data are published in "Water resources data for Oregon, Part 2, Water quality records," 1970 through 1973 water years. In addition to the 10 original sites, data were collected during this period from the Umpqua River near Elkton station, and a summary of the data for that station is included in table 1.

  13. Which statistics should tropical biologists learn?

    PubMed

    Loaiza Velásquez, Natalia; González Lutz, María Isabel; Monge-Nájera, Julián

    2011-09-01

    Tropical biologists study the richest and most endangered biodiversity on the planet, and in these times of climate change and mega-extinctions, the need for efficient, good quality research is more pressing than in the past. However, the statistical component in research published by tropical authors sometimes suffers from poor quality in data collection; mediocre or bad experimental design and a rigid and outdated view of data analysis. To suggest improvements in their statistical education, we listed all the statistical tests and other quantitative analyses used in two leading tropical journals, the Revista de Biología Tropical and Biotropica, during a year. The 12 most frequent tests in the articles were: Analysis of Variance (ANOVA), Chi-Square Test, Student's T Test, Linear Regression, Pearson's Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test, Shannon's Diversity Index, Tukey's Test, Cluster Analysis, Spearman's Rank Correlation Test and Principal Component Analysis. We conclude that statistical education for tropical biologists must abandon the old syllabus based on the mathematical side of statistics and concentrate on the correct selection of these and other procedures and tests, on their biological interpretation and on the use of reliable and friendly freeware. We think that their time will be better spent understanding and protecting tropical ecosystems than trying to learn the mathematical foundations of statistics: in most cases, a well designed one-semester course should be enough for their basic requirements.

  14. Between-Person and Within-Person Subscore Reliability: Comparison of Unidimensional and Multidimensional IRT Models

    ERIC Educational Resources Information Center

    Bulut, Okan

    2013-01-01

    The importance of subscores in educational and psychological assessments is undeniable. Subscores yield diagnostic information that can be used for determining how each examinee's abilities/skills vary over different content domains. One of the most common criticisms about reporting and using subscores is insufficient reliability of subscores.…

  15. Statistical properties of two sine waves in Gaussian noise.

    NASA Technical Reports Server (NTRS)

    Esposito, R.; Wilson, L. R.

    1973-01-01

    A detailed study is presented of some statistical properties of a stochastic process that consists of the sum of two sine waves of unknown relative phase and a normal process. Since none of the statistics investigated seem to yield a closed-form expression, all the derivations are cast in a form that is particularly suitable for machine computation. Specifically, results are presented for the probability density function (pdf) of the envelope and the instantaneous value, the moments of these distributions, and the relative cumulative density function (cdf).

  16. Reliability Generalization (RG) Analysis: The Test Is Not Reliable

    ERIC Educational Resources Information Center

    Warne, Russell

    2008-01-01

    Literature shows that most researchers are unaware of some of the characteristics of reliability. This paper clarifies some misconceptions by describing the procedures, benefits, and limitations of reliability generalization while using it to illustrate the nature of score reliability. Reliability generalization (RG) is a meta-analytic method…

  17. Reliability and diagnostic accuracy of history and physical examination for diagnosing glenoid labral tears.

    PubMed

    Walsworth, Matthew K; Doukas, William C; Murphy, Kevin P; Mielcarek, Billie J; Michener, Lori A

    2008-01-01

    Glenoid labral tears provide a diagnostic challenge. Combinations of items in the patient history and physical examination will provide stronger diagnostic accuracy to suggest the presence or absence of glenoid labral tear than will individual items. Cohort study (diagnosis); Level of evidence, 1. History and examination findings in patients with shoulder pain (N = 55) were compared with arthroscopic findings to determine diagnostic accuracy and intertester reliability. The intertester reliability of the crank, anterior slide, and active compression tests was 0.20 to 0.24. A combined history of popping or catching and positive crank or anterior slide results yielded specificities of 0.91 and 1.00 and positive likelihood ratios of 3.0 and infinity, respectively. A positive anterior slide result combined with either a positive active compression or crank result yielded specificities of 0.91 and positive likelihood ratios of 2.75 and 3.75, respectively. Requiring only a single positive finding in the combination of popping or catching and the anterior slide or crank yielded sensitivities of 0.82 and 0.89 and negative likelihood ratios of 0.31 and 0.33, respectively. The diagnostic accuracy of individual tests in previous studies is quite variable, which may be explained in part by the modest reliability of these tests. The combination of popping or catching with a positive crank or anterior slide result, or a positive anterior slide result with a positive active compression or crank test result, suggests the presence of a labral tear. The combined absence of popping or catching and a negative anterior slide or crank result suggests the absence of a labral tear.
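
    For reference, the accuracy statistics quoted above (sensitivity, specificity, and positive/negative likelihood ratios) come from a 2x2 table of test result versus arthroscopic finding; the sketch below computes them from invented counts, not the study's data.

# Sensitivity, specificity and likelihood ratios from a 2x2 table (counts invented).
def diagnostic_stats(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    lr_neg = (1 - sens) / spec
    return sens, spec, lr_pos, lr_neg

print(diagnostic_stats(tp=18, fp=3, fn=7, tn=27))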

  18. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    The aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Because of the diversity of engine failure modes, a single Weibull distribution model yields large errors. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, so it is a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation-coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimate more accurate, thus greatly improving the precision of the mixed-distribution reliability model. All of this helps to popularize the Weibull distribution model in engineering applications.
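
    As a stand-in for the mixed-distribution idea (not the paper's dynamic-weight or correlation-coefficient optimization method), the sketch below fits a two-component Weibull mixture to synthetic failure times by direct maximum likelihood.

# Maximum-likelihood fit of a two-component Weibull mixture to synthetic failure times.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(3)
t = np.concatenate([weibull_min.rvs(1.2, scale=500, size=150, random_state=rng),
                    weibull_min.rvs(3.5, scale=2000, size=150, random_state=rng)])

def nll(p):
    w = 1 / (1 + np.exp(-p[0]))                   # mixing weight in (0, 1)
    k1, s1, k2, s2 = np.exp(p[1:])                # shapes and scales > 0
    pdf = (w * weibull_min.pdf(t, k1, scale=s1)
           + (1 - w) * weibull_min.pdf(t, k2, scale=s2))
    return -np.log(pdf + 1e-300).sum()

res = minimize(nll, x0=[0.0, 0.0, np.log(800), 1.0, np.log(1500)],
               method="Nelder-Mead", options={"maxiter": 20000})
w = 1 / (1 + np.exp(-res.x[0]))
print(round(w, 2), np.exp(res.x[1:]).round(1))    # weight, [shape1, scale1, shape2, scale2]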

  19. Reliability study of refractory gate gallium arsenide MESFETS

    NASA Technical Reports Server (NTRS)

    Yin, J. C. W.; Portnoy, W. M.

    1981-01-01

    Refractory gate MESFET's were fabricated as an alternative to aluminum gate devices, which have been found to be unreliable as RF power amplifiers. In order to determine the reliability of the new structures, statistics of failure and information about mechanisms of failure in refractory gate MESFET's are given. Test transistors were stressed under conditions of high temperature and forward gate current to enhance failure. Results of work at 150 C and 275 C are reported.

  20. Reliability study of refractory gate gallium arsenide MESFETS

    NASA Astrophysics Data System (ADS)

    Yin, J. C. W.; Portnoy, W. M.

    Refractory gate MESFET's were fabricated as an alternative to aluminum gate devices, which have been found to be unreliable as RF power amplifiers. In order to determine the reliability of the new structures, statistics of failure and information about mechanisms of failure in refractory gate MESFET's are given. Test transistors were stressed under conditions of high temperature and forward gate current to enhance failure. Results of work at 150 C and 275 C are reported.

  1. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    PubMed

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.
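
    As a simple baseline for the count-regression setting discussed here (and emphatically not the paper's max-margin Bayesian model), the sketch below fits an ordinary Poisson regression of defect counts on a few synthetic software metrics using scikit-learn.

# Baseline Poisson regression of defect counts on software metrics (synthetic data).
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))                       # e.g., size, complexity, churn
true_rate = np.exp(0.3 + 0.8 * X[:, 0] + 0.4 * X[:, 1])
y = rng.poisson(true_rate)                          # observed defect counts

model = PoissonRegressor(alpha=1e-3).fit(X, y)
print(round(model.intercept_, 2), model.coef_.round(2))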

  2. The Brandeis Dice Problem and Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    van Enk, Steven J.

    2014-11-01

    Jaynes invented the Brandeis Dice Problem as a simple illustration of the MaxEnt (Maximum Entropy) procedure that he had demonstrated to work so well in Statistical Mechanics. I construct here two alternative solutions to his toy problem. One, like Jaynes' solution, uses MaxEnt and yields an analog of the canonical ensemble, but at a different level of description. The other uses Bayesian updating and yields an analog of the micro-canonical ensemble. Both, unlike Jaynes' solution, yield error bars, whose operational merits I discuss. These two alternative solutions are not equivalent for the original Brandeis Dice Problem, but become so in what must, therefore, count as the analog of the thermodynamic limit, M-sided dice with M → ∞. Whereas the mathematical analogies between the dice problem and Stat Mech are quite close, there are physical properties that the former lacks but that are crucial to the workings of the latter. Stat Mech is more than just MaxEnt.
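
    The MaxEnt solution Jaynes obtained can be reproduced numerically: maximize entropy over the six faces subject to a mean of 4.5, which reduces to a one-dimensional solve for the Lagrange multiplier. The sketch below is a generic illustration of that calculation, not Jaynes' or van Enk's derivation.

# MaxEnt distribution over die faces 1..6 with mean constrained to 4.5.
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

def mean_given_lambda(lam):
    w = np.exp(-lam * faces)
    return (faces * w).sum() / w.sum()

lam = brentq(lambda l: mean_given_lambda(l) - 4.5, -5, 5)
p = np.exp(-lam * faces)
p /= p.sum()
print(p.round(4))   # Jaynes' result: roughly [0.054, 0.079, 0.114, 0.165, 0.240, 0.347]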

  3. A critique of Rasch residual fit statistics.

    PubMed

    Karabatsos, G

    2000-01-01

    In test analysis involving the Rasch model, a large degree of importance is placed on the "objective" measurement of individual abilities and item difficulties. The degree to which the objectivity properties are attained, of course, depends on the degree to which the data fit the Rasch model. It is therefore important to utilize fit statistics that accurately and reliably detect the person-item response inconsistencies that threaten the measurement objectivity of persons and items. Given this argument, it is somewhat surprising that there is far more emphasis placed in the objective measurement of person and items than there is in the measurement quality of Rasch fit statistics. This paper provides a critical analysis of the residual fit statistics of the Rasch model, arguably the most often used fit statistics, in an effort to illustrate that the task of Rasch fit analysis is not as simple and straightforward as it appears to be. The faulty statistical properties of the residual fit statistics do not allow either a convenient or a straightforward approach to Rasch fit analysis. For instance, given a residual fit statistic, the use of a single minimum critical value for misfit diagnosis across different testing situations, where the situations vary in sample and test properties, leads to both the overdetection and underdetection of misfit. To improve this situation, it is argued that psychometricians need to implement residual-free Rasch fit statistics that are based on the number of Guttman response errors, or use indices that are statistically optimal in detecting measurement disturbances.

  4. Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial

    PubMed Central

    Hallgren, Kevin A.

    2012-01-01

    Many research designs require the assessment of inter-rater reliability (IRR) to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly-used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen’s kappa and intra-class correlations to assess IRR. PMID:22833776
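
    The paper provides SPSS and R syntax for Cohen's kappa and intraclass correlations; an equivalent minimal computation of Cohen's kappa in Python, with made-up ratings from two coders, looks like this.

# Cohen's kappa for two raters (ratings are invented for illustration).
from sklearn.metrics import cohen_kappa_score

rater1 = [1, 0, 2, 1, 1, 0, 2, 2, 1, 0, 1, 2]
rater2 = [1, 0, 2, 1, 0, 0, 2, 1, 1, 0, 1, 2]
print(round(cohen_kappa_score(rater1, rater2), 3))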

  5. Declining water yield from forested mountain watersheds in response to climate change and forest mesophication

    Treesearch

    Peter V. Caldwell; Chelcy F. Miniat; Katherine J. Elliott; Wayne. T. Swank; Steven T. Brantley; Stephanie H. Laseter

    2016-01-01

    Climate change and forest disturbances are threatening the ability of forested mountain watersheds to provide the clean, reliable, and abundant fresh water necessary to support aquatic ecosystems and a growing human population. Here we used 76 years of water yield, climate, and field plot vegetation measurements in six unmanaged, reference watersheds in the southern...

  6. Relationships of concentrations of certain blood constituents with milk yield and age of cows in dairy herds.

    PubMed

    Kitchenham, B A; Rowlands, G J; Shorbagi, H

    1975-05-01

    Regression analyses were performed on data from 48 Compton metabolic profile tests relating the concentrations of certain constituents in the blood of dairy cows to their milk yield, age and stage of lactation. The common partial regression coefficients for milk yield, age and stage of lactation were estimated for each blood constituent. The relationships of greatest statistical significance were between the concentrations of inorganic phosphate and globulin and age, and the concentration of albumin and milk yield.

  7. Improving the Validity and Reliability of a Health Promotion Survey for Physical Therapists

    PubMed Central

    Stephens, Jaca L.; Lowman, John D.; Graham, Cecilia L.; Morris, David M.; Kohler, Connie L.; Waugh, Jonathan B.

    2013-01-01

    Purpose Physical therapists (PTs) have a unique opportunity to intervene in the area of health promotion. However, no instrument has been validated to measure PTs’ views on health promotion in physical therapy practice. The purpose of this study was to evaluate the content validity and test-retest reliability of a health promotion survey designed for PTs. Methods An expert panel of PTs assessed the content validity of “The Role of Health Promotion in Physical Therapy Survey” and provided suggestions for revision. Item content validity was assessed using the content validity ratio (CVR) as well as the modified kappa statistic. Therapists then participated in the test-retest reliability assessment of the revised health promotion survey, which was assessed using a weighted kappa statistic. Results Based on feedback from the expert panelists, significant revisions were made to the original survey. The expert panel reached at least a majority consensus agreement for all items in the revised survey and the survey-CVR improved from 0.44 to 0.66. Only one item on the revised survey had substantial test-retest agreement, with 55% of the items having moderate agreement and 43% poor agreement. Conclusions All items on the revised health promotion survey demonstrated at least fair validity, but few items had reasonable test-retest reliability. Further modifications should be made to strengthen the validity and improve the reliability of this survey. PMID:23754935
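
    The item-level content validity ratio mentioned above is conventionally computed with Lawshe's formula from the number of panelists who rate an item "essential"; the example values below are hypothetical, not the survey's data.

# Lawshe's content validity ratio for a single survey item (numbers invented).
def cvr(n_essential, n_panelists):
    half = n_panelists / 2
    return (n_essential - half) / half

print(cvr(8, 10))   # 0.6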

  8. Specific yield: compilation of specific yields for various materials

    USGS Publications Warehouse

    Johnson, A.I.

    1967-01-01

    Specific yield is defined as the ratio of (1) the volume of water that a saturated rock or soil will yield by gravity to (2) the total volume of the rock or soil. Specific yield is usually expressed as a percentage. The value is not definitive, because the quantity of water that will drain by gravity depends on variables such as duration of drainage, temperature, mineral composition of the water, and various physical characteristics of the rock or soil under consideration. Values of specific yield nevertheless offer a convenient means by which hydrologists can estimate the water-yielding capacities of earth materials and, as such, are very useful in hydrologic studies. The present report consists mostly of direct or modified quotations from many selected reports that present and evaluate methods for determining specific yield, limitations of those methods, and results of the determinations made on a wide variety of rock and soil materials. Although no particular values are recommended in this report, a table summarizes values of specific yield, and their averages, determined for 10 rock textures. The following is an abstract of the table.

  9. Yield and yield gaps in central U.S. corn production systems

    USDA-ARS?s Scientific Manuscript database

    The magnitude of yield gaps (YG) (potential yield – farmer yield) provides some indication of the prospects for increasing crop yield. Quantile regression analysis was applied to county maize (Zea mays L.) yields (1972 – 2011) from Kentucky, Iowa and Nebraska (irrigated) (total of 115 counties) to e...

  10. Reliability and validity of Champion's Health Belief Model Scale for breast cancer screening among Malaysian women.

    PubMed

    Parsa, P; Kandiah, M; Mohd Nasir, M T; Hejar, A R; Nor Afiah, M Z

    2008-11-01

    Breast cancer is the leading cause of cancer deaths in Malaysian women, and the use of breast self-examination (BSE), clinical breast examination (CBE) and mammography remain low in Malaysia. Therefore, there is a need to develop a valid and reliable tool to measure the beliefs that influence breast cancer screening practices. The Champion's Health Belief Model Scale (CHBMS) is a valid and reliable tool to measure beliefs about breast cancer and screening methods in the Western culture. The purpose of this study was to translate the use of CHBMS into the Malaysian context and validate the scale among Malaysian women. A random sample of 425 women teachers was taken from 24 secondary schools in Selangor state, Malaysia. The CHBMS was translated into the Malay language, validated by an expert's panel, back translated, and pretested. Analyses included descriptive statistics of all the study variables, reliability estimates, and construct validity using factor analysis. The mean age of the respondents was 37.2 (standard deviation 7.1) years. Factor analysis yielded ten factors for BSE with eigenvalue greater than 1 (four factors more than the original): confidence 1 (ability to differentiate normal and abnormal changes in the breasts), barriers to BSE, susceptibility for breast cancer, benefits of BSE, health motivation 1 (general health), seriousness 1 (fear of breast cancer), confidence 2 (ability to detect size of lumps), seriousness 2 (fear of long-term effects of breast cancer), health motivation 2 (preventive health practice), and confidence 3 (ability to perform BSE correctly). For CBE and mammography scales, seven factors each were identified. Factors for CBE scale include susceptibility, health motivation 1, benefits of CBE, seriousness 1, barriers of CBE, seriousness 2 and health motivation 2. For mammography the scale includes benefits of mammography, susceptibility, health motivation 1, seriousness 1, barriers to mammography seriousness 2 and health

  11. Statistical trend analysis and extreme distribution of significant wave height from 1958 to 1999 - an application to the Italian Seas

    NASA Astrophysics Data System (ADS)

    Martucci, G.; Carniel, S.; Chiggiato, J.; Sclavo, M.; Lionello, P.; Galati, M. B.

    2010-06-01

    The study is a statistical analysis of sea state time series derived using the wave model WAM forced by the ERA-40 dataset in selected areas near the Italian coasts. For the period 1 January 1958 to 31 December 1999 the analysis yields: (i) the existence of a negative trend in the annual- and winter-averaged sea state heights; (ii) the existence of a turning point in the late 1980s in the annual-averaged trend of sea state heights at a site in the Northern Adriatic Sea; (iii) the overall absence of a significant trend in the annual-averaged mean durations of sea states over thresholds; (iv) the assessment of the extreme values on a time scale of a thousand years. The analysis uses two methods to obtain samples of extremes from the independent sea states: the r-largest annual maxima and the peak-over-threshold. The two methods show statistical differences in retrieving the return values and more generally in describing the significant wave field. The r-largest annual maxima method provides more reliable predictions of the extreme values, especially for small return periods (<100 years). Finally, the study statistically proves the existence of decadal negative trends in the significant wave heights and by this it conveys useful information on the wave climatology of the Italian seas during the second half of the 20th century.
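
    A generic peak-over-threshold calculation of the kind referred to above fits a generalized Pareto distribution to threshold exceedances and converts it into an N-year return level; the sketch below uses synthetic exceedances rather than the WAM/ERA-40 series and assumes a non-zero shape parameter.

# Peak-over-threshold sketch: GPD fit to exceedances and an N-year return level.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)
threshold, years = 4.0, 42                            # metres, record length (invented)
excess = genpareto.rvs(c=-0.1, scale=0.8, size=180, random_state=rng)
xi, _, sigma = genpareto.fit(excess, floc=0.0)        # shape, (loc fixed at 0), scale
rate = excess.size / years                            # exceedances per year

def return_level(T):
    # Standard POT return-level formula, valid for xi != 0
    return threshold + sigma / xi * ((rate * T) ** xi - 1.0)

print(round(return_level(100), 2), "m (100-year value)")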

  12. Product manufacturing, quality, and reliability initiatives to maintain a competitive advantage and meet customer expectations in the semiconductor industry

    NASA Astrophysics Data System (ADS)

    Capps, Gregory

    Semiconductor products are manufactured and consumed across the world. The semiconductor industry is constantly striving to manufacture products with greater performance, improved efficiency, less energy consumption, smaller feature sizes, thinner gate oxides, and faster speeds. Customers have pushed towards zero defects and require a more reliable, higher quality product than ever before. Manufacturers are required to improve yields, reduce operating costs, and increase revenue to maintain a competitive advantage. Opportunities exist for integrated circuit (IC) customers and manufacturers to work together and independently to reduce costs, eliminate waste, reduce defects, reduce warranty returns, and improve quality. This project focuses on electrical over-stress (EOS) and re-test okay (RTOK), two top failure return mechanisms, both of which present great defect-reduction opportunities in the customer-manufacturer relationship. Proactive continuous improvement initiatives and methodologies are addressed with emphasis on product life cycle, manufacturing processes, test, statistical process control (SPC), industry best practices, customer education, and customer-manufacturer interaction.

  13. Evaluating test-retest reliability in patient-reported outcome measures for older people: A systematic review.

    PubMed

    Park, Myung Sook; Kang, Kyung Ja; Jang, Sun Joo; Lee, Joo Yun; Chang, Sun Ju

    2018-03-01

    This study aimed to evaluate the components of test-retest reliability, including time interval, sample size, and statistical methods, used in patient-reported outcome measures in older people and to provide suggestions on the methodology for calculating test-retest reliability for patient-reported outcomes in older people. This was a systematic literature review. MEDLINE, Embase, CINAHL, and PsycINFO were searched from January 1, 2000 to August 10, 2017 by an information specialist. This systematic review was guided by both the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist and the guideline for systematic review published by the National Evidence-based Healthcare Collaborating Agency in Korea. The methodological quality was assessed by the Consensus-based Standards for the selection of health Measurement Instruments checklist box B. Ninety-five out of 12,641 studies were selected for the analysis. The median time interval for test-retest reliability was 14 days, and the ratio of sample size for test-retest reliability to the number of items in each measure ranged from 1:1 to 1:4. The most frequently used statistical method for continuous scores was the intraclass correlation coefficient (ICC). Among the 63 studies that used ICCs, 21 studies presented models for ICC calculations and 30 studies reported 95% confidence intervals of the ICCs. Additional analyses using 17 studies that reported a strong ICC (>0.9) showed that the mean time interval was 12.88 days and the mean ratio of the number of items to sample size was 1:5.37. When researchers plan to assess the test-retest reliability of patient-reported outcome measures for older people, they need to consider an adequate time interval of approximately 13 days and a sample size of about 5 times the number of items. In particular, statistical methods should not only be selected based on the types of scores of the patient-reported outcome measures, but should also be described clearly in

  14. Reliability of horizontal and vertical tube shift techniques in the localisation of supernumerary teeth.

    PubMed

    Mallineni, S K; Anthonappa, R P; King, N M

    2016-12-01

    To assess the reliability of the vertical tube shift technique (VTST) and horizontal tube shift technique (HTST) for the localisation of unerupted supernumerary teeth (ST) in the anterior region of the maxilla. A convenience sample of 83 patients who attended a major teaching hospital because of unerupted ST was selected. Only non-syndromic patients with ST and who had complete clinical and radiographic and surgical records were included in the study. Ten examiners independently rated the paired set of radiographs for each technique. Chi-square test, paired t test and kappa statistics were employed to assess the intra- and inter-examiner reliability. Paired sets of 1660 radiographs (830 pairs for each technique) were available for the analysis. The overall sensitivity for VTST and HTST was 80.6 and 72.1% respectively, with slight inter-examiner and good intra-examiner reliability. Statistically significant differences were evident between the two localisation techniques (p < 0.05). Localisation of unerupted ST using VTST was more successful than HTST in the anterior region of the maxilla.

  15. Reliability, reference values and predictor variables of the ulnar sensory nerve in disease free adults.

    PubMed

    Ruediger, T M; Allison, S C; Moore, J M; Wainner, R S

    2014-09-01

    The purposes of this descriptive and exploratory study were to examine electrophysiological measures of ulnar sensory nerve function in disease free adults to determine reliability, determine reference values computed with appropriate statistical methods, and examine predictive ability of anthropometric variables. Antidromic sensory nerve conduction studies of the ulnar nerve using surface electrodes were performed on 100 volunteers. Reference values were computed from optimally transformed data. Reliability was computed from 30 subjects. Multiple linear regression models were constructed from four predictor variables. Reliability was greater than 0.85 for all paired measures. Responses were elicited in all subjects; reference values for sensory nerve action potential (SNAP) amplitude from above elbow stimulation are 3.3 μV and decrement across-elbow less than 46%. No single predictor variable accounted for more than 15% of the variance in the response. Electrophysiologic measures of the ulnar sensory nerve are reliable. Absent SNAP responses are inconsistent with disease free individuals. Reference values recommended in this report are based on appropriate transformations of non-normally distributed data. No strong statistical model of prediction could be derived from the limited set of predictor variables. Reliability analyses combined with relatively low level of measurement error suggest that ulnar sensory reference values may be used with confidence. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  16. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature attributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
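
    A minimal illustration of the NRMSE metric used above; here the error is normalized by plant capacity (51 kW), which is one common convention and is an assumption, and the forecast and observed values are invented.

# Normalized root-mean-squared error of forecast vs. observed power (data invented).
import numpy as np

observed = np.array([10.2, 22.5, 35.1, 40.8, 30.3, 12.6])   # kW
forecast = np.array([12.0, 20.1, 37.4, 38.2, 33.0, 10.9])   # kW
nrmse = np.sqrt(np.mean((forecast - observed) ** 2)) / 51.0  # normalized by capacity
print(round(100 * nrmse, 2), "%")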

  17. Monitoring Crop Yield in USA Using a Satellite-Based Climate-Variability Impact Index

    NASA Technical Reports Server (NTRS)

    Zhang, Ping; Anderson, Bruce; Tan, Bin; Barlow, Mathew; Myneni, Ranga

    2011-01-01

    A quantitative index is applied to monitor crop growth and predict agricultural yield in continental USA. The Climate-Variability Impact Index (CVII), defined as the monthly contribution to overall anomalies in growth during a given year, is derived from 1-km MODIS Leaf Area Index. The growing-season integrated CVII can provide an estimate of the fractional change in overall growth during a given year. In turn these estimates can provide fine-scale and aggregated information on yield for various crops. Trained from historical records of crop production, a statistical model is used to produce crop yield during the growing season based upon the strong positive relationship between crop yield and the CVII. By examining the model prediction as a function of time, it is possible to determine when the in-season predictive capability plateaus and which months provide the greatest predictive capacity.

  18. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrodes and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of their reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.

  19. [Reliability and validity analysis of simplified Chinese version of QOL questionnaire of olfactory disorders].

    PubMed

    Jin, X F; Wang, J; Li, Y J; Liu, J F; Ni, D F

    2016-09-20

    Objective: To cross-culturally translate the questionnaire of olfactory disorders (QOD) into a simplified Chinese version, and to evaluate its reliability and validity in the clinic. Method: The simplified Chinese version of the QOD was evaluated for test-retest reliability, split-half reliability and internal consistency. It was then evaluated for validity, including content validity, criterion-related validity, and responsiveness. Criterion-related validity was assessed using the Medical Outcomes Study 36-item Short Form Health Survey (SF-36) and the World Health Organization Quality of Life-Brief (WHOQOL-BREF) for comparison. Result: A total of 239 patients with olfactory dysfunction were enrolled and tested, of whom 195 patients completed all three surveys (QOD, SF-36, WHOQOL-BREF). The test-retest reliabilities of the QOD-parosmia statements (QOD-P), QOD-quality of life (QOD-QoL), and QOD-visual analogue scale (QOD-VAS) sections were 0.799 (P<0.01), 0.781 (P<0.01), and 0.488 (P<0.01), respectively, and the Cronbach's α coefficients of reliability were 0.477, 0.812, and 0.889, respectively. The split-half reliability of the QOD-QoL was 0.89. There was no correlation between the QOD-P section and the SF-36, but there were statistically significant correlations between the QOD-QoL and QOD-VAS sections and the SF-36. There was no correlation between the QOD-P section and the WHOQOL-BREF, but there were statistically significant correlations between the QOD-QoL and QOD-VAS sections and the WHOQOL-BREF in most sections. Conclusion: The simplified Chinese version of the QOD was shown to be a reliable and valid questionnaire for evaluating patients with olfactory dysfunction living in mainland China. The QOD-P section needs further modification to properly adapt to patients with a Chinese cultural and knowledge background. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.

  20. Sample-Averaged Biexciton Quantum Yield Measured by Solution-Phase Photon Correlation

    DOE PAGES

    Beyler, Andrew P.; Bischof, Thomas S.; Cui, Jian; ...

    2014-11-19

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high excitation flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. In this study, we applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals.

  1. Reliability of reflectance measures in passive filters

    NASA Astrophysics Data System (ADS)

    Saldiva de André, Carmen Diva; Afonso de André, Paulo; Rocha, Francisco Marcelo; Saldiva, Paulo Hilário Nascimento; Carvalho de Oliveira, Regiani; Singer, Julio M.

    2014-08-01

    Measurements of optical reflectance in passive filters impregnated with a reactive chemical solution may be transformed to ozone concentrations via a calibration curve and constitute a low cost alternative for environmental monitoring, mainly to estimate human exposure. Given the possibility of errors caused by exposure bias, it is common to consider sets of m filters exposed during a certain period to estimate the latent reflectance on n different sample occasions at a certain location. Mixed models with sample occasions as random effects are useful to analyze data obtained under such setups. The intra-class correlation coefficient of the mean of the m measurements is an indicator of the reliability of the latent reflectance estimates. Our objective is to determine m in order to obtain a pre-specified reliability of the estimates, taking possible outliers into account. To illustrate the procedure, we consider an experiment conducted at the Laboratory of Experimental Air Pollution, University of São Paulo, Brazil (LPAE/FMUSP), where sets of m = 3 filters were exposed during 7 days on n = 9 different occasions at a certain location. The results show that the reliability of the latent reflectance estimates for each occasion obtained under homoskedasticity is km = 0.74. A residual analysis suggests that the within-occasion variance for two of the occasions should be different from the others. A refined model with two within-occasion variance components was considered, yielding km = 0.56 for these occasions and km = 0.87 for the remaining ones. To guarantee that all estimates have a reliability of at least 80% we require measurements on m = 10 filters on each occasion.
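    The reliability of the mean of m replicate filters follows from the single-filter intra-class correlation via the Spearman-Brown relation, which can also be inverted to find the m needed for a target reliability. The sketch below illustrates that calculation using the km values quoted in the abstract to back out single-filter ICCs; treating the abstract's figures this way is an assumption made only for illustration.

```python
import math

def reliability_of_mean(rho, m):
    """Spearman-Brown: reliability of the average of m exchangeable measurements."""
    return m * rho / (1.0 + (m - 1) * rho)

def filters_needed(rho, target):
    """Smallest m whose averaged reliability reaches the target."""
    m = target * (1.0 - rho) / (rho * (1.0 - target))
    return math.ceil(m)

# Back out single-filter ICCs from the abstract's k_m values for m = 3 filters:
# k_m = 0.74 under homoskedasticity, and 0.56 for the two noisier occasions.
for km in (0.74, 0.56):
    rho = km / (3.0 - 2.0 * km)          # invert Spearman-Brown for m = 3
    print(f"k_m = {km}: single-filter ICC ~ {rho:.2f}, "
          f"m for 80% reliability = {filters_needed(rho, 0.80)}")
```

    With these numbers the noisier occasions require m = 10 filters to reach 80% reliability, consistent with the recommendation in the abstract.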

  2. Receptor arrays optimized for natural odor statistics.

    PubMed

    Zwicker, David; Murugan, Arvind; Brenner, Michael P

    2016-05-17

    Natural odors typically consist of many molecules at different concentrations. It is unclear how the numerous odorant molecules and their possible mixtures are discriminated by relatively few olfactory receptors. Using an information theoretic model, we show that a receptor array is optimal for this task if it achieves two possibly conflicting goals: (i) Each receptor should respond to half of all odors and (ii) the response of different receptors should be uncorrelated when averaged over odors presented with natural statistics. We use these design principles to predict statistics of the affinities between receptors and odorant molecules for a broad class of odor statistics. We also show that optimal receptor arrays can be tuned to either resolve concentrations well or distinguish mixtures reliably. Finally, we use our results to predict properties of experimentally measured receptor arrays. Our work can thus be used to better understand natural olfaction, and it also suggests ways to improve artificial sensor arrays.

  3. The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Finelli, George B.

    1991-01-01

    This paper affirms that the quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The classical methods of estimating reliability are shown to lead to exorbitant amounts of testing when applied to life-critical software. Reliability growth models are examined and also shown to be incapable of overcoming the need for excessive amounts of testing. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultrareliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multiversion software experiments support this affirmation.

  4. Satellite techniques yield insight into devastating rainfall from Hurricane Mitch

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Vicente, G.; Ba, M.; Gruber, A.; Scofield, R.; Li, Q.; Weldon, R.

    Hurricane Mitch may prove to be one of the most devastating tropical cyclones to affect the western hemisphere. Heavy rains over Central America from October 28, 1998, to November 1, 1998, caused widespread flooding and mud slides in Nicaragua and Honduras resulting in thousands of deaths and missing persons. News reports indicated entire towns being swept away, destruction of national economies and infrastructure, and widespread disease in the aftermath of the storm, which some estimates suggested dropped as much as 1300 mm of rain. However, in view of the widespread damage it is difficult to determine the actual amounts and distribution of rainfall. More accurate means of determining the rainfall associated with Mitch are vital for diagnosing and understanding the evolution of this disaster and for developing new mitigation strategies for future tropical cyclones. Satellite data may prove to be a reliable resource for accurate rainfall analysis and have yielded apparently reliable figures for Hurricane Mitch.

  5. Differences, or lack thereof, in wheat and maize yields under three low-warming scenarios

    NASA Astrophysics Data System (ADS)

    Tebaldi, Claudia; Lobell, David

    2018-06-01

    The availability of climate model experiments under three alternative scenarios stabilizing at warming targets inspired by the COP21 agreements (a 1.5 °C not-to-exceed scenario, a 1.5 °C scenario with overshoot, and a 2.0 °C scenario) makes it possible to assess future expected changes in global yields for two staple crops, wheat and maize. In this study, an empirical model of the relation between crop yield anomalies and temperature and precipitation changes, with or without the inclusion of CO2 fertilization effects, is used to produce ensembles of time series of yield outcomes on a yearly basis over the course of the 21st century for each scenario. The 21st century is divided into 10-year windows starting from 2020, within which the statistical significance and the magnitude of the differences in yield changes between pairs of scenarios are assessed, thus evaluating if and when the benefits of mitigation appear, and how substantial they are. Additionally, a metric of extreme heat tailored to the individual crops (number of days during the growing season above a crop-specific threshold) is used to measure exposure to harmful temperatures under the different scenarios. If CO2 effects are not included, statistically significant differences in yields of both crops appear as early as the 2030s, but the magnitude of the differences remains below 3% of the historical baseline in all cases until the second part of the century. In the later decades of the 21st century, differences remain small and eventually stop being statistically significant between the two scenarios stabilizing at 1.5 °C, while differences between these two lower scenarios and the 2.0 °C scenario grow to about 4%. The inclusion of CO2 effects erases all significant benefits of mitigation for wheat, while the significance of differences is maintained for maize yields between the higher and the two lower scenarios, albeit with smaller benefits in magnitude. Changes in extremes are significant within each of the scenarios

  6. Stress Rupture Life Reliability Measures for Composite Overwrapped Pressure Vessels

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Thesken, John C.; Phoenix, S. Leigh; Grimes-Ledesma, Lorie

    2007-01-01

    Composite Overwrapped Pressure Vessels (COPVs) are often used for storing pressurant gases onboard spacecraft. Kevlar (DuPont), glass, carbon and other more recent fibers have all been used as overwraps. Due to the fact that overwraps are subjected to sustained loads for an extended period during a mission, stress rupture failure is a major concern. It is therefore important to ascertain the reliability of these vessels by analysis, since the testing of each flight design cannot be completed on a practical time scale. The present paper examines specifically a Weibull statistics based stress rupture model and considers the various uncertainties associated with the model parameters. The paper also examines several reliability estimate measures that would be of use for the purpose of recertification and for qualifying flight worthiness of these vessels. Specifically, deterministic values for a point estimate, mean estimate and 90/95 percent confidence estimates of the reliability are all examined for a typical flight quality vessel under constant stress. The mean and the 90/95 percent confidence estimates are computed using Monte-Carlo simulation techniques by assuming distribution statistics of model parameters based also on simulation and on the available data, especially the sample sizes represented in the data. The data for the stress rupture model are obtained from the Lawrence Livermore National Laboratories (LLNL) stress rupture testing program, carried out for the past 35 years. Deterministic as well as probabilistic sensitivities are examined.
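    The abstract describes propagating parameter uncertainty through a Weibull-based stress-rupture model by Monte Carlo simulation to obtain mean and confidence-bound reliability estimates. The sketch below illustrates that workflow with a simple Weibull stress-rupture form and invented parameter distributions; the functional form, the distributions, and every numeric value are assumptions, not the LLNL-derived model.

```python
import numpy as np

rng = np.random.default_rng(0)

def rupture_reliability(t, stress_ratio, beta, t_ref, rho):
    """Illustrative Weibull stress-rupture form: reliability after t hours at a
    given stress ratio (applied stress / reference strength)."""
    scale = t_ref * stress_ratio ** (-rho)      # life shrinks as the stress ratio grows
    return np.exp(-(t / scale) ** beta)

# Monte Carlo over uncertain model parameters (all distributions are placeholders).
n = 100_000
beta  = rng.lognormal(mean=np.log(1.2), sigma=0.1, size=n)    # Weibull shape
t_ref = rng.lognormal(mean=np.log(1.0e6), sigma=0.3, size=n)  # reference life, hours
rho   = rng.normal(loc=20.0, scale=2.0, size=n)               # stress-ratio exponent

R = rupture_reliability(t=87_600, stress_ratio=0.5, beta=beta, t_ref=t_ref, rho=rho)

print("point estimate (nominal parameters):",
      rupture_reliability(87_600, 0.5, 1.2, 1.0e6, 20.0))
print("mean reliability:", R.mean())
print("lower 5th percentile (toward a 90/95-style bound):", np.percentile(R, 5))
```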

  7. Comparative analysis of positive and negative attitudes toward statistics

    NASA Astrophysics Data System (ADS)

    Ghulami, Hassan Rahnaward; Ab Hamid, Mohd Rashid; Zakaria, Roslinazairimah

    2015-02-01

    Many statistics lecturers and statistics education researchers are interested in their students' attitudes toward statistics during the statistics course. A positive attitude toward statistics is vital because it encourages students to take an interest in the course and to master its core content. Students who have negative attitudes toward statistics tend to feel discouraged, especially in group assignments, are at risk of failure, are often highly emotional, and struggle to move forward. Therefore, this study investigates students' attitudes towards learning statistics. Six latent constructs were used to measure students' attitudes toward learning statistics: affect, cognitive competence, value, difficulty, interest, and effort. The questionnaire was adopted and adapted from the reliable and validated Survey of Attitudes towards Statistics (SATS) instrument. The study was conducted among undergraduate engineering students at Universiti Malaysia Pahang (UMP). The respondents were students taking the applied statistics course in different faculties. The analysis found the questionnaire to be acceptable, and the proposed relationships among the constructs were investigated. Students showed full effort to master the statistics course, found the course enjoyable, had confidence in their intellectual capacity, and held more positive than negative attitudes towards learning statistics. In conclusion, positive attitudes were mostly exhibited in the affect, cognitive competence, value, interest, and effort constructs, while negative attitudes were mostly exhibited in the difficulty construct.

  8. Reliability model generator

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
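    The aggregation idea described above, composing component-level reliability models according to an architecture description, can be sketched with a toy series/parallel evaluator; the component reliabilities and the nested description format below are hypothetical placeholders, not the patented generator's representation.

```python
def aggregate(arch, components):
    """Recursively aggregate component reliabilities from an architecture description.
    arch is either a component name or ("series"/"parallel", [sub-architectures])."""
    if isinstance(arch, str):
        return components[arch]
    kind, parts = arch
    rels = [aggregate(p, components) for p in parts]
    if kind == "series":                      # every part must work
        out = 1.0
        for r in rels:
            out *= r
        return out
    if kind == "parallel":                    # at least one part must work
        out = 1.0
        for r in rels:
            out *= (1.0 - r)
        return 1.0 - out
    raise ValueError(f"unknown structure: {kind}")

# Hypothetical low-level reliability models (mission reliabilities per component).
components = {"sensor": 0.99, "cpu_a": 0.995, "cpu_b": 0.995, "actuator": 0.98}
# Architecture: sensor and actuator in series with a redundant CPU pair.
system = ("series", ["sensor", ("parallel", ["cpu_a", "cpu_b"]), "actuator"])
print(f"system reliability = {aggregate(system, components):.6f}")
```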

  9. Reliability and validity of the Outcome Expectations for Exercise Scale-2.

    PubMed

    Resnick, Barbara

    2005-10-01

    Development of a reliable and valid measure of outcome expectations for exercise for older adults will help establish the relationship between outcome expectations and exercise and facilitate the development of interventions to increase physical activity in older adults. The purpose of this study was to test the reliability and validity of the Outcome Expectations for Exercise-2 Scale (OEE-2), a 13-item measure with two subscales: positive OEE (POEE) and negative OEE (NOEE). The OEE-2 scale was given to 161 residents in a continuing-care retirement community. There was some evidence of validity based on confirmatory factor analysis, Rasch-analysis INFIT and OUTFIT statistics, and convergent validity and test criterion relationships. There was some evidence for reliability of the OEE-2 based on alpha coefficients, person- and item-separation reliability indexes, and R(2) values. Based on analyses, suggested revisions are provided for future use of the OEE-2. Although ongoing reliability and validity testing are needed, the OEE-2 scale can be used to identify older adults with low outcome expectations for exercise, and interventions can then be implemented to strengthen these expectations and improve exercise behavior.

  10. Self-assessed performance improves statistical fusion of image labels

    PubMed Central

    Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

    2014-01-01

    . Statistical fusion resulted in statistically indistinguishable performance from self-assessed weighted voting. The authors developed a new theoretical basis for using self-assessed performance in the framework of statistical fusion and demonstrated that the combined sources of information (both statistical assessment and self-assessment) yielded statistically significant improvement over the methods considered separately. Conclusions: The authors present the first systematic characterization of self-assessed performance in manual labeling. The authors demonstrate that self-assessment and statistical fusion yield similar, but complementary, benefits for label fusion. Finally, the authors present a new theoretical basis for combining self-assessments with statistical label fusion. PMID:24593721

  11. WaferOptics® mass volume production and reliability

    NASA Astrophysics Data System (ADS)

    Wolterink, E.; Demeyer, K.

    2010-05-01

    The Anteryon WaferOptics® technology platform combines imaging optics designs, materials, and metrologies with wafer-level Semicon and MEMS production methods. WaferOptics® first required completely new system engineering. This system closes the loop between application requirement specifications, Anteryon product specifications, Monte Carlo analysis, process windows, process controls, and supply reject criteria. For the Anteryon product Integrated Lens Stack (ILS), new design rules, test methods, and control systems were assessed, implemented, validated, and released to customers for mass production. This includes novel reflowable materials, the mastering process, replication, bonding, dicing, assembly, metrology, reliability programs, and quality assurance systems. Many designed experiments were performed to assess correlations between optical performance parameters and machine settings of all process steps. Lens metrologies such as FFL, BFL, and MTF were adapted for wafer-level production, and wafer mapping was introduced for yield management. Test methods for screening and validating suitable optical materials were designed. Critical failure modes such as delamination and popcorning were assessed and modeled with FEM. Anteryon successfully managed to integrate the different technologies, scaling from single prototypes to high-yield mass volume production. These parallel efforts resulted in a steep yield increase from 30% to over 90% within an 8-month period.

  12. Why is it so difficult to determine the yield of indoor cannabis plantations? A case study from the Netherlands.

    PubMed

    Vanhove, Wouter; Maalsté, Nicole; Van Damme, Patrick

    2017-07-01

    Together, the Netherlands and Belgium are the largest indoor cannabis producing countries in Europe. In both countries, the legal prosecution procedure of convicted illicit cannabis growers usually includes recovery of the profits gained. However, it is not easy to make a reliable estimation of the latter profits, due to the wide range of factors that determine indoor cannabis yields and eventual selling prices. In the Netherlands, since 2005, a reference model is used that assumes a constant yield (g) per plant for a given indoor cannabis plant density. Later, in 2011, a new model was developed in Belgium for yield estimation of Belgian indoor cannabis plantations that assumes a constant yield per m² of growth surface, provided that a number of growth conditions are met. Indoor cannabis plantations in the Netherlands and Belgium share similar technical characteristics. As a result, for indoor cannabis plantations in both countries, both aforementioned yield estimation models should yield similar yield estimations. By means of a real-case study from the Netherlands, we show that the reliability of both models is hampered by a number of flaws and unmet preconditions. The Dutch model is based on a regression equation that makes use of ill-defined plant development stages, assumes a linear plant growth, does not discriminate between different plantation size categories and does not include other important yield determining factors (such as fertilization). The Belgian model addresses some of the latter shortcomings, but its applicability is constrained by a number of pre-conditions including plantation size between 50 and 1000 plants; cultivation in individual pots with peat soil; 600 W (electrical power) assimilation lamps; constant temperature between 20°C and 30°C; adequate fertilizer application and plants unaffected by pests and diseases. Judiciary in both the Netherlands and Belgium require robust indoor cannabis yield models for adequate legal prosecution of

  13. Applications of quantum entropy to statistics

    NASA Astrophysics Data System (ADS)

    Silver, R. N.; Martz, H. F.

    This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.

  14. Reliability of conditioned pain modulation: a systematic review

    PubMed Central

    Kennedy, Donna L.; Kemp, Harriet I.; Ridout, Deborah; Yarnitsky, David; Rice, Andrew S.C.

    2016-01-01

    Abstract A systematic literature review was undertaken to determine whether conditioned pain modulation (CPM) is reliable. Longitudinal, English-language observational studies of the repeatability of a CPM test paradigm in adult humans were included. Two independent reviewers assessed the risk of bias in 6 domains (study participation, study attrition, prognostic factor measurement, outcome measurement, confounding, and analysis) using the Quality in Prognosis Studies (QUIPS) critical assessment tool. Intraclass correlation coefficients (ICCs) less than 0.4 were considered poor; 0.4 to 0.59 fair; 0.6 to 0.75 good; and greater than 0.75 excellent. Ten studies were included in the final review. Meta-analysis was not appropriate because of differences between studies. The intersession reliability of the CPM effect was investigated in 8 studies and reported as good (ICC = 0.6-0.75) in 3 studies and excellent (ICC > 0.75) in subgroups in 2 of those 3. The assessment of risk of bias demonstrated that reporting is not comprehensive for the description of sample demographics, recruitment strategy, and study attrition. The absence of blinding, a lack of control for confounding factors, and a lack of standardisation in statistical analysis are common. Conditioned pain modulation is a reliable measure; however, the degree of reliability is heavily dependent on stimulation parameters and study methodology, and this warrants consideration by investigators. The validation of CPM as a robust prognostic factor in experimental and clinical pain studies may be facilitated by improvements in the reporting of CPM reliability studies. PMID:27559835

  15. 76 FR 12140 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-04

    ... provides useful insights on perceptions and opinions, but are not statistical surveys that yield quantitative results that can be generalized to the population of study. This feedback will provide insights... used for quantitative information collections that are designed to yield reliably actionable results...

  16. Toward an Economic Definition of Sustainable Yield for Coastal Aquifers

    NASA Astrophysics Data System (ADS)

    Jenson, J. W.; Habana, N. C.; Lander, M.

    2016-12-01

    The concept of aquifer sustainable yield has long been criticized, debated, and even disparaged among groundwater hydrologists, but policy-makers and professional water resource managers inevitably ask them for unequivocal answers to such questions as "What is the absolute maximum volume of water that could be sustainably withdrawn from this aquifer?" We submit that it is therefore incumbent upon hydrologists to develop and offer valid practical definitions of sustainable yield that can be usefully applied to given conditions and types of aquifers. In coastal aquifers, water quality—in terms of salinity—is affected by changes in the natural water budget and the volume rate of artificial extraction. In principle, one can identify a family of assay curves for a given aquifer, showing the specific relationships between the quantity and quality of the water extracted under given conditions of recharge. The concept of the assay curve, borrowed from the literature of natural-resource extraction economics, has to our knowledge not yet found its way into the literature of applied hydrology. The relationships between recharge, extraction, and water quality that define the assay curve can be determined empirically from sufficient observations of groundwater response to recharge and extraction and can be estimated from models that have been reliably history-matched ("calibrated") to such data. We thus propose a working definition of sustainable yield for coastal aquifers in terms of the capacity that ultimately could be achieved by an ideal production system, given what is known or can be assumed about the natural limiting conditions. Accordingly, we also offer an approach for defining an ideal production system for a given aquifer, and demonstrate how observational data and/or modeling results can be used to develop assay curves of quality vs. quantity extracted, which can serve as reliable predictive tools for engineers, managers, regulators, and policy

  17. Modeling Ka-band low elevation angle propagation statistics

    NASA Technical Reports Server (NTRS)

    Russell, Thomas A.; Weinfield, John; Pearson, Chris; Ippolito, Louis J.

    1995-01-01

    The statistical variability of the secondary atmospheric propagation effects on satellite communications cannot be ignored at frequencies of 20 GHz or higher, particularly if the propagation margin allocation is such that link availability falls below 99 percent. The secondary effects considered in this paper are gaseous absorption, cloud absorption, and tropospheric scintillation; rain attenuation is the primary effect. Techniques and example results are presented for estimation of the overall combined impact of the atmosphere on satellite communications reliability. Statistical methods are employed throughout and the most widely accepted models for the individual effects are used wherever possible. The degree of correlation between the effects is addressed and some bounds on the expected variability in the combined effects statistics are derived from the expected variability in correlation. Example estimates are presented of combined effects statistics in the Washington, D.C. area at 20 GHz and a 5 deg elevation angle. The statistics of water vapor are shown to be sufficient for estimation of the statistics of gaseous absorption at 20 GHz. A computer model based on monthly surface weather is described and tested. Significant improvement in prediction of absorption extremes is demonstrated with the use of path weather data instead of surface data.

  18. Fast Metabolite Identification in Nuclear Magnetic Resonance Metabolomic Studies: Statistical Peak Sorting and Peak Overlap Detection for More Reliable Database Queries.

    PubMed

    Hoijemberg, Pablo A; Pelczer, István

    2018-01-05

    A lot of time is spent by researchers in the identification of metabolites in NMR-based metabolomic studies. The usual metabolite identification starts employing public or commercial databases to match chemical shifts thought to belong to a given compound. Statistical total correlation spectroscopy (STOCSY), in use for more than a decade, speeds the process by finding statistical correlations among peaks, being able to create a better peak list as input for the database query. However, the (normally not automated) analysis becomes challenging due to the intrinsic issue of peak overlap, where correlations of more than one compound appear in the STOCSY trace. Here we present a fully automated methodology that analyzes all STOCSY traces at once (every peak is chosen as driver peak) and overcomes the peak overlap obstacle. Peak overlap detection by clustering analysis and sorting of traces (POD-CAST) first creates an overlap matrix from the STOCSY traces, then clusters the overlap traces based on their similarity and finally calculates a cumulative overlap index (COI) to account for both strong and intermediate correlations. This information is gathered in one plot to help the user identify the groups of peaks that would belong to a single molecule and perform a more reliable database query. The simultaneous examination of all traces reduces the time of analysis, compared to viewing STOCSY traces by pairs or small groups, and condenses the redundant information in the 2D STOCSY matrix into bands containing similar traces. The COI helps in the detection of overlapping peaks, which can be added to the peak list from another cross-correlated band. POD-CAST overcomes the generally overlooked and underestimated presence of overlapping peaks and it detects them to include them in the search of all compounds contributing to the peak overlap, enabling the user to accelerate the metabolite identification process with more successful database queries and searching all tentative
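    The workflow the abstract outlines, computing a STOCSY-like correlation trace for every driver peak, building an overlap matrix, and clustering similar traces, can be sketched generically. Everything below (the random spectra, thresholds, clustering settings, and the simplified overlap index) is an illustrative assumption, not the POD-CAST algorithm itself.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder data: n_samples x n_peaks matrix of NMR peak intensities.
rng = np.random.default_rng(1)
spectra = rng.random((50, 12))

# STOCSY traces: correlation of every driver peak with every other peak.
traces = np.corrcoef(spectra, rowvar=False)            # n_peaks x n_peaks

# Overlap matrix: flag peak pairs whose correlation exceeds a threshold.
strong, intermediate = 0.8, 0.4
overlap = (np.abs(traces) > intermediate).astype(float)

# Cluster traces by similarity so peaks driven by one compound group together.
dissim = 1.0 - np.abs(np.corrcoef(overlap))             # trace-to-trace dissimilarity
condensed = dissim[np.triu_indices_from(dissim, k=1)]   # condensed distance vector
labels = fcluster(linkage(condensed, method="average"), t=0.5, criterion="distance")

# A cumulative-overlap-style index weighting strong and intermediate correlations.
coi = (np.abs(traces) > strong).sum(axis=1) + 0.5 * (np.abs(traces) > intermediate).sum(axis=1)
print("trace clusters:", labels)
print("overlap index per peak:", coi.round(1))
```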

  19. SPATIO-TEMPORAL MODELING OF AGRICULTURAL YIELD DATA WITH AN APPLICATION TO PRICING CROP INSURANCE CONTRACTS

    PubMed Central

    Ozaki, Vitor A.; Ghosh, Sujit K.; Goodwin, Barry K.; Shirota, Ricardo

    2009-01-01

    This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Paraná (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited. PMID:19890450

  20. The shared and unique values of optical, fluorescence, thermal and microwave satellite data for estimating large-scale crop yields

    USDA-ARS?s Scientific Manuscript database

    Large-scale crop monitoring and yield estimation are important for both scientific research and practical applications. Satellite remote sensing provides an effective means for regional and global cropland monitoring, particularly in data-sparse regions that lack reliable ground observations and rep...

  1. The reliability and validity of the Caregiver Work Limitations Questionnaire.

    PubMed

    Lerner, Debra; Parsons, Susan K; Chang, Hong; Visco, Zachary L; Pawlecki, J Brent

    2015-01-01

    To test a new Caregiver Work Limitations Questionnaire (WLQ). On the basis of the original WLQ, this new survey instrument assesses the effect of caregiving for ill and/or disabled persons on the caregiver's work performance. A questionnaire was administered anonymously to employees of a large business services company. Scale reliability and validity were tested with psychometric methods. Of 4128 survey participants, 18.3% were current caregivers, 10.2% were past caregivers, and 71.5% were not caregivers. Current caregivers were limited in their ability to perform basic job tasks between a mean of 10.3% and 16.8% of the time. Confirmatory factor analysis yielded a scale structure similar to the WLQ's. Scale reliabilities (Cronbach's α) ranged from 0.91 to 0.95. The Caregiver WLQ is a new tool for understanding the workplace effect of caregiving.

  2. Measurement of fission yields and isomeric yield ratios at IGISOL

    NASA Astrophysics Data System (ADS)

    Pomp, Stephan; Mattera, Andrea; Rakopoulos, Vasileios; Al-Adili, Ali; Lantz, Mattias; Solders, Andreas; Jansson, Kaj; Prokofiev, Alexander V.; Eronen, Tommi; Gorelov, Dimitri; Jokinen, Ari; Kankainen, Anu; Moore, Iain D.; Penttilä, Heikki; Rinta-Antila, Sami

    2018-03-01

    Data on fission yields and isomeric yield ratios (IYR) are tools to study the fission process, in particular the generation of angular momentum. We use the IGISOL facility with the Penning trap JYFLTRAP in Jyväskylä, Finland, for such measurements on 232Th and natU targets. Previously published fission yield data from IGISOL concern the 232Th(p,f) and 238U(p,f) reactions at 25 and 50 MeV. Recently, a neutron source, using the Be(p,n) reaction, has been developed, installed and tested. We summarize the results for (p,f) focusing on the first measurement of IYR by direct ion counting. We also present first results for IYR and relative yields for Sn and Sb isotopes in the 128-133 mass range from natU(n,f) based on γ-spectrometry. We find a staggering behaviour in the cumulative yields for Sn and a shift in the independent fission yields for Sb as compared to current evaluations. Plans for the future experimental program on fission yields and IYR measurements are discussed.

  3. Global evidence of positive impacts of freshwater biodiversity on fishery yields.

    PubMed

    Brooks, Emma Grace Elizabeth; Holland, Robert Alan; Darwall, William Robert Thomas; Eigenbrod, Felix; Tittensor, Derek

    2016-05-01

    An often-invoked benefit of high biodiversity is the provision of ecosystem services. However, evidence for this is largely based on data from small-scale experimental studies of relationships between biodiversity and ecosystem function that may have little relevance to real-world systems. Here, large-scale biodiversity datasets are used to test the relationship between the yield of inland capture fisheries and species richness from 100 countries. Inland waters of Africa, Europe and parts of Asia. A multimodel inference approach was used to assess inland fishery yields at the country level against species richness, waterside human population, area, elevation and various climatic variables, to determine the relative importance of species richness to fisheries yields compared with other major large-scale drivers. Secondly, the mean decadal variation in fishery yields at the country level for 1981-2010 was regressed against species richness to assess if greater diversity reduces the variability in yields over time. Despite a widespread reliance on targeting just a few species of fish, freshwater fish species richness is highly correlated with yield (R² = 0.55) and remains an important and statistically significant predictor of yield once other macroecological drivers are controlled for. Freshwater richness also has a significant negative relationship with variability of yield over time in Africa (R² = 0.16) but no effect in Europe. The management of inland waters should incorporate the protection of freshwater biodiversity, particularly in countries with the highest-yielding inland fisheries as these also tend to have high freshwater biodiversity. As these results suggest a link between biodiversity and stable, high-yielding fisheries, an important win-win outcome may be possible for food security and conservation of freshwater ecosystems. However, findings also highlight the urgent need for more data to fully understand and monitor the contribution of

  4. Factors Affecting Firm Yield and the Estimation of Firm Yield for Selected Streamflow-Dominated Drinking-Water-Supply Reservoirs in Massachusetts

    USGS Publications Warehouse

    Waldron, Marcus C.; Archfield, Stacey A.

    2006-01-01

    Factors affecting reservoir firm yield, as determined by application of the Massachusetts Department of Environmental Protection's Firm Yield Estimator (FYE) model, were evaluated, modified, and tested on 46 streamflow-dominated reservoirs representing 15 Massachusetts drinking-water supplies. The model uses a mass-balance approach to determine the maximum average daily withdrawal rate that can be sustained during a period of record that includes the 1960s drought-of-record. The FYE methodology to estimate streamflow to the reservoir at an ungaged site was tested by simulating streamflow at two streamflow-gaging stations in Massachusetts and comparing the simulated streamflow to the observed streamflow. In general, the FYE-simulated flows agreed well with observed flows. There were substantial deviations from the measured values for extreme high and low flows. A sensitivity analysis determined that the model's streamflow estimates are most sensitive to input values for average annual precipitation, reservoir drainage area, and the soil-retention number, a term that describes the amount of precipitation retained by the soil in the basin. The FYE model currently provides the option of using a 1,000-year synthetic record constructed by randomly sampling 2-year blocks of concurrent streamflow and precipitation records 500 times; however, the synthetic record has the potential to generate records of precipitation and streamflow that do not reflect the worst historical drought in Massachusetts. For reservoirs that do not have periods of drawdown greater than 2 years, the bootstrap offers no additional information about the firm yield of a reservoir beyond the historical record. For some reservoirs, the use of a synthetic record to determine firm yield resulted in as much as a 30-percent difference between firm-yield values from one simulation to the next. Furthermore, the assumption that the synthetic traces of streamflow are statistically equivalent to the
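    The mass-balance approach described above amounts to searching for the largest constant withdrawal rate that keeps reservoir storage from emptying over the drought-of-record. The sketch below illustrates that idea only; the synthetic inflow record, the omission of evaporation and other loss terms, the storage capacity, and the bisection search are all assumptions, not the actual FYE model.

```python
import numpy as np

def survives(withdrawal, inflows, capacity, start_storage=None):
    """Simulate storage under a constant withdrawal; fail if the reservoir empties."""
    storage = capacity if start_storage is None else start_storage
    for q_in in inflows:
        storage = min(capacity, storage + q_in - withdrawal)
        if storage < 0.0:
            return False
    return True

def firm_yield(inflows, capacity, tol=1e-3):
    """Largest sustainable constant withdrawal, found by bisection."""
    lo, hi = 0.0, float(np.mean(inflows))      # cannot exceed mean inflow long-term
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if survives(mid, inflows, capacity) else (lo, mid)
    return lo

# Synthetic daily inflow record (million gallons per day) with an embedded drought.
rng = np.random.default_rng(42)
inflows = rng.gamma(shape=2.0, scale=3.0, size=3650)
inflows[1200:1500] *= 0.2                      # dry spell dominating the firm yield
print(f"firm yield ~ {firm_yield(inflows, capacity=500.0):.2f} MGal/d")
```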

  5. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrodes and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of MLCC reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed in this paper. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.

  6. Reviewing Reliability and Validity of Information for University Educational Evaluation

    NASA Astrophysics Data System (ADS)

    Otsuka, Yusaku

    To better utilize evaluations in higher education, it is necessary to share the methods of reviewing reliability and validity of examination scores and grades, and to accumulate and share data for confirming results. Before the GPA system is first introduced into a university or college, the reliability of examination scores and grades, especially for essay examinations, must be assured. Validity is a complicated concept, so should be assured in various ways, including using professional audits, theoretical models, and statistical data analysis. Because individual students and teachers are continually improving, using evaluations to appraise their progress is not always compatible with using evaluations in appraising the implementation of accountability in various departments or the university overall. To better utilize evaluations and improve higher education, evaluations should be integrated into the current system by sharing the vision of an academic learning community and promoting interaction between students and teachers based on sufficiently reliable and validated evaluation tools.

  7. Reliability and responsiveness of measures of pain in people with osteoarthritis of the knee: a psychometric evaluation

    PubMed Central

    Turner, Katie V.; Moreton, Bryan M.; Walsh, David A.; Lincoln, Nadina B.

    2017-01-01

    Abstract Purpose: To examine the fit between data from the Short Form McGill Pain Questionnaire (SF-MPQ-2) and the Rasch model, and to explore the reliability and internal responsiveness of measures of pain in people with knee osteoarthritis. Methods: Participants with knee osteoarthritis completed the SF-MPQ-2, the Intermittent and Constant Osteoarthritis Pain questionnaire (ICOAP), and painDETECT. Participants were sent the same questionnaires 3 and 6 months later. Results: Fit to the Rasch model was not achieved for the SF-MPQ-2 Total scale. The Continuous subscale yielded adequate fit statistics after splitting item 10 on uniform DIF for gender and removing item 9. The Intermittent subscale fit the Rasch model after rescoring items. The Neuropathic subscale had relatively good fit to the model. Test-retest reliability was satisfactory for most scales using both original and Rasch scoring, ranging from fair to substantial. Effect sizes ranged from 0.13 to 1.79, indicating good internal responsiveness for most scales. Conclusions: These findings support the use of the ICOAP subscales as reliable and responsive measures of pain in people with knee osteoarthritis. The SF-MPQ-2 subscales were found to be acceptable alternatives. Implications for Rehabilitation: The McGill Pain Questionnaire short version 2 is not a unidimensional scale in people with knee osteoarthritis, whereas three of its subscales are unidimensional. The McGill Pain Questionnaire short version 2 Affective subscale does not have good measurement properties for people with knee osteoarthritis. The McGill Pain Questionnaire short version 2 and the Intermittent and Constant Osteoarthritis Pain scales can be used to assess change over time. The painDETECT performs better as a screening measure than as an outcome measure. PMID:27027698

  8. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    PubMed Central

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-01

    Sensor data fusion plays an important role in fault diagnosis. Dempster-Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, when the evidence is highly conflicting, it may produce a counterintuitive result. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability are taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods. PMID:26797611
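    The fusion step described above, discounting each sensor's basic probability assignment by a reliability weight and then combining, can be sketched as below. The fault hypotheses, mass values, reliability weights, and the choice to combine the weighted-average evidence with Dempster's rule n-1 times follow a common Murphy-style scheme and are assumptions for illustration, not the paper's exact algorithm.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

def weighted_average(masses, weights):
    """Average the sensors' mass functions using normalized reliability weights."""
    total = sum(weights)
    avg = {}
    for m, w in zip(masses, weights):
        for focal, value in m.items():
            avg[focal] = avg.get(focal, 0.0) + (w / total) * value
    return avg

# Three sensor reports over fault hypotheses {F1, F2, F3}; sensor 2 conflicts.
F1, F2, F3 = frozenset({"F1"}), frozenset({"F2"}), frozenset({"F3"})
reports = [
    {F1: 0.6, F2: 0.3, F3: 0.1},
    {F2: 0.8, F1: 0.1, F3: 0.1},   # conflicting, low-reliability report
    {F1: 0.7, F2: 0.2, F3: 0.1},
]
reliability = [0.9, 0.3, 0.8]      # illustrative static x dynamic reliabilities

avg = weighted_average(reports, reliability)
fused = avg
for _ in range(len(reports) - 1):  # combine the averaged evidence n-1 times
    fused = dempster_combine(fused, avg)
print(max(fused, key=fused.get), {k: round(v, 3) for k, v in fused.items()})
```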

  9. Slope Controls Grain Yield and Climatic Yield in Mountainous Yunnan province, China

    NASA Astrophysics Data System (ADS)

    Duan, X.; Rong, L.; Gu, Z.; Feng, D.

    2017-12-01

    Mountainous regions are increasingly vulnerable to food insecurity because of limited arable land, growing population pressure, and climate change. Development of sustainable mountain agriculture will require an increased understanding of the effects of environmental factors on grain and climatic yields. The objective of this study was to explore the relationships between actual grain yield, climatic yield, and environmental factors in a mountainous region in China. We collected data on the average grain yield per unit area in 119 counties in Yunnan province from 1985 to 2012, and chose 17 environmental factors for the same period. Our results showed that actual grain yield ranged from 1.43 to 6.92 t·ha-1, and the climatic yield ranged from -0.15 to -0.01 t·ha-1. Lower climatic yield but higher grain yield was generally found in central areas and at lower slopes and elevations in the western and southwestern counties of Yunnan province. Higher climatic yield but lower grain yield was found in northwestern parts of Yunnan province on steep slopes. Annual precipitation and temperature had a weak influence on the climatic yield. Slope explained 44.62% and 26.29% of the variation in grain yield and climatic yield, respectively. The effects of topography on grain and climatic yields were greater than those of climatic factors. Slope was the most important environmental variable for the variability in climatic and grain yields in the mountainous Yunnan province due to the highly heterogeneous topographic conditions. Conversion of slopes to terraces in areas with higher climatic yields is an effective way to maintain grain production in response to climate variability. Additionally, soil amendments and soil and water conservation measures should be considered to maintain soil fertility and aid in sustainable development in central areas, and in counties at lower slopes and elevations in western and southwestern Yunnan province.

  10. Sex differences in discriminative power of volleyball game-related statistics.

    PubMed

    João, Paulo Vicente; Leite, Nuno; Mesquita, Isabel; Sampaio, Jaime

    2010-12-01

    To identify sex differences in volleyball game-related statistics, the game-related statistics of several World Championships in 2007 (N=132) were analyzed using the VIS software from the International Volleyball Federation. Discriminant analysis was used to identify the game-related statistics that better discriminated performances by sex. The analysis emphasized fault serves (SC = -.40), shot spikes (SC = .40), and reception digs (SC = .31). Considerable variability was evident in the game-related statistics profiles: men's volleyball games were more associated with terminal actions (errors of service), whereas women's volleyball games were characterized by continuous actions (in defense and attack). These differences may be related to the anthropometric and physiological differences between women and men and their influence on performance profiles.
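    A discriminant analysis like the one summarized above can be reproduced in outline with scikit-learn. The feature names and the random data below are placeholders, and computing structure coefficients (SC) as correlations between each variable and the discriminant scores is an assumption about the method rather than a detail taken from this record.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
features = ["fault_serves", "shot_spikes", "reception_digs", "blocks"]

# Placeholder match-level statistics for men's (0) and women's (1) games.
X = rng.normal(size=(132, len(features)))
y = rng.integers(0, 2, size=132)

lda = LinearDiscriminantAnalysis().fit(X, y)
scores = lda.transform(X).ravel()              # one discriminant function for 2 groups

# Structure coefficients: correlation of each variable with the discriminant scores.
for j, name in enumerate(features):
    sc = np.corrcoef(X[:, j], scores)[0, 1]
    print(f"{name}: SC = {sc:+.2f}")
```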

  11. Reliability of a rating procedure to monitor industry self-regulation codes governing alcohol advertising content.

    PubMed

    Babor, Thomas F; Xuan, Ziming; Proctor, Dwayne

    2008-03-01

    The purposes of this study were to develop reliable procedures to monitor the content of alcohol advertisements broadcast on television and in other media, and to detect violations of the content guidelines of the alcohol industry's self-regulation codes. A set of rating-scale items was developed to measure the content guidelines of the 1997 version of the U.S. Beer Institute Code. Six focus groups were conducted with 60 college students to evaluate the face validity of the items and the feasibility of the procedure. A test-retest reliability study was then conducted with 74 participants, who rated five alcohol advertisements on two occasions separated by 1 week. Average correlations across all advertisements using three reliability statistics (r, rho, and kappa) were almost all statistically significant and the kappas were good for most items, which indicated high test-retest agreement. We also found high interrater reliabilities (intraclass correlations) among raters for item-level and guideline-level violations, indicating that regardless of the specific item, raters were consistent in their general evaluations of the advertisements. Naïve (untrained) raters can provide consistent (reliable) ratings of the main content guidelines proposed in the U.S. Beer Institute Code. The rating procedure may have future applications for monitoring compliance with industry self-regulation codes and for conducting research on the ways in which alcohol advertisements are perceived by young adults and other vulnerable populations.

  12. Apparent Yield Strength of Hot-Pressed SiCs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daloz, William L; Wereszczak, Andrew A; Jadaan, Osama M.

    2008-01-01

    Apparent yield strengths (YApp) of four hot-pressed silicon carbides (SiC-B, SiC-N, SiC-HPN, and SiC-SC-1RN) were estimated using diamond spherical or Hertzian indentation. The von Mises and Tresca criteria were considered. The developed test method was robust, simple, and quick to execute, and thus enabled the acquisition of confident sampling statistics. The choice of indenter size, test method, and method of analysis are described. The compressive force necessary to initiate apparent yielding was identified postmortem using differential interference contrast (or Nomarski) imaging with an optical microscope. It was found that the YApp of SiC-HPN (14.0 GPa) was approximately 10% higher than the equivalently valued YApp of SiC-B, SiC-N, and SiC-SC-1RN. This discrimination in YApp shows that the use of this test method could be insightful because there were no differences among the average Knoop hardnesses of the four SiC grades.

  13. Pitfalls and important issues in testing reliability using intraclass correlation coefficients in orthopaedic research.

    PubMed

    Lee, Kyoung Min; Lee, Jaebong; Chung, Chin Youb; Ahn, Soyeon; Sung, Ki Hyuk; Kim, Tae Won; Lee, Hui Jong; Park, Moon Seok

    2012-06-01

    Intra-class correlation coefficients (ICCs) provide a statistical means of testing the reliability. However, their interpretation is not well documented in the orthopedic field. The purpose of this study was to investigate the use of ICCs in the orthopedic literature and to demonstrate pitfalls regarding their use. First, orthopedic articles that used ICCs were retrieved from the Pubmed database, and journal demography, ICC models and concurrent statistics used were evaluated. Second, reliability test was performed on three common physical examinations in cerebral palsy, namely, the Thomas test, the Staheli test, and popliteal angle measurement. Thirty patients were assessed by three orthopedic surgeons to explore the statistical methods testing reliability. Third, the factors affecting the ICC values were examined by simulating the data sets based on the physical examination data where the ranges, slopes, and interobserver variability were modified. Of the 92 orthopedic articles identified, 58 articles (63%) did not clarify the ICC model used, and only 5 articles (5%) described all models, types, and measures. In reliability testing, although the popliteal angle showed a larger mean absolute difference than the Thomas test and the Staheli test, the ICC of popliteal angle was higher, which was believed to be contrary to the context of measurement. In addition, the ICC values were affected by the model, type, and measures used. In simulated data sets, the ICC showed higher values when the range of data sets were larger, the slopes of the data sets were parallel, and the interobserver variability was smaller. Care should be taken when interpreting the absolute ICC values, i.e., a higher ICC does not necessarily mean less variability because the ICC values can also be affected by various factors. The authors recommend that researchers clarify ICC models used and ICC values are interpreted in the context of measurement.
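    As a concrete illustration of how ICC values depend on the chosen model and on the spread of the data, the sketch below computes the two-way random-effects, single-measure ICC(2,1) from a subjects-by-raters matrix using the Shrout and Fleiss mean-square formulation; the three-rater, thirty-subject data are simulated placeholders, not the study's physical-examination measurements.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure."""
    n, k = ratings.shape                          # subjects x raters
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)      # between subjects
    ms_cols = n * ((col_means - grand) ** 2).sum() / (k - 1)      # between raters
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Simulated angle-style measurements: 30 subjects, 3 raters, 5-degree rater error.
rng = np.random.default_rng(3)
true_angle = rng.normal(45.0, 15.0, size=(30, 1))      # wide between-subject range
ratings = true_angle + rng.normal(0.0, 5.0, size=(30, 3))
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")

# Restricting the subject range lowers the ICC even though rater error is unchanged,
# one of the pitfalls the paper highlights.
narrow = ratings[np.abs(true_angle.ravel() - 45.0) < 10.0]
print(f"ICC(2,1), restricted range = {icc_2_1(narrow):.2f}")
```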

  14. Fatigue criterion to system design, life and reliability

    NASA Technical Reports Server (NTRS)

    Zaretsky, E. V.

    1985-01-01

    A generalized methodology for structural life prediction, design, and reliability based upon a fatigue criterion is advanced. The life prediction methodology is based in part on the work of W. Weibull and of G. Lundberg and A. Palmgren. The approach incorporates the computed life of elemental stress volumes of a complex machine element to predict system life. The results of coupon fatigue testing can be incorporated into the analysis, allowing for life prediction and component or structural renewal rates with reasonable statistical certainty.

  15. Fracture mechanics concepts in reliability analysis of monolithic ceramics

    NASA Technical Reports Server (NTRS)

    Manderscheid, Jane M.; Gyekenyesi, John P.

    1987-01-01

    Basic design concepts for high-performance, monolithic ceramic structural components are addressed. The design of brittle ceramics differs from that of ductile metals because of the inability of ceramic materials to redistribute high local stresses caused by inherent flaws. Random flaw size and orientation requires that a probabilistic analysis be performed in order to determine component reliability. The current trend in probabilistic analysis is to combine linear elastic fracture mechanics concepts with the two parameter Weibull distribution function to predict component reliability under multiaxial stress states. Nondestructive evaluation supports this analytical effort by supplying data during verification testing. It can also help to determine statistical parameters which describe the material strength variation, in particular the material threshold strength (the third Weibull parameter), which in the past was often taken as zero for simplicity.
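    The Weibull treatment mentioned above, two-parameter in the usual case and three-parameter when a threshold strength is retained, can be written down directly. The sketch below evaluates a weakest-link survival probability for a uniformly stressed volume; the modulus, characteristic strength, threshold, and stress values are illustrative assumptions only.

```python
import math

def weibull_reliability(sigma, volume, m, sigma_0, sigma_u=0.0, v_ref=1.0):
    """Weakest-link survival probability for a uniformly stressed volume.
    sigma_u is the threshold strength (third Weibull parameter), often taken as 0."""
    if sigma <= sigma_u:
        return 1.0
    risk = (volume / v_ref) * ((sigma - sigma_u) / sigma_0) ** m
    return math.exp(-risk)

# Illustrative ceramic component: Weibull modulus 10, characteristic strength 400 MPa.
for sigma in (200.0, 300.0, 350.0):
    r0 = weibull_reliability(sigma, volume=2.0, m=10, sigma_0=400.0)              # sigma_u = 0
    r_u = weibull_reliability(sigma, volume=2.0, m=10, sigma_0=400.0, sigma_u=100.0)
    print(f"{sigma:5.0f} MPa: R(threshold=0) = {r0:.4f},  R(threshold=100) = {r_u:.4f}")
```

    Comparing the two columns shows why taking the threshold strength as zero is conservative: it lowers the predicted reliability at every stress level.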

  16. Statistical methods for quantitative mass spectrometry proteomic experiments with labeling.

    PubMed

    Oberg, Ann L; Mahoney, Douglas W

    2012-01-01

    Mass Spectrometry utilizing labeling allows multiple specimens to be subjected to mass spectrometry simultaneously. As a result, between-experiment variability is reduced. Here we describe use of fundamental concepts of statistical experimental design in the labeling framework in order to minimize variability and avoid biases. We demonstrate how to export data in the format that is most efficient for statistical analysis. We demonstrate how to assess the need for normalization, perform normalization, and check whether it worked. We describe how to build a model explaining the observed values and test for differential protein abundance along with descriptive statistics and measures of reliability of the findings. Concepts are illustrated through the use of three case studies utilizing the iTRAQ 4-plex labeling protocol.

  17. Development of a nanosatellite de-orbiting system by reliability based design optimization

    NASA Astrophysics Data System (ADS)

    Nikbay, Melike; Acar, Pınar; Aslan, Alim Rüstem

    2015-12-01

    This paper presents design approaches to develop a reliable and efficient de-orbiting system for the 3USAT nanosatellite to provide a beneficial orbital decay process at the end of a mission. A de-orbiting system is initially designed by employing the aerodynamic drag augmentation principle where the structural constraints of the overall satellite system and the aerodynamic forces are taken into account. Next, an alternative de-orbiting system is designed with new considerations and further optimized using deterministic and reliability based design techniques. For the multi-objective design, the objectives are chosen to maximize the aerodynamic drag force through the maximization of the Kapton surface area while minimizing the de-orbiting system mass. The constraints are related in a deterministic manner to the required deployment force, the height of the solar panel hole and the deployment angle. The length and the number of layers of the deployable Kapton structure are used as optimization variables. In the second stage of this study, uncertainties related to both manufacturing and operating conditions of the deployable structure in space environment are considered. These uncertainties are then incorporated into the design process by using different probabilistic approaches such as Monte Carlo Simulation, the First-Order Reliability Method and the Second-Order Reliability Method. The reliability based design optimization seeks optimal solutions using the former design objectives and constraints with the inclusion of a reliability index. Finally, the de-orbiting system design alternatives generated by different approaches are investigated and the reliability based optimum design is found to yield the best solution since it significantly improves both system reliability and performance requirements.
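    The probabilistic step the abstract refers to, turning uncertainty in manufacturing and operating conditions into a failure probability and a reliability index, can be illustrated with plain Monte Carlo. The limit-state function (available deployment force versus required force) and all distributions below are invented placeholders, and the FORM/SORM approximations are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 1_000_000

# Uncertain quantities (illustrative distributions, not the 3USAT design values).
available_force = rng.normal(loc=2.0, scale=0.25, size=n)   # N, from the stowed layers
required_force  = rng.normal(loc=1.2, scale=0.20, size=n)   # N, to deploy against friction

# Limit state g > 0 means the deployable structure opens successfully.
g = available_force - required_force
p_fail = np.mean(g <= 0.0)
beta = -norm.ppf(p_fail)            # reliability index implied by the failure probability

print(f"P(failure) ~ {p_fail:.2e},  reliability index beta ~ {beta:.2f}")
```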

  18. The effect of general anesthesia versus intravenous sedation on diagnostic yield and success in electromagnetic navigation bronchoscopy.

    PubMed

    Bowling, Mark R; Kohan, Matthew W; Walker, Paul; Efird, Jimmy; Ben Or, Sharon

    2015-01-01

    Navigational bronchoscopy is utilized to guide biopsies of peripheral lung nodules and to place fiducial markers for treatment of limited-stage lung cancer with stereotactic body radiotherapy. The type of sedation used for this procedure remains controversial. We performed a retrospective chart review to evaluate differences in diagnostic yield and overall success of the procedure based on anesthesia type. Electromagnetic navigational bronchoscopy was performed using the superDimension software system. Once the targeted lesion was within reach, multiple tissue samples were obtained. Statistical analysis was used to correlate the yield with the type of sedation among other factors. A procedure was defined as successful if a diagnosis was made or a fiducial marker was adequately placed. Navigational bronchoscopy was performed on a total of 120 targeted lesions. The overall complication rate of the procedure was 4.1%. The diagnostic yield and success rate of the procedure were 74% and 87%, respectively. Duration of the procedure was the only significant difference between the general anesthesia and IV sedation groups (mean, 58 vs. 43 min, P=0.0005). A larger tumor size was associated with a higher diagnostic yield (P=0.032). No other variables had a statistically significant effect on diagnostic yield or on procedural failure. Navigational bronchoscopy is a safe and effective pulmonary diagnostic tool with a relatively low complication rate. The diagnostic yield and overall success of the procedure do not seem to be affected by the type of sedation used.

  19. Effects of starter nitrogen fertilizer on soybean root activity, leaf photosynthesis and grain yield

    PubMed Central

    Gai, Zhijia; Zhang, Jingtao; Li, Caifeng

    2017-01-01

    The objective of this study was to examine the impact of starter nitrogen fertilizer on soybean root activity, leaf photosynthesis, grain yield and their relationships. To achieve this objective, field experiments were conducted in 2013 and 2014 using a randomized complete block design with three replications. Nitrogen was applied at planting at rates of 0, 25, 50, and 75 kg N ha⁻¹. In both years, starter nitrogen fertilizer benefited root activity, leaf photosynthesis, and consequently grain yield. Statistically significant correlation was found among root activity, leaf photosynthetic rate, and grain yield at the developmental stage. The application of N25, N50, and N75 increased grain yield by 1.28%, 2.47%, and 1.58% in 2013 and by 0.62%, 2.77%, and 2.06% in 2014 compared to the N0 treatment. Maximum grain yields of 3238.91 kg ha⁻¹ in 2013 and 3086.87 kg ha⁻¹ in 2014 were recorded for the N50 treatment. Grain yield was greater in 2013 than in 2014, possibly due to more favorable environmental conditions. This research indicates that applying starter nitrogen is necessary to increase soybean yield in the Sangjiang River Plain in China. PMID:28388620

  20. The applications of statistical quantification techniques in nanomechanics and nanoelectronics.

    PubMed

    Mai, Wenjie; Deng, Xinwei

    2010-10-08

    Although nanoscience and nanotechnology have been developing for approximately two decades and have achieved numerous breakthroughs, experimental results from nanomaterials still show higher noise levels and poorer repeatability than those from bulk materials, which remains a practical issue and challenges many techniques for the quantification of nanomaterials. This work proposes a physical-statistical modeling approach and a global-fitting statistical method that use all the available discrete data or quasi-continuous curves to quantify a few targeted physical parameters, providing more accurate, efficient and reliable parameter estimates together with reasonable physical explanations. In the resonance method for measuring the elastic modulus of ZnO nanowires (Zhou et al 2006 Solid State Commun. 139 222-6), our statistical technique gives E = 128.33 GPa instead of the original E = 108 GPa and unveils a negative bias adjustment f(0); the likely cause is a systematic bias in measuring the length of the nanowires. In the electronic measurement of the resistivity of a Mo nanowire (Zach et al 2000 Science 290 2120-3), the proposed method automatically identified the importance of accounting for the Ohmic contact resistance in the model of Ohmic behavior in nanoelectronics experiments. The 95% confidence interval for the resistivity from the proposed one-step procedure is 3.57 ± 0.0274 × 10⁻⁵ ohm cm, which should be a more reliable and precise estimate. The statistical quantification technique should find wide application in obtaining better estimates in the presence of the various systematic errors and bias effects that become more significant at the nanoscale.
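
    As a rough, assumption-laden illustration of the global-fitting idea described here, the sketch below jointly fits an elastic modulus and a systematic length-measurement bias to simulated nanowire resonance data using the textbook cantilever relation f = (1.875²/2π)(d/L²)√(E/16ρ) for a circular cross section; the beam model, density, noise level and bias term are stand-ins, not the exact physical-statistical model of the cited paper.

      # Sketch: global least-squares fit of E plus a length bias to simulated data.
      # The cantilever relation and all numbers are illustrative assumptions.
      import numpy as np
      from scipy.optimize import curve_fit

      RHO = 5606.0  # assumed ZnO mass density, kg/m^3

      def resonance_freq(geometry, e_gpa, bias_um):
          """Fundamental cantilever resonance frequency (Hz) for a circular nanowire."""
          d, length = geometry                     # diameter and measured length, m
          e = e_gpa * 1e9                          # elastic modulus, Pa
          true_length = length + bias_um * 1e-6    # allow a systematic length bias
          return (1.875**2 / (2 * np.pi)) * (d / true_length**2) * np.sqrt(e / (16 * RHO))

      rng = np.random.default_rng(1)
      d = rng.uniform(50e-9, 200e-9, 30)           # simulated diameters, m
      length = rng.uniform(5e-6, 15e-6, 30)        # simulated measured lengths, m
      f_obs = resonance_freq((d, length), 128.0, 0.3)       # "true" E = 128 GPa, bias 0.3 um
      f_obs = f_obs * (1 + 0.02 * rng.standard_normal(30))  # 2% measurement noise

      (e_hat, bias_hat), _ = curve_fit(resonance_freq, (d, length), f_obs, p0=[100.0, 0.0])
      print(f"fitted E = {e_hat:.1f} GPa, fitted length bias = {bias_hat:.2f} um")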

  1. Statistically Controlling for Confounding Constructs Is Harder than You Think

    PubMed Central

    Westfall, Jacob; Yarkoni, Tal

    2016-01-01

    Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity. PMID:27031707
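
    The mechanism the authors describe, in which an unreliably measured covariate lets a focal predictor with no true incremental validity reach significance, can be reproduced with a short simulation. The sketch below is a minimal version under assumed sample size, effect size and reliability values; it is not the authors' simulation code.

      # Sketch: Type I error rate when "controlling for" an unreliably measured construct.
      # Sample size, effect size and reliability are arbitrary illustrative choices.
      import numpy as np
      from scipy import stats

      def type1_rate(n=500, reliability=0.7, n_sims=2000, seed=0):
          rng = np.random.default_rng(seed)
          rejections = 0
          for _ in range(n_sims):
              construct = rng.standard_normal(n)
              # Outcome depends ONLY on the latent construct, so the focal predictor
              # has no incremental validity and any rejection is a Type I error.
              y = 0.5 * construct + rng.standard_normal(n)
              x = construct + rng.standard_normal(n)      # focal predictor (another indicator)
              err_sd = np.sqrt(1.0 / reliability - 1.0)   # measurement error in the covariate
              w = construct + err_sd * rng.standard_normal(n)
              # OLS of y on [1, w, x]; test the coefficient on x.
              X = np.column_stack([np.ones(n), w, x])
              beta, *_ = np.linalg.lstsq(X, y, rcond=None)
              resid = y - X @ beta
              dof = n - X.shape[1]
              sigma2 = resid @ resid / dof
              cov = sigma2 * np.linalg.inv(X.T @ X)
              t_x = beta[2] / np.sqrt(cov[2, 2])
              p = 2 * stats.t.sf(abs(t_x), dof)
              rejections += p < 0.05
          return rejections / n_sims

      print("Type I error rate with reliability 0.7:", type1_rate())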

  2. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    This paper provides the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original, and by now too simplistic, Dole equation is obtained by replacing a product of ten positive numbers with a product of ten positive random variables. This is denoted the SEH, an acronym standing for "Statistical Equation for Habitables". The proof in this paper is based on the Central Limit Theorem (CLT) of statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (which do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy is inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in
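
    The log-normal claim in point 1 follows from applying the CLT to the sum of the logarithms of the factors. A quick numerical check of that argument is sketched below; the ten uniformly distributed factors are arbitrary placeholders, not Dole's astrobiological factors.

      # Sketch: a product of many independent positive factors is approximately log-normal.
      # The factor distributions here are arbitrary placeholders.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      n_samples, n_factors = 100_000, 10
      factors = rng.uniform(0.1, 10.0, size=(n_samples, n_factors))
      n_hab = factors.prod(axis=1)            # analogue of the SEH product

      # Fit a log-normal from the moments of log(n_hab) and compare a few quantiles
      logs = np.log(n_hab)
      mu, sigma = logs.mean(), logs.std()
      for q in (0.05, 0.50, 0.95):
          empirical = np.quantile(n_hab, q)
          fitted = np.exp(mu + sigma * stats.norm.ppf(q))
          print(f"q = {q:.2f}: empirical {empirical:12.1f}   log-normal fit {fitted:12.1f}")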

  3. Benchmarks and Reliable DFT Results for Spin Gaps of Small Ligand Fe(II) Complexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Suhwan; Kim, Min-Cheol; Sim, Eunji

    2017-05-01

    All-electron fixed-node diffusion Monte Carlo provides benchmark spin gaps for four Fe(II) octahedral complexes. Standard quantum chemical methods (semilocal DFT and CCSD(T)) fail badly for the energy difference between their high- and low-spin states. Density-corrected DFT is both significantly more accurate and reliable and yields a consistent prediction for the Fe-Porphyrin complex.

  4. The reliability-quality relationship for quality systems and quality risk management.

    PubMed

    Claycamp, H Gregg; Rahaman, Faiad; Urban, Jason M

    2012-01-01

    Engineering reliability typically refers to the probability that a system, or any of its components, will perform a required function for a stated period of time and under specified operating conditions. As such, reliability is inextricably linked with time-dependent quality concepts, such as maintaining a state of control and predicting the chances of losses from failures for quality risk management. Two popular current good manufacturing practice (cGMP) and quality risk management tools, failure mode and effects analysis (FMEA) and root cause analysis (RCA) are examples of engineering reliability evaluations that link reliability with quality and risk. Current concepts in pharmaceutical quality and quality management systems call for more predictive systems for maintaining quality; yet, the current pharmaceutical manufacturing literature and guidelines are curiously silent on engineering quality. This commentary discusses the meaning of engineering reliability while linking the concept to quality systems and quality risk management. The essay also discusses the difference between engineering reliability and statistical (assay) reliability. The assurance of quality in a pharmaceutical product is no longer measured only "after the fact" of manufacturing. Rather, concepts of quality systems and quality risk management call for designing quality assurance into all stages of the pharmaceutical product life cycle. Interestingly, most assays for quality are essentially static and inform product quality over the life cycle only by being repeated over time. Engineering process reliability is the fundamental concept that is meant to anticipate quality failures over the life cycle of the product. Reliability is a well-developed theory and practice for other types of manufactured products and manufacturing processes. Thus, it is well known to be an appropriate index of manufactured product quality. This essay discusses the meaning of reliability and its linkages with quality

  5. Simulated Impacts of Climate Change on Water Use and Yield of Irrigated Sugarcane in South Africa

    NASA Technical Reports Server (NTRS)

    Jones, M. R.; Singels, A.; Ruane, A. C.

    2015-01-01

    Reliable predictions of climate change impacts on water use, irrigation requirements and yields of irrigated sugarcane in South Africa (a water-scarce country) are necessary to plan adaptation strategies. Although previous work has been done in this regard, methodologies and results vary considerably. The objectives were (1) to estimate likely impacts of climate change on sugarcane yields, water use and irrigation demand at three irrigated sugarcane production sites in South Africa (Malelane, Pongola and La Mercy) for current (1980-2010) and future (2070-2100) climate scenarios, using an approach based on the Agricultural Model Inter-comparison and Improvement Project (AgMIP) protocols; and (2) to assess the suitability of this methodology for investigating climate change impacts on sugarcane production. Future climate datasets were generated using the Delta downscaling method and three Global Circulation Models (GCMs) assuming an atmospheric CO2 concentration [CO2] of 734 ppm (A2 emissions scenario). Yield and water use were simulated using the DSSAT-Canegro v4.5 model. Irrigated cane yields are expected to increase at all three sites (between 11 and 14%), primarily due to increased interception of radiation as a result of accelerated canopy development. Evapotranspiration and irrigation requirements increased by 11% due to increased canopy cover and evaporative demand. Sucrose yields are expected to decline because of increased consumption of photo-assimilate for structural growth and maintenance respiration. Crop responses in canopy development and yield formation differed markedly between the crop cycles investigated. Possible agronomic implications of these results include reduced weed control costs due to shortened periods of partial canopy, a need for improved efficiency of irrigation to counter increased demands, and adjustments to ripening and harvest practices to counter decreased cane quality and optimize productivity. Although the Delta climate data

  6. Statistical and Economic Techniques for Site-specific Nematode Management.

    PubMed

    Liu, Zheng; Griffin, Terry; Kirkpatrick, Terrence L

    2014-03-01

    Recent advances in precision agriculture technologies and spatial statistics allow realistic, site-specific estimation of nematode damage to field crops and provide a platform for the site-specific delivery of nematicides within individual fields. This paper reviews the spatial statistical techniques that model correlations among neighboring observations and develop a spatial economic analysis to determine the potential of site-specific nematicide application. The spatial econometric methodology applied in the context of site-specific crop yield response contributes to closing the gap between data analysis and realistic site-specific nematicide recommendations and helps to provide a practical method of site-specifically controlling nematodes.

  7. Yield Estimation for Semipalatinsk Underground Nuclear Explosions Using Seismic Surface-wave Observations at Near-regional Distances

    NASA Astrophysics Data System (ADS)

    Adushkin, V. V.

    A statistical procedure is described for estimating the yields of underground nuclear tests at the former Soviet Semipalatinsk test site using the peak amplitudes of short-period surface waves observed at near-regional distances (Δ < 150 km) from these explosions. This methodology is then applied to data recorded from a large sample of the Semipalatinsk explosions, including the Soviet JVE explosion of September 14, 1988, and it is demonstrated that it provides seismic estimates of explosion yield which are typically within 20% of the yields determined for these same explosions using more accurate, non-seismic techniques based on near-source observations.
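
    As a hedged illustration of this kind of calibration, the sketch below fits a generic log-linear amplitude-yield relation to made-up calibration events and inverts it for an unknown event; the functional form log10(A) = a + b·log10(Y) and every number are assumptions for demonstration, not the study's Semipalatinsk calibration.

      # Sketch: generic amplitude-yield calibration and inversion.
      # The relation and all data below are illustrative assumptions.
      import numpy as np

      # Calibration events: known yields (kt) and observed peak surface-wave amplitudes
      yields_kt = np.array([2.0, 10.0, 20.0, 50.0, 100.0, 150.0])
      amplitudes = np.array([0.8, 3.5, 6.9, 16.0, 30.5, 44.0])  # arbitrary units

      # Least-squares fit of log10(A) on log10(Y); polyfit returns [slope, intercept]
      b, a = np.polyfit(np.log10(yields_kt), np.log10(amplitudes), 1)

      def estimate_yield(amplitude):
          """Invert the calibrated relation to estimate yield from an amplitude."""
          return 10 ** ((np.log10(amplitude) - a) / b)

      print(f"fitted relation: log10(A) = {a:.2f} + {b:.2f} log10(Y)")
      print(f"estimated yield for A = 25: {estimate_yield(25.0):.1f} kt")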

  8. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
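
    The precision problem can be made concrete by looking at how wide a confidence interval a reliability coefficient of a given size carries at these sample sizes. The sketch below uses the Fisher z interval for a correlation-type (e.g., retest) coefficient; the assumed reliability of 0.80 is arbitrary, while the sample sizes echo the medians and mean reported in the abstract.

      # Sketch: confidence-interval width for a correlation-type reliability vs. sample size.
      # The reliability value 0.80 is an arbitrary illustrative choice.
      import numpy as np
      from scipy import stats

      def fisher_ci(r, n, conf=0.95):
          """Approximate CI for a correlation-type reliability via Fisher's z."""
          z = np.arctanh(r)
          se = 1.0 / np.sqrt(n - 3)
          crit = stats.norm.ppf(0.5 + conf / 2)
          return np.tanh(z - crit * se), np.tanh(z + crit * se)

      for n in (36, 64, 90, 182, 260, 1000):
          lo, hi = fisher_ci(0.80, n)
          print(f"N = {n:4d}: 95% CI for r_xx = 0.80 is ({lo:.3f}, {hi:.3f})")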

  9. Validity and reliability of CHOICE Health Experience Questionnaire: Thai version.

    PubMed

    Aiyasanon, Nipa; Premasathian, Nalinee; Nimmannit, Akarin; Jetanavanich, Pantip; Sritippayawan, Suchai

    2009-09-01

    To assess the reliability and validity of the Thai translation of the CHOICE Health Experience Questionnaire (CHEQ), an English-language questionnaire developed specifically for end-stage renal disease (ESRD) patients. The CHEQ comprises two parts: the nine general domains of the SF-36 (physical function, role-physical, bodily pain, mental health, role-emotional, social function, vitality, general health, and reported transition) and 16 dialysis-specific domains of the CHEQ (role-physical, mental health, general health, freedom, travel restriction, cognitive function, financial function, restriction of diet and fluids, recreation, work, body image, symptoms, sex, sleep, access, and quality of life). The authors translated the CHEQ questionnaire into Thai and confirmed the accuracy by back translation. The pilot study sample was 10 Thai ESRD patients. The CHEQ (Thai) was then applied to 110 Thai ESRD patients. Twenty-three patients were on chronic peritoneal dialysis and 87 were on chronic intermittent hemodialysis. Statistical analysis included descriptive statistics, the Mann-Whitney U test, Student's t-test, and Cronbach's alpha. Construct validity was satisfactory, with a significant difference (P less than 0.001) between the low and high groups. The Cronbach's alpha reliability coefficient for the total CHEQ (Thai) scale was 0.98. Cronbach's alphas ranged from 0.58 to 0.92 and were greater than 0.7 for all domains except the social function and quality of life domains (alpha = 0.66 and 0.575, respectively). The CHEQ (Thai) is reliable and valid for the assessment of Thai ESRD patients receiving chronic dialysis. Its properties are similar to those reported for the original version.
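
    For reference, Cronbach's alpha, the internal-consistency statistic reported throughout this study, can be computed directly from an item-response matrix as sketched below; the simulated responses are placeholders, not CHEQ data.

      # Sketch: Cronbach's alpha from a respondents x items score matrix.
      # The simulated data are placeholders only.
      import numpy as np

      def cronbach_alpha(items):
          """items: 2-D array, rows = respondents, columns = scale items."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1)
          total_var = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

      rng = np.random.default_rng(0)
      true_score = rng.normal(size=(110, 1))                     # 110 respondents
      responses = true_score + 0.8 * rng.normal(size=(110, 6))   # 6 correlated items
      print(f"alpha = {cronbach_alpha(responses):.2f}")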

  10. Do downscaled general circulation models reliably simulate historical climatic conditions?

    USGS Publications Warehouse

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2018-01-01

    The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. However, in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test at a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
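
    A minimal version of the distribution comparison described here, using the two-sample Kolmogorov-Smirnov test from SciPy on synthetic stand-ins for the gridded-station and downscaled precipitation series, is sketched below.

      # Sketch: KS comparison of an "observed" and a "downscaled" monthly series.
      # The gamma-distributed series are synthetic placeholders for GSD and SD PPT.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      ppt_observed = rng.gamma(shape=2.0, scale=40.0, size=672)    # 56 years x 12 months
      ppt_downscaled = rng.gamma(shape=2.2, scale=38.0, size=672)

      ks_stat, p_value = stats.ks_2samp(ppt_observed, ppt_downscaled)
      reliable = p_value >= 0.05   # distributions not significantly different
      print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}, reliable: {reliable}")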

  11. The development of a quality appraisal tool for studies of diagnostic reliability (QAREL).

    PubMed

    Lucas, Nicholas P; Macaskill, Petra; Irwig, Les; Bogduk, Nikolai

    2010-08-01

    In systematic reviews of the reliability of diagnostic tests, no quality assessment tool has been used consistently. The aim of this study was to develop a specific quality appraisal tool for studies of diagnostic reliability. Key principles for the quality of studies of diagnostic reliability were identified with reference to epidemiologic principles, existing quality appraisal checklists, and the Standards for Reporting of Diagnostic Accuracy (STARD) and Quality Assessment of Diagnostic Accuracy Studies (QUADAS) resources. Specific items that encompassed each of the principles were developed. Experts in diagnostic research provided feedback on the items that were to form the appraisal tool. This process was iterative and continued until consensus among experts was reached. The Quality Appraisal of Reliability Studies (QAREL) checklist includes 11 items that explore seven principles. Items cover the spectrum of subjects, spectrum of examiners, examiner blinding, order effects of examination, suitability of the time interval among repeated measurements, appropriate test application and interpretation, and appropriate statistical analysis. QAREL has been developed as a specific quality appraisal tool for studies of diagnostic reliability. The reliability of this tool in different contexts needs to be evaluated. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  12. Evidence for B → Kη'γ decays at Belle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wedd, R.; Barberio, E.; Limosani, A.

    2010-06-01

    We present the results of a search for the radiative decay B → Kη'γ and find evidence for B⁺ → K⁺η'γ decays at the 3.3 standard deviation level, with a partial branching fraction of (3.6 ± 1.2 ± 0.4) × 10⁻⁶, where the first error is statistical and the second systematic. This measurement is restricted to the region of combined Kη' invariant mass less than 3.4 GeV/c². A 90% confidence level upper limit of 6.4 × 10⁻⁶ is obtained for the partial branching fraction of the decay B⁰ → K⁰η'γ in the same Kη' invariant mass region. These results are obtained from a 605 fb⁻¹ data sample containing 657 × 10⁶ BB pairs collected at the Υ(4S) resonance with the Belle detector at the KEKB asymmetric-energy e⁺e⁻ collider.

  13. Increasing Crop Yields in Water Stressed Countries by Combining Operations of Freshwater Reservoir and Wastewater Reclamation Plant

    NASA Astrophysics Data System (ADS)

    Bhushan, R.; Ng, T. L.

    2015-12-01

    Freshwater resources around the world are becoming increasingly scarce due to population growth, industrialization and climate change. This is a serious concern for water-stressed countries, including those in Asia and North Africa where future food production is expected to be negatively affected. To address this problem, we investigate the potential of combining freshwater reservoir and wastewater reclamation operations. Reservoir water is the cheaper source of irrigation, but is often limited and climate sensitive. Treated wastewater is a more reliable alternative for irrigation, but often requires extensive further treatment, which can be expensive. We propose combining the operations of a reservoir and a wastewater reclamation plant (WWRP) to augment the supply from the reservoir with reclaimed water for increasing crop yields in water-stressed regions. The joint system of reservoir and WWRP is modeled as a multi-objective optimization problem with the dual objectives of maximizing crop yield and minimizing total cost, subject to constraints on reservoir storage, spill and release, and the capacity of the WWRP. We use the crop growth model AquaCrop, supported by the Food and Agriculture Organization of the United Nations (FAO), to model crop growth in response to water use. AquaCrop considers the effects of water deficit on crop growth stages, and from there estimates crop yield. We generate results comparing total crop yield under irrigation with water from just the reservoir (which is limited and often interrupted) and yield with water from the joint system (which has the potential of higher supply and greater reliability). We will present results for locations in India and Africa to evaluate the potential of the joint operations for improving food security in those areas under different budgets.

  14. Performance of intraclass correlation coefficient (ICC) as a reliability index under various distributions in scale reliability studies.

    PubMed

    Mehta, Shraddha; Bastero-Caballero, Rowena F; Sun, Yijun; Zhu, Ray; Murphy, Diane K; Hardas, Bhushan; Koch, Gary

    2018-04-29

    Many published scale validation studies determine inter-rater reliability using the intra-class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how interaction of subject distribution, sample size, and levels of rater disagreement affects ICC and provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, ICC from the convex distribution is smaller than ICC for the uniform distribution, which in turn is smaller than ICC for the concave distribution. The variance component estimates also show that the dissimilarity of ICC among distributions is attributed to the study design (i.e., distribution of subjects) component of subject variability and not the scale quality component of rater error variability. The dependency of ICC on the distribution of subjects makes it difficult to compare results across reliability studies. Hence, it is proposed that reliability studies should be designed using a uniform distribution of subjects because of the standardization it provides for representing objective disagreement. In the absence of uniform distribution, a sampling method is proposed to reduce the non-uniformity. In addition, as expected, high levels of disagreement result in low ICC, and when the type of distribution is fixed, any increase in the number of subjects beyond a moderately large specification such as n = 80 does not have a major impact on ICC. Copyright © 2018 John Wiley & Sons, Ltd.
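
    For context, the sketch below computes the two-way random-effects, absolute-agreement, single-rater ICC (Shrout and Fleiss ICC(2,1)) from a subjects-by-raters matrix; the simulated ratings and variance components are placeholders, and the paper's own simulation design is not reproduced here.

      # Sketch: ICC(2,1) from a subjects x raters score matrix via the ANOVA mean squares.
      # The simulated ratings are placeholders only.
      import numpy as np

      def icc_2_1(scores):
          """scores: 2-D array, rows = subjects, columns = raters."""
          scores = np.asarray(scores, dtype=float)
          n, k = scores.shape
          grand = scores.mean()
          ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between subjects
          ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between raters
          ss_total = ((scores - grand) ** 2).sum()
          ms_rows = ss_rows / (n - 1)
          ms_cols = ss_cols / (k - 1)
          ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
          return (ms_rows - ms_err) / (
              ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
          )

      rng = np.random.default_rng(3)
      n_subjects, n_raters = 80, 4
      subject_effect = rng.normal(0, 1.0, size=(n_subjects, 1))
      rater_effect = rng.normal(0, 0.3, size=(1, n_raters))
      ratings = subject_effect + rater_effect + rng.normal(0, 0.7, (n_subjects, n_raters))
      print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")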

  15. Statistics on Blindness in the Model Reporting Area 1969-1970.

    ERIC Educational Resources Information Center

    Kahn, Harold A.; Moorhead, Helen B.

    Presented in the form of 30 tables are statistics on blindness in 16 states which have agreed to uniform definitions and procedures to improve reliability of data regarding blind persons. The data indicates that rates of blindness were generally higher for nonwhites than for whites with the ratio ranging from almost 10 for glaucoma to minimal for…

  16. CRAX/Cassandra Reliability Analysis Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, D.

    1999-02-10

    Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf components (COTS). In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment, etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CMX (Cassandra Exoskeleton) reliability analysis software.

  17. Recent Reliability Reporting Practices in "Psychological Assessment": Recognizing the People behind the Data

    ERIC Educational Resources Information Center

    Green, Carlton E.; Chen, Cynthia E.; Helms, Janet E.; Henze, Kevin T.

    2011-01-01

    Helms, Henze, Sass, and Mifsud (2006) defined good practices for internal consistency reporting, interpretation, and analysis consistent with an alpha-as-data perspective. Their viewpoint (a) expands on previous arguments that reliability coefficients are group-level summary statistics of samples' responses rather than stable properties of scales…

  18. Statistical optimization for enhanced yields of probiotic Bacillus coagulans and its phage resistant mutants followed by kinetic modelling of the process.

    PubMed

    Pandey, Kavita R; Joshi, Chetan; Vakil, Babu V

    2016-01-01

    Probiotics are microorganisms which, when administered in adequate amounts, confer health benefits on the host. A leading pharmaceutical company producing Bacillus coagulans as a probiotic was facing the problem of recurring phage attacks. Two mutants, B. co PIII and B. co MIII, isolated as phage-resistant mutants after UV irradiation and MMS treatment of the phage-sensitive B. coagulans parental culture, were characterized at the functional and molecular level and were noted to have undergone interesting genetic changes. The non-specific genetic alterations induced by mutagenesis can also lead to alterations in cell performance. Hence, in the current study the parental strain and the two mutants were selected for shake-flask optimization. A Plackett-Burman design was used to select the significant culture variables affecting biomass production. The evolutionary operation method was applied for further optimization. The study showed wide variations in the nutritional requirements of the phage-resistant mutants after exposure to mutagens. Increments of 150%, 134% and 152% were observed in the biomass production of B. coagulans (parental type) and the mutants B. co PIII and B. co MIII, respectively, compared with the yields from the one-factor-at-a-time technique. Using the logistic and modified Luedeking-Piret equations, the biomass accumulation and substrate utilization efficiency of the bioprocess were determined. The experimental data were in agreement with the results predicted by statistical analysis and modelling. The developed model may be useful for controlling the growth and substrate consumption kinetics in large-scale fermentation using B. coagulans.
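
    As a small illustration of the kinetic-modelling step mentioned here, the sketch below fits the closed-form logistic growth equation to simulated shake-flask biomass readings with SciPy; the time points, parameter values and noise level are placeholders rather than the B. coagulans data.

      # Sketch: fitting the logistic growth equation to simulated biomass data.
      # All numbers are illustrative placeholders.
      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, x0, x_max, mu_max):
          """Closed form of dX/dt = mu_max * X * (1 - X / x_max)."""
          return (x0 * np.exp(mu_max * t)) / (1.0 - (x0 / x_max) * (1.0 - np.exp(mu_max * t)))

      t = np.linspace(0, 24, 13)                       # sampling times, h
      rng = np.random.default_rng(11)
      x_true = logistic(t, 0.05, 4.0, 0.45)            # "true" biomass curve, g/L
      x_obs = x_true * (1 + 0.03 * rng.standard_normal(t.size))   # 3% noise

      popt, _ = curve_fit(logistic, t, x_obs, p0=[0.1, 3.0, 0.3])
      print("x0 = {:.3f} g/L, x_max = {:.2f} g/L, mu_max = {:.2f} 1/h".format(*popt))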

  19. Statistical studies of animal response data from USF toxicity screening test method

    NASA Technical Reports Server (NTRS)

    Hilado, C. J.; Machado, A. M.

    1978-01-01

    Statistical examination of animal response data obtained using Procedure B of the USF toxicity screening test method indicates that the data deviate only slightly from a normal or Gaussian distribution. This slight departure from normality is not expected to invalidate conclusions based on theoretical statistics. Comparison of times to staggering, convulsions, collapse, and death as endpoints shows that time to death appears to be the most reliable endpoint because it offers the lowest probability of missed observations and premature judgements.

  20. Incorporating Nonparametric Statistics into Delphi Studies in Library and Information Science

    ERIC Educational Resources Information Center

    Ju, Boryung; Jin, Tao

    2013-01-01

    Introduction: The Delphi technique is widely used in library and information science research. However, many researchers in the field fail to employ standard statistical tests when using this technique. This makes the technique vulnerable to criticisms of its reliability and validity. The general goal of this article is to explore how…