Sample records for scale factor error

  1. Piecewise compensation for the nonlinear error of fiber-optic gyroscope scale factor

    NASA Astrophysics Data System (ADS)

    Zhang, Yonggang; Wu, Xunfeng; Yuan, Shun; Wu, Lei

    2013-08-01

    Fiber-optic gyroscope (FOG) scale factor nonlinear error results in errors in a strapdown inertial navigation system (SINS). In order to reduce the nonlinear error of the FOG scale factor in SINS, a compensation method based on piecewise curve fitting of the FOG output is proposed in this paper. Firstly, the sources of FOG scale factor error are introduced and a definition of the degree of nonlinearity is provided. The output range of the FOG is then divided into several small pieces, and curve fitting is performed within each piece to obtain its scale factor parameters. Different scale factor parameters are used in different pieces to improve FOG output precision. These parameters are identified using a three-axis turntable, and the nonlinear error of the FOG scale factor can thereby be reduced. Finally, a three-axis swing experiment on the SINS verifies that the proposed method reduces the attitude output errors of the SINS by compensating the nonlinear error of the FOG scale factor, improving navigation precision. The experiments also demonstrate that the compensation scheme is easy to implement: it effectively compensates the nonlinear error of the FOG scale factor with only slightly increased computational complexity, and it can be applied in FOG-based inertial technology to improve precision.
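
    A minimal numerical sketch of the piecewise idea (the nonlinearity model, segment layout, and all numbers below are illustrative assumptions, not the authors' parameters):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy FOG: true input rate -> output through a mildly nonlinear scale factor.
        rates = np.linspace(-100.0, 100.0, 2001)                # deg/s turntable inputs
        outputs = (1.0 + 1e-4 * np.abs(rates)) * rates + rng.normal(0.0, 1e-3, rates.size)

        # One global linear fit versus piecewise fits over small output ranges.
        edges = np.linspace(outputs.min(), outputs.max(), 9)    # 8 pieces
        k_global = np.polyfit(outputs, rates, 1)
        k_piece = [np.polyfit(outputs[(outputs >= lo) & (outputs <= hi)],
                              rates[(outputs >= lo) & (outputs <= hi)], 1)
                   for lo, hi in zip(edges[:-1], edges[1:])]

        def compensate(y):
            """Convert a raw FOG output to a rate using its piece's parameters."""
            i = int(np.clip(np.searchsorted(edges, y) - 1, 0, len(k_piece) - 1))
            return np.polyval(k_piece[i], y)

        y = 87.3                                                # some raw FOG output
        print("global fit:", np.polyval(k_global, y), " piecewise:", compensate(y))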

  2. Computational Thermochemistry: Scale Factor Databases and Scale Factors for Vibrational Frequencies Obtained from Electronic Model Chemistries.

    PubMed

    Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G

    2010-09-14

    Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap approximation, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
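
    The optimization behind such factors is a one-parameter least-squares fit with a closed-form solution; a sketch with invented frequencies (the paper's factors come from large curated databases):

        import numpy as np

        # Harmonic frequencies from a model chemistry (omega) and reference
        # fundamentals (nu); the values below are invented for illustration.
        omega = np.array([3050.0, 1650.0, 1200.0, 850.0])   # cm^-1, computed
        nu    = np.array([2950.0, 1600.0, 1170.0, 830.0])   # cm^-1, reference

        # The scale factor minimizing sum_i (lam*omega_i - nu_i)^2 is closed-form.
        lam = np.dot(omega, nu) / np.dot(omega, omega)
        rmse_scaled   = np.sqrt(np.mean((lam * omega - nu) ** 2))
        rmse_unscaled = np.sqrt(np.mean((omega - nu) ** 2))
        print(f"lambda = {lam:.4f}, RMSE {rmse_unscaled:.1f} -> {rmse_scaled:.1f} cm^-1")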

  3. Suppression of vapor cell temperature error for spin-exchange-relaxation-free magnetometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Jixi, E-mail: lujixi@buaa.edu.cn; Qian, Zheng; Fang, Jiancheng

    2015-08-15

    This paper presents a method to reduce the vapor cell temperature error of the spin-exchange-relaxation-free (SERF) magnetometer. Fluctuation of the cell temperature can induce variations of the optical rotation angle, resulting in a scale factor error of the SERF magnetometer. In order to suppress this error, we employ the variation of the probe beam absorption to offset the variation of the optical rotation angle. The theoretical discussion of our method indicates that the scale factor error introduced by the fluctuation of the cell temperature can be suppressed by setting the optical depth close to one. In our experiment, we adjust the probe frequency to obtain various optical depths and then measure the variation of the scale factor with respect to the corresponding cell temperature changes. Our experimental results show good agreement with our theoretical analysis. Under our experimental conditions, the error is reduced significantly compared with the case where the probe wavelength is adjusted to maximize the probe signal. The cost of this method is a reduction of the scale factor of the magnetometer; however, according to our analysis, this has only a minor effect on the sensitivity under proper operating parameters.
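
    A toy calculation of why an optical depth near one is the operating sweet spot, assuming the probe signal is proportional to OD*exp(-OD) (a crude first-order absorption model, far simpler than the real SERF signal chain):

        import numpy as np

        def signal(od):
            # Toy probe signal: proportional to od * exp(-od).
            return od * np.exp(-od)

        for od in np.linspace(0.25, 3.0, 12):
            d = 1e-6
            # Logarithmic sensitivity: relative signal change per relative OD change.
            sens = (signal(od + d) - signal(od)) / d * od / signal(od)
            print(f"OD = {od:4.2f}   d(ln S)/d(ln OD) = {sens:+.3f}")
        # Analytically the sensitivity is 1 - OD: it vanishes at OD = 1, so the
        # scale factor is least affected by temperature-driven density changes there.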

  4. Sampling distributions of error in multidimensional scaling.

    ERIC Educational Resources Information Center

    Stake, Robert E.; And Others

    An empirical study was made of the error factors in multidimensional scaling (MDS) to refine the use of MDS for more expert manipulation of scales used in educational measurement. The purpose of the research was to generate tables of the sampling distributions that are necessary for discriminating between error and nonerror MDS dimensions. The…

  5. Errors introduced by dose scaling for relative dosimetry

    PubMed Central

    Watanabe, Yoichi; Hayashi, Naoki

    2012-01-01

    Some dosimeters require a relationship between detector signal and delivered dose. This relationship (characteristic curve or calibration equation) usually depends on the environment under which the dosimeters are manufactured or stored. To compensate for the difference in radiation response among different batches of dosimeters, the measured dose can be scaled by normalizing it to a specific dose. Such a procedure, often called "relative dosimetry", allows us to skip the time-consuming production of a calibration curve for each irradiation. In this study, the magnitudes of errors due to the dose scaling procedure were evaluated by using the characteristic curves of the BANG3 polymer gel dosimeter, radiographic EDR2 films, and GAFCHROMIC EBT2 films. Several sets of calibration data were obtained for each type of dosimeter, and the calibration equation of one set of data was used to estimate doses for dosimeters from different batches. The scaled doses were then compared with expected doses, which were obtained by using the true calibration equation specific to each batch. In general, the magnitude of the errors increased with increasing deviation of the dose scaling factor from unity. The errors also depended strongly on the difference in shape between the true and reference calibration curves. For example, for the BANG3 polymer gel, whose characteristic curve can be approximated with a linear equation, the error for a batch requiring a dose scaling factor of 0.87 was larger than the errors for batches requiring smaller magnitudes of dose scaling (factors of 0.93 or 1.02). The characteristic curves of EDR2 and EBT2 films required nonlinear equations; with those dosimeters, errors larger than 5% were commonly observed at doses below 50% and above 150% of the normalization dose. In conclusion, dose scaling for relative dosimetry introduces large errors in the measured doses when a large dose scaling is applied, and the procedure should be applied with special care. PACS numbers: 87.56.Da, 06.20.Dk, 06.20.fb PMID:22955658
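
    The error mechanism can be reproduced with two hypothetical quadratic calibration curves (all coefficients are invented; real film responses are more complicated):

        import numpy as np

        # Two hypothetical characteristic curves signal(d) = a*d + b*d**2; the
        # coefficients are invented for illustration only.
        REF  = (0.50, 0.020)   # batch used to produce the calibration equation
        TRUE = (0.44, 0.030)   # batch actually irradiated

        def sig(coef, d):
            a, b = coef
            return a * d + b * d ** 2

        def inv(coef, s):
            a, b = coef        # positive root of b*d**2 + a*d - s = 0
            return (-a + np.sqrt(a * a + 4.0 * b * s)) / (2.0 * b)

        d_norm = 2.0                                   # normalization dose (Gy)
        doses = np.array([0.5, 1.0, 2.0, 3.0, 4.0])    # delivered doses

        d_est = inv(REF, sig(TRUE, doses))             # doses read off the wrong curve
        k = d_norm / inv(REF, sig(TRUE, d_norm))       # dose scaling factor
        print("dose scaling factor:", round(k, 3))
        print("residual error (%):", np.round(100.0 * (k * d_est / doses - 1.0), 1))
        # The error vanishes at the normalization dose and grows away from it.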

  6. Experiences from the testing of a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift, and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  7. [Confirmatory factor analysis of the short French version of the Center for Epidemiological Studies of Depression Scale (CES-D10) in adolescents].

    PubMed

    Cartierre, N; Coulon, N; Demerval, R

    2011-09-01

    Screening for depressive symptoms among adolescents is a key public health priority. To measure the severity of depressive symptomatology, a four-dimensional 20-item scale, the Center for Epidemiological Studies-Depression Scale (CES-D), was developed. A shorter 10-item version was later developed and validated (Andresen et al.). For this brief version, several authors supported a two-factor structure (Negative and Positive affect), but the relationship between the two reverse-worded items of the Positive affect factor could be better accounted for by correlated errors. The aim of this study is threefold: first, to test a French version of the CES-D10 among adolescents; second, to test the relevance of a one-dimensional structure by allowing error correlation for the Positive affect items; finally, to examine the extent to which this structural model is invariant across gender. The sample was composed of 269 French middle-school adolescents (139 girls and 130 boys, mean age 13.8, SD=0.65). Confirmatory factor analyses (CFA) using LISREL 8.52 were conducted to assess the fit to the data of three factor models: a one-factor model, a two-factor model (Positive and Negative affect), and a one-factor model with correlated errors specified between the two reverse-worded items. A multigroup analysis was then conducted to test the scale's invariance for girls and boys. Internal consistency of the CES-D10 was satisfactory for the adolescent sample (α=0.75). The best-fitting model is the one-factor model with correlated errors between the two items of the former Positive affect factor (χ²/df=2.50; GFI=0.939; CFI=0.894; RMSEA=0.076). This model fit the data better than the one-factor model without error correlation: Δχ²(1)=22.14, p<0.001. The one-factor model with correlated errors was then analyzed across separate samples of girls and boys. The model explains the data somewhat better for boys than for girls. The model's overall χ²(68) without equality constraints from the multigroup analysis was 107.98; the χ²(89) statistic for the model with equality-constrained factor loadings was 121.31. The change in the overall χ² is not statistically significant, implying that the model is invariant across gender. Mean scores were higher for girls than boys: 9.69 versus 7.19; t(267)=4.13, p<0.001. To conclude, and pending further research using the French version of the CES-D10 with adolescents, this short scale appears generally acceptable and can be a useful tool for both research and practice. Scale invariance across gender has been demonstrated, but invariance across age remains to be tested. Copyright © 2011 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
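
    The gender-invariance conclusion rests on a chi-square difference test whose arithmetic can be checked directly from the numbers in the abstract (a sketch using scipy):

        from scipy.stats import chi2

        # Multigroup models from the abstract: unconstrained versus
        # equality-constrained factor loadings.
        chi2_free, df_free = 107.98, 68
        chi2_con,  df_con  = 121.31, 89

        d_chi2, d_df = chi2_con - chi2_free, df_con - df_free
        p = chi2.sf(d_chi2, d_df)
        print(f"delta chi2 = {d_chi2:.2f} on {d_df} df, p = {p:.3f}")
        # p is far above 0.05, so the constrained model fits no worse: the
        # loadings are invariant across gender, as the authors conclude.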

  8. Identification of factors which affect the tendency towards and attitudes of emergency unit nurses to make medical errors.

    PubMed

    Kiymaz, Dilek; Koç, Zeliha

    2018-03-01

    To determine individual and professional factors affecting the tendency of emergency unit nurses to make medical errors and their attitudes towards these errors in Turkey. Compared with other units, the emergency unit is an environment where there is an increased tendency for making medical errors due to its intensive and rapid pace, noise and complex and dynamic structure. A descriptive cross-sectional study. The study was carried out from 25 July 2014-16 September 2015 with the participation of 284 nurses who volunteered to take part in the study. Data were gathered using the data collection survey for nurses, the Medical Error Tendency Scale and the Medical Error Attitude Scale. It was determined that 40.1% of the nurses previously witnessed medical errors, 19.4% made a medical error in the last year, 17.6% of medical errors were caused by medication errors where the wrong medication was administered in the wrong dose, and none of the nurses filled out a case report form about the medical errors they made. Regarding the factors that caused medical errors in the emergency unit, 91.2% of the nurses stated excessive workload as a cause; 85.1% stated an insufficient number of nurses; and 75.4% stated fatigue, exhaustion and burnout. The study showed that nurses who loved their job were satisfied with their unit and who always worked during day shifts had a lower medical error tendency. It is suggested to consider the following actions: increase awareness about medical errors, organise training to reduce errors in medication administration, develop procedures and protocols specific to the emergency unit health care and create an environment which is not punitive wherein nurses can safely report medical errors. © 2017 John Wiley & Sons Ltd.

  9. A quality assessment of 3D video analysis for full scale rockfall experiments

    NASA Astrophysics Data System (ADS)

    Volkwein, A.; Glover, J.; Bourrier, F.; Gerber, W.

    2012-04-01

    The main goal of full-scale rockfall experiments is to retrieve the 3D trajectory of a boulder along the slope; such trajectories can then be used to calibrate rockfall simulation models. This contribution presents the application of video analysis techniques to capture rockfall velocity in free-fall full-scale rockfall experiments along a rock face with an inclination of about 50 degrees. Different scaling methodologies have been evaluated; they differ mainly in the way the scaling factors between the video frames and reality are determined. For this purpose, scale bars and targets with known dimensions were distributed along the slope in advance. The scaling approaches are briefly described as follows: (i) the image raster is scaled to the distant fixed scale bar, then recalibrated to the plane of the passing rock boulder by taking the measured position of the nearest impact as the distance to the camera; the distances between the camera, scale bar, and passing boulder are surveyed. (ii) The image raster is scaled using the four targets (identified using the frontal video) nearest to the trajectory to be analyzed, and the average of their scaling factors is taken as the scaling factor. (iii) The image raster is scaled using the four nearest targets, and the scaling factor for a trajectory is calculated by balancing the mean scaling factors associated with the two nearest and the two farthest targets in relation to their mean distance to the analyzed trajectory. (iv) As in the previous method, but with scaling factors varying along the trajectory. It was shown that a direct measure of the scaling target and nearest impact zone is the most accurate. If a constant plane is assumed, lateral deviations of the boulder from the fall line are not accounted for, which adds error to the analysis; a combination of scaling methods (i) and (iv) is therefore considered to give the best results. For best results regarding the lateral rough positioning along the slope, the frontal video must also be scaled. The error in scaling the video images can be evaluated by comparing the vertical trajectory component over time with the theoretical polynomial trend according to gravity. The different tracking techniques used to plot the position of the boulder's center of gravity all generated positional data with error small enough for trajectory analysis. However, when calculating instantaneous velocities, this error is amplified to an unacceptable level, so a regression analysis of the data is helpful to optimize the trajectory and velocity estimates, respectively.
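
    The velocity-error amplification noted at the end is generic to finite differencing of noisy positions; a sketch with assumed camera numbers:

        import numpy as np

        rng = np.random.default_rng(0)
        dt, sigma = 1.0 / 50.0, 0.05          # 50 fps video, 5 cm position noise
        t = np.arange(0.0, 2.0, dt)
        z = -0.5 * 9.81 * t ** 2              # free-fall trajectory component
        z_meas = z + rng.normal(0.0, sigma, t.size)

        # Direct differencing amplifies noise: central differences of white
        # noise have standard deviation sigma / (sqrt(2) * dt).
        v_noise = np.gradient(z_meas, dt) - np.gradient(z, dt)
        print("velocity noise std: %.2f m/s (predicted %.2f)"
              % (v_noise.std(), sigma / (np.sqrt(2) * dt)))

        # Regression against the gravity polynomial recovers a smooth velocity.
        v_fit = np.polyval(np.polyder(np.polyfit(t, z_meas, 2)), t)
        print("max velocity error after regression: %.2f m/s"
              % np.abs(v_fit - (-9.81 * t)).max())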

  10. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book are used to illustrate the real-world performance of this statistic.
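
    Not the authors' statistic, but a generic moment-matching sketch in the same spirit: choose the degrees of freedom from the observed skewness (a chi-square with d degrees of freedom has skewness sqrt(8/d)), then rescale the statistic to match the mean:

        import numpy as np
        from scipy.stats import chi2, skew

        rng = np.random.default_rng(1)
        df_nominal = 10

        # Stand-in for a small-sample null distribution of a fit statistic:
        # right-skewed relative to its nominal chi-square reference.
        T = chi2.rvs(4, size=20000, random_state=rng) * (df_nominal / 4.0)

        # Moment matching: pick d from the observed skewness, rescale the mean.
        d_adj = 8.0 / skew(T) ** 2
        T_adj = T * d_adj / T.mean()

        alpha = 0.05
        print("nominal  rejection rate:", np.mean(T > chi2.ppf(1 - alpha, df_nominal)))
        print("adjusted rejection rate:", np.mean(T_adj > chi2.ppf(1 - alpha, d_adj)))
        # The nominal reference over-rejects (~0.12 here); the skewness-adjusted
        # reference restores a rate close to the 0.05 target.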

  11. Short version of the Depression Anxiety Stress Scale-21: is it valid for Brazilian adolescents?

    PubMed Central

    da Silva, Hítalo Andrade; dos Passos, Muana Hiandra Pereira; de Oliveira, Valéria Mayaly Alves; Palmeira, Aline Cabral; Pitangui, Ana Carolina Rodarti; de Araújo, Rodrigo Cappato

    2016-01-01

    Objective: To evaluate the interday reproducibility, agreement and construct validity of the short version of the Depression Anxiety Stress Scale-21 applied to adolescents. Methods: The sample consisted of adolescents of both sexes, aged between 10 and 19 years, who were recruited from schools and sports centers. Construct validity was assessed by exploratory factor analysis, and reliability was calculated for each construct using the intraclass correlation coefficient, the standard error of measurement and the minimum detectable change. Results: A factor analysis combining the items corresponding to anxiety and stress in a single factor, and depression in a second factor, showed a better fit for all 21 items, with higher factor loadings on their respective constructs. The reproducibility values for depression were: intraclass correlation coefficient 0.86, standard error of measurement 0.80, and minimum detectable change 2.22; for anxiety/stress: intraclass correlation coefficient 0.82, standard error of measurement 1.80, and minimum detectable change 4.99. Conclusion: The short version of the Depression Anxiety Stress Scale-21 showed excellent reliability and strong internal consistency. The two-factor model condensing the anxiety and stress constructs into a single factor was the most acceptable for the adolescent population. PMID:28076595
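
    The reported minimum detectable changes are consistent with the standard formula MDC95 = 1.96 * sqrt(2) * SEM, which can be checked in a few lines (assuming this is the formula used, which the numbers support):

        import numpy as np

        for construct, icc, sem in [("depression", 0.86, 0.80),
                                    ("anxiety/stress", 0.82, 1.80)]:
            mdc95 = 1.96 * np.sqrt(2.0) * sem
            print(f"{construct:14s}  ICC={icc:.2f}  SEM={sem:.2f}  MDC95={mdc95:.2f}")
        # Prints 2.22 and 4.99 points, matching the reported values.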

  12. Continuous quantum error correction for non-Markovian decoherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089

    2007-08-15

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.

  13. Does Wechsler Intelligence Scale administration and scoring proficiency improve during assessment training?

    PubMed

    Platt, Tyson L; Zachar, Peter; Ray, Glen E; Lobello, Steven G; Underhill, Andrea T

    2007-04-01

    Studies have found that Wechsler scale administration and scoring proficiency is not easily attained during graduate training. These findings may be related to methodological issues. Using a single-group repeated measures design, this study documents statistically significant, though modest, error reduction on the WAIS-III and WISC-III during a graduate course in assessment. The study design does not permit the isolation of training factors related to error reduction, or assessment of whether error reduction is a function of mere practice. However, the results do indicate that previous study findings of no or inconsistent improvement in scoring proficiency may have been the result of methodological factors. Implications for teaching individual intelligence testing and further research are discussed.

  14. Random Weighting, Strong Tracking, and Unscented Kalman Filter for Soft Tissue Characterization.

    PubMed

    Shin, Jaehyun; Zhong, Yongmin; Oetomo, Denny; Gu, Chengfan

    2018-05-21

    This paper presents a new nonlinear filtering method based on the Hunt-Crossley model for online nonlinear soft tissue characterization. This method overcomes the performance degradation of the unscented Kalman filter due to contact model error. It adopts the concept of Mahalanobis distance to identify contact model error, and further incorporates a scaling factor in the predicted state covariance to compensate for the identified model error. This scaling factor is determined according to the principle of innovation orthogonality to avoid the cumbersome computation of the Jacobian matrix, and the random weighting concept is adopted to improve the estimation accuracy of the innovation covariance. A master-slave robotic indentation system is developed to validate the performance of the proposed method. Simulation and experimental results, as well as comparison analyses, demonstrate the efficacy of the proposed method for online characterization of soft tissue parameters in the presence of contact model error.
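
    A generic sketch of the covariance-inflation step; the paper derives its scaling factor from innovation orthogonality with random weighting, while here a simpler chi-square-gated Mahalanobis test stands in:

        import numpy as np
        from scipy.stats import chi2

        def gated_update(x, P, z, H, R):
            """Kalman measurement update with Mahalanobis-gated covariance
            inflation: if the innovation is improbably large (suggesting model
            error), inflate the predicted covariance so the filter re-weights
            toward the measurement."""
            y = z - H @ x                                # innovation
            S = H @ P @ H.T + R
            d2 = float(y @ np.linalg.solve(S, y))        # squared Mahalanobis distance
            gate = chi2.ppf(0.95, df=len(z))
            if d2 > gate:                                # model error detected
                P = (d2 / gate) * P                      # scalar scaling factor
                S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            return x + K @ y, (np.eye(len(x)) - K @ H) @ P

        x, P = np.zeros(2), np.eye(2)
        H, R = np.eye(2), 0.1 * np.eye(2)
        x, P = gated_update(x, P, np.array([3.0, 0.1]), H, R)
        print(x, np.diag(P))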

  15. Investigation of the error induced by the micro-scale gas film on the performance of the aerostatic spindle in ultra-precision machining

    NASA Astrophysics Data System (ADS)

    Chen, Dongju; Huo, Chen; Cui, Xianxian; Pan, Ri; Fan, Jinwei; An, Chenhui

    2018-05-01

    The objective of this work is to study the influence of the error induced by the gas film at micro scale on the static and dynamic behavior of a shaft supported by aerostatic bearings. Static and dynamic models of the aerostatic bearing are presented using the stiffness and damping calculated at micro scale. The static simulation shows that the deformation of the aerostatic spindle system is decreased at micro scale. For the dynamic behavior, both the stiffness and the damping in the axial and radial directions are increased at micro scale. Experiments on the stiffness and rotation error of the spindle show that the deflection of the shaft computed with the micro-scale parameters is very close to the measured deviation of the spindle system. The frequency content in the transient analysis is similar to the actual test, and both are higher than the results from the traditional model that neglects the micro-scale factor. It can therefore be concluded that the model accounting for micro-scale effects is closer to the actual working conditions of the aerostatic spindle system. These results provide a theoretical basis for the design and machining processes of machine tools.

  16. Using a Delphi Method to Identify Human Factors Contributing to Nursing Errors.

    PubMed

    Roth, Cheryl; Brewer, Melanie; Wieck, K Lynn

    2017-07-01

    The purpose of this study was to identify human factors associated with nursing errors. Using a Delphi technique, this study used feedback from a panel of nurse experts (n = 25) on an initial qualitative survey questionnaire, followed by summarizing the results with feedback and confirmation. Synthesized factors regarding causes of errors were incorporated into a quantitative Likert-type scale, and the original expert panel participants were queried a second time to validate the responses. The list identified 24 items as the most common causes of nursing errors, including swamping and errors made by others that nurses are expected to recognize and fix. The responses provided a consensus top-10 errors list based on means, with heavy workload and fatigue at the top of the list. The use of the Delphi survey established consensus and developed a platform upon which future study of nursing errors can evolve as a link to future solutions. This list of human factors in nursing errors should serve to stimulate dialogue among nurses about how to prevent errors and improve outcomes. Human and system failures have been the subject of an abundance of research, yet nursing errors continue to occur. © 2016 Wiley Periodicals, Inc.

  17. Measurement properties of the WOMAC LK 3.1 pain scale.

    PubMed

    Stratford, P W; Kennedy, D M; Woodhouse, L J; Spadoni, G F

    2007-03-01

    The Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) is applied extensively to patients with osteoarthritis of the hip or knee. Previous work has challenged the validity of its physical function scale; however, an extensive evaluation of its pain scale has not been reported. Our purpose was to estimate the internal consistency, factorial validity, test-retest reliability, and standard error of measurement (SEM) of the WOMAC LK 3.1 pain scale. Four hundred and seventy-four patients with osteoarthritis of the hip or knee awaiting arthroplasty were administered the WOMAC. Estimates of internal consistency (coefficient alpha), factorial validity (confirmatory factor analysis), and the SEM based on internal consistency (SEM(IC)) were obtained. Test-retest reliability [Type 2,1 intraclass correlation coefficient (ICC)] and a corresponding SEM(TRT) were estimated on a subsample of 36 patients. Our estimates were: internal consistency alpha = 0.84; SEM(IC) = 1.48; Type 2,1 ICC = 0.77; SEM(TRT) = 1.69. Confirmatory factor analysis failed to support a single-factor structure of the pain scale with uncorrelated error terms. Two comparable models provided excellent fit: (1) a model with correlated error terms between the walking and stairs items and between the night and sit items (chi2 = 0.18, P = 0.98); (2) a two-factor model with the walking and stairs items loading on one factor, the night and sit items loading on a second factor, and the standing item loading on both factors (chi2 = 0.18, P = 0.98). Our examination of the factorial structure of the WOMAC pain scale failed to support a single factor, and the internal consistency analysis yielded a coefficient less than optimal for individual patient use. An alternative strategy to summing the five item responses for individual patient application would be to interpret item responses separately or to sum only those items which display homogeneity.

  18. High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu

    2017-05-01

    The inertial navigation system (INS) is a core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements along all three axes, improving system accuracy, but it cannot eliminate the errors caused by misalignment angles and scale factor error. Moreover, discrete calibration methods cannot fulfill the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error during one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS, together with a procedure for carrying it out. The results of a self-calibration simulation experiment prove that this scheme can estimate all the errors in the calibration error model: the calibration precision of the inertial sensor scale factor errors is better than 1 ppm and that of the misalignments is better than 5″. These results validate the systematic self-calibration method and demonstrate its importance for improving the accuracy of a dual-axis rotation inertial navigation system with mechanically dithered ring laser gyroscopes.

  19. Short-term prediction of rain attenuation level and volatility in Earth-to-Satellite links at EHF band

    NASA Astrophysics Data System (ADS)

    de Montera, L.; Mallet, C.; Barthès, L.; Golé, P.

    2008-08-01

    This paper shows how nonlinear models originally developed in the finance field can be used to predict rain attenuation level and volatility in Earth-to-Satellite links operating at the Extremely High Frequencies band (EHF, 20-50 GHz). A common approach to solving this problem is to consider that the prediction error corresponds only to scintillations, whose variance is assumed to be constant. Nevertheless, this assumption does not seem to be realistic because of the heteroscedasticity of error time series: the variance of the prediction error is found to be time-varying and has to be modeled. Since rain attenuation time series behave similarly to certain stocks or foreign exchange rates, a switching ARIMA/GARCH model was implemented. The originality of this model is that not only the attenuation level, but also the error conditional distribution are predicted. It allows an accurate upper-bound of the future attenuation to be estimated in real time that minimizes the cost of Fade Mitigation Techniques (FMT) and therefore enables the communication system to reach a high percentage of availability. The performance of the switching ARIMA/GARCH model was estimated using a measurement database of the Olympus satellite 20/30 GHz beacons and this model is shown to outperform significantly other existing models. The model also includes frequency scaling from the downlink frequency to the uplink frequency. The attenuation effects (gases, clouds and rain) are first separated with a neural network and then scaled using specific scaling factors. As to the resulting uplink prediction error, the error contribution of the frequency scaling step is shown to be larger than that of the downlink prediction, indicating that further study should focus on improving the accuracy of the scaling factor.
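
    A self-contained sketch of the level-plus-volatility idea with fixed toy coefficients (a real system would estimate them, e.g. with a GARCH fitting package, and add the regime switching and frequency scaling described above):

        import numpy as np

        def forecast_upper_bound(a, phi=0.95, omega=1e-4, alpha=0.30, beta=0.65, q=2.33):
            """One-step forecast of an attenuation series a (dB): AR(1) level
            plus a GARCH(1,1) conditional variance; returns (mean, upper bound),
            the bound being mean + q*sigma (q = 2.33 ~ one-sided 99% quantile)."""
            eps_prev, var_prev = 0.0, omega / (1.0 - alpha - beta)
            for t in range(1, len(a)):
                mean_t = phi * a[t - 1]                              # AR(1) level
                var_t = omega + alpha * eps_prev ** 2 + beta * var_prev  # GARCH(1,1)
                eps_prev, var_prev = a[t] - mean_t, var_t
            mean_next = phi * a[-1]
            var_next = omega + alpha * eps_prev ** 2 + beta * var_prev
            return mean_next, mean_next + q * np.sqrt(var_next)

        rng = np.random.default_rng(2)
        atten = np.abs(np.cumsum(rng.normal(0.0, 0.1, 600)))   # toy rain-fade series
        print(forecast_upper_bound(atten))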

  20. How allele frequency and study design affect association test statistics with misrepresentation errors.

    PubMed

    Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael

    2014-04-01

    We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central χ² distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and a great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct p-values.
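
    The practical impact is easy to quantify: if the statistic is actually distributed as s * chi-square(df, lambda) under the null but is compared against the central chi-square critical value, the realized type-I error follows directly (illustrative s and lambda, not values from the paper):

        from scipy.stats import chi2, ncx2

        def realized_alpha(s, lam, df=1, alpha=5e-8):
            """Type-I error when T ~ s * noncentral chi2(df, lam) under the null
            but T is compared against the central chi2(df) critical value."""
            crit = chi2.ppf(1.0 - alpha, df)
            return ncx2.sf(crit / s, df, lam)

        # The noncentrality grows with sample size, so the inflation worsens
        # as the study gets larger (alpha = 5e-8 mimics a genome-wide level).
        for lam in [0.1, 0.5, 2.0, 8.0]:
            print(f"lambda = {lam:4.1f}  realized alpha = {realized_alpha(1.05, lam):.2e}")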

  21. Gravity gradient preprocessing at the GOCE HPF

    NASA Astrophysics Data System (ADS)

    Bouman, J.; Rispens, S.; Gruber, T.; Schrama, E.; Visser, P.; Tscherning, C. C.; Veicherts, M.

    2009-04-01

    One of the products derived from the GOCE observations is the gravity gradients. These gravity gradients are provided in the Gradiometer Reference Frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. In order to use these gravity gradients for applications in Earth sciences and gravity field analysis, additional pre-processing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information, and error assessment. The temporal gravity gradient corrections consist of tidal and non-tidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. Both methods allow the gravity gradient scale factors to be estimated down to the 10^-3 level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10^-2 level with this method.

  22. A self-calibration method in single-axis rotational inertial navigation system with rotating mechanism

    NASA Astrophysics Data System (ADS)

    Chen, Yuanpei; Wang, Lingcao; Li, Kui

    2017-10-01

    A rotary modulation mechanism can greatly improve the accuracy of an inertial navigation system (INS) through rotation. Based on a single-axis rotational inertial navigation system (RINS), a self-calibration method is put forward. The whole system applies the rotation modulation technique so that the entire inertial measurement unit (IMU) can rotate around the motor shaft without any external input, and some important errors can be decoupled in the process of modulation. With the initial position and attitude information of the system as reference, the velocity errors and attitude errors during the rotation are used as measurements in a Kalman filter to estimate important system errors, which are then compensated in the system. The simulation results show that the method can complete the self-calibration of the single-axis RINS in 15 minutes and estimate the gyro drifts of the three axes, the installation error angles of the IMU, and the scale factor error of the gyro on the z-axis. The calibration accuracy of the optic gyro drifts is about 0.003°/h (1σ), and that of the scale factor error is about 1 part per million (1σ). The error estimates meet the system requirements, which can effectively improve the long-duration navigation accuracy of a vehicle or boat.

  23. Effect of neoclassical toroidal viscosity on error-field penetration thresholds in tokamak plasmas.

    PubMed

    Cole, A J; Hegna, C C; Callen, J D

    2007-08-10

    A model for field-error penetration is developed that includes nonresonant as well as the usual resonant field-error effects. The nonresonant components cause a neoclassical toroidal viscous torque that keeps the plasma rotating at a rate comparable to the ion diamagnetic frequency. The new theory is used to examine resonant error-field penetration threshold scaling in Ohmic tokamak plasmas. Compared to previous theoretical results, we find the plasma is less susceptible to error-field penetration and locking, by a factor that depends on the nonresonant error-field amplitude.

  24. Symmetry boost of the fidelity of Shor factoring

    NASA Astrophysics Data System (ADS)

    Nam, Y. S.; Blümel, R.

    2018-05-01

    In Shor's algorithm quantum subroutines occur with the structure F U F^-1, where F is a unitary transform and U is performing a quantum computation. Examples are quantum adders and subunits of quantum modulo adders. In this paper we show, both analytically and numerically, that if, in analogy to spin echoes, F and F^-1 can be implemented symmetrically when executing Shor's algorithm on actual, imperfect quantum hardware, such that F and F^-1 have the same hardware errors, a symmetry boost in the fidelity of the combined F U F^-1 quantum operation results when compared to the case in which the errors in F and F^-1 are independently random. Running the complete gate-by-gate implemented Shor algorithm, we show that the symmetry-induced fidelity boost can be as large as a factor 4. While most of our analytical and numerical results concern the case of over- and under-rotation of controlled rotation gates, in the numerically accessible case of Shor's algorithm with a small number of qubits, we show explicitly that the symmetry boost is robust with respect to more general types of errors. While, expectedly, additional error types reduce the symmetry boost, we show explicitly, by implementing general off-diagonal SU(N) errors (N = 2, 4, 8), that the boost factor scales like a Lorentzian in δ/σ, where σ and δ are the error strengths of the diagonal over- and under-rotation errors and the off-diagonal SU(N) errors, respectively. The Lorentzian shape also shows that, while the boost factor may become small with increasing δ, it declines slowly (essentially like a power law) and is never completely erased. We also investigate the effect of diagonal nonunitary errors, which, in analogy to unitary errors, reduce but never erase the symmetry boost. Going beyond the case of small quantum processors, we present analytical scaling results that show that the symmetry boost persists in the practically interesting case of a large number of qubits. We illustrate this result explicitly for the case of Shor factoring of the semiprime RSA-1024, where, analytically, focusing on over- and under-rotation errors, we obtain a boost factor of about 10. In addition, we provide a proof of the fidelity product formula, including its range of applicability.
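
    A minimal numerical check of the echo-like cancellation, using a single collective phase over-rotation that commutes with U, so the symmetric case cancels exactly; in the full gate-by-gate algorithm the cancellation is only partial, hence the finite boost factors reported:

        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(3)
        N = 8
        F = np.fft.fft(np.eye(N)) / np.sqrt(N)            # unitary Fourier block
        U = np.diag(np.exp(2j * np.pi * rng.random(N)))   # diagonal "computation"
        ideal = F @ U @ F.conj().T
        H = np.diag(rng.normal(size=N))                   # phase-gate rotation errors

        def fidelity(e1, e2):
            # F implemented with over-rotation e1, F^-1 with over-rotation e2.
            F1, F2 = F @ expm(1j * e1 * H), F @ expm(1j * e2 * H)
            W = F1 @ U @ F2.conj().T
            return abs(np.trace(ideal.conj().T @ W)) / N

        draws = rng.normal(0.0, 0.3, (400, 2))
        sym = np.mean([fidelity(a, a) for a, _ in draws])   # correlated (echo)
        ind = np.mean([fidelity(a, b) for a, b in draws])   # independent errors
        print(f"fidelity: symmetric = {sym:.4f}, independent = {ind:.4f}")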

  25. Innovative self-calibration method for accelerometer scale factor of the missile-borne RINS with fiber optic gyro.

    PubMed

    Zhang, Qian; Wang, Lei; Liu, Zengjun; Zhang, Yiming

    2016-09-19

    The calibration of an inertial measurement unit (IMU) is a key technique to improve the precision of the inertial navigation system (INS) of a missile, especially the calibration of the accelerometer scale factor. Traditional calibration methods are generally based on a high-accuracy turntable; however, this leads to high costs, and the calibration results are not suited to the actual operating environment. With the development of multi-axis rotational INS (RINS) with optical inertial sensors, self-calibration has become an effective way to calibrate the IMU on a missile, and the calibration results are more accurate in practical application. However, the introduction of multi-axis RINS causes additional calibration errors, including non-orthogonality errors from mechanical processing and non-horizontal errors from the operating environment, which means that the multi-axis gimbals cannot be regarded as a high-accuracy turntable. For application on missiles, this paper analyzes the relationship between the calibration error of the accelerometer scale factor and the non-orthogonality and non-horizontal angles, and proposes an innovative calibration procedure using the signals of the fiber optic gyro and photoelectric encoder. Laboratory and vehicle experiment results validate the theory and prove that the proposed method relaxes the orthogonality requirement on the rotation axes and eliminates the strict application conditions of the system.

  26. Measurement equivalence of seven selected items of posttraumatic growth between black and white adult survivors of Hurricane Katrina.

    PubMed

    Rhodes, Alison M; Tran, Thanh V

    2013-02-01

    This study examined the equivalence or comparability of the measurement properties of seven selected items measuring posttraumatic growth among self-identified Black (n = 270) and White (n = 707) adult survivors of Hurricane Katrina, using data from the Baseline Survey of the Hurricane Katrina Community Advisory Group Study. Internal consistency reliability was equally good for both groups (Cronbach's alphas = .79), as were correlations between individual scale items and their respective overall scale. Confirmatory factor analysis of a congeneric measurement model of seven selected items of posttraumatic growth showed adequate measures of fit for both groups. The results showed only small variation in magnitude of factor loadings and measurement errors between the two samples. Tests of measurement invariance showed mixed results, but overall indicated that factor loading, error variance, and factor variance were similar between the two samples. These seven selected items can be useful for future large-scale surveys of posttraumatic growth.

  27. Improved motor control method with measurements of fiber optics gyro (FOG) for dual-axis rotational inertial navigation system (RINS).

    PubMed

    Song, Tianxiao; Wang, Xueyun; Liang, Wenwei; Xing, Li

    2018-05-14

    Benefiting from its frame structure, a RINS can improve navigation accuracy by modulating the inertial sensor errors with a proper rotation scheme. In the traditional motor control method, the measurements of the photoelectric encoder are adopted to drive the rotation of the inertial measurement unit (IMU). However, when the carrier conducts heading motion, the inertial sensor errors may no longer be zero-mean in the navigation coordinate frame. Meanwhile, some high-speed carriers such as aircraft need to roll a certain angle to balance the centrifugal force during heading motion, which may result in non-negligible coupling errors caused by the FOG installation errors and scale factor errors. Moreover, the error parameters of the FOG are susceptible to temperature and magnetic field, and pre-calibration is a time-consuming process that cannot completely suppress the FOG-related errors. In this paper, an improved motor control method using the measurements of the FOG is proposed to address these problems, with which the outer frame can insulate the carrier's roll motion while the inner frame simultaneously achieves rotary modulation on the basis of insulating the heading motion. The results of turntable experiments indicate that the navigation performance of the dual-axis RINS is significantly improved over the traditional method and is maintained even with large FOG installation errors and scale factor errors, proving that the proposed method relaxes the accuracy requirements on the FOG-related errors.

  28. Investigating the Latent Structure of the Teacher Efficacy Scale

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2013-01-01

    This article reevaluates the latent structure of the Teacher Efficacy Scale using confirmatory factor analyses (CFAs) on a sample of preservice teachers from a public university in the U.S. Southwest. The fit of a proposed two-factor CFA model with an error correlation structure consistent with internal/ external locus of control is compared to…

  29. Scale-model charge-transfer technique for measuring enhancement factors

    NASA Technical Reports Server (NTRS)

    Kositsky, J.; Nanevicz, J. E.

    1991-01-01

    Determination of aircraft electric field enhancement factors is crucial when using airborne field mill (ABFM) systems to accurately measure electric fields aloft. SRI used the scale-model charge-transfer technique to determine enhancement factors of several canonical shapes and a scale-model Learjet 36A. The measured values for the canonical shapes agreed with known analytic solutions within about 6 percent. The laboratory-determined enhancement factors for the aircraft were compared with those derived from in-flight data gathered by a Learjet 36A outfitted with eight field mills. The values agreed to within experimental error (approx. 15 percent).

  30. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
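
    The scaling-factor choice for the kernel estimator can be automated in the now-standard ways (a rule of thumb plus a cross-validation refinement; this sketch does not reproduce the paper's interactive algorithm):

        import numpy as np

        rng = np.random.default_rng(4)
        x = rng.normal(0.0, 1.0, 200)          # the random sample

        # Rule-of-thumb scaling factor (Silverman) as a starting point.
        iqr = np.subtract(*np.quantile(x, [0.75, 0.25]))
        h0 = 0.9 * min(x.std(), iqr / 1.34) * x.size ** -0.2

        # Refine by maximizing the leave-one-out log likelihood over a grid.
        def loo_loglik(h):
            u = (x[:, None] - x[None, :]) / h
            K = np.exp(-0.5 * u ** 2) / (h * np.sqrt(2.0 * np.pi))
            np.fill_diagonal(K, 0.0)           # leave each point out of its own sum
            return np.log(K.sum(axis=1) / (x.size - 1)).sum()

        grid = np.linspace(0.5 * h0, 2.0 * h0, 40)
        h_cv = grid[np.argmax([loo_loglik(h) for h in grid])]
        print(f"rule-of-thumb h = {h0:.3f}, cross-validated h = {h_cv:.3f}")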

  31. Using Multilevel Factor Analysis with Clustered Data: Investigating the Factor Structure of the Positive Values Scale

    ERIC Educational Resources Information Center

    Huang, Francis L.; Cornell, Dewey G.

    2016-01-01

    Advances in multilevel modeling techniques now make it possible to investigate the psychometric properties of instruments using clustered data. Factor models that overlook the clustering effect can lead to underestimated standard errors, incorrect parameter estimates, and model fit indices. In addition, factor structures may differ depending on…

  32. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as the absolute mean plus 3 sigma; during storms (300 nT), accuracy is defined as the absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g., spacecraft fields and misalignments, and from inside, e.g., zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
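
    A sketch of the Monte Carlo side of such a prediction, with invented 1-sigma error budgets in place of the GOES-R values:

        import numpy as np

        rng = np.random.default_rng(5)
        B = 100.0                        # quiet-time field magnitude (nT)

        # Assumed 1-sigma error sources (illustrative, not the GOES-R budget).
        sig_offset, sig_scale, sig_noise = 0.3, 2e-3, 0.2

        n = 100_000
        err = (rng.normal(0.0, sig_scale, n) * B        # scale factor error
               + rng.normal(0.0, sig_offset, n)         # zero offset
               + rng.normal(0.0, sig_noise, n))         # sensor noise
        accuracy = abs(err.mean()) + 3.0 * err.std()    # quiet-time definition
        print(f"predicted accuracy: {accuracy:.2f} nT (requirement 1.7 nT)")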

  33. Characterizing Macro Scale Patterns Of Uncertainty For Improved Operational Flood Forecasting Over The Conterminous United States

    NASA Astrophysics Data System (ADS)

    Vergara, H. J.; Kirstetter, P.; Gourley, J. J.; Flamig, Z.; Hong, Y.

    2015-12-01

    The macro-scale patterns of simulated streamflow errors are studied in order to characterize uncertainty in a hydrologic modeling system forced with the Multi-Radar/Multi-Sensor (MRMS; http://mrms.ou.edu) quantitative precipitation estimates for flood forecasting over the Conterminous United States (CONUS). The hydrologic model is the centerpiece of the Flooded Locations And Simulated Hydrograph (FLASH; http://flash.ou.edu) real-time system and is implemented at 1-km/5-min resolution to generate estimates of streamflow. Data from the CONUS-wide stream gauge network of the United States Geological Survey (USGS) were used as a reference to evaluate the discrepancies with the hydrological model predictions. Streamflow errors were studied at the event scale, with particular focus on peak flow magnitude and timing. A total of 2,680 catchments over CONUS and 75,496 events from a 10-year period are used for the simulation diagnostic analysis. Associations between streamflow errors and geophysical factors were explored and modeled. It is found that hydro-climatic factors and radar coverage could explain significant underestimation of peak flow in regions of complex terrain. Furthermore, the statistical modeling of peak flow errors shows that other geophysical factors such as basin geomorphometry, pedology, and land cover/use could also provide explanatory information. Results from this research demonstrate the utility of uncertainty characterization in providing guidance to improve model adequacy, parameter estimates, and input quality control. Likewise, the characterization of uncertainty enables probabilistic flood forecasting that can be extended to ungauged locations.

  34. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can, in fact, be hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy, and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity, and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise from multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A fully automatic workflow is developed in an open-source code [1] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, and extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of the Representative Elementary Volume size for arbitrary physics.
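
    The core multilevel Monte Carlo trick, on a deliberately trivial surrogate (the bias and variability constants are invented; a real study would couple actual pore-scale solves at two resolutions):

        import numpy as np

        rng = np.random.default_rng(6)
        SIGMA = 0.2                    # sample-to-sample (pore geometry) variability

        def coupled(level, n):
            """Toy surrogate for a pore-scale estimate with discretization bias
            0.5 * 2**-level. The SAME random samples are evaluated on both
            levels, so the correction has small variance -- the essence of MLMC."""
            w = rng.normal(0.0, SIGMA, n)                  # shared randomness
            noise = rng.normal(0.0, 0.02 * 2.0 ** -level, n)  # residual decoupling
            fine = 1.0 + 0.5 * 2.0 ** -level + w + noise
            coarse = 1.0 + 0.5 * 2.0 ** -(level - 1) + w
            return fine, coarse

        def mlmc(L=5, n0=20000):
            est = (1.0 + 0.5 + rng.normal(0.0, SIGMA, n0)).mean()   # coarsest level
            for l in range(1, L + 1):
                n = max(n0 // 4 ** l, 20)    # few samples suffice on costly levels
                fine, coarse = coupled(l, n)
                est += (fine - coarse).mean()                       # telescoping sum
            return est

        print(f"MLMC estimate: {mlmc():.4f} (target E[P_5] = {1 + 0.5 * 2.0 ** -5:.4f})")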

  18. Satellite Sampling and Retrieval Errors in Regional Monthly Rain Estimates from TMI AMSR-E, SSM/I, AMSU-B and the TRMM PR

    NASA Technical Reports Server (NTRS)

    Fisher, Brad; Wolff, David B.

    2010-01-01

    Passive and active microwave rain sensors onboard earth-orbiting satellites estimate monthly rainfall from the instantaneous rain statistics collected during satellite overpasses. It is well known that climate-scale rain estimates from meteorological satellites incur sampling errors resulting from the process of discrete temporal sampling and statistical averaging. Sampling and retrieval errors ultimately become entangled in the estimation of the mean monthly rain rate. The sampling component of the error budget effectively introduces statistical noise into climate-scale rain estimates that obscures the error component associated with the instantaneous rain retrieval. Estimating the accuracy of the retrievals on monthly scales therefore necessitates a decomposition of the total error budget into sampling and retrieval error quantities. This paper presents results from a statistical evaluation of the sampling and retrieval errors for five different space-borne rain sensors on board nine orbiting satellites. Using an error decomposition methodology developed by one of the authors, sampling and retrieval errors were estimated at 0.25° resolution within 150 km of ground-based weather radars located at Kwajalein, Marshall Islands, and Melbourne, Florida. Error and bias statistics were calculated according to the land, ocean, and coast classifications of the surface terrain mask developed for the Goddard Profiling (GPROF) rain algorithm. Variations in the comparative error statistics are attributed to various factors related to differences in the swath geometry of each rain sensor, the orbital and instrument characteristics of the satellite, and the regional climatology. The most significant result from this study is that each of the satellites incurred negative long-term oceanic retrieval biases of 10 to 30%.
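
    Consistent with the decomposition described, sampling error can be isolated by subsampling a continuously observing ground radar at the satellite overpass times, and retrieval error by comparing the satellite retrieval with the coincident radar values. A schematic NumPy sketch with synthetic rain-rate data (all magnitudes and sampling intervals invented for illustration):

      import numpy as np

      rng = np.random.default_rng(7)

      # Hypothetical month of 5-min ground-radar rain rates for one grid box (mm/h)
      radar = rng.gamma(shape=0.2, scale=5.0, size=30 * 24 * 12)
      overpass_idx = np.arange(0, radar.size, 12 * 16)   # roughly 1-2 overpasses/day
      # Satellite retrieval: radar truth at overpass times, times retrieval noise
      sat = radar[overpass_idx] * rng.lognormal(0.0, 0.3, overpass_idx.size)

      true_monthly = radar.mean()                 # "truth": continuous sampling
      radar_at_overpass = radar[overpass_idx].mean()  # radar with satellite sampling
      sat_monthly = sat.mean()

      sampling_error = radar_at_overpass - true_monthly   # temporal sampling alone
      retrieval_error = sat_monthly - radar_at_overpass   # instantaneous retrieval alone
      total_error = sat_monthly - true_monthly            # sampling + retrieval
      print(sampling_error, retrieval_error, total_error)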

  19. Psychometric assessment of a scale to measure bonding workplace social capital

    PubMed Central

    Tsutsumi, Akizumi; Inoue, Akiomi; Odagiri, Yuko

    2017-01-01

    Objectives Workplace social capital (WSC) has attracted increasing attention as an organizational and psychosocial factor related to worker health. This study aimed to assess the psychometric properties of a newly developed WSC scale for use in work environments, where bonding social capital is important. Methods We assessed the psychometric properties of a newly developed 6-item scale to measure bonding WSC using two data sources. Participants were 1,650 randomly selected workers who completed an online survey. Exploratory factor analyses were conducted. We examined the item–item and item–total correlations, internal consistency, and associations between scale scores and a previous 8-item measure of WSC. We evaluated test–retest reliability by repeating the survey with 900 of the respondents 2 weeks later. The overall scale reliability was quantified by an intraclass coefficient and the standard error of measurement. We evaluated convergent validity by examining the association with several relevant workplace psychosocial factors using a dataset from workers employed by an electrical components company (n = 2,975). Results The scale was unidimensional. The item–item and item–total correlations ranged from 0.52 to 0.78 (p < 0.01) and from 0.79 to 0.89 (p < 0.01), respectively. Internal consistency was good (Cronbach’s α coefficient: 0.93). The correlation with the 8-item scale indicated high criterion validity (r = 0.81) and the scale showed high test–retest reliability (r = 0.74, p < 0.01). The intraclass coefficient and standard error of measurement were 0.74 (95% confidence intervals: 0.71–0.77) and 4.04 (95% confidence intervals: 1.86–6.20), respectively. Correlations with relevant workplace psychosocial factors showed convergent validity. Conclusions The results confirmed that the newly developed WSC scale has adequate psychometric properties. PMID:28662058
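
    The reliability quantities reported here are straightforward to reproduce. Below is a small NumPy sketch computing Cronbach's α for a 6-item scale and the standard error of measurement, SEM = SD·sqrt(1 − reliability); the sample size and noise level are invented for illustration and are not the study's data.

      import numpy as np

      def cronbach_alpha(items):
          """items: (n_respondents, k_items) array of item scores."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      rng = np.random.default_rng(1)
      latent = rng.normal(0, 1, size=(1650, 1))              # latent bonding WSC
      scores = latent + rng.normal(0, 0.5, size=(1650, 6))   # six noisy items

      alpha = cronbach_alpha(scores)
      totals = scores.sum(axis=1)
      sem = totals.std(ddof=1) * np.sqrt(1 - alpha)          # standard error of measurement
      print(f"alpha = {alpha:.2f}, SEM = {sem:.2f}")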

  20. Optimal configurations of spatial scale for grid cell firing under noise and uncertainty

    PubMed Central

    Towse, Benjamin W.; Barry, Caswell; Bush, Daniel; Burgess, Neil

    2014-01-01

    We examined the accuracy with which the location of an agent moving within an environment could be decoded from the simulated firing of systems of grid cells. Grid cells were modelled with Poisson spiking dynamics and organized into multiple ‘modules’ of cells, with firing patterns of similar spatial scale within modules and a wide range of spatial scales across modules. The number of grid cells per module, the spatial scaling factor between modules and the size of the environment were varied. Errors in decoded location can take two forms: small errors of precision and larger errors resulting from ambiguity in decoding periodic firing patterns. With enough cells per module (e.g. eight modules of 100 cells each) grid systems are highly robust to ambiguity errors, even over ranges much larger than the largest grid scale (e.g. over a 500 m range when the maximum grid scale is 264 cm). Results did not depend strongly on the precise organization of scales across modules (geometric, co-prime or random). However, independent spatial noise across modules, which would occur if modules receive independent spatial inputs and might increase with spatial uncertainty, dramatically degrades the performance of the grid system. This effect of spatial uncertainty can be mitigated by uniform expansion of grid scales. Thus, in the realistic regimes simulated here, the optimal overall scale for a grid system represents a trade-off between minimizing spatial uncertainty (requiring large scales) and maximizing precision (requiring small scales). Within this view, the temporary expansion of grid scales observed in novel environments may be an optimal response to increased spatial uncertainty induced by the unfamiliarity of the available spatial cues. PMID:24366144
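
    As a toy illustration of the decoding problem, here is a minimal one-dimensional sketch: Poisson spikes from a few grid modules with cosine tuning are decoded by maximizing the Poisson log-likelihood over candidate positions. The scales, cell counts, and window length are hypothetical; the study itself used richer models and analyses.

      import numpy as np

      rng = np.random.default_rng(0)
      scales = np.array([30.0, 42.0, 58.8, 82.3])   # module scales (cm), ratio ~1.4
      n_cells, dt = 50, 0.5                         # cells per module, 0.5 s window
      xs = np.linspace(0.0, 500.0, 2001)            # candidate positions (cm)

      def rates(x):
          """Poisson rates (Hz) of all cells at position x: cosine tuning with
          evenly spaced spatial phases within each module."""
          phases = np.linspace(0.0, 1.0, n_cells, endpoint=False)
          return np.concatenate(
              [5.0 * (1.0 + np.cos(2 * np.pi * (x / s - phases))) for s in scales])

      x_true = 137.0
      spikes = rng.poisson(rates(x_true) * dt)

      # Maximum-likelihood decoding over candidate positions
      ll = np.array([np.sum(spikes * np.log(r * dt + 1e-12) - r * dt)
                     for r in map(rates, xs)])
      print("decoded x:", xs[np.argmax(ll)], "cm; true:", x_true, "cm")

    Rerunning with fewer cells per module makes the large "ambiguity" errors described above appear: a wrong likelihood peak a full grid period away can beat the correct one.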

  1. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    PubMed

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools are discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; and linear discriminant analysis. It is shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate, or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have measurement error issues, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results; the authors believe this understanding is critical to progress in theory development and cumulative knowledge in the ergonomics field.
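
    The attenuation effect on correlations follows the classical formula r_obs = r_true · sqrt(rel_x · rel_y), where rel_x and rel_y are the reliabilities of the two measures. The short simulation below illustrates it with synthetic scores whose reliabilities are both set to 0.7 (all numbers are illustrative):

      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000
      x_true = rng.standard_normal(n)
      y_true = 0.6 * x_true + np.sqrt(1 - 0.36) * rng.standard_normal(n)  # true r = 0.6

      # Add measurement error so each observed score has reliability 0.7
      rel = 0.7
      noise_sd = np.sqrt(1 / rel - 1)          # var_true / var_obs = rel
      x_obs = x_true + noise_sd * rng.standard_normal(n)
      y_obs = y_true + noise_sd * rng.standard_normal(n)

      r_obs = np.corrcoef(x_obs, y_obs)[0, 1]
      print("observed r:", round(r_obs, 3))              # ~ 0.42
      print("predicted:", round(0.6 * np.sqrt(rel * rel), 3))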

  2. Alterations in Neural Control of Constant Isometric Contraction with the Size of Error Feedback

    PubMed Central

    Hwang, Ing-Shiou; Lin, Yen-Ting; Huang, Wei-Min; Yang, Zong-Ru; Hu, Chia-Ling; Chen, Yi-Ching

    2017-01-01

    Discharge patterns from a population of motor units (MUs) were estimated with multi-channel surface electromyogram and signal processing techniques to investigate parametric differences in low-frequency force fluctuations, MU discharges, and force-discharge relation during static force-tracking with varying sizes of execution error presented via visual feedback. Fourteen healthy adults produced isometric force at 10% of maximal voluntary contraction through index abduction under three visual conditions that scaled execution errors with different amplification factors. Error-augmentation feedback that used a high amplification factor (HAF) to potentiate visualized error size resulted in higher sample entropy, mean frequency, ratio of high-frequency components, and spectral dispersion of force fluctuations than those of error-reducing feedback using a low amplification factor (LAF). In the HAF condition, MUs with relatively high recruitment thresholds in the dorsal interosseous muscle exhibited a larger coefficient of variation for inter-spike intervals and a greater spectral peak of the pooled MU coherence at 13–35 Hz than did those in the LAF condition. Manipulation of the size of error feedback altered the force-discharge relation, which was characterized with non-linear approaches such as mutual information and cross sample entropy. The association of force fluctuations and global discharge trace decreased with increasing error amplification factor. Our findings provide direct neurophysiological evidence that favors motor training using error-augmentation feedback. Amplification of the visualized error size of visual feedback could enrich force gradation strategies during static force-tracking, pertaining to selective increases in the discharge variability of higher-threshold MUs that receive greater common oscillatory inputs in the β-band. PMID:28125658

  3. Paradigm Shifts in Voluntary Force Control and Motor Unit Behaviors with the Manipulated Size of Visual Error Perception

    PubMed Central

    Chen, Yi-Ching; Lin, Yen-Ting; Chang, Gwo-Ching; Hwang, Ing-Shiou

    2017-01-01

    The detection of error information is an essential prerequisite of a feedback-based movement. This study investigated the differential behavior and neurophysiological mechanisms of a cyclic force-tracking task using error-reducing and error-enhancing feedback. The discharge patterns of a relatively large number of motor units (MUs) were assessed with custom-designed multi-channel surface electromyography following mathematical decomposition of the experimentally measured signals. Force characteristics, the force-discharge relation, and phase-locking cortical activities in the contralateral motor cortex to individual MUs were contrasted among the low (LSF), normal (NSF), and high scaling factor (HSF) conditions, in which the sizes of online execution errors were displayed with various amplification ratios. Along with a spectral shift of the force output toward a lower band, force output with a greater phase-lead became less irregular, and tracking accuracy was worse in the LSF condition than in the HSF condition. The coherent discharge of high phasic (HP) MUs with the target signal was greater, and inter-spike intervals were larger, in the LSF condition than in the HSF condition. Force-tracking in the LSF condition was manifested by stronger phase-locked EEG activity in the contralateral motor cortex to the discharge of the HP MUs (LSF > NSF, HSF). The coherent discharge of the HP MUs during cyclic force-tracking dominated the force-discharge relation, which increased inversely with the error scaling factor. In conclusion, the size of the visualized error gates motor unit discharge, the force-discharge relation, and the relative influences of the feedback and feedforward processes on force control. A smaller visualized error size favors voluntary force control using a feedforward process, in relation to a selective central modulation that enhances the coherent discharge of HP MUs. PMID:28348530

  5. A high accuracy magnetic heading system composed of fluxgate magnetometers and a microcomputer

    NASA Astrophysics Data System (ADS)

    Liu, Sheng-Wu; Zhang, Zhao-Nian; Hung, James C.

    The authors present a magnetic heading system consisting of two fluxgate magnetometers and a single-chip microcomputer. The system, when compared to gyro compasses, is smaller in size, lighter in weight, simpler in construction, quicker in reaction time, free from drift, and more reliable. By using a microcomputer in the system, heading errors due to compass deviation, sensor offsets, scale factor uncertainty, and sensor tilts can be compensated with the help of an error model. Laboratory tests of a typical system showed that heading accuracy improved from more than 8 deg of error without compensation to less than 0.3 deg with compensation.
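
    One classical error model for compass deviation, plausibly similar in spirit to the one used here (the paper's exact model is not given), is the five-coefficient Fourier form delta(theta) = A + B sin(theta) + C cos(theta) + D sin(2*theta) + E cos(2*theta), fitted by least squares to headings swung around a full circle. A minimal sketch with invented coefficients:

      import numpy as np

      rng = np.random.default_rng(5)
      heading = np.radians(np.arange(0, 360, 15.0))    # calibration headings (rad)
      # Hypothetical deviation curve (deg): offset, hard/soft-iron, tilt effects
      deviation = (1.0 + 4.0 * np.sin(heading) + 2.5 * np.cos(heading)
                   + 1.2 * np.sin(2 * heading) - 0.8 * np.cos(2 * heading)
                   + 0.1 * rng.standard_normal(heading.size))

      # Least-squares fit of the five-coefficient deviation model
      H = np.column_stack([np.ones_like(heading),
                           np.sin(heading), np.cos(heading),
                           np.sin(2 * heading), np.cos(2 * heading)])
      coef, *_ = np.linalg.lstsq(H, deviation, rcond=None)
      residual = deviation - H @ coef
      print("fitted A..E:", np.round(coef, 2))
      print("max residual heading error (deg):", round(np.abs(residual).max(), 3))

    Once the coefficients are stored, the microcomputer can subtract the modeled deviation from each raw heading in real time.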

  6. Children's Social Desirability and Dietary Reports.

    PubMed

    Baxter, Suzanne Domel; Smith, Albert F; Litaker, Mark S; Baglio, Michelle L; Guinn, Caroline H; Shaffer, Nicole M

    2004-01-01

    We investigated telephone administration of the Children's Social Desirability (CSD) scale and our adaptation for children of the Social Desirability for Food scale (C-SDF). Each of 100 4th-graders completed 2 telephone interviews 28 days apart. CSD scores had adequate internal consistency and test-retest reliability, and a 14-item subset was identified that sufficiently measures the same construct. Our C-SDF scale performed less well in terms of internal consistency and test-retest reliability; factor analysis revealed 2 factors, 1 of which was moderately related to the CSD. The 14-item subset of the CSD scale may help researchers understand error in children's dietary reports.

  8. Terrestrial Water Storage in African Hydrological Regimes Derived from GRACE Mission Data: Intercomparison of Spherical Harmonics, Mass Concentration, and Scalar Slepian Methods.

    PubMed

    Rateb, Ashraf; Kuo, Chung-Yen; Imani, Moslem; Tseng, Kuo-Hsin; Lan, Wen-Hau; Ching, Kuo-En; Tseng, Tzu-Pang

    2017-03-10

    Spherical harmonics (SH) and mascon solutions are the two most common types of solutions for Gravity Recovery and Climate Experiment (GRACE) mass flux observations. However, SH signals are degraded by measurement and leakage errors. Mascon solutions (the Jet Propulsion Laboratory (JPL) release, herein) exhibit weakened signals at submascon resolutions. Both solutions require a scale factor, derived here from the CLM4.0 model, to recover the actual water storage signal. The Slepian localization method can avoid the SH leakage errors when applied at the basin scale. In this study, we estimate SH errors and scale factors for African hydrological regimes. Terrestrial water storage (TWS) in Africa is then determined based on Slepian localization and compared with the JPL-mascon and SH solutions. The three TWS estimates show good agreement for large and humid regimes but present discrepancies for medium- and small-sized regimes. Slepian localization is an effective method for deriving the TWS of arid zones. The TWS behavior in African regimes and its spatiotemporal variations are then examined. The negative TWS trends in the lower Nile and Sahara, at -1.08 and -6.92 Gt/year, respectively, are higher than those previously reported.
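
    A common way such scale (gain) factors are obtained, which may be what is meant here, is by least squares between a model TWS series and the same series after GRACE-like filtering: k = sum(m·f) / sum(f·f). A minimal sketch with a synthetic series standing in for CLM4.0 output:

      import numpy as np

      rng = np.random.default_rng(11)
      months = 120
      true_tws = 10 * np.sin(2 * np.pi * np.arange(months) / 12)   # model TWS (cm EWH)
      # Filtering/leakage damps the signal and adds noise (factors invented)
      filtered = 0.65 * true_tws + 0.5 * rng.standard_normal(months)

      # Least-squares scale factor k minimizing sum (true - k * filtered)^2
      k = np.dot(true_tws, filtered) / np.dot(filtered, filtered)
      restored = k * filtered
      print("scale factor k =", round(k, 3))   # ~ 1 / 0.65

    The same k is then applied to the observed (filtered) GRACE series to restore the damped amplitude.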

  10. Simultaneous retrieval of the solar EUV flux and neutral thermospheric O, O2, N2, and temperature from twilight airglow

    NASA Technical Reports Server (NTRS)

    Fennelly, J. A.; Torr, D. G.; Richards, P. G.; Torr, M. R.

    1994-01-01

    We present a method to retrieve neutral thermospheric composition and the solar EUV flux from ground-based twilight optical measurements of the O(+) ((exp 2)P) 7320 Å and O((exp 1)D) 6300 Å airglow emissions. The parameters retrieved are the neutral temperature, the O, O2, and N2 density profiles, and a scaling factor for the solar EUV flux spectrum. The temperature, solar EUV flux scaling factor, and atomic oxygen density are first retrieved from the 7320-Å emission, which are then used with the 6300-Å emission to retrieve the O2 and N2 densities. The retrieval techniques have been verified by computer simulations. We have shown that the retrieval technique is able to statistically retrieve values, between 200 and 400 km, within an average error of 3.1 ± 0.6% for thermospheric temperature, 3.3 ± 2.0% for atomic oxygen, 2.3 ± 1.3% for molecular oxygen, and 2.4 ± 1.3% for molecular nitrogen. The solar EUV flux scaling factor was found to have a retrieval error of 5.1 ± 2.3%. All the above errors have a confidence level of 95%. The purpose of this paper is to prove the viability and usefulness of the retrieval technique by demonstrating the ability to retrieve known quantities under a realistic simulation of the measurement process, excluding systematic effects.

  11. Requirements for fault-tolerant factoring on an atom-optics quantum computer.

    PubMed

    Devitt, Simon J; Stephens, Ashley M; Munro, William J; Nemoto, Kae

    2013-01-01

    Quantum information processing and its associated technologies have reached a pivotal stage in their development, with many experiments having established the basic building blocks. Moving forward, the challenge is to scale up to larger machines capable of performing computational tasks not possible today. This raises questions that need to be urgently addressed, such as what resources these machines will consume and how large will they be. Here we estimate the resources required to execute Shor's factoring algorithm on an atom-optics quantum computer architecture. We determine the runtime and size of the computer as a function of the problem size and physical error rate. Our results suggest that once the physical error rate is low enough to allow quantum error correction, optimization to reduce resources and increase performance will come mostly from integrating algorithms and circuits within the error correction environment, rather than from improving the physical hardware.

  12. A validation study of the psychometric properties of the Groningen Reflection Ability Scale.

    PubMed

    Andersen, Nina Bjerre; O'Neill, Lotte; Gormsen, Lise Kirstine; Hvidberg, Line; Morcke, Anne Mette

    2014-10-10

    Reflection, the ability to critically examine one's own learning and functioning, is considered important for 'the good doctor'. The Groningen Reflection Ability Scale (GRAS) is an instrument measuring student reflection that had not yet been validated beyond the original Dutch study. The aim of this study was to adapt GRAS for use in a Danish setting and to investigate the psychometric properties of GRAS-DK. We performed a cross-cultural adaptation of GRAS from Dutch to Danish. Next, we collected primary data online, performed a retest, analysed the data descriptively, estimated measurement error, and performed exploratory and confirmatory factor analyses to test the proposed three-factor structure. 361 (69%) of 523 invited students completed GRAS-DK. Their mean score was 88 (SD = 11.42; scale maximum 115). Scores were approximately normally distributed. Measurement error and test-retest score differences were acceptable, apart from a few extreme outliers. However, the confirmatory factor analysis did not replicate the original three-factor model, and neither could a one-dimensional structure be confirmed. GRAS is already in use; however, we advise that use of GRAS-DK for effect measurement and group comparison await further review and validation studies. Our negative finding might be explained by a weak conceptualisation of personal reflection.

  13. New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations

    NASA Technical Reports Server (NTRS)

    Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.

    2012-01-01

    In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and the scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation-minus-analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large-scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project, which quantified the impact of uncertainty in satellite-constrained CO2 flux estimates on atmospheric mixing ratios, to assess the major factors governing uncertainty in global and regional trace gas distributions.

  14. A Study on Mutil-Scale Background Error Covariances in 3D-Var Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zhang, Xubin; Tan, Zhe-Min

    2017-04-01

    The construction of background error covariances is a key component of three-dimensional variational data assimilation. Background errors exist at different scales and interact with one another in numerical weather prediction. However, the influence of these errors and their interactions cannot be represented in the background error covariance statistics when estimated by the leading methods, so it is necessary to construct background error covariances that account for multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. Then the information of errors whose scales are larger and smaller than the given ones is introduced, respectively, using different nesting techniques, to estimate the corresponding covariances. Comparisons of the three background error covariance statistics, each influenced by error information at different scales, reveal that the background error variances are enhanced, particularly at large scales and higher levels, when information on larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances are reduced at medium scales at the higher levels, while they show slight improvement at lower levels in the nested domain, especially at medium and small scales, when information on smaller-scale errors is introduced by nesting a higher-resolution model. In addition, the introduction of information on larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) when information on larger- (smaller-) scale errors is included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained above are each used in a data assimilation and model forecast system, and analysis-forecast cycles are conducted for a period of 1 month. Comparison of both analyses and forecasts from this system shows that the trends in analysis increments with information of different scale errors introduced are consistent with the trends in the variances and correlations of background errors. In particular, the introduction of smaller-scale errors leads to a larger amplitude of analysis increments for winds at medium scales at the heights of both the high- and low-level jets. Analysis increments for both temperature and humidity are also greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts of winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity are significantly enhanced at large scales at lower levels, moistening the southern part of the analyses. This humidification helps correct a dry bias there and eventually improves the forecast skill for humidity. Moreover, the inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales, due to the amplification (diminution) of intensity and area in the precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.
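
    For reference, a minimal sketch of the NMC method as commonly described: the background error covariance is estimated from the statistics of differences between pairs of forecasts with different lead times that are valid at the same time. The state size, lead times, and error levels below are hypothetical, not those of the study's forecast system.

      import numpy as np

      rng = np.random.default_rng(21)
      n_state, n_dates = 60, 400

      # Hypothetical pairs of forecasts valid at the same time: 48 h and 24 h lead
      truth = rng.standard_normal((n_dates, n_state))
      fcst_24 = truth + 0.5 * rng.standard_normal((n_dates, n_state))
      fcst_48 = truth + 0.9 * rng.standard_normal((n_dates, n_state))

      # NMC method: covariance of forecast differences over many dates
      diff = fcst_48 - fcst_24
      diff -= diff.mean(axis=0)
      B = (diff.T @ diff) / (n_dates - 1)   # often rescaled by an empirical factor
      print("estimated background error std:", round(np.sqrt(np.diag(B)).mean(), 3))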

  15. A two-factor error model for quantitative steganalysis

    NASA Astrophysics Data System (ADS)

    Böhme, Rainer; Ker, Andrew D.

    2006-02-01

    Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.

  16. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated by randomly introducing proportion-of-choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulations show that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
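
    The simulation design can be sketched in a few lines: generate binomially distributed choice counts under a Thurstone Case V model, recover scale values from z-transformed choice proportions, and measure the spread of the recovered values over replications. This is an illustrative reconstruction under those standard assumptions, not the paper's exact code; it uses NumPy and SciPy.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(9)
      true_scale = np.array([0.0, 0.4, 0.9, 1.5, 2.2])   # latent values, 5 stimuli
      n_obs = 30                                         # sampling size per pair
      # Thurstone Case V: P(i preferred over j) = Phi((S_i - S_j) / sqrt(2))
      p_true = norm.cdf((true_scale[:, None] - true_scale[None, :]) / np.sqrt(2))

      def recover(wins):
          """Recover scale values from a matrix of binomial choice counts."""
          p_hat = np.clip(wins / n_obs, 0.02, 0.98)      # keep z-scores finite
          np.fill_diagonal(p_hat, 0.5)                   # self-comparisons at chance
          z = norm.ppf(p_hat)
          s = z.mean(axis=1) * np.sqrt(2)
          return s - s.mean()                            # scale fixed up to origin

      # Monte Carlo over binomially distributed choice errors
      reps = np.array([recover(rng.binomial(n_obs, p_true)) for _ in range(1000)])
      print("avg std of scaled values:", reps.std(axis=0, ddof=1).mean().round(3))

    Shrinking n_obs or the number of stimuli visibly inflates the average standard deviation, reproducing the paper's qualitative conclusion.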

  17. Effects of vegetation heterogeneity and surface topography on spatial scaling of net primary productivity

    NASA Astrophysics Data System (ADS)

    Chen, J. M.; Chen, X.; Ju, W.

    2013-03-01

    Due to the heterogeneous nature of the land surface, spatial scaling is an inevitable issue in the development of land models coupled with low-resolution Earth system models (ESMs) for predicting land-atmosphere interactions and carbon-climate feedbacks. In this study, a simple spatial scaling algorithm is developed to correct errors in net primary productivity (NPP) estimates made at a coarse spatial resolution based on sub-pixel information of vegetation heterogeneity and surface topography. An eco-hydrological model BEPS-TerrainLab, which considers both vegetation and topographical effects on the vertical and lateral water flows and the carbon cycle, is used to simulate NPP at 30 m and 1 km resolutions for a 5700 km2 watershed with an elevation range from 518 m to 3767 m in the Qinling Mountain, Shaanxi Province, China. Assuming that the NPP simulated at 30 m resolution represents the reality and that at 1 km resolution is subject to errors due to sub-pixel heterogeneity, a spatial scaling index (SSI) is developed to correct the coarse resolution NPP values pixel by pixel. The agreement between the NPP values at these two resolutions is improved considerably from R2 = 0.782 to R2 = 0.884 after the correction. The mean bias error (MBE) in NPP modeled at the 1 km resolution is reduced from 14.8 g C m-2 yr-1 to 4.8 g C m-2 yr-1 in comparison with NPP modeled at 30 m resolution, where the mean NPP is 668 g C m-2 yr-1. The range of spatial variations of NPP at 30 m resolution is larger than that at 1 km resolution. Land cover fraction is the most important vegetation factor to be considered in NPP spatial scaling, and slope is the most important topographical factor for NPP spatial scaling especially in mountainous areas, because of its influence on the lateral water redistribution, affecting water table, soil moisture and plant growth. Other factors including leaf area index (LAI), elevation and aspect have small and additive effects on improving the spatial scaling between these two resolutions.

  19. Evaluating impact level of different factors in environmental impact assessment for incinerator plants using GM (1, N) model.

    PubMed

    Pai, T Y; Chiou, R J; Wen, H H

    2008-01-01

    In this study, the impact levels in the environmental impact assessment (EIA) reports of 10 incinerator plants were quantified and discussed. The relationships between the quantified impact levels and the plant scale factors of the BeiTou, LiZe, BaLi, LuTsao, RenWu, PingTung, SiJhou and HsinChu plants were constructed, and the impact levels of the GangShan (GS) and YongKong (YK) plants were predicted using the grey model GM(1, N). Finally, the effects of plant scale factors on impact levels were also evaluated using GM(1, N). According to the predicted results, the relative errors of topography/geology/soil, air quality, hydrology/water quality, solid waste, noise, terrestrial fauna/flora, aquatic fauna/flora and traffic in the GS plant were 17%, 14%, 15%, 17%, 75%, 16%, 13%, and 37%, respectively. The relative errors of the same environmental items in the YK plant were 1%, 18%, 10%, 40%, 37%, 3%, 25% and 33%, respectively. According to GM(1, N), design capacity (DC) and heat value (HV) were the plant scale factors that most significantly affected the impact levels in each environmental item. GM(1, N) was effective in predicting the environmental impact and analyzing the reasonableness of the impact. If an EIA for a new incinerator plant is to be reviewed in the future, the official committee of the Taiwan EPA could use this approach to quickly review the reasonableness of the impact levels in EIA reports.
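
    GM(1, N) couples one driven series to N-1 driving factors; as a hedged illustration of the grey-modeling machinery only, here is the univariate special case GM(1,1), which shares the accumulated generating operation and the least-squares parameter fit. The input series is invented and is not from the EIA reports.

      import numpy as np

      def gm11(x0, n_pred=2):
          """Grey model GM(1,1), the univariate special case of GM(1,N)."""
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                        # accumulated generating operation
          z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
          B = np.column_stack([-z1, np.ones_like(z1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(len(x0) + n_pred)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # back to x0 scale

      series = [18.2, 19.1, 20.3, 21.2, 22.6]       # hypothetical impact-level scores
      print(np.round(gm11(series), 2))              # fitted values + 2 predictions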

  20. Mapping GRACE Accelerometer Error

    NASA Astrophysics Data System (ADS)

    Sakumura, C.; Harvey, N.; McCullough, C. M.; Bandikova, T.; Kruizinga, G. L. H.

    2017-12-01

    After more than fifteen years in orbit, instrument noise, and accelerometer noise in particular, remains one of the limiting error sources for the NASA/DLR Gravity Recovery and Climate Experiment mission. The recent V03 Level-1 reprocessing campaign used a Kalman filter approach to produce a high fidelity, smooth attitude solution fusing star camera and angular acceleration data. This process provided an unprecedented method for analysis and error estimation of each instrument. The accelerometer exhibited signal aliasing, differential scale factors between electrode plates, and magnetic effects. By applying the noise model developed for the angular acceleration data to the linear measurements, we explore the magnitude and geophysical pattern of gravity field error due to the electrostatic accelerometer.

  1. Factor structure and psychometric properties of the Fertility Problem Inventory–Short Form

    PubMed Central

    Zurlo, Maria Clelia; Cattaneo Della Volta, Maria Franscesca; Vallone, Federica

    2017-01-01

    The study analyses factor structure and psychometric properties of the Italian version of the Fertility Problem Inventory–Short Form. A sample of 206 infertile couples completed the Italian version of Fertility Problem Inventory (46 items) with demographics, State Anxiety Scale of State-Trait Anxiety Inventory (Form Y), Edinburgh Depression Scale and Dyadic Adjustment Scale, used to assess convergent and discriminant validity. Confirmatory factor analysis was unsatisfactory (comparative fit index = 0.87; Tucker-Lewis Index = 0.83; root mean square error of approximation = 0.17), and Cronbach’s α (0.95) revealed a redundancy of items. Exploratory factor analysis was carried out deleting cross-loading items, and Mokken scale analysis was applied to verify the items homogeneity within the reduced subscales of the questionnaire. The Fertility Problem Inventory–Short Form consists of 27 items, tapping four meaningful and reliable factors. Convergent and discriminant validity were confirmed. Findings indicated that the Fertility Problem Inventory–Short Form is a valid and reliable measure to assess infertility-related stress dimensions. PMID:29379625

  2. Defining near misses: towards a sharpened definition based on empirical data about error handling processes.

    PubMed

    Kessels-Habraken, Marieke; Van der Schaaf, Tjerk; De Jonge, Jan; Rutte, Christel

    2010-05-01

    Medical errors in health care still occur frequently. Unfortunately, errors cannot be completely prevented and 100% safety can never be achieved. Therefore, in addition to error reduction strategies, health care organisations could also implement strategies that promote timely error detection and correction. Reporting and analysis of so-called near misses - usually defined as incidents without adverse consequences for patients - are necessary to gather information about successful error recovery mechanisms. This study establishes the need for a clearer and more consistent definition of near misses to enable large-scale reporting and analysis in order to obtain such information. Qualitative incident reports and interviews were collected on four units of two Dutch general hospitals. Analysis of the 143 accompanying error handling processes demonstrated that different incident types each provide unique information about error handling. Specifically, error handling processes underlying incidents that did not reach the patient differed significantly from those of incidents that reached the patient, irrespective of harm, because of successful countermeasures that had been taken after error detection. We put forward two possible definitions of near misses and argue that, from a practical point of view, the optimal definition may be contingent on organisational context. Both proposed definitions could yield large-scale reporting of near misses. Subsequent analysis could enable health care organisations to improve the safety and quality of care proactively by (1) eliminating failure factors before real accidents occur, (2) enhancing their ability to intercept errors in time, and (3) improving their safety culture. Copyright 2010 Elsevier Ltd. All rights reserved.

  3. Digital stereo photogrammetry for grain-scale monitoring of fluvial surfaces: Error evaluation and workflow optimisation

    NASA Astrophysics Data System (ADS)

    Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy

    2015-03-01

    Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition, however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This is valid for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy. We show that a careful selection of the camera-to-object and baseline distance reduces errors in occluded areas and that realistic ground truths help to quantify those errors.

  4. Development of a Self-Calibrated MEMS Gyrocompass for North-Finding and Tracking

    NASA Astrophysics Data System (ADS)

    Prikhodko, Igor P.

    This Ph.D. dissertation presents the development of a microelectromechanical (MEMS) gyrocompass for north-finding and north-tracking applications. The central part of this work enabling these applications is control and self-calibration architectures for drift mitigation over thermal environments, validated using a MEMS quadruple mass gyroscope. The thesis contributions are the following:

    • Adapted and implemented a bias and scale-factor drift compensation algorithm relying on temperature self-sensing for MEMS gyroscopes with high quality factors. The real-time self-compensation reduced the total bias error to 2 °/hr and the scale-factor error to 500 ppm over a temperature range of 25 °C to 55 °C (on par with the state-of-the-art).

    • Adapted and implemented a scale-factor self-calibration algorithm previously employed for the macroscale hemispherical resonator gyroscope to MEMS Coriolis vibratory gyroscopes. An accuracy of 100 ppm was demonstrated by simultaneously measuring the true and estimated scale factors over temperature variations (on par with the state-of-the-art).

    • Demonstrated north-finding accuracy satisfying a typical mission requirement of 4 meter target location error at 1 kilometer stand-off distance (on par with GPS accuracy). Analyzed north-finding mechanization trade-offs for MEMS vibratory gyroscopes and demonstrated measurement of the Earth's rotation (15 °/hr).

    • Demonstrated, for the first time, angle-measuring MEMS gyroscope operation for north-tracking applications in a +/-500 °/s rate range and 100 Hz bandwidth, eliminating both the bandwidth and range constraints of conventional open-loop Coriolis vibratory gyroscopes.

    • Investigated the hypothesis that surface-tension-driven glass-blowing microfabrication can create highly spherical shells for 3-D MEMS. Without any trimming or tuning of the natural frequencies, a 1 MHz glass-blown 3-D microshell resonator demonstrated a 0.63 % frequency mismatch between two degenerate 4-node wineglass modes.

    • Proposed and developed multi-axis rotation detection for a nuclear magnetic resonance (NMR) gyroscope. The analysis of cross-axis sensitivities for the NMR gyroscope was performed, and a framework for the analysis of NMR gyroscope dynamics in both open-loop and closed-loop modes of operation was developed.
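
    The first contribution's thermal compensation can be sketched as follows: fit polynomial models of bias(T) and scale factor(T) on a rate table during calibration, then invert them at run time using the self-sensed temperature. The sensor model and coefficients below are hypothetical, not the dissertation's.

      import numpy as np

      rng = np.random.default_rng(13)
      T = np.linspace(25, 55, 200)                  # self-sensed temperature (deg C)

      def gyro_raw(rate, T):
          """Hypothetical sensor: temperature-dependent bias and scale factor."""
          bias = 0.5 + 0.08 * (T - 25)              # deg/hr
          sf = 1.0 + 120e-6 * (T - 25)              # ppm-level scale-factor drift
          return sf * rate + bias + 0.05 * rng.standard_normal(np.shape(T))

      # Calibration: zero-rate run fits bias(T); known reference rate fits sf(T)
      ref_rate = 100.0                              # deg/hr on a rate table
      bias_poly = np.polyfit(T, gyro_raw(0.0, T), 2)
      sf_poly = np.polyfit(
          T, (gyro_raw(ref_rate, T) - np.polyval(bias_poly, T)) / ref_rate, 2)

      # Run-time self-compensation driven by temperature self-sensing
      raw = gyro_raw(15.04, T)                      # e.g. sensing Earth rotation rate
      rate_hat = (raw - np.polyval(bias_poly, T)) / np.polyval(sf_poly, T)
      print("mean compensated rate (deg/hr):", round(rate_hat.mean(), 2))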

  5. Apparatus, Method and Program Storage Device for Determining High-Energy Neutron/Ion Transport to a Target of Interest

    NASA Technical Reports Server (NTRS)

    Wilson, John W. (Inventor); Tripathi, Ram K. (Inventor); Cucinotta, Francis A. (Inventor); Badavi, Francis F. (Inventor)

    2012-01-01

    An apparatus, method and program storage device for determining high-energy neutron/ion transport to a target of interest. Boundaries are defined for calculation of a high-energy neutron/ion transport to a target of interest; the high-energy neutron/ion transport to the target of interest is calculated using numerical procedures selected to reduce local truncation error by including higher order terms and to allow absolute control of propagated error by ensuring truncation error is third order in step size, and using scaling procedures for flux coupling terms modified to improve computed results by adding a scaling factor to terms describing production of j-particles from collisions of k-particles; and the calculated high-energy neutron/ion transport is provided to modeling modules to control an effective radiation dose at the target of interest.

  6. The intercrater plains of Mercury and the Moon: Their nature, origin and role in terrestrial planet evolution. Areal measurement of Mercury's first quadrant. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Leake, M. A.

    1982-01-01

    Various linear and areal measurements of Mercury's first quadrant which were used in geological map preparation, map analysis, and statistical surveys of crater densities are discussed. Accuracy of each method rests on the determination of the scale of the photograph, i.e., the conversion factor between distances on the planet (in km) and distances on the photograph (in cm). Measurement errors arise due to uncertainty in Mercury's radius, poor resolution, poor coverage, high Sun angle illumination in the limb regions, planetary curvature, limited precision in measuring instruments, and inaccuracies in the printed map scales. Estimates are given for these errors.

  7. Particle swarm optimization algorithm based low cost magnetometer calibration

    NASA Astrophysics Data System (ADS)

    Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.

    2011-12-01

    Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes, and a microprocessor, and provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low-cost sensors are corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the values of the bias and scale factor of a low-cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The estimated bias and scale-factor errors from the proposed algorithm improve the heading accuracy, and the results are statistically significant. The approach can also aid the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
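
    A minimal sketch of the idea: since correctly calibrated field vectors should all have the reference magnitude, a particle swarm can search for per-axis biases and scale factors that minimize the magnitude residual, with no gradient or explicit error model. All data, PSO settings, and parameter bounds below are synthetic assumptions, not the paper's.

      import numpy as np

      rng = np.random.default_rng(17)

      # Synthetic magnetometer data: random orientations of a 50 uT reference field,
      # corrupted by per-axis scale factors and biases plus noise
      F, n = 50.0, 300
      v = rng.standard_normal((n, 3))
      clean = F * v / np.linalg.norm(v, axis=1, keepdims=True)
      meas = (clean * np.array([1.10, 0.95, 1.05])
              + np.array([8.0, -5.0, 3.0]) + 0.2 * rng.standard_normal((n, 3)))

      def cost(p):
          """p = [bx, by, bz, sx, sy, sz]; deviation of |calibrated field| from F."""
          cal = (meas - p[:3]) / p[3:]
          return np.mean((np.linalg.norm(cal, axis=1) - F) ** 2)

      # Minimal particle swarm optimizer
      n_part, dim = 40, 6
      pos = rng.uniform([-20] * 3 + [0.5] * 3, [20] * 3 + [1.5] * 3, (n_part, dim))
      vel = np.zeros((n_part, dim))
      pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
      gbest = pbest[pbest_val.argmin()].copy()
      for _ in range(200):
          r1, r2 = rng.random((2, n_part, dim))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos += vel
          vals = np.array([cost(p) for p in pos])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[pbest_val.argmin()].copy()
      print("estimated bias:", np.round(gbest[:3], 1), "scale:", np.round(gbest[3:], 2))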

  8. The intercrater plains of Mercury and the Moon: Their nature, origin and role in terrestrial planet evolution. Measurement and errors of crater statistics. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Leake, M. A.

    1982-01-01

    Planetary imagery techniques, errors in measurement or degradation assignment, and statistical formulas are presented with respect to cratering data. Base map photograph preparation, measurement of crater diameters and sampled area, and instruments used are discussed. Possible uncertainties, such as Sun angle, scale factors, degradation classification, and biases in crater recognition are discussed. The mathematical formulas used in crater statistics are presented.

  9. Human Factors: Tenerife Revisited

    DOT National Transportation Integrated Search

    1998-01-01

    A collision between two 747 jumbo jets at Los Rodeos airport on Tenerife, in the Canary Islands, cost the lives of 583 people. This case study of that collision shows how large-scale disasters result from errors made by people in crucial ...

  10. Estimation of cortical magnification from positional error in normally sighted and amblyopic subjects

    PubMed Central

    Hussain, Zahra; Svensson, Carl-Magnus; Besle, Julien; Webb, Ben S.; Barrett, Brendan T.; McGraw, Paul V.

    2015-01-01

    We describe a method for deriving the linear cortical magnification factor from positional error across the visual field. We compared magnification obtained from this method between normally sighted individuals and amblyopic individuals, who receive atypical visual input during development. The cortical magnification factor was derived for each subject from positional error at 32 locations in the visual field, using an established model of conformal mapping between retinal and cortical coordinates. Magnification of the normally sighted group matched estimates from previous physiological and neuroimaging studies in humans, confirming the validity of the approach. The estimate of magnification for the amblyopic group was significantly lower than the normal group: by 4.4 mm deg−1 at 1° eccentricity, assuming a constant scaling factor for both groups. These estimates, if correct, suggest a role for early visual experience in establishing retinotopic mapping in cortex. We discuss the implications of altered cortical magnification for cortical size, and consider other neural changes that may account for the amblyopic results. PMID:25761341

  11. Stable isotope signatures and trophic-step fractionation factors of fish tissues collected as non-lethal surrogates of dorsal muscle.

    PubMed

    Busst, Georgina M A; Bašić, Tea; Britton, J Robert

    2015-08-30

    Dorsal white muscle is the standard tissue analysed in fish trophic studies using stable isotope analyses. As muscle is usually collected destructively, fin tissues and scales are often used as non-lethal surrogates; we examined the utility of scales and fin tissue as muscle surrogates. The muscle, fin and scale δ(15) N and δ(13) C values from 10 cyprinid fish species determined with an elemental analyser coupled with an isotope ratio mass spectrometer were compared. The fish comprised (1) samples from the wild, and (2) samples from tank aquaria, using six species held for 120 days and fed a single food resource. Relationships between muscle, fin and scale isotope ratios were examined for each species and for the entire dataset, with the efficacy of four methods of predicting muscle isotope ratios from fin and scale values being tested. The fractionation factors between the three tissues of the laboratory fishes and their food resource were then calculated and applied to Bayesian mixing models to assess their effect on fish diet predictions. The isotopic data of the three tissues per species were distinct, but were significantly related, enabling estimations of muscle values from the two surrogates. Species-specific equations provided the least erroneous corrections of scale and fin isotope ratios (errors < 0.6‰). The fractionation factors for δ(15) N values were in the range obtained for other species, but were often higher for δ(13) C values. Their application to data from two fish populations in the mixing models resulted in significant alterations in diet predictions. Scales and fin tissue are strong surrogates of dorsal muscle in food web studies as they can provide estimates of muscle values within an acceptable level of error when species-specific methods are used. Their derived fractionation factors can also be applied to models predicting fish diet composition from δ(15) N and δ(13) C values. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Cross-Cultural Adaptation of the Male Genital Self-Image Scale in Iranian Men.

    PubMed

    Saffari, Mohsen; Pakpour, Amir H; Burri, Andrea

    2016-03-01

    Certain sexual health problems in men can be attributed to genital self-image. Therefore, a culturally adapted version of the Male Genital Self-Image Scale (MGSIS) could help health professionals understand this concept and its associated correlates. The aims were to translate the original English version of the MGSIS into Persian and to assess the psychometric properties of this culturally adapted version (MGSIS-I) for use in Iranian men. In total, 1,784 men were recruited for this cross-sectional study. Backward and forward translations of the MGSIS were used to produce the culturally adapted version. Reliability of the MGSIS-I was assessed using Cronbach α and intraclass correlation coefficients. Divergent and convergent validities were examined using Pearson correlations, and known-group validity was assessed in subgroups of participants with different sociodemographic statuses. Factor validity of the scale was investigated using exploratory and confirmatory factor analyses. Measures were demographic information, the International Index of Erectile Function, the Body Appreciation Scale, the Rosenberg Self-Esteem Scale, and the MGSIS. The mean age of participants was 38.13 years (SD = 11.45), and all men were married. Cronbach α of the MGSIS-I was 0.89, and intraclass correlation coefficients ranged from 0.70 to 0.94. Significant correlations were found between the MGSIS-I and the International Index of Erectile Function (P < .01), whereas correlations of the scale with dissimilar scales were lower than those with similar scales (confirming convergent and divergent validity). The scale could differentiate between subgroups in age, smoking status, and income (known-group validity). A single-factor solution that explained 70% of the scale's variance emerged from exploratory factor analysis (confirming unidimensionality); confirmatory factor analysis indicated better fit for the five-item version than the seven-item version of the MGSIS-I (root mean square error of approximation = 0.05, comparative fit index = 1.00 vs root mean square error of approximation = 0.10, comparative fit index = 0.97, respectively). The MGSIS-I is a useful instrument for assessing genital self-image in Iranian men, a concept that has been associated with sexual function. Further investigation is needed to identify the applicability of the scale in other cultures or populations. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
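
    For reference, the internal-consistency statistic reported above (Cronbach's alpha) can be computed directly from an item-score matrix; a minimal Python sketch with simulated data follows.

        import numpy as np

        def cronbach_alpha(items):
            """items: (n_respondents, n_items) array of scores."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars.sum() / total_var)

        rng = np.random.default_rng(0)
        latent = rng.normal(size=(100, 1))
        scores = latent + 0.5 * rng.normal(size=(100, 5))  # 5 correlated items
        print(f"alpha = {cronbach_alpha(scores):.2f}")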

  13. Reducing Sun Exposure for Prevention of Skin Cancers: Factorial Invariance and Reliability of the Self-Efficacy Scale for Sun Protection

    PubMed Central

    Babbin, Steven F.; Yin, Hui-Qing; Rossi, Joseph S.; Redding, Colleen A.; Paiva, Andrea L.; Velicer, Wayne F.

    2015-01-01

    The Self-Efficacy Scale for Sun Protection consists of two correlated factors with three items each for Sunscreen Use and Avoidance. This study evaluated two crucial psychometric assumptions, factorial invariance and scale reliability, with a sample of adults (N = 1356) participating in a computer-tailored, population-based intervention study. A measure has factorial invariance when the model is the same across subgroups. Three levels of invariance were tested, from least to most restrictive: (1) Configural Invariance (nonzero factor loadings unconstrained); (2) Pattern Identity Invariance (equal factor loadings); and (3) Strong Factorial Invariance (equal factor loadings and measurement errors). Strong Factorial Invariance was a good fit for the model across seven grouping variables: age, education, ethnicity, gender, race, skin tone, and Stage of Change for Sun Protection. Internal consistency coefficient Alpha and factor rho scale reliability, respectively, were .84 and .86 for Sunscreen Use, .68 and .70 for Avoidance, and .78 and .78 for the global (total) scale. The psychometric evidence demonstrates strong empirical support that the scale is consistent, has internal validity, and can be used to assess population-based adult samples. PMID:26457203
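
    The factor rho (composite) reliability quoted above alongside coefficient Alpha can be computed from standardized CFA output; a minimal Python sketch with hypothetical loadings:

        import numpy as np

        def factor_rho(loadings, error_vars):
            """rho = (sum lambda)^2 / ((sum lambda)^2 + sum theta)."""
            lam2 = np.sum(loadings) ** 2
            return lam2 / (lam2 + np.sum(error_vars))

        loadings = [0.82, 0.78, 0.74]              # hypothetical 3-item factor
        errors = [1 - l ** 2 for l in loadings]    # standardized error variances
        print(f"rho = {factor_rho(loadings, errors):.2f}")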

  14. Developing a Basic Scale for Workers' Psychological Burden from the Perspective of Occupational Safety and Health.

    PubMed

    Kim, Kyung Woo; Lim, Ho Chan; Park, Jae Hee; Park, Sang Gyu; Park, Ye Jin; Cho, Hm Hak

    2018-06-01

    Organizations are pursuing complex and diverse aims to generate higher profits. Many workers experience high work intensity, such as heavy workload and work pressure, in this organizational environment. In particular, "psychological burden" is a commonly used term in workplaces in the Republic of Korea. This study defined psychological burden from the perspective of occupational safety and health and developed a scale to measure it. Forty-eight preliminary questionnaire items for psychological burden were prepared through a focus group interview with 16 workers, drawing on the Copenhagen Psychosocial Questionnaire II and the Mindful Awareness Attention Scale. The preliminary items were administered to 572 workers, and exploratory factor analysis, confirmatory factor analysis, and correlation analysis were conducted to construct the new scale. The exploratory factor analysis extracted five factors: organizational activity, human error, safety and health workload, work attitude, and negative self-management. These factors showed significant correlations and acceptable reliability, and the stability of the model was confirmed using confirmatory factor analysis. The developed scale can measure workers' psychological burden in relation to safety and health. Despite some limitations, the relatively compact questionnaire makes the scale readily applicable in the workplace.

  15. Application of Improved 5th-Cubature Kalman Filter in Initial Strapdown Inertial Navigation System Alignment for Large Misalignment Angles.

    PubMed

    Wang, Wei; Chen, Xiyuan

    2018-02-23

    In view of the fact that the accuracy of the third-degree Cubature Kalman Filter (CKF) used for initial alignment under large misalignment angle conditions is insufficient, an improved fifth-degree CKF algorithm is proposed in this paper. To make full use of the innovation in filtering, the innovation covariance matrix is calculated recursively from the innovation sequence with an exponential fading factor. Then a new adaptive error covariance matrix scaling algorithm is proposed. The Singular Value Decomposition (SVD) method is used to improve the numerical stability of the fifth-degree CKF. To avoid the overshoot caused by excessive scaling of the error covariance matrix during the convergence stage, the scaling scheme is terminated when the gradient of the azimuth reaches its maximum. The experimental results show that the improved algorithm has better alignment accuracy under large misalignment angles than the traditional algorithm.
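
    The abstract does not give the recursion itself; one common form of a recursive innovation-covariance estimate with an exponential fading factor is sketched below in Python. The formula and the value of rho are illustrative assumptions, not necessarily the authors' algorithm.

        import numpy as np

        def update_innovation_cov(C_prev, d, rho=0.95):
            """C_k = (rho * C_{k-1} + d_k d_k^T) / (1 + rho), 0 < rho < 1:
            recent innovations d_k are weighted more heavily than old ones."""
            d = d.reshape(-1, 1)
            return (rho * C_prev + d @ d.T) / (1.0 + rho)

        C = np.eye(2)                       # initial innovation covariance
        C = update_innovation_cov(C, np.array([0.3, -0.1]))
        print(C)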

  16. A proposed method to investigate reliability throughout a questionnaire.

    PubMed

    Wentzel-Larsen, Tore; Norekvål, Tone M; Ulvik, Bjørg; Nygård, Ottar; Pripp, Are H

    2011-10-05

    Questionnaires are used extensively in medical and health care research, and their usefulness depends on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect the reliability of their answers. A method is proposed for "screening" for systematic change in random error, which could indicate changed reliability of answers. A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of the intraclass correlation coefficient (ICC). The method was also applied to a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in the questionnaire. This slope was proposed as an awareness measure, assessing whether respondents provide only a random answer or one based on substantial cognitive effort. Scales from different factor structures of the Jalowiec Coping Scale had different effects on this awareness measure. Even though the assumptions in the simulation study might be limited compared with real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from a clinical dataset indicated that the awareness measure differed between scales.

  17. Measuring Individual Differences in the Perfect Automation Schema.

    PubMed

    Merritt, Stephanie M; Unnerstall, Jennifer L; Lee, Deborah; Huber, Kelli

    2015-08-01

    A self-report measure of the perfect automation schema (PAS) is developed and tested. Researchers have hypothesized that the extent to which users possess a PAS is associated with greater decreases in trust after users encounter automation errors. However, no measure of the PAS currently exists. We developed a self-report measure assessing two proposed PAS factors: high expectations and all-or-none thinking about automation performance. In two studies, participants responded to our PAS measure, interacted with imperfect automated aids, and reported trust. Each of the two PAS measure factors demonstrated fit to the hypothesized factor structure and convergent and discriminant validity when compared with propensity to trust machines and trust in a specific aid. However, the high expectations and all-or-none thinking scales showed low intercorrelations and differential relationships with outcomes, suggesting that they might best be considered two separate constructs rather than two subfactors of the PAS. All-or-none thinking had significant associations with decreases in trust following aid errors, whereas high expectations did not. Results therefore suggest that the all-or-none thinking scale may best represent the PAS construct. Our PAS measure (specifically, the all-or-none thinking scale) significantly predicted the severe trust decreases thought to be associated with high PAS. Further, it demonstrated acceptable psychometric properties across two samples. This measure may be used in future work to assess levels of PAS in users of automated systems in either research or applied settings. © 2015, Human Factors and Ergonomics Society.

  18. Generating classes of 3D virtual mandibles for AR-based medical simulation.

    PubMed

    Hippalgaonkar, Neha R; Sider, Alexa D; Hamza-Lup, Felix G; Santhanam, Anand P; Jaganathan, Bala; Imielinska, Celina; Rolland, Jannick P

    2008-01-01

    Simulation and modeling represent promising tools for several application domains, from engineering to forensic science and medicine. Advances in 3D imaging technology have brought paradigms such as augmented reality (AR) and mixed reality into promising simulation tools for the training industry. Motivated by the requirement to superimpose anatomically correct 3D models on a human patient simulator (HPS) and visualize them in an AR environment, the purpose of this research effort was to develop and validate a method for scaling a source human mandible to a target human mandible within a 2 mm root mean square (RMS) error. Results show that, given the distance between the same two landmarks on two different mandibles, a relative scaling factor may be computed. Using this scaling factor, results show that a 3D virtual mandible model can be made morphometrically equivalent to a real target-specific mandible within a 1.30 mm RMS error. The virtual mandible may be further used as a reference target for registering other anatomic models, such as the lungs, on the HPS. Such registration is made possible by the physical constraints between the mandible and the spinal column in the horizontal normal rest position.
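
    A minimal Python sketch of the landmark-based scaling described above, with hypothetical landmark coordinates: the relative scaling factor is the ratio of the distances between the same two landmarks, and the residual RMS error is then evaluated over all landmarks.

        import numpy as np

        # Hypothetical 3D landmarks (mm) on the source and target mandibles.
        source = np.array([[0.0, 0.0, 0.0], [95.0, 0.0, 0.0], [47.0, 30.0, 12.0]])
        target = np.array([[0.0, 0.0, 0.0], [102.0, 0.0, 0.0], [50.5, 32.1, 12.8]])

        # Scale from the distance between two shared landmarks.
        s = np.linalg.norm(target[1] - target[0]) / np.linalg.norm(source[1] - source[0])

        rms = np.sqrt(np.mean(np.sum((source * s - target) ** 2, axis=1)))
        print(f"scale = {s:.3f}, RMS error = {rms:.2f} mm")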

  19. On estimating the basin-scale ocean circulation from satellite altimetry. Part 1: Straightforward spherical harmonic expansion

    NASA Technical Reports Server (NTRS)

    Tai, Chang-Kou

    1988-01-01

    Direct estimation of the absolute dynamic topography from satellite altimetry has been confined to the largest scales (basically the basin scale) because the signal-to-noise ratio is more unfavorable everywhere else. But even at the largest scales, the results are contaminated by orbit error and geoid uncertainties. Recently a more accurate Earth gravity model (GEM-T1) became available, providing the opportunity to examine the whole question of direct estimation in a more critical light. It is found that our knowledge of the Earth's gravity field has indeed improved a great deal. However, it is not yet possible to claim definitively that our knowledge of the ocean circulation has improved through direct estimation. Yet the improvement in the gravity model has reached the point where it is no longer possible to attribute the discrepancy at basin scales between altimetric and hydrographic results mostly to geoid uncertainties. A substantial part of the difference must be due to other factors, i.e., the orbit error or the uncertainty of the hydrographically derived dynamic topography.

  20. Ability Self-Estimates and Self-Efficacy: Meaningfully Distinct?

    ERIC Educational Resources Information Center

    Bubany, Shawn T.; Hansen, Jo-Ida C.

    2010-01-01

    Conceptual differences between self-efficacy and ability self-estimate scores, used in vocational psychology and career counseling, were examined with confirmatory factor analysis, discriminate relations, and reliability analysis. Results suggest that empirical differences may be due to measurement error or scale content, rather than due to the…

  1. Error analysis and experiments of attitude measurement using laser gyroscope

    NASA Astrophysics Data System (ADS)

    Ren, Xin-ran; Ma, Wen-li; Jiang, Ping; Huang, Jin-long; Pan, Nian; Guo, Shuai; Luo, Jun; Li, Xiao

    2018-03-01

    The precision of photoelectric tracking and measuring equipment on vehicles and vessels is degraded by the platform's movement. Specifically, the platform's movement leads to deviation or loss of the target; it also causes jitter of the visual axis and thus produces image blur. In order to improve the precision of photoelectric equipment, the attitude of the equipment fixed to the platform must be measured. Currently, laser gyroscopes are widely used to measure the attitude of the platform. However, the measurement accuracy of a laser gyro is affected by its zero bias, scale factor error, installation error, and random error. In this paper, these errors were analyzed and compensated based on the laser gyro's error model. Static and dynamic experiments were carried out on a single-axis turntable, and the error model was verified by comparing the gyro's output with an encoder with an accuracy of 0.1 arc sec. After error compensation, the gyroscope's error over one hour decreased from 7000 arc sec to 5 arc sec. The method used in this paper is suitable for reducing laser gyro errors in inertial measurement applications.
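
    The abstract does not spell out the error model; a common laser-gyro formulation combining zero bias, scale-factor error, and installation misalignment, together with its deterministic compensation, might look like the following Python sketch (all parameter values are hypothetical).

        import numpy as np

        # Model: w_meas = (1 + s) * (I + M) @ w_true + b + noise.
        s = 1.2e-4                                 # scale-factor error
        b = np.array([7.0e-3, -4.0e-3, 2.0e-3])    # zero bias, deg/s
        M = np.array([[0.0,  1e-4, -2e-4],         # small-angle installation
                      [-1e-4, 0.0,  3e-4],         # misalignment matrix
                      [2e-4, -3e-4, 0.0]])

        def compensate(w_meas):
            A = (1 + s) * (np.eye(3) + M)
            return np.linalg.solve(A, w_meas - b)  # invert the deterministic part

        w_true = np.array([1.0, 0.0, 0.0])         # deg/s
        w_meas = (1 + s) * (np.eye(3) + M) @ w_true + b
        print(compensate(w_meas))                  # recovers ~[1, 0, 0]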

  2. Calibration of the aerodynamic coefficient identification package measurements from the shuttle entry flights using inertial measurement unit data

    NASA Technical Reports Server (NTRS)

    Heck, M. L.; Findlay, J. T.; Compton, H. R.

    1983-01-01

    The Aerodynamic Coefficient Identification Package (ACIP) is an instrument consisting of body mounted linear accelerometers, rate gyros, and angular accelerometers for measuring the Space Shuttle vehicular dynamics. The high rate recorded data are utilized for postflight aerodynamic coefficient extraction studies. Although consistent with pre-mission accuracies specified by the manufacturer, the ACIP data were found to contain detectable levels of systematic error, primarily bias, as well as scale factor, static misalignment, and temperature dependent errors. This paper summarizes the technique whereby the systematic ACIP error sources were detected, identified, and calibrated with the use of recorded dynamic data from the low rate, highly accurate Inertial Measurement Units.

  3. Decreased attention to object size information in scale errors performers.

    PubMed

    Grzyb, Beata J; Cangelosi, Angelo; Cattani, Allegra; Floccia, Caroline

    2017-05-01

    Young children sometimes make serious attempts to perform impossible actions on miniature objects as if they were full-size objects. Existing explanations of these curious action errors assume (but have never explicitly tested) children's decreased attention to object size information. This study investigated attention to object size information in scale error performers. Two groups of children aged 18-25 months (N=52) and 48-60 months (N=23) were tested in two consecutive tasks: an action task that replicated the original scale error elicitation situation, and a looking task that involved watching, on a computer screen, actions performed with objects of adequate or inadequate size. Our key finding - that children performing scale errors in the action task subsequently pay less attention to size changes than non-scale-error performers in the looking task - suggests that the origins of scale errors in childhood operate already at the perceptual level, and not at the action level. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Psychometric Testing of the Persian Version of the Perceived Perioperative Competence Scale-Revised.

    PubMed

    Ajorpaz, Neda Mirbagher; Tafreshi, Mansoureh Zagheri; Mohtashami, Jamileh; Zayeri, Farid; Rahemi, Zahra

    2017-12-01

    The clinical competence of nursing students in the operating room (OR) is an important issue in nursing education. The purpose of this study was to evaluate the psychometric properties of the Persian Perceived Perioperative Competence Scale-Revised (PPCS-R). This cross-sectional study was conducted across 12 universities in Iran, and the psychometric properties and factor structure of the PPCS-R for OR students were examined. Based on the results of factor analysis, seven items were removed from the original version of the scale. The fit indices of the Persian scale were comparative fit index (CFI) = .90, goodness-of-fit index (GFI) = .86, adjusted goodness-of-fit index (AGFI) = .90, normed fit index (NFI) = .84, and root mean square error of approximation (RMSEA) = .04. High validity and reliability indicated the scale's value for measuring the perceived perioperative competence of Iranian OR students.

  5. Cross-validation of the Student Perceptions of Team-Based Learning Scale in the United States.

    PubMed

    Lein, Donald H; Lowman, John D; Eidson, Christopher A; Yuen, Hon K

    2017-01-01

    The purpose of this study was to cross-validate the factor structure of the previously developed Student Perceptions of Team-Based Learning (TBL) Scale among students in an entry-level doctor of physical therapy (DPT) program in the United States. Toward the end of the semester in 2 patient/client management courses taught using TBL, 115 DPT students completed the Student Perceptions of TBL Scale, with a response rate of 87%. Principal component analysis (PCA) and confirmatory factor analysis (CFA) were conducted to replicate and confirm the underlying factor structure of the scale. Based on the PCA for the validation sample, the original 2-factor structure (preference for TBL and preference for teamwork) of the Student Perceptions of TBL Scale was replicated. The overall goodness-of-fit indices from the CFA suggested that the original 2-factor structure for the 15 items of the scale demonstrated a good model fit (comparative fit index, 0.95; non-normed fit index/Tucker-Lewis index, 0.93; root mean square error of approximation, 0.06; and standardized root mean square residual, 0.07). The 2 factors demonstrated high internal consistency (alpha = 0.83 and 0.88, respectively). DPT students taught using TBL viewed the factor of preference for teamwork more favorably than preference for TBL. Our findings provide evidence supporting the replicability of the internal structure of the Student Perceptions of TBL Scale when assessing perceptions of TBL among DPT students in patient/client management courses.

  6. Development and Validation of a Spanish Version of the Grit-S Scale

    PubMed Central

    Arco-Tirado, Jose L.; Fernández-Martín, Francisco D.; Hoyle, Rick H.

    2018-01-01

    This paper describes the development and initial validation of a Spanish version of the Short Grit (Grit-S) Scale. The Grit-S Scale was adapted and translated into Spanish using the Translation, Review, Adjudication, Pre-testing, and Documentation model and responses to a preliminary set of items from a large sample of university students (N = 1,129). The resultant measure was validated using data from a large stratified random sample of young adults (N = 1,826). Initial validation involved evaluating the internal consistency of the adapted scale and its subscales and comparing the factor structure of the adapted version to that of the original scale. The results were comparable to results from similar analyses of the English version of the scale. Although the internal consistency of the subscales was low, the internal consistency of the full scale was well-within the acceptable range. A two-factor model offered an acceptable account of the data; however, when a single correlated error involving two highly similar items was included, a single factor model fit the data very well. The results support the use of overall scores from the Spanish Grit-S Scale in future research. PMID:29467705

  7. Life satisfaction and self-reported problems after spinal cord injury: measurement of underlying dimensions.

    PubMed

    Krause, James S; Reed, Karla S

    2009-08-01

    This study evaluated the utility of the current 7-scale structure of the Life Situation Questionnaire-Revised (LSQ-R) using confirmatory factor analysis (CFA) and explored the factor structure of each set of items. Adults (N = 1,543) with traumatic spinal cord injury (SCI) were administered the 20 satisfaction and 30 problems items from the LSQ-R. CFA suggests that the existing 7-scale structure across the 50 items was within the acceptable range (root-mean-square error of approximation [RMSEA] = 0.078), although it fell just outside this range for women. Factor analysis revealed 3 satisfaction factors and 6 problems factors. The overall fit of the problems items (RMSEA = 0.070) was superior to that of the satisfaction items (RMSEA = 0.080). RMSEA fell just outside the acceptable range for Whites and men on the satisfaction scales. All scales had acceptable internal consistency. Results suggest the original scoring of the LSQ-R remains viable, although individual results should be reviewed for special populations. Factor analysis of subsets of items allows satisfaction and problems items to be used independently, depending on the study purpose. (c) 2009 APA

  8. Fuzzy Control of Robotic Arm

    NASA Astrophysics Data System (ADS)

    Lin, Kyaw Kyaw; Soe, Aung Kyaw; Thu, Theint Theint

    2008-10-01

    This research investigates a self-tuning proportional-derivative (PD) type fuzzy logic controller (STPDFLC) for a two-link robot system. The proposed scheme adjusts the output scaling factor (SF) on-line by fuzzy rules according to the current trend of the robot. The rule base for tuning the output scaling factor is defined on the error (e) and the change in error (de). The scheme is also based on the fact that the controller always tries to manipulate the process input. The rules are in the familiar if-then format. All membership functions for the controller inputs (e and de) and the controller output (UN) are defined on the common interval [-1,1], whereas the membership functions for the gain-updating factor (α) are defined on [0,1]. Among the various methods for calculating the crisp output of the system, the center of gravity (COG) method is used in this application because it gives better results. The performance of the proposed STPDFLC is compared with that of the corresponding conventional PD-type fuzzy logic controller (PDFLC). The proposed scheme shows remarkably improved performance over its conventional counterpart, especially under parameter variation (payload). The two-link arm is simulated, and the simulation results are illustrated using MATLAB® programming.
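
    A minimal Python sketch of a self-tuning scheme of this kind: triangular membership functions, a small illustrative rule table for the gain-updating factor α, and center-of-gravity defuzzification over singleton rule outputs. Product inference is used for brevity; the paper's actual rule base and membership functions are not reproduced here.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function on [a, c], peaking at b."""
            return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        # Three sets (Small, Medium, Big) on the normalized interval [0, 1].
        SETS = [(-0.5, 0.0, 0.5), (0.0, 0.5, 1.0), (0.5, 1.0, 1.5)]
        # Rule table for alpha: rows index |e| (S, M, B), columns index |de|.
        ALPHA = np.array([[0.1, 0.3, 0.5],
                          [0.3, 0.5, 0.8],
                          [0.5, 0.8, 1.0]])

        def gain_factor(e_n, de_n):
            mu_e = [tri(abs(e_n), *s) for s in SETS]
            mu_de = [tri(abs(de_n), *s) for s in SETS]
            w = np.outer(mu_e, mu_de)                 # firing strengths (product)
            return float((w * ALPHA).sum() / w.sum()) # COG over singletons

        def control(e_n, de_n, Kp=1.0, Kd=0.2, SF=1.0):
            return gain_factor(e_n, de_n) * SF * (Kp * e_n + Kd * de_n)

        print(control(0.8, 0.1))   # larger error -> larger effective gain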

  9. Synthesis of robust nonlinear autopilots using differential game theory

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.

    1991-01-01

    A synthesis technique for handling unmodeled disturbances in nonlinear control law synthesis was advanced using differential game theory. Two types of modeling inaccuracies can be included in the formulation. The first is a bias-type error, while the second is a scale-factor-type error in the control variables. The disturbances were assumed to satisfy an integral inequality constraint. Additionally, it was assumed that they act in such a way as to maximize a quadratic performance index. Expressions for optimal control and worst-case disturbance were then obtained using optimal control theory.

  10. Validating the Rett Syndrome Gross Motor Scale.

    PubMed

    Downs, Jenny; Stahlhut, Michelle; Wong, Kingsley; Syhler, Birgit; Bisgaard, Anne-Marie; Jacoby, Peter; Leonard, Helen

    2016-01-01

    Rett syndrome is a pervasive neurodevelopmental disorder associated with a pathogenic mutation in the MECP2 gene. Impaired movement is a fundamental component, and the Rett Syndrome Gross Motor Scale was developed to measure gross motor abilities in this population. The current study investigated the validity and reliability of the Rett Syndrome Gross Motor Scale. Video data showing gross motor abilities, supplemented with parent-report data, were collected for 255 girls and women registered with the Australian Rett Syndrome Database, and the factor structure and the relationships between motor scores, age, and genotype were investigated. Clinical assessment scores for 38 girls and women with Rett syndrome who attended the Danish Center for Rett Syndrome were used to assess consistency of measurement. Principal components analysis enabled the calculation of three factor scores: Sitting, Standing and Walking, and Challenge. Motor scores were poorer with increasing age, and those with the p.Arg133Cys, p.Arg294* or p.Arg306Cys mutation achieved higher scores than those with a large deletion. The repeatability of clinical assessment was excellent (intraclass correlation coefficient for the total score 0.99, 95% CI 0.93-0.98). The standard error of measurement for the total score was 2 points, and we would be 95% confident that a change of 4 points on the 45-point scale would be greater than within-subject measurement error. The Rett Syndrome Gross Motor Scale could be an appropriate measure of gross motor skills in clinical practice and clinical trials.
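
    The measurement-error arithmetic above follows from the usual definitions: SEM = SD * sqrt(1 - ICC), and a 95% threshold for real change of roughly 1.96 * SEM (some authors use 1.96 * sqrt(2) * SEM for test-retest designs). A few lines of Python with illustrative values:

        import math

        sd_total = 9.0   # hypothetical between-subject SD of total scores
        icc = 0.95       # test-retest reliability (illustrative)

        sem = sd_total * math.sqrt(1 - icc)   # ~2 points, as in the abstract
        mdc95 = 1.96 * sem                    # ~4-point threshold for real change
        print(f"SEM = {sem:.1f} points, MDC95 = {mdc95:.1f} points")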

  11. Cross-validating a bidimensional mathematics anxiety scale.

    PubMed

    Haiyan Bai

    2011-03-01

    The psychometric properties of a 14-item bidimensional Mathematics Anxiety Scale-Revised (MAS-R) were empirically cross-validated with two independent samples consisting of 647 secondary school students. An exploratory factor analysis on the scale yielded strong construct validity with a clear two-factor structure. The results from a confirmatory factor analysis indicated an excellent model fit (χ² = 98.32, df = 62; normed fit index = .92, comparative fit index = .97; root mean square error of approximation = .04). The internal consistency (.85), test-retest reliability (.71), interfactor correlation (.26, p < .001), and positive discrimination power indicated that MAS-R is a psychometrically reliable and valid instrument for measuring mathematics anxiety. Math anxiety, as measured by MAS-R, correlated negatively with student achievement scores (r = -.38), suggesting that MAS-R may be a useful tool for classroom teachers and other educational personnel tasked with identifying students at risk of reduced math achievement because of anxiety.

  12. A retrospective study on the incidences of adverse drug events and analysis of the contributing trigger factors

    PubMed Central

    Sam, Aaseer Thamby; Lian Jessica, Looi Li; Parasuraman, Subramani

    2015-01-01

    Objectives: To retrospectively determine the extent and types of adverse drug events (ADEs) from patient case sheets and identify the contributing factors of medication errors, and to assess causality and severity using the World Health Organization (WHO) probability scale and Hartwig's scale, respectively. Methods: One hundred patient case sheets were randomly selected, and a modified version of the Institute for Healthcare Improvement (IHI) Global Trigger Tool was utilized to identify ADEs; causality and severity were assessed using the WHO probability scale and Hartwig's severity assessment scale, respectively. Results: In total, 153 adverse events (AEs) were identified using the IHI Global Trigger Tool. The majority of the AEs were due to medication errors (46.41%), followed by 60 adverse drug reactions (ADRs), 15 therapeutic failure incidents, and 7 overdose cases. Of the 153 AEs, 60 were ADRs such as rashes, nausea, and vomiting. Therapeutic failure contributed 9.80% of the AEs, while overdose contributed 4.58% of the total 153 AEs. Using the trigger tools, we were able to detect 45 positive triggers in 36 patient records; among these, 19 AEs were identified in 15 patient records. The percentage of AEs per 100 patients was 17%. The calculated average was 2.03 ADEs per 1,000 doses. Conclusion: The IHI Global Trigger Tool is an effective method to help provisionally registered pharmacists identify ADEs more quickly. PMID:25767366

  13. Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Gingrich, Mark

    Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100 member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalence, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.

  14. Evaluation of normalization methods for cDNA microarray data by k-NN classification

    PubMed Central

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J

    2005-01-01

    Background Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Results Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Conclusion Using LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics. PMID:16045803

  15. Evaluation of normalization methods for cDNA microarray data by k-NN classification.

    PubMed

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J

    2005-07-26

    Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Using LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics.

  16. A proposed method to investigate reliability throughout a questionnaire

    PubMed Central

    2011-01-01

    Background Questionnaires are used extensively in medical and health care research, and their usefulness depends on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect the reliability of their answers. A method is proposed for "screening" for systematic change in random error, which could indicate changed reliability of answers. Methods A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of the intraclass correlation coefficient (ICC). The method was also applied to a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. Results The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in the questionnaire. This slope was proposed as an awareness measure, assessing whether respondents provide only a random answer or one based on substantial cognitive effort. Scales from different factor structures of the Jalowiec Coping Scale had different effects on this awareness measure. Conclusions Even though the assumptions in the simulation study might be limited compared with real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from a clinical dataset indicated that the awareness measure differed between scales. PMID:21974842

  17. On the validity and robustness of the scale error phenomenon in early childhood.

    PubMed

    DeLoache, Judy S; LoBue, Vanessa; Vanderborght, Mieke; Chiong, Cynthia

    2013-02-01

    "Scale errors" is a term referring to very young children's serious attempts to perform actions on miniature replica objects that are impossible owing to the great difference between the size of the child's body and the size of the target objects. We report three studies providing further documentation of scale errors and investigating the validity and robustness of the phenomenon. In the first, we establish that 2-year-olds' behavior in response to prompts to "pretend" with miniature replica objects differs dramatically from scale errors. The second and third studies address the robustness of the phenomenon and its relative imperviousness to attempts to influence the rate of scale errors. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. The modulating effect of personality traits on neural error monitoring: evidence from event-related FMRI.

    PubMed

    Sosic-Vasic, Zrinka; Ulrich, Martin; Ruchsow, Martin; Vasic, Nenad; Grön, Georg

    2012-01-01

    The present study investigated the association between the traits of the Five Factor Model of personality (Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of the personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative intercorrelation between the two scales observed at the behavioral level. Under the present statistical thresholds, no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with striving for task accomplishment, the correlation in the left IFG/aI possibly reflects inter-individually different involvement whenever task-set-related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task set, expressed in individually different, negatively valenced signals conveyed by the ACC upon the occurrence of an error. The present results illustrate that underlying personality traits should be taken into account when predicting individual responses to errors, and they lend external validity to the personality trait approach, suggesting that personality constructs reflect more than mere descriptive taxonomies.

  19. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    PubMed

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes, so extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, this is a difficult and time-consuming task that requires an analyst with a professional medical background; a method is therefore needed to extract medical error factors while reducing the difficulty of extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted and then related to 12 error factors. The relational model between the error-related items and the error factors was established using a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared with plain BPNN, partial least squares regression, and support vector regression, GA-BPNN exhibited higher overall prediction accuracy and could promptly identify the error factors from the error-related items. The combination of error-related items, their levels, and the GA-BPNN model was proposed as an error-factor identification technology that can automatically identify medical error factors.
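
    Setting the GA component aside, the core relational model is a standard back-propagation network mapping item levels to error factors. A minimal stand-in using scikit-learn is sketched below; the data are random placeholders and no GA tuning is performed, so this only illustrates the shape of the model, not the paper's results.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)
        X = rng.integers(0, 3, size=(624, 19))   # 19 error-related items, 3 levels
        y = rng.integers(0, 12, size=624)        # 12 error factors (placeholder)

        bpnn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1)
        bpnn.fit(X, y)                           # plain back-propagation, no GA
        print(bpnn.predict(X[:5]))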

  20. Simulation study of geometric shape factor approach to estimating earth emitted flux densities from wide field-of-view radiation measurements

    NASA Technical Reports Server (NTRS)

    Weaver, W. L.; Green, R. N.

    1980-01-01

    A study was performed on the use of geometric shape factors to estimate earth-emitted flux densities from radiation measurements made with wide field-of-view flat-plate radiometers on satellites. Sets of simulated irradiance measurements were computed for unrestricted and restricted field-of-view detectors. In these simulations, the earth radiation field was modeled using data from Nimbus 2 and 3. Geometric shape factors were derived and applied to these data to estimate flux densities on global and zonal scales. For measurements at a satellite altitude of 600 km, estimates of zonal flux density were in error by 1.0 to 1.2%, and global flux density errors were less than 0.2%. Estimates with unrestricted field-of-view detectors were about the same for Lambertian and non-Lambertian radiation models but were affected by satellite altitude. The opposite was found for the restricted field-of-view detectors.
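
    For a uniformly emitting spherical earth viewed by a nadir-pointing flat plate with an unrestricted field of view, the geometric shape factor reduces to F = (R/(R+h))^2, and the flux-density estimate is the measured irradiance divided by F. A few illustrative lines of Python (the measurement value is hypothetical):

        R, h = 6371.0, 600.0          # earth radius and satellite altitude, km
        F = (R / (R + h)) ** 2        # shape factor, unrestricted flat-plate FOV
        E = 200.0                     # measured irradiance, W/m^2 (hypothetical)
        M_hat = E / F                 # estimated earth-emitted flux density
        print(f"F = {F:.3f}, M_hat = {M_hat:.1f} W/m^2")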

  1. A Method of Reducing Random Drift in the Combined Signal of an Array of Inertial Sensors

    DTIC Science & Technology

    2015-09-30

    stability of the collective output, Bayard et al, US Patent 6,882,964. The prior art methods rely upon the use of Kalman filtering and averaging...including scale-factor errors, quantization effects, temperature effects, random drift, and additive noise. A comprehensive account of all of these

  2. A Two-Factor Model Better Explains Heterogeneity in Negative Symptoms: Evidence from the Positive and Negative Syndrome Scale.

    PubMed

    Jang, Seon-Kyeong; Choi, Hye-Im; Park, Soohyun; Jaekal, Eunju; Lee, Ga-Young; Cho, Young Il; Choi, Kee-Hong

    2016-01-01

    Acknowledging separable factors underlying negative symptoms may lead to better understanding and treatment of negative symptoms in individuals with schizophrenia. The current study aimed to test whether the negative symptoms factor (NSF) of the Positive and Negative Syndrome Scale (PANSS) would be better represented by expressive and experiential deficit factors, rather than by a single factor model, using confirmatory factor analysis (CFA). Two hundred and twenty individuals with schizophrenia spectrum disorders completed the PANSS; subsamples additionally completed the Brief Negative Symptom Scale (BNSS) and the Motivation and Pleasure Scale-Self-Report (MAP-SR). CFA results indicated that the two-factor model fit the data better than the one-factor model; however, latent variables were closely correlated. The two-factor model's fit was significantly improved by accounting for correlated residuals between N2 (emotional withdrawal) and N6 (lack of spontaneity and flow of conversation), and between N4 (passive social withdrawal) and G16 (active social avoidance), possibly reflecting common method variance. The two NSF factors exhibited differential patterns of correlation with subdomains of the BNSS and MAP-SR. These results suggest that the PANSS NSF would be better represented by a two-factor model than by a single-factor one, and support the two-factor model's adequate criterion-related validity. Common method variance among several items may be a potential source of measurement error under a two-factor model of the PANSS NSF.

  3. OB Stars and Cepheids From the Gaia TGAS Catalogue: Test of their Distances and Proper Motions

    NASA Astrophysics Data System (ADS)

    Bobylev, Vadim V.; Bajkova, Anisa T.

    2017-12-01

    We consider young distant stars from the Gaia TGAS catalog: 250 classical Cepheids and 244 OB stars located at distances up to 4 kpc from the Sun. These stars are used to determine the Galactic rotation parameters from both the trigonometric parallaxes and the proper motions of the TGAS stars; the stars considered have relative parallax errors of less than 200%. Following the well-known statistical approach, we assume that the kinematic parameters found from the line-of-sight velocities Vr are less dependent on distance errors than those found from the velocity components Vl. From the values of the first derivative of the Galactic rotation angular velocity, Ω'0, found from the analysis of the velocities Vr and Vl separately, the distance scale factor is determined. We found that, based on the sample of Cepheids, the TGAS distance scale should be reduced by 3%, while based on the sample of OB stars, on the contrary, the scale should be increased by 9%.

  4. Application of Improved 5th-Cubature Kalman Filter in Initial Strapdown Inertial Navigation System Alignment for Large Misalignment Angles

    PubMed Central

    Wang, Wei; Chen, Xiyuan

    2018-01-01

    In view of the fact that the accuracy of the third-degree Cubature Kalman Filter (CKF) used for initial alignment under large misalignment angle conditions is insufficient, an improved fifth-degree CKF algorithm is proposed in this paper. To make full use of the innovation in filtering, the innovation covariance matrix is calculated recursively from the innovation sequence with an exponential fading factor. Then a new adaptive error covariance matrix scaling algorithm is proposed. The Singular Value Decomposition (SVD) method is used to improve the numerical stability of the fifth-degree CKF. To avoid the overshoot caused by excessive scaling of the error covariance matrix during the convergence stage, the scaling scheme is terminated when the gradient of the azimuth reaches its maximum. The experimental results show that the improved algorithm has better alignment accuracy under large misalignment angles than the traditional algorithm. PMID:29473912

  5. Comparison of Low Cost Photogrammetric Survey with TLS and Leica Pegasus Backpack 3D Models

    NASA Astrophysics Data System (ADS)

    Masiero, A.; Fissore, F.; Guarnieri, A.; Piragnolo, M.; Vettore, A.

    2017-11-01

    This paper considers Leica backpack and photogrammetric surveys of a mediaeval bastion in Padua, Italy. Furthermore, a terrestrial laser scanning (TLS) survey is considered in order to provide a state-of-the-art reconstruction of the bastion. Although control points are typically used to avoid deformations in photogrammetric surveys and to ensure correct scaling of the reconstruction, a different approach is considered in this paper: this work is part of a project aiming at the development of a system exploiting ultra-wide band (UWB) devices to provide correct scaling of the reconstruction. In particular, low cost Pozyx UWB devices are used to estimate camera positions during image acquisition. Then, in order to obtain a metric reconstruction, the scale factor of the photogrammetric survey is estimated by comparing camera positions obtained from UWB measurements with those obtained from the photogrammetric reconstruction. Compared with the TLS survey, the resulting photogrammetric model of the bastion has an RMSE of 21.9 cm, an average error of 13.4 cm, and a standard deviation of 13.5 cm. Excluding the final part of the bastion's left wing, where the presence of several poles makes reconstruction more difficult, the RMSE fitting error is 17.3 cm, the average error 11.5 cm, and the standard deviation 9.5 cm. By comparison, the Leica backpack and TLS surveys differ by an average error of 4.7 cm with a standard deviation of 0.6 cm (4.2 cm and 0.3 cm, respectively, when the final part of the left wing is excluded).
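
    One simple way to recover the scale factor from matched camera positions (the photogrammetric model in arbitrary units, the UWB track in metres) is the ratio of the centred point-set spreads, i.e. the scale term of a similarity alignment. A minimal Python sketch with hypothetical positions:

        import numpy as np

        def similarity_scale(p_model, p_uwb):
            """Least-squares scale between matched positions."""
            p = p_model - p_model.mean(axis=0)
            q = p_uwb - p_uwb.mean(axis=0)
            return np.sqrt((q ** 2).sum() / (p ** 2).sum())

        # Hypothetical matched camera positions for four exposures.
        model = np.array([[0.0, 0.0, 0.0], [1.0, 0.2, 0.0],
                          [2.1, 0.1, 0.3], [3.0, 0.5, 0.2]])
        uwb = 2.45 * model + np.random.default_rng(2).normal(0, 0.02, model.shape)
        print(f"scale ~ {similarity_scale(model, uwb):.2f} m per model unit")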

  6. Psychometric assessment of the processes of change scale for sun protection.

    PubMed

    Sillice, Marie A; Babbin, Steven F; Redding, Colleen A; Rossi, Joseph S; Paiva, Andrea L; Velicer, Wayne F

    2018-01-01

    The fourteen-factor Processes of Change Scale for Sun Protection assesses behavioral and experiential strategies that underlie the process of sun protection acquisition and maintenance. Variations of this measure have been used effectively in several randomized sun protection trials, both for evaluation and as a basis for intervention. However, there are no published studies, to date, that evaluate the psychometric properties of the scale. The present study evaluated factorial invariance and scale reliability at baseline in a national sample (N = 1360) of adults involved in a Transtheoretical-model-tailored intervention for exercise and sun protection. Invariance testing ranged from least to most restrictive: Configural Invariance (constrains only the factor structure and zero loadings); Pattern Identity Invariance (equal factor loadings across target groups); and Strong Factorial Invariance (equal factor loadings and measurement errors). Multi-sample structural equation modeling tested the invariance of the measurement model across seven subgroups: age, education, ethnicity, gender, race, skin tone, and Stage of Change for Sun Protection. Strong factorial invariance was found across all subgroups. Internal consistency coefficient Alpha and factor rho reliability, respectively, were .83 and .80 for behavioral processes, .91 and .89 for experiential processes, and .93 and .91 for the global scale. These results provide strong empirical evidence that the scale is consistent, has internal validity, and can be used in research interventions with population-based adult samples.

  7. An extended linear scaling method for downscaling temperature and its implication in the Jhelum River basin, Pakistan, and India, using CMIP5 GCMs

    NASA Astrophysics Data System (ADS)

    Mahmood, Rashid; JIA, Shaofeng

    2017-11-01

    In this study, the linear scaling method used for the downscaling of temperature was extended from monthly scaling factors to daily scaling factors (SFs) to improve the daily variations in the corrected temperature. In the original linear scaling (OLS), mean monthly SFs are used to correct the future data, whereas in the extended linear scaling (ELS) method, mean daily SFs are used. The proposed method was evaluated in the Jhelum River basin for the period 1986-2000, using the observed maximum temperature (Tmax) and minimum temperature (Tmin) of 18 climate stations and the simulated Tmax and Tmin of five global climate models (GCMs) (GFDL-ESM2G, NorESM1-ME, HadGEM2-ES, MIROC5, and CanESM2), and the method was also compared with OLS to observe the improvement. Before the evaluation of ELS, these GCMs were also evaluated using their raw data against the observed data for the same period (1986-2000). Four statistical indicators, i.e., error in mean, error in standard deviation, root mean square error, and correlation coefficient, were used for the evaluation process. The evaluation with the GCMs' raw data showed that GFDL-ESM2G and MIROC5 performed better than the other GCMs according to all the indicators, but with unsatisfactory results that preclude their direct application in the basin. Nevertheless, after correction with ELS, a noticeable improvement was observed in all the indicators except the correlation coefficient, because this method only adjusts (corrects) the magnitude. It was also noticed that the daily variations of the observed data were better captured by the data corrected with ELS than with OLS. Finally, the ELS method was applied for the downscaling of the five GCMs' Tmax and Tmin for the period 2041-2070 under RCP8.5 in the Jhelum basin. The results showed that the basin would face a hotter climate in the future relative to the present, which may result in increased water requirements in the public, industrial, and agricultural sectors; changes in the hydrological cycle and monsoon pattern; and loss of glaciers in the basin.
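
    A minimal Python sketch of the extension from monthly to daily additive scaling factors (an ELS-style correction, under the assumption that the factors are additive for temperature); the series are hypothetical and the paper's exact formulation may differ.

        import numpy as np
        import pandas as pd

        def daily_scaling_factors(obs, gcm_hist):
            # SF(d) = mean_obs(d) - mean_gcm(d) for each calendar day d,
            # instead of one factor per calendar month as in OLS.
            return (obs.groupby(obs.index.dayofyear).mean()
                    - gcm_hist.groupby(gcm_hist.index.dayofyear).mean())

        def apply_correction(gcm_future, sf):
            return gcm_future + sf.reindex(gcm_future.index.dayofyear).to_numpy()

        idx_h = pd.date_range("1986-01-01", "2000-12-31", freq="D")
        idx_f = pd.date_range("2041-01-01", "2070-12-31", freq="D")
        rng = np.random.default_rng(3)
        season = lambda idx: 20 + 10 * np.sin(2 * np.pi * idx.dayofyear / 365.25)
        obs = pd.Series(season(idx_h) + rng.normal(0, 2, len(idx_h)), index=idx_h)
        gcm_h = pd.Series(season(idx_h) - 1.5 + rng.normal(0, 2, len(idx_h)), index=idx_h)
        gcm_f = pd.Series(season(idx_f) + 1.0 + rng.normal(0, 2, len(idx_f)), index=idx_f)

        corrected = apply_correction(gcm_f, daily_scaling_factors(obs, gcm_h))
        print(corrected.head())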

  8. The simple procedure for the fluxgate magnetometers calibration

    NASA Astrophysics Data System (ADS)

    Marusenkov, Andriy

    2014-05-01

    Fluxgate magnetometers are widely used in geophysical investigations, including geomagnetic field monitoring at the global network of geomagnetic observatories as well as electromagnetic sounding of the Earth's crust conductivity. For these tasks the magnetometers have to be calibrated to an appropriate level of accuracy. As a particular case, ways to satisfy the recent requirements on the scaling and orientation errors of 1-second INTERMAGNET magnetometers are considered in this work. The goal of the present study was to choose a simple and reliable calibration method for estimating the scale factors and angular errors of three-axis magnetometers in the field. There is a large number of scalar calibration methods that use a free rotation of the sensor in the calibration field, followed by complicated data processing procedures for the numerical solution of high-order equation sets. The chosen approach also exploits the Earth's magnetic field as a calibrating signal but, in contrast to other methods, the sensor is oriented in particular positions with respect to the total field vector instead of being freely rotated. This allows the use of very simple and straightforward linear computation formulas and, as a result, yields more reliable estimates of the calibrated parameters. The scale factors are estimated by sequentially aligning each component of the sensor in two positions: parallel and anti-parallel to the Earth's magnetic field vector. The non-orthogonality angles between each pair of components are estimated after sequentially aligning the components at angles of +/- 45 and +/- 135 degrees of arc with respect to the total field vector. Owing to this four-position approach, the estimates of the non-orthogonality angles are invariant to the zero offsets and to the nonlinearity of the components' transfer functions. Experimental verification of the proposed method using a coil calibration system shows that the achieved accuracy (<0.04% for scale factors and 0.03 degrees of arc for angular errors) is sufficient for many applications, and in particular satisfies the INTERMAGNET requirements for 1-second instruments.
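
    The two-position scale-factor estimate described above has a simple closed form: with readings U = k*B + b taken parallel (B = +F) and anti-parallel (B = -F) to the total field, the offset b cancels, giving k = (U_plus - U_minus) / (2F). A few lines of Python with hypothetical instrument parameters:

        F = 49000.0                     # local total field magnitude, nT
        k_true, b_true = 1.0004, 25.0   # hypothetical scale factor and offset

        U_plus = k_true * F + b_true    # component aligned parallel to the field
        U_minus = -k_true * F + b_true  # component aligned anti-parallel

        k = (U_plus - U_minus) / (2 * F)   # offset cancels
        b = (U_plus + U_minus) / 2         # scale term cancels
        print(f"k = {k:.5f}, b = {b:.1f} nT")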

  9. Distinguishing Error from Chaos in Ecological Time Series

    NASA Astrophysics Data System (ADS)

    Sugihara, George; Grenfell, Bryan; May, Robert M.

    1990-11-01

    Over the years, there has been much discussion about the relative importance of environmental and biological factors in regulating natural populations. Often it is thought that environmental factors are associated with stochastic fluctuations in population density, and biological ones with deterministic regulation. We revisit these ideas in the light of recent work on chaos and nonlinear systems. We show that completely deterministic regulatory factors can lead to apparently random fluctuations in population density, and we then develop a new method (that can be applied to limited data sets) to make practical distinctions between apparently noisy dynamics produced by low-dimensional chaos and population variation that in fact derives from random (high-dimensional) noise, such as environmental stochasticity or sampling error. To show its practical use, the method is first applied to models where the dynamics are known. We then apply the method to several sets of real data, including newly analysed data on the incidence of measles in the United Kingdom. Here the additional problems of secular trends and spatial effects are explored. In particular, we find that on a city-by-city scale measles exhibits low-dimensional chaos (as has previously been found for measles in New York City), whereas on a larger, country-wide scale the dynamics appear as a noisy two-year cycle. In addition to shedding light on the basic dynamics of some nonlinear biological systems, this work dramatizes how the scale on which data are collected and analysed can affect the conclusions drawn.
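
    The operational signature such methods exploit is that short-term forecasts of a chaotic series are accurate but degrade as the prediction horizon grows, whereas forecasts of uncorrelated noise are uniformly poor. A toy nearest-neighbour version of this idea (not the authors' exact algorithm, which uses simplex projection on delay embeddings) can be sketched as:

    ```python
    import numpy as np

    def forecast_skill(x, E=3, hmax=10):
        """Crude nearest-neighbour forecasting in an E-dimensional delay
        embedding; returns corr(predicted, actual) for horizons 1..hmax.
        Chaos: skill decays with horizon. Noise: uniformly low skill."""
        n = len(x)
        emb = np.array([x[i:i + E] for i in range(n - E - hmax)])
        half = len(emb) // 2                 # library = first half of the series
        lib, test = emb[:half], emb[half:]
        skills = []
        for h in range(1, hmax + 1):
            preds, actual = [], []
            for j, v in enumerate(test):
                k = np.argmin(np.linalg.norm(lib - v, axis=1))
                preds.append(x[k + E - 1 + h])           # neighbour's future
                actual.append(x[half + j + E - 1 + h])   # true future
            skills.append(np.corrcoef(preds, actual)[0, 1])
        return skills

    x = np.empty(1000); x[0] = 0.3
    for i in range(999):                     # chaotic logistic map
        x[i + 1] = 3.9 * x[i] * (1.0 - x[i])
    noise = np.random.default_rng(0).random(1000)
    print(forecast_skill(x)[:4])             # high skill, decaying with horizon
    print(forecast_skill(noise)[:4])         # near zero at all horizons
    ```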

  10. More Thoughts on AG-SG Comparisons and SG Scale Factor Determinations

    NASA Astrophysics Data System (ADS)

    Crossley, David; Calvo, Marta; Rosat, Severine; Hinderer, Jacques

    2018-05-01

    We revisit a number of details that arise when doing joint AG-SG (absolute gravimeter-superconducting gravimeter) calibrations, focusing on the scale factor determination and the AG mean value that derives from the offset. When fitting SG data to AG data, the choice of which time span to use for the SG data can make a difference, as can the inclusion of a trend in the fitting; the SG time delay has only a small effect. We review a number of options discussed recently in the literature on whether drops or sets provide the most accurate scale factor, and on how to reject drops and sets to get the most consistent result. Two effects are clearly indicated by our tests: one is to smooth the raw SG 1 s (or similar sampling interval) data for times that coincide with AG drops, and the other is a second processing pass to reject residual outliers after the initial fit. Although drops can usefully provide smaller SG calibration errors than set data, set values are more robust to data problems, but one has to use the standard error to avoid large uncertainties. When combining scale factor determinations for the same SG at the same station, the expected gradual reduction of the error with each new experiment is consistent with the method of conflation. This holds even when the SG data acquisition system is changed or different AGs are used. We also find a relationship between the AG mean values obtained from SG-to-AG fits and the traditional short-term AG (`site') measurements usually done with shorter datasets; this involves different zero levels and corrections in the AG versus SG processing. Without using the Micro-g FG5 software it is possible to use the SG-derived corrections for tides, barometric pressure, and polar motion to convert an AG-SG calibration experiment into a site measurement (and vice versa). Finally, we provide a simple method for AG users who do not have the FG5 software to find an internal FG5 parameter that allows AG values to be converted between different transfer heights when there is a change in gradient.
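
    The core of such a calibration is an ordinary least-squares fit of the AG drop values against the simultaneous (smoothed) SG output, with an offset and optionally a trend, followed by the second outlier-rejection pass mentioned above. A minimal sketch under those assumptions, with illustrative variable names:

    ```python
    import numpy as np

    def agsg_scale_factor(t, sg, g_ag, clip=3.0):
        """Fit g_AG ~ scale*SG + offset + trend*t, then refit after
        rejecting residual outliers (the 'second pass' in the abstract)."""
        A = np.column_stack([sg, np.ones_like(sg), t])
        coef, *_ = np.linalg.lstsq(A, g_ag, rcond=None)
        resid = g_ag - A @ coef
        keep = np.abs(resid) < clip * resid.std()     # second-pass rejection
        coef, *_ = np.linalg.lstsq(A[keep], g_ag[keep], rcond=None)
        return coef                                   # (scale, offset, trend)
    ```

    The fitted slope is the SG scale factor (e.g., in (nm/s^2)/V) and the offset carries the AG mean value discussed in the abstract.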

  12. Decreasing scoring errors on Wechsler Scale Vocabulary, Comprehension, and Similarities subtests: a preliminary study.

    PubMed

    Linger, Michele L; Ray, Glen E; Zachar, Peter; Underhill, Andrea T; LoBello, Steven G

    2007-10-01

    Studies of graduate students learning to administer the Wechsler scales have generally shown that training is not associated with the development of scoring proficiency. Many studies report on the reduction of aggregated administration and scoring errors, a strategy that does not highlight the reduction of errors on subtests identified as most prone to error. This study evaluated the development of scoring proficiency specifically on the Wechsler (WISC-IV and WAIS-III) Vocabulary, Comprehension, and Similarities subtests during training by comparing a set of 'early test administrations' to 'later test administrations.' Twelve graduate students enrolled in an intelligence-testing course participated in the study. Scoring errors (e.g., incorrect point assignment) were evaluated on the students' actual practice administration test protocols. Errors on all three subtests declined significantly when scoring errors on 'early' sets of Wechsler scales were compared to those made on 'later' sets. However, correcting these subtest scoring errors did not cause significant changes in subtest scaled scores. Implications for clinical instruction and future research are discussed.

  13. Development of a refractive error quality of life scale for Thai adults (the REQ-Thai).

    PubMed

    Sukhawarn, Roongthip; Wiratchai, Nonglak; Tatsanavivat, Pyatat; Pitiyanuwat, Somwung; Kanato, Manop; Srivannaboon, Sabong; Guyatt, Gordon H

    2011-08-01

    To develop a scale for measuring refractive error quality of life (QOL) for Thai adults. The full survey comprised 424 respondents from 5 medical centers in Bangkok and from 3 medical centers in Chiangmai, Songkla and KhonKaen provinces. Participants were emmetropes and persons with refractive correction with visual acuity of 20/30 or better. An item reduction process was employed combining three methods: expert opinion, the impact method, and item-total correlation. Classical reliability testing and validity testing, including convergent, discriminative and construct validity, were performed. The developed questionnaire comprised 87 items in 6 dimensions: 1) quality of vision, 2) visual function, 3) social function, 4) psychological function, 5) symptoms and 6) refractive correction problems. Items are of the 5-level Likert type. The Cronbach's alpha coefficients of its dimensions ranged from 0.756 to 0.979. All validity tests supported the instrument's validity; construct validity was confirmed by confirmatory factor analysis. A short-version questionnaire comprising 48 items, also with good reliability and validity, was developed as well. This is the first validated instrument for measuring refractive error quality of life for Thai adults, developed with a rigorous research methodology and a large sample.
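
    Cronbach's alpha, the reliability statistic quoted here and in several of the following records, is straightforward to compute from a respondents-by-items score matrix. A minimal sketch (illustrative, not the authors' code):

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for an (n_respondents, n_items) matrix:
        alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var / total_var)
    ```

    Values in the 0.756-0.979 range reported above indicate acceptable-to-excellent internal consistency by the usual conventions.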

  14. The impact of command signal power distribution, processing delays, and speed scaling on neurally-controlled devices.

    PubMed

    Marathe, A R; Taylor, D M

    2015-08-01

    Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.

  15. The impact of command signal power distribution, processing delays, and speed scaling on neurally-controlled devices

    NASA Astrophysics Data System (ADS)

    Marathe, A. R.; Taylor, D. M.

    2015-08-01

    Objective. Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. Approach. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Main results. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. Significance. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.
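
    The study's key manipulation, error signals of equal magnitude but different frequency content, can be reproduced by filtering white noise and rescaling each variant to a common RMS. A minimal sketch of that construction (parameter names illustrative, not from the paper):

    ```python
    import numpy as np

    def matched_rms_noise(n, smooth_len=30, rms=1.0, seed=0):
        """White noise plus low-frequency (smoothed) and high-frequency
        (residual) variants, all rescaled to the same RMS, i.e. errors of
        equal magnitude but different power distributions."""
        rng = np.random.default_rng(seed)
        w = rng.standard_normal(n)
        kernel = np.ones(smooth_len) / smooth_len
        low = np.convolve(w, kernel, mode="same")     # slow-varying errors
        high = w - low                                # jittery errors
        to_rms = lambda x: x * (rms / np.sqrt(np.mean(x ** 2)))
        return to_rms(w), to_rms(low), to_rms(high)
    ```

    Adding the slow-varying variant to a cursor velocity command mimics the low-frequency decoding errors that participants found hardest to correct.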

  16. Compatibility check of measured aircraft responses using kinematic equations and extended Kalman filter

    NASA Technical Reports Server (NTRS)

    Klein, V.; Schiess, J. R.

    1977-01-01

    An extended Kalman filter-smoother and a fixed-point smoother were used to estimate the state variables in the six-degree-of-freedom kinematic equations relating measured aircraft responses, and to estimate unknown constant bias and scale factor errors in the measured data. The computing algorithm includes an analysis of residuals which can improve the filter performance and provide estimates of measurement noise characteristics for some aircraft output variables. The technique developed was demonstrated using simulated and real flight test data. Improved accuracy of the measured data was obtained when the data were corrected for the estimated bias errors.
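
    For the special case of constant bias and scale-factor errors observed against an independently reconstructed signal, the estimator reduces to a small linear Kalman filter. A minimal sketch of that reduced problem (not the flight-data EKF of the report; names and noise levels are illustrative):

    ```python
    import numpy as np

    def estimate_bias_and_scale(y_ref, z, r=0.01, q=1e-8):
        """Linear Kalman filter for a constant bias b and scale-factor error k
        in measurements z = (1 + k) * y + b + noise, given an independent
        reference y_ref for the true signal."""
        x = np.zeros(2)                      # state: [b, k]
        P = np.eye(2)                        # initial state covariance
        I = np.eye(2)
        for y, zt in zip(y_ref, z):
            P = P + q * I                    # parameters modelled as near-constant
            H = np.array([1.0, y])           # z - y = b + k*y + noise
            S = H @ P @ H + r                # innovation variance
            K = P @ H / S                    # Kalman gain
            x = x + K * (zt - y - H @ x)
            P = (I - np.outer(K, H)) @ P
        return x                             # estimated [bias, scale-factor error]
    ```

    With z = (1 + k)*y + b + v the measurement matrix is H = [1, y], so this reduced problem is linear, whereas the full aircraft problem in the report requires an extended filter.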

  17. Modal energy analysis for mechanical systems excited by spatially correlated loads

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Fei, Qingguo; Li, Yanbin; Wu, Shaoqing; Chen, Qiang

    2018-10-01

    MODal ENergy Analysis (MODENA) is an energy-based method proposed to deal with vibroacoustic problems. The performance of MODENA in the energy analysis of a mechanical system under spatially correlated excitation is investigated. A plate/cavity coupling system excited by a pressure field is studied in a numerical example involving four kinds of pressure field: a purely random pressure field, a perfectly correlated pressure field, an incident diffuse field, and turbulent-boundary-layer pressure fluctuation. The total energies of the subsystems differ from the reference solution only in the case of the purely random pressure field, and only for the non-excited subsystem (the cavity). A deeper analysis at the scale of modal energy is then conducted via another numerical example, in which two structural modes excited by correlated forces are coupled with one acoustic mode. A dimensionless correlation strength factor is proposed to quantify the correlation strength between modal forces. Results show that the error in modal energy increases with the correlation strength factor. A criterion is proposed to link the error to the correlation strength factor: the error is negligible when the correlation strength is weak, that is, when the correlation strength factor is less than a critical value.

  18. JY1 time scale: a new Kalman-filter time scale designed at NIST

    NASA Astrophysics Data System (ADS)

    Yao, Jian; Parker, Thomas E.; Levine, Judah

    2017-11-01

    We report on a new Kalman-filter hydrogen-maser time scale (the JY1 time scale) designed at the National Institute of Standards and Technology (NIST). The JY1 time scale is composed of a few hydrogen masers and a commercial Cs clock; the Cs clock is used as a reference clock to ease operations with existing data. Unlike other time scales, the JY1 time scale uses three basic time-scale equations, instead of only one. The time scale can also detect a clock error (i.e., a time error, frequency error, or frequency drift error) automatically. These features make the JY1 time scale robust and less likely to be affected by an abnormal clock. Tests show that the JY1 time scale deviates from UTC by less than ±5 ns over ~100 d when it is initially aligned to UTC and then left completely free-running. Once the time scale is steered to a Cs fountain, it can maintain the time with little error even if the Cs fountain stops working for tens of days. This is helpful when no continuously operated fountain is available, when a continuously operated fountain stops unexpectedly, or when optical clocks run only occasionally.

  19. Growth models and the expected distribution of fluctuating asymmetry

    USGS Publications Warehouse

    Graham, John H.; Shimizu, Kunio; Emlen, John M.; Freeman, D. Carl; Merkel, John

    2003-01-01

    Multiplicative error accounts for much of the size-scaling and leptokurtosis in fluctuating asymmetry. It arises when growth involves the addition of tissue to that which is already present. Such errors are lognormally distributed. The distribution of the difference between two lognormal variates is leptokurtic. If those two variates are correlated, then the asymmetry variance will scale with size. Inert tissues typically exhibit additive error and have a gamma distribution. Although their asymmetry variance does not exhibit size-scaling, the distribution of the difference between two gamma variates is nevertheless leptokurtic. Measurement error is also additive, but has a normal distribution. Thus, the measurement of fluctuating asymmetry may involve the mixing of additive and multiplicative error. When errors are multiplicative, we recommend computing log E(l) − log E(r), the difference between the logarithms of the expected values of left and right sides, even when size-scaling is not obvious. If l and r are lognormally distributed, and measurement error is nil, the resulting distribution will be normal, and multiplicative error will not confound size-related changes in asymmetry. When errors are additive, such a transformation to remove size-scaling is unnecessary. Nevertheless, the distribution of l − r may still be leptokurtic.
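
    The distributional claims here are easy to check by simulation: the difference of two lognormal variates is leptokurtic (positive excess kurtosis), while the difference of their logarithms is normal. A minimal sketch with illustrative parameter values:

    ```python
    import numpy as np

    def excess_kurtosis(x):
        """Fisher excess kurtosis: E[(x - mu)^4] / sigma^4 - 3 (0 for a normal)."""
        z = x - x.mean()
        return (z ** 4).mean() / (z ** 2).mean() ** 2 - 3.0

    rng = np.random.default_rng(1)
    n = 200_000
    l = np.exp(rng.normal(2.0, 0.4, n))   # multiplicative (lognormal) growth error
    r = np.exp(rng.normal(2.0, 0.4, n))
    print(excess_kurtosis(l - r))                   # > 0: leptokurtic difference
    print(excess_kurtosis(np.log(l) - np.log(r)))   # ~ 0: normal after log transform
    l2 = 7.4 + rng.normal(0, 0.1, n)      # additive (normal) error for comparison
    r2 = 7.4 + rng.normal(0, 0.1, n)
    print(excess_kurtosis(l2 - r2))                 # ~ 0 without any transformation
    ```

    The additive-error case gives excess kurtosis near zero with no transformation, matching the recommendation to log-transform only when errors are multiplicative.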

  20. Ideal, nonideal, and no-marker variables: The confirmatory factor analysis (CFA) marker technique works when it matters.

    PubMed

    Williams, Larry J; O'Boyle, Ernest H

    2015-09-01

    A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found that the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than the error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and that the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.

  1. Investigating different filter and rescaling methods on simulated GRACE-like TWS variations for hydrological applications

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjing; Dobslaw, Henryk; Dahle, Christoph; Thomas, Maik; Neumayer, Karl-Hans; Flechtner, Frank

    2017-04-01

    By operating for more than one decade now, the GRACE satellite mission provides valuable information on total water storage (TWS) for hydrological and hydro-meteorological applications. The increasing interest in the use of GRACE-based TWS requires an in-depth assessment of the reliability of the outputs and of their uncertainties. Over years of development, different post-processing methods have been suggested for TWS estimation. However, since GRACE offers a unique way to observe TWS at these spatial and temporal scales, no global ground-truth data are available to fully validate the results. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-type gravity field time series based on realistic orbits and instrument error assumptions, as well as background error assumptions from the updated ESA Earth System Model. Three non-isotropic filter methods from Kusche (2007) and a filter combining DDK1 and DDK3 based on the ground tracks are tested. Rescaling factors estimated from five different hydrological models and from the ensemble median are applied to the post-processed simulated GRACE-type TWS estimates to correct for bias and leakage. Time-variant rescaling factors, i.e., monthly scaling factors and separate scaling factors for seasonal and long-term variations, are investigated as well. Since TWS anomalies from the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment (Zhang et al., 2016) and will subsequently recommend a processing strategy that shall also be applied to the planned GRACE and GRACE-FO Level-3 products for terrestrial applications provided by GFZ. Kusche, J., 2007: Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geodesy, 81 (11), 733-749, doi:10.1007/s00190-007-0143-3. Zhang, L., Dobslaw, H., Thomas, M., 2016: Globally gridded terrestrial water storage variations from GRACE satellite gravimetry for hydrometeorological applications. Geophysical Journal International, 206 (1), 368-378, doi:10.1093/gji/ggw153.
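
    A rescaling (gain) factor of the kind applied here is typically obtained per grid cell by passing a hydrological model through the same filter as the GRACE data and finding the single multiplier that best restores the damped signal. A minimal sketch of that least-squares gain (illustrative, not the exact GFZ processing):

    ```python
    import numpy as np

    def rescaling_factor(model_tws, filtered_tws):
        """Least-squares gain k minimizing ||model - k * filtered||^2, i.e. the
        multiplier that best restores the signal damped by filtering/leakage."""
        return np.dot(model_tws, filtered_tws) / np.dot(filtered_tws, filtered_tws)
    ```

    Estimating k separately per calendar month, or separately for the seasonal and long-term components of the series, gives the time-variant variants investigated in the abstract.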

  2. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  3. A Distribution-Free Description of Fragmentation by Blasting Based on Dimensional Analysis

    NASA Astrophysics Data System (ADS)

    Sanchidrián, José A.; Ouchterlony, Finn

    2017-04-01

    A model for fragmentation in bench blasting is developed from dimensional analysis adapted from asteroid collision theory, to which two factors have been added: one describing the discontinuities' spacing and orientation, and another the delay between successive contiguous shots. The formulae are calibrated by nonlinear fits to 169 bench blasts in different sites and rock types, bench geometries and delay times, for which the blast design data and the size distributions of the muckpile obtained by sieving were available. Percentile sizes of the fragment size distribution are obtained as the product of a rock mass structural factor, a rock strength-to-explosive energy ratio, a bench shape factor, a scale factor or characteristic size, and a function of the in-row delay. The rock structure is described by means of the joints' mean spacing and orientation with respect to the free face. The strength property chosen is the strain energy at rupture, which, together with the explosive energy density, forms a combined rock strength/explosive energy factor. The model is applicable from the 5th to the 100th percentile size, with all parameters determined from the fits significant at the 0.05 level. The expected error of the prediction is below 25% at any percentile. These errors are half to one-third of the errors expected with the best prediction models available to date.

  4. Solving the puzzle of discrepant quasar variability on monthly time-scales implied by SDSS and CRTS data sets

    NASA Astrophysics Data System (ADS)

    Suberlak, Krzysztof; Ivezić, Željko; MacLeod, Chelsea L.; Graham, Matthew; Sesar, Branimir

    2017-12-01

    We present an improved photometric error analysis for the 7100 CRTS (Catalina Real-Time Transient Survey) optical light curves for quasars from the SDSS (Sloan Digital Sky Survey) Stripe 82 catalogue. The SDSS imaging survey has provided a time-resolved photometric data set which greatly improved our understanding of the quasar optical continuum variability: data for monthly and longer time-scales are consistent with a damped random walk (DRW). Recently, newer data obtained by CRTS provided puzzling evidence for enhanced variability, compared to SDSS results, on monthly time-scales. Quantitatively, SDSS results predict about 0.06 mag root-mean-square (rms) variability on monthly time-scales, while CRTS data show about a factor of 2 larger rms for spectroscopically confirmed SDSS quasars. Our analysis has successfully resolved this discrepancy as due to slightly underestimated photometric uncertainties from the CRTS image processing pipelines. As a result, the correction for observational noise is too small and the implied quasar variability is too large. The CRTS photometric error correction factors, derived from a detailed analysis of non-variable SDSS standard stars that were re-observed by CRTS, are about 20-30 per cent, and result in reconciling the quasar variability behaviour implied by the CRTS data with earlier SDSS results. An additional analysis based on independent light curve data for the same objects obtained by the Palomar Transient Factory provides further support for this conclusion. In summary, the quasar variability constraints on weekly and monthly time-scales from SDSS, CRTS and PTF surveys are mutually compatible, as well as consistent with the DRW model.
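
    The mechanics of the discrepancy sit in the standard noise-debiasing step: intrinsic variability is the observed scatter with photometric errors subtracted in quadrature, so multiplying underestimated errors by a correction factor shrinks the inferred variability. A minimal sketch (the 1.25 default is illustrative of the 20-30 per cent corrections quoted above):

    ```python
    import numpy as np

    def intrinsic_rms(mags, mag_errs, err_corr=1.25):
        """Observed scatter debiased by photometric noise in quadrature;
        err_corr > 1 models the revised (larger) CRTS uncertainties."""
        var = np.var(mags, ddof=1) - np.mean((err_corr * mag_errs) ** 2)
        return np.sqrt(max(var, 0.0))
    ```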

  5. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Yγ*, where Y is an interpolation matrix and γ* is a stochastic vector of parameters. Vector γ* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Yγ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Yγ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate γ* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yγ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large, to test the robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.

  6. Error Model and Compensation of Bell-Shaped Vibratory Gyro

    PubMed Central

    Su, Zhong; Liu, Ning; Li, Qing

    2015-01-01

    A bell-shaped vibratory angular velocity gyro (BVG), inspired by the traditional Chinese bell, is a type of axisymmetric shell resonator gyroscope. This paper focuses on the development of an error model and compensation method for the BVG. A dynamic equation is first established, based on a study of the BVG working mechanism. This equation is then used to evaluate the relationship between the angular rate output signal and the bell-shaped resonator characteristics, to analyze the influence of the main error sources, and to set up an error model for the BVG. The error sources are classified by their error propagation characteristics, and the compensation method is presented based on the error model. Finally, using the error model and compensation method, the BVG is calibrated experimentally, including rough compensation, temperature and bias compensation, scale factor compensation and noise filtering. The experimentally obtained bias instability improves from 20.5°/h to 4.7°/h, the random walk from 2.8°/√h to 0.7°/√h, and the nonlinearity from 0.2% to 0.03%. Based on the error compensation, it is shown that there is a good linear relationship between the sensing signal and the angular velocity, suggesting that the BVG is a good candidate for the field of low and medium rotational speed measurement. PMID:26393593

  7. Identification of Carbon loss in the production of pilot-scale Carbon nanotube using gauze reactor

    NASA Astrophysics Data System (ADS)

    Wulan, P. P. D. K.; Purwanto, W. W.; Yeni, N.; Lestari, Y. D.

    2018-03-01

    Carbon loss of more than 65% was the major obstacle in Carbon Nanotube (CNT) production using a pilot-scale gauze reactor. The initial carbon loss calculation in this study gave 27.64%. The carbon loss calculation was then refined with several correction parameters: product flow rate measurement error, feed flow rate changes, gas product composition by Gas Chromatography Flame Ionization Detector (GC FID), and carbon particulates captured by glass fiber filters. Error in the product flow rate due to measurement with bubble soap contributes a carbon loss calculation error of about ±4.14%. Changes in the feed flow rate due to CNT growth in the reactor reduce the carbon loss by 4.97%. The detection of secondary hydrocarbons with GC FID during the CNT production process reduces the carbon loss by a further 5.14%. Particulates carried by the product stream are very few and correct the carbon loss by merely 0.05%. Taking all these factors into account, the carbon loss in this study is (17.21 ± 4.14)%. Assuming that 4.14% of the carbon loss is due to the measurement error of the product flow rate, the carbon loss is 13.07%, meaning that more than 57% of the carbon loss in this study has been identified.

  8. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon

    2013-01-01

    We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Markov Chain Monte Carlo (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum, and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales; it is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors, and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on M_ν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated errors indeed rise rapidly towards smaller scales in the non-linear regime as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(M_ν) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.

  9. A non-perturbative exploration of the high energy regime in Nf=3 QCD. ALPHA Collaboration

    NASA Astrophysics Data System (ADS)

    Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer

    2018-05-01

    Using continuum-extrapolated lattice data we trace a family of running couplings in three-flavour QCD over a large range of scales, from about 4 to 128 GeV. The scale is set by the finite space-time volume, so that recursive finite-size techniques can be applied, and Schrödinger functional (SF) boundary conditions enable direct simulations in the chiral limit. Compared to earlier studies we have improved on both statistical and systematic errors. Using the SF coupling to implicitly define a reference scale 1/L_0 ≈ 4 GeV through \bar{g}^2(L_0) = 2.012, we quote L_0 Λ^{N_f=3}_{\overline{MS}} = 0.0791(21). This error is dominated by statistics; in particular, the remnant perturbative uncertainty is negligible and very well controlled, by connecting to infinite renormalization scale from the different scales 2^n/L_0 for n = 0, 1, ..., 5. An intermediate step in this connection may involve any member of a one-parameter family of SF couplings. This provides an excellent opportunity for tests of perturbation theory, some of which have been published in a letter (ALPHA collaboration, M. Dalla Brida et al., Phys. Rev. Lett. 117(18):182001, 2016). The results indicate that for our target precision of 3 per cent in L_0 Λ^{N_f=3}_{\overline{MS}}, a reliable estimate of the truncation error requires non-perturbative data for a sufficiently large range of values of α_s = \bar{g}^2/(4π). In the present work we reach this precision by studying scales that vary by a factor 2^5 = 32, reaching down to α_s ≈ 0.1. We here provide the details of our analysis and an extended discussion.

  10. The psychometric properties of the Perceived Stress Scale-10 among patients with systemic lupus erythematosus.

    PubMed

    Mills, S D; Azizoddin, D; Racaza, G Z; Wallace, D J; Weisman, M H; Nicassio, P M

    2017-10-01

    Objective Systemic lupus erythematosus (SLE) is a chronic, multisystem autoimmune disease characterized by periods of remission and recurrent flares, which have been associated with stress. Despite the significance of stress in this disease, the Perceived Stress Scale-10 has yet to be psychometrically evaluated in patients with SLE. Methods Exploratory factor analysis was used to examine the structural validity of the Perceived Stress Scale-10 among patients with SLE (N = 138) receiving medical care at Cedars-Sinai Medical Center. Cronbach's coefficient alpha was used to examine internal consistency reliability, and Pearson product-moment correlations were used to examine convergent validity with measures of anxiety, depression, helplessness, and disease activity. Results Exploratory factor analysis provided support for a two-factor structure (comparative fit index = .95; standardized root mean residual = .04; root mean square error of approximation = .08). Internal consistency reliability was good for both factors (α = .84 and .86). Convergent validity was evidenced via significant correlations with measures of anxiety, depression, and helplessness. There were no significant correlations with the measure of disease activity. Conclusion The Perceived Stress Scale-10 can be used to examine perceived stress among patients with SLE.

  11. Examining the Factor Structure and Reliability of the Safe Patient Handling Perception Scale: An Initial Validation Study.

    PubMed

    White-Heisel, Regina; Canfield, James P; Young-Hughes, Sadie

    Perceiving imminent safe patient handling and movement (SPH&M) dangers may reduce musculoskeletal (MSK) injuries for nurses in the workplace. The purpose of this study is to develop and validate the 17-item Safe Patient Handling Perception Scale (SPHPS) as an evaluation instrument assessing perceived risk of MSK injury based on SPH&M knowledge, practice, and resource accessibility in the workplace. Data were collected from a convenience sample (N = 117) of nursing employees at a Veterans Affairs Medical Center. Factor analysis identified three factors: knowledge, practice, and accessibility. The SPHPS demonstrated high levels of reliability, supported by acceptable alpha scores (SPH&M knowledge [α = .866], SPH&M practices [α = .901], and access to SPH&M resources [α = .855]) and by relatively low standard error of measurement (SEM) scores. The study outcomes suggest that the SPHPS is a valid and reliable tool for measuring participants' perceived risk factors for MSK injuries.

  12. Development of a scale to measure consumer perception of the risks involved in consuming raw vegetable salad in full-service restaurants.

    PubMed

    Danelon, Mariana Schievano; Salay, Elisabete

    2012-12-01

    The large number of meals eaten away from home represents an opportunity to promote the consumption of vegetables in this context. However, the perception of risk may interfere with food consumption behavior. The objective of this research was to develop a scale to measure consumer perception of the risks involved in consuming raw vegetable salad in full-service restaurants. The following research steps were carried out: item elaboration; content validation; scale purification (item-total correlation, internal consistency and exploratory factor analysis); and construct validation (confirmatory factor analysis). Non-probabilistic samples of consumers (672 individuals in total) were interviewed in the city of Campinas, Brazil. Analyses were carried out using the Predictive Analytics Software 18.0 and LISREL 8.80. The final scale contained 26 items with an adequate content validity index (0.97) and Cronbach's alpha coefficient (0.93). The confirmatory factor analysis validated a model with six risk-type factors: physical, psychological, social, time, financial and performance (chi-square/degrees of freedom = 2.29, root mean square error of approximation [RMSEA] = 0.060 and comparative fit index [CFI] = 0.98). The scale developed presented satisfactory reliability and validity results and could therefore be employed in further studies. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. The Iatroref study: medical errors are associated with symptoms of depression in ICU staff but not burnout or safety culture.

    PubMed

    Garrouste-Orgeas, Maité; Perrin, Marion; Soufir, Lilia; Vesin, Aurélien; Blot, François; Maxime, Virginie; Beuret, Pascal; Troché, Gilles; Klouche, Kada; Argaud, Laurent; Azoulay, Elie; Timsit, Jean-François

    2015-02-01

    Staff behaviours to optimise patient safety may be influenced by burnout, depression and the strength of the safety culture. We evaluated whether burnout, symptoms of depression and safety culture affected the frequency of medical errors and adverse events (selected using Delphi techniques) in ICUs. Prospective, observational, multicentre (31 ICUs) study from August 2009 to December 2011. Burnout, depression symptoms and safety culture were evaluated using the Maslach Burnout Inventory (MBI), the CES-Depression scale and the Safety Attitudes Questionnaire, respectively. Of 1,988 staff members, 1,534 (77.2%) participated. The frequencies of medical errors and adverse events were 804.5/1,000 and 167.4/1,000 patient-days, respectively. Burnout prevalence was 3% or 40%, depending on the definition (severe emotional exhaustion, depersonalisation and low personal accomplishment; or MBI score greater than -9). Depression symptoms were identified in 62/330 (18.8%) physicians and 188/1,204 (15.6%) nurses/nursing assistants. The median safety culture score was 60.7/100 [56.8-64.7] in physicians and 57.5/100 [52.4-61.9] in nurses/nursing assistants. Depression symptoms were an independent risk factor for medical errors; burnout was not associated with medical errors, and the safety culture score had only a limited influence on them. Other independent risk factors for medical errors or adverse events were related to ICU organisation (40% of ICU staff off work on the previous day), staff (specific safety training) and patients (workload). One-on-one training of junior physicians during duties and the existence of a hospital risk-management unit were associated with lower risks. The frequency of the selected medical errors in ICUs was high and was increased when staff members had symptoms of depression.

  14. Reliability and Validity of the Physical Education Activities Scale.

    PubMed

    Thomason, Diane L; Feng, Du

    2016-06-01

    Measuring adolescent perceptions of physical education (PE) activities is necessary in understanding determinants of school PE activity participation. This study assessed reliability and validity of the Physical Education Activities Scale (PEAS), a 41-item visual analog scale measuring high school adolescent perceptions of school PE activity participation. Adolescents (N = 529) from the Pacific Northwest aged 15-19 in grades 9-12 participated in the study. Construct validity was assessed using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Measurement invariance across sex groups was tested by multiple-group CFA. Internal consistency reliability was analyzed using Cronbach's alpha. Inter-subscale correlations (Pearson's r) were calculated for latent factors and observed subscale scores. Exploratory factor analysis suggested a 3-factor solution explaining 43.4% of the total variance. Confirmatory factor analysis showed the 3-factor model fit the data adequately (comparative fit index [CFI] = 0.90, Tucker-Lewis index [TLI] = 0.89, root mean squared error of approximation [RMSEA] = 0.063). Factorial invariance was supported. Cronbach's alpha of the total PEAS was α = 0.92, and for subscales α ranged from 0.65 to 0.92. Independent t-tests showed significantly higher mean scores for boys than girls on the total scale and all subscales. Findings provide psychometric support for using the PEAS for examining adolescent's psychosocial and environmental perceptions to participating in PE activities. © 2016, American School Health Association.

  15. AQMEII3: the EU and NA regional scale program of the ...

    EPA Pesticide Factsheets

    The presentation builds on the work presented last year at the 14th CMAS meeting and is applied to the work performed in the context of the AQMEII-HTAP collaboration. The analysis is conducted within the framework of the third phase of AQMEII (Air Quality Model Evaluation International Initiative) and encompasses the gauging of model performance through measurement-to-model comparison, error decomposition and time series analysis of the models' biases. Through the comparison of several regional-scale chemistry transport modelling systems applied to simulate meteorology and air quality over two continental areas, this study aims at i) apportioning the error to the responsible processes through time-scale analysis, ii) helping to detect the causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigation. The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error into its constituent parts (bias, variance and covariance) can help assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the previous phases of AQMEII.
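
    The bias/variance/covariance decomposition referred to here follows from expanding the mean square error of a model series against observations. A minimal sketch of that decomposition (the further separation into long-term, synoptic, diurnal and intra-day timescales, done in AQMEII by filtering the series first, is omitted):

    ```python
    import numpy as np

    def error_apportionment(model, obs):
        """Split MSE into bias^2, variance and covariance parts:
        MSE = (mb - ob)^2 + (sm - so)^2 + 2*sm*so*(1 - r)."""
        mb, ob = model.mean(), obs.mean()
        sm, so = model.std(), obs.std()
        r = np.corrcoef(model, obs)[0, 1]
        return (mb - ob) ** 2, (sm - so) ** 2, 2.0 * sm * so * (1.0 - r)
    ```

    The three terms sum exactly to the MSE: a mean offset, a mismatch in the amplitude of variability, and a phase/timing mismatch, which is what allows the error to be attributed to different processes.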

  16. Relationships between evidence-based practice, quality improvement and clinical error experience of nurses in Korean hospitals.

    PubMed

    Hwang, Jee-In; Park, Hyeoun-Ae

    2015-07-01

    This study investigated individual and work-related factors associated with nurses' perceptions of evidence-based practice (EBP) and quality improvement (QI), and the relationships between evidence-based practice, quality improvement and clinical errors. Understanding the factors affecting evidence-based practice and quality improvement activities, and their relationships with clinical errors, is important for designing strategies to promote evidence-based practice, quality improvement and patient safety. A cross-sectional survey was conducted with 594 nurses in two Korean teaching hospitals using the Evidence-Based Practice Questionnaire and a quality improvement scale developed in this study. Four hundred and forty-three nurses (74.6%) returned the completed survey. Nurses' ages and educational levels were significantly associated with evidence-based practice scores, whereas age and job position were associated with quality improvement scores. There were positive, moderate correlations between evidence-based practice and quality improvement scores. Nurses who had not made any clinical errors during the past 12 months had significantly higher quality improvement skills scores than those who had. The findings indicate the need for educational support regarding evidence-based practice and quality improvement for younger staff nurses who do not hold master's degrees. Enhancing quality improvement skills may reduce clinical errors. Nurse managers should consider the characteristics of their staff when implementing educational and clinical strategies for evidence-based practice and quality improvement. © 2013 John Wiley & Sons Ltd.

  17. Developing a Psychometric Instrument to Measure Physical Education Teachers' Job Demands and Resources.

    PubMed

    Zhang, Tan; Chen, Ang

    2017-01-01

    Based on the job demands-resources model, the study developed and validated an instrument that measures physical education teachers' job demands-resources perception. Expert review established content validity, with an average item rating of 3.6/5.0. Construct validity and reliability were determined with a teacher sample (n = 397). Exploratory factor analysis established a five-dimension construct structure matching the theoretical construct deliberated in the literature. The composite reliability scores for the five dimensions range from .68 to .83. Validity coefficients (intraclass correlation coefficients) are .69 for the job resources items and .82 for the job demands items. Inter-scale correlation coefficients range from -.32 to .47. Confirmatory factor analysis confirmed the construct validity with high dimensional factor loadings (ranging from .47 to .84 for the job resources scale and from .50 to .85 for the job demands scale) and adequate model fit indexes (root mean square error of approximation = .06). The instrument provides a tool to measure physical education teachers' perception of their working environment.

  18. Developing a Psychometric Instrument to Measure Physical Education Teachers’ Job Demands and Resources

    PubMed Central

    Zhang, Tan; Chen, Ang

    2017-01-01

    Based on the job demands–resources model, the study developed and validated an instrument that measures physical education teachers’ job demands–resources perception. Expert review established content validity, with an average item rating of 3.6/5.0. Construct validity and reliability were determined with a teacher sample (n = 397). Exploratory factor analysis established a five-dimension construct structure matching the theoretical construct deliberated in the literature. The composite reliability scores for the five dimensions range from .68 to .83. Validity coefficients (intraclass correlation coefficients) are .69 for the job resources items and .82 for the job demands items. Inter-scale correlation coefficients range from −.32 to .47. Confirmatory factor analysis confirmed the construct validity with high dimensional factor loadings (ranging from .47 to .84 for the job resources scale and from .50 to .85 for the job demands scale) and adequate model fit indexes (root mean square error of approximation = .06). The instrument provides a tool to measure physical education teachers’ perception of their working environment. PMID:29200808

  19. Evaluation and error apportionment of an ensemble of ...

    EPA Pesticide Factsheets

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and North American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect the causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigation. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impact

  20. Validation of a pre-existing safety climate scale for the Turkish furniture manufacturing industry.

    PubMed

    Akyuz, Kadri Cemil; Yildirim, Ibrahim; Gungor, Celal

    2018-03-22

    Understanding the safety climate level is essential to implementing a proactive safety program. The objective of this study is to explore the possibility of adopting a safety climate scale for the Turkish furniture manufacturing industry, since no such scale has been available. The questionnaire was administered to 783 subjects. Confirmatory factor analysis (CFA) tested a pre-existing safety scale's fit to the industry. The CFA indicated that the model structure presents an unsatisfactory fit to the data (χ² = 2033.4, df = 314, p ≤ 0.001; root mean square error of approximation = 0.08, normed fit index = 0.65, Tucker-Lewis index = 0.65, comparative fit index = 0.69, parsimony goodness-of-fit index = 0.68). The results suggest that a new scale should be developed and validated to measure the safety climate level in the Turkish furniture manufacturing industry. Due to the hierarchical structure of organizations, future studies should adopt a multilevel approach in their exploratory factor analyses while developing a new scale.

  1. Pattern recognition invariant under changes of scale and orientation

    NASA Astrophysics Data System (ADS)

    Arsenault, Henri H.; Parent, Sebastien; Moisan, Sylvain

    1997-08-01

    We have used a modified method proposed by Neiberg and Casasent to successfully classify five kinds of military vehicles. The method uses a wedge filter to achieve scale invariance, with each target represented by a line in a multi-dimensional feature space covering out-of-plane orientations over 360 degrees around a vertical axis. The images were not binarized, but were filtered in a preprocessing step to reduce aliasing. The feature vectors were normalized and orthogonalized by means of a neural network. Out-of-plane rotations of 360 degrees and scale changes by a factor of four were considered. Error-free classification was achieved.

  2. A Comparison of Three Methods for Computing Scale Score Conditional Standard Errors of Measurement. ACT Research Report Series, 2013 (7)

    ERIC Educational Resources Information Center

    Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu

    2013-01-01

    Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…

  3. Development of a scale of executive functioning for the RBANS.

    PubMed

    Spencer, Robert J; Kitchen Andren, Katherine A; Tolle, Kathryn A

    2018-01-01

    The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) is a cognitive battery that contains scales for several cognitive abilities, but no scale in the instrument is exclusively dedicated to executive functioning. Although the subtests allow for the observation of executive-type errors, each error has a fairly low base rate, and healthy and clinical normative data are lacking on the frequency of these types of errors, making their significance difficult to interpret in isolation. The aim of this project was to create an RBANS executive errors scale (RBANS EE) with items comprising qualitatively dysexecutive errors committed throughout the test. Participants included Veterans referred for outpatient neuropsychological testing. Items were initially selected based on the theoretical literature and were retained based on item-total correlations. The RBANS EE (a percentage calculated by dividing the number of dysexecutive errors by the total number of responses) was moderately related to each of seven established measures of executive functioning and was strongly predictive of dichotomous classification of executive impairment. Thus, the scale has solid concurrent validity, justifying its use as a supplementary scale. The RBANS EE requires no additional administration time and can provide a quantified measure of otherwise unmeasured aspects of executive functioning.

  4. An application of the driver behavior questionnaire to Chinese carless young drivers.

    PubMed

    Zhang, Qian; Jiang, Zuhua; Zheng, Dongpeng; Wang, Yifan; Man, Dong

    2013-01-01

    Carless young drivers are those drivers aged between 18 and 25 years who have a driver's license but seldom have opportunities to practice their driving skills because they do not have their own cars. Due to China's relatively low private car ownership, many young drivers become carless young drivers after licensure, and the safety issue associated with them has become a matter of great concern in China. Because few studies have examined the driving behaviors of these drivers, this study aims to utilize the Driver Behavior Questionnaire (DBQ) to investigate the self-reported driving behaviors of Chinese carless young drivers. A total of 523 Chinese carless young drivers (214 females, 309 males) with an average age of 21.91 years completed a questionnaire including the 27-item DBQ and demographics. The data were first randomized into 2 subsamples for factor analysis and then combined together for the subsequent analyses. Both an exploratory factor analysis (EFA, n = 174) and a confirmatory factor analysis (CFA, n = 349) were performed to investigate the factor structure of the DBQ. Correlation analysis was conducted to examine the relationships between the demographics and the DBQ scale variables. Multivariate linear regression and logistic regression were performed to investigate the prediction of crash involvement in the previous year from the DBQ scales. The EFA produced a 4-factor structure identified as errors, violations, attention lapses, and memory lapses, and the CFA revealed a good model fit after the removal of one item with a low factor loading and the permission of error covariance between some items. The Chinese carless young drivers reported a comparatively low level of aberrant driving behaviors. The 3 most frequently reported behaviors were all lapses and the 3 least reported were all violations. Gender was the only significant predictor of the 2 lapse scales, and lifetime mileage was the only significant predictor of the violations scale. Only the violations factor was found to be significantly predictive of crash involvement in the previous year. The current study provides evidence that the DBQ can successfully be utilized to examine the self-reported driving behaviors of Chinese carless young drivers. However, the factor structure as well as the level of reported aberrant driving behaviors suggests that Chinese carless young drivers are a special population and thus should be treated differently when interventions are performed. Supplemental materials are available for this article.
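
    For readers who want to replicate the split-sample workflow on their own DBQ data, a minimal sketch of the EFA step using scikit-learn (the responses here are simulated placeholders; the CFA step would normally be done in a structural equation modeling package such as lavaan or semopy):

      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      rng = np.random.default_rng(0)
      items = rng.integers(1, 6, size=(174, 27)).astype(float)  # placeholder EFA subsample

      efa = FactorAnalysis(n_components=4, rotation="varimax").fit(items)
      loadings = efa.components_.T          # 27 items x 4 factors
      print(np.round(loadings[:5], 2))      # inspect loadings for the first few items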

  5. Downscaling Land Surface Temperature in Complex Regions by Using Multiple Scale Factors with Adaptive Thresholds

    PubMed Central

    Yang, Yingbao; Li, Xiaolong; Pan, Xin; Zhang, Yong; Cao, Chen

    2017-01-01

    Many downscaling algorithms have been proposed to address the issue of coarse-resolution land surface temperature (LST) derived from available satellite-borne sensors. However, few studies have focused on improving LST downscaling in urban areas with several mixed surface types. In this study, LST was downscaled by a multiple linear regression model between LST and multiple scale factors in mixed areas with three or four surface types. The correlation coefficients (CCs) between LST and the scale factors were used to assess the importance of the scale factors within a moving window. CC thresholds determined which factors participated in the fitting of the regression equation. The proposed downscaling approach, which involves an adaptive selection of the scale factors, was evaluated using the LST derived from four Landsat 8 thermal images of Nanjing City in different seasons. Results of the visual and quantitative analyses show that the proposed approach achieves relatively satisfactory downscaling results on 11 August, with a coefficient of determination and root-mean-square error of 0.87 and 1.13 °C, respectively. Relative to other approaches, our approach shows similar accuracy and is available in all seasons. The best (worst) availability occurred in regions of vegetation (water). Thus, the approach is an efficient and reliable LST downscaling method. Future tasks include reliable LST downscaling in challenging regions and the application of our model at middle and low spatial resolutions. PMID:28368301
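
    A schematic of the adaptive factor selection within one moving window, assuming the coarse LST and scale-factor grids are already co-registered (the array shapes and CC threshold are illustrative, not the paper's settings):

      import numpy as np

      def window_regression(lst_window, factor_windows, cc_threshold=0.5):
          """Fit LST against only those scale factors whose |CC| with LST inside
          the window exceeds the threshold; returns kept factor indices and the
          regression coefficients (the last coefficient is the intercept)."""
          y = lst_window.ravel()
          kept, cols = [], []
          for i, f in enumerate(factor_windows):
              cc = np.corrcoef(f.ravel(), y)[0, 1]
              if abs(cc) > cc_threshold:
                  kept.append(i)
                  cols.append(f.ravel())
          X = np.column_stack(cols + [np.ones_like(y)])
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          return kept, coef  # apply coef to the fine-resolution factors to downscale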

  6. Dynamical Mass Measurements of Contaminated Galaxy Clusters Using Support Distribution Machines

    NASA Astrophysics Data System (ADS)

    Ntampaka, Michelle; Trac, Hy; Sutherland, Dougal; Fromenteau, Sebastien; Poczos, Barnabas; Schneider, Jeff

    2018-01-01

    We study dynamical mass measurements of galaxy clusters contaminated by interlopers and show that a modern machine learning (ML) algorithm can predict masses by better than a factor of two compared to a standard scaling relation approach. We create two mock catalogs from Multidark's publicly available N-body MDPL1 simulation, one with perfect galaxy cluster membership information and the other where a simple cylindrical cut around the cluster center allows interlopers to contaminate the clusters. In the standard approach, we use a power-law scaling relation to infer cluster mass from galaxy line-of-sight (LOS) velocity dispersion. Assuming perfect membership knowledge, this unrealistic case produces a wide fractional mass error distribution, with a width E=0.87. Interlopers introduce additional scatter, significantly widening the error distribution further (E=2.13). We employ the support distribution machine (SDM) class of algorithms to learn from distributions of data to predict single values. Applied to distributions of galaxy observables such as LOS velocity and projected distance from the cluster center, SDM yields better than a factor-of-two improvement (E=0.67) for the contaminated case. Remarkably, SDM applied to contaminated clusters is better able to recover masses than even the scaling relation approach applied to uncontaminated clusters. We show that the SDM method more accurately reproduces the cluster mass function, making it a valuable tool for employing cluster observations to evaluate cosmological models.
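
    For context, the baseline power-law scaling relation is a straight-line fit in log space, log M = α·log σ_v + β; a minimal sketch on mock numbers (the slope, intercept, and scatter are illustrative, not the paper's):

      import numpy as np

      rng = np.random.default_rng(1)
      log_sigma = rng.uniform(2.4, 3.1, 500)                       # log10 LOS dispersion
      log_mass = 3.0 * log_sigma + 5.0 + rng.normal(0, 0.2, 500)   # mock masses with scatter

      alpha, beta = np.polyfit(log_sigma, log_mass, 1)             # fit the scaling relation
      eps = 10 ** (alpha * log_sigma + beta - log_mass) - 1        # fractional mass error
      print(f"alpha = {alpha:.2f}, error width = {eps.std():.2f}")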

  7. Universal Capacitance Model for Real-Time Biomass in Cell Culture.

    PubMed

    Konakovsky, Viktor; Yagtu, Ali Civan; Clemens, Christoph; Müller, Markus Michael; Berger, Martina; Schlatter, Stefan; Herwig, Christoph

    2015-09-02

    Capacitance probes have the potential to revolutionize bioprocess control due to their safe and robust use and ability to detect even the smallest capacitors in the form of biological cells. Several techniques have evolved to model biomass statistically; however, there are problems with model transfer between cell lines and process conditions. For linear models, the errors of transferred models in the declining phase of the culture are around +100% or worse, causing unnecessary delays with test runs during bioprocess development. The goal of this work was to develop one single universal model which can be adapted, by considering a potentially mechanistic factor, to estimate biomass in yet untested clones and scales. The novelty of this work is a methodology to select sensitive frequencies to build a statistical model which can be shared among fermentations with an error between 9% and 38% (mean error around 20%) for the whole process, including the declining phase. A simple linear factor was found to be responsible for the transferability of biomass models between cell lines, indicating a link to their phenotype or physiology.

  8. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  9. Comparison of Test and Finite Element Analysis for Two Full-Scale Helicopter Crash Tests

    NASA Technical Reports Server (NTRS)

    Annett, Martin S.; Horta, Lucas G.

    2011-01-01

    Finite element analyses have been performed for two full-scale crash tests of an MD-500 helicopter. The first crash test was conducted to evaluate the performance of a composite deployable energy absorber under combined flight loads. In the second crash test, the energy absorber was removed to establish the baseline loads. The use of an energy absorbing device reduced the impact acceleration levels by a factor of three. Accelerations and kinematic data collected from the crash tests were compared to analytical results. Details of the full-scale crash tests and development of the system-integrated finite element model are briefly described along with direct comparisons of acceleration magnitudes and durations for the first full-scale crash test. Because load levels were significantly different between tests, models developed for the purposes of predicting the overall system response with external energy absorbers were not adequate under more severe conditions seen in the second crash test. Relative error comparisons were inadequate to guide model calibration. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used for the second full-scale crash test. The calibrated parameter set reduced 2-norm prediction error by 51% but did not improve impact shape orthogonality.

  10. Linear Parameter Varying Control for Actuator Failure

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    A robust linear parameter varying (LPV) control synthesis is carried out for a HiMAT vehicle subject to loss of control effectiveness. The scheduling parameter is selected to be a function of the estimates of the control effectiveness factors. The estimates are provided on-line by a two-stage Kalman estimator. The inherent conservatism of the LPV design is reduced through the use of a scaling factor on the uncertainty block that represents the estimation errors of the effectiveness factors. Simulations of the controlled system with the on-line estimator show that superior fault tolerance can be achieved.

  11. The effect of photometric redshift uncertainties on galaxy clustering and baryonic acoustic oscillations

    NASA Astrophysics Data System (ADS)

    Chaves-Montero, Jonás; Angulo, Raúl E.; Hernández-Monteagudo, Carlos

    2018-07-01

    In the upcoming era of high-precision galaxy surveys, it becomes necessary to understand the impact of redshift uncertainties on cosmological observables. In this paper we explore the effect of sub-percent photometric redshift errors (photo-z errors) on galaxy clustering and baryonic acoustic oscillations (BAOs). Using analytic expressions and results from 1000 N-body simulations, we show how photo-z errors modify the amplitude of moments of the 2D power spectrum, their variances, the amplitude of BAOs, and the cosmological information in them. We find that (a) photo-z errors suppress the clustering on small scales, increasing the relative importance of shot noise, and thus reducing the interval of scales available for BAO analyses; (b) photo-z errors decrease the smearing of BAOs due to non-linear redshift-space distortions (RSDs) by giving less weight to line-of-sight modes; and (c) photo-z errors (and small-scale RSD) induce a scale dependence on the information encoded in the BAO scale, and that reduces the constraining power on the Hubble parameter. Using these findings, we propose a template that extracts unbiased cosmological information from samples with photo-z errors with respect to cases without them. Finally, we provide analytic expressions to forecast the precision in measuring the BAO scale, showing that spectro-photometric surveys will measure the expansion history of the Universe with a precision competitive to that of spectroscopic surveys.

  12. Absolute color scale for improved diagnostics with wavefront error mapping.

    PubMed

    Smolek, Michael K; Klyce, Stephen D

    2007-11-01

    Wavefront data are expressed in micrometers and referenced to the pupil plane, but current methods to map wavefront error lack standardization. Many use normalized or floating scales that may confuse the user by generating ambiguous, noisy, or varying information. An absolute scale that combines consistent clinical information with statistical relevance is needed for wavefront error mapping. The color contours should correspond better to current corneal topography standards to improve clinical interpretation. Retrospective analysis of wavefront error data. Historic ophthalmic medical records. Topographic modeling system topographical examinations of 120 corneas across 12 categories were used. Corneal wavefront error data in micrometers from each topography map were extracted at 8 Zernike polynomial orders and for 3 pupil diameters expressed in millimeters (3, 5, and 7 mm). Both total aberrations (orders 2 through 8) and higher-order aberrations (orders 3 through 8) were expressed in the form of frequency histograms to determine the working range of the scale across all categories. The standard deviation of the mean error of normal corneas determined the map contour resolution. Map colors were based on corneal topography color standards and on the ability to distinguish adjacent color contours through contrast. Higher-order and total wavefront error contour maps for different corneal conditions. An absolute color scale was produced that encompassed a range of ±6.5 µm and a contour interval of 0.5 µm. All aberrations in the categorical database were plotted with no loss of clinical information necessary for classification. In the few instances where mapped information was beyond the range of the scale, the type and severity of aberration remained legible. When wavefront data are expressed in micrometers, this absolute scale facilitates the determination of the severity of aberrations present compared with a floating scale, particularly for distinguishing normal from abnormal levels of wavefront error. The new color palette makes it easier to identify disorders. The corneal mapping method can be extended to mapping whole eye wavefront errors. When refraction data are expressed in diopters, the previously published corneal topography scale is suggested.
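
    In plotting terms, an absolute scale of ±6.5 µm with 0.5 µm contours is a fixed set of bin edges shared by every map; a minimal matplotlib sketch (the colormap is a stand-in, not the published palette):

      import numpy as np
      import matplotlib.pyplot as plt
      from matplotlib.colors import BoundaryNorm

      edges = np.arange(-6.5, 6.5 + 0.5, 0.5)            # 26 contour bins, in micrometers
      norm = BoundaryNorm(edges, ncolors=len(edges) - 1)
      cmap = plt.get_cmap("turbo", len(edges) - 1)       # stand-in palette

      wavefront = np.random.default_rng(2).normal(0.0, 1.5, (64, 64))  # placeholder map (um)
      plt.imshow(wavefront, cmap=cmap, norm=norm)
      plt.colorbar(label="wavefront error (um)", ticks=edges[::2])
      plt.show()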

  13. Sources of errors and uncertainties in the assessment of forest soil carbon stocks at different scales-review and recommendations.

    PubMed

    Vanguelova, E I; Bonifacio, E; De Vos, B; Hoosbeek, M R; Berger, T W; Vesterdal, L; Armolaitis, K; Celi, L; Dinca, L; Kjønaas, O J; Pavlenda, P; Pumpanen, J; Püttsepp, Ü; Reidy, B; Simončič, P; Tobin, B; Zhiyanski, M

    2016-11-01

    Spatially explicit knowledge of recent and past soil organic carbon (SOC) stocks in forests will improve our understanding of the effect of human- and non-human-induced changes on forest C fluxes. For SOC accounting, a minimum detectable difference must be defined in order to adequately determine temporal changes and spatial differences in SOC. This requires sufficiently detailed data to predict SOC stocks at appropriate scales within the required accuracy so that only significant changes are accounted for. When designing sampling campaigns, taking into account factors influencing SOC spatial and temporal distribution (such as soil type, topography, climate and vegetation) are needed to optimise sampling depths and numbers of samples, thereby ensuring that samples accurately reflect the distribution of SOC at a site. Furthermore, the appropriate scales related to the research question need to be defined: profile, plot, forests, catchment, national or wider. Scaling up SOC stocks from point sample to landscape unit is challenging, and thus requires reliable baseline data. Knowledge of the associated uncertainties related to SOC measures at each particular scale and how to reduce them is crucial for assessing SOC stocks with the highest possible accuracy at each scale. This review identifies where potential sources of errors and uncertainties related to forest SOC stock estimation occur at five different scales-sample, profile, plot, landscape/regional and European. Recommendations are also provided on how to reduce forest SOC uncertainties and increase efficiency of SOC assessment at each scale.

  14. Short version of the Depression Anxiety Stress Scale-21: is it valid for Brazilian adolescents?

    PubMed

    Silva, Hítalo Andrade da; Passos, Muana Hiandra Pereira Dos; Oliveira, Valéria Mayaly Alves de; Palmeira, Aline Cabral; Pitangui, Ana Carolina Rodarti; Araújo, Rodrigo Cappato de

    2016-01-01

    To evaluate the interday reproducibility, agreement, and construct validity of the short version of the Depression Anxiety Stress Scale-21 applied to adolescents. The sample consisted of adolescents of both sexes, aged between 10 and 19 years, who were recruited from schools and sports centers. Construct validity was assessed by exploratory factor analysis, and reliability was calculated for each construct using the intraclass correlation coefficient, standard error of measurement, and minimum detectable change. The factor analysis, combining the items corresponding to anxiety and stress in a single factor and depression in a second factor, showed a better fit for all 21 items, with higher factor loadings on their respective constructs. The reproducibility values for depression were an intraclass correlation coefficient of 0.86, a standard error of measurement of 0.80, and a minimum detectable change of 2.22; for anxiety/stress, an intraclass correlation coefficient of 0.82, a standard error of measurement of 1.80, and a minimum detectable change of 4.99. The short version of the Depression Anxiety Stress Scale-21 showed excellent reliability and strong internal consistency. The two-factor model condensing the constructs anxiety and stress into a single factor was the most acceptable for the adolescent population.
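
    The reported reliability figures are linked by two standard formulas, SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM; a quick sketch recovering the published MDC values from the published SEMs:

      import math

      def mdc95(sem: float) -> float:
          """Minimum detectable change (95% confidence) from the standard error of measurement."""
          return 1.96 * math.sqrt(2) * sem

      print(round(mdc95(0.80), 2))  # depression: 2.22, as reported
      print(round(mdc95(1.80), 2))  # anxiety/stress: 4.99, as reported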

  15. Evaluation of SMART sensor displays for multidimensional precision control of Space Shuttle remote manipulator

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.; Brown, J. W.; Lewis, J. L.

    1982-01-01

    An enhanced proximity sensor and display system was developed at the Jet Propulsion Laboratory (JPL) and tested on the full scale Space Shuttle Remote Manipulator at the Johnson Space Center (JSC) Manipulator Development Facility (MDF). The sensor system, integrated with a four-claw end effector, measures range error up to 6 inches and pitch and yaw alignment errors within ±15 deg., and displays error data on both graphic and numeric displays. The errors are referenced to the end effector control axes through appropriate data processing by a dedicated microcomputer acting on the sensor data in real time. Both display boxes contain a green lamp which indicates whether the combination of range, pitch, and yaw errors will assure a successful grapple. More than 200 test runs were completed in early 1980 by three operators at JSC for grasping static and capturing slowly moving targets. The tests indicated that the use of graphic/numeric displays of proximity sensor information improves precision control of grasp/capture range by more than a factor of two for both static and dynamic grapple conditions.

  16. An adaptive filter method for spacecraft using gravity assist

    NASA Astrophysics Data System (ADS)

    Ning, Xiaolin; Huang, Panpan; Fang, Jiancheng; Liu, Gang; Ge, Shuzhi Sam

    2015-04-01

    Celestial navigation (CeleNav) has been successfully used during gravity assist (GA) flyby for orbit determination in many deep space missions. Due to spacecraft attitude errors, ephemeris errors, the camera center-finding bias, and the frequency of the images before and after the GA flyby, the statistics of measurement noise cannot be accurately determined, and yet have time-varying characteristics, which may introduce large estimation error and even cause filter divergence. In this paper, an unscented Kalman filter (UKF) with adaptive measurement noise covariance, called ARUKF, is proposed to deal with this problem. ARUKF scales the measurement noise covariance according to the changes in innovation and residual sequences. Simulations demonstrate that ARUKF is robust to the inaccurate initial measurement noise covariance matrix and time-varying measurement noise. The impact factors in the ARUKF are also investigated.
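
    One common innovation-based adaptation rule, sketched here generically rather than as the paper's exact ARUKF update, rescales the measurement noise covariance so that the filter-predicted innovation covariance matches what is actually observed over a sliding window:

      import numpy as np

      def adapt_measurement_noise(R, innovations, S_pred, floor=1e-9):
          """Scale R by the ratio of empirical to predicted innovation covariance.

          innovations : recent innovation vectors, shape (window, m)
          S_pred      : filter-predicted innovation covariance, shape (m, m)
          """
          S_emp = innovations.T @ innovations / len(innovations)
          ratio = np.trace(S_emp) / max(np.trace(S_pred), floor)
          return R * max(ratio, floor)  # inflate R when innovations run hot, shrink otherwise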

  17. Provider risk factors for medication administration error alerts: analyses of a large-scale closed-loop medication administration system using RFID and barcode.

    PubMed

    Hwang, Yeonsoo; Yoon, Dukyong; Ahn, Eun Kyoung; Hwang, Hee; Park, Rae Woong

    2016-12-01

    To determine the risk factors and rate of medication administration error (MAE) alerts by analyzing large-scale medication administration data and related error logs automatically recorded in a closed-loop medication administration system using radio-frequency identification and barcodes. The subject hospital adopted a closed-loop medication administration system. All medication administrations in the general wards were automatically recorded in real-time using radio-frequency identification, barcodes, and hand-held point-of-care devices. MAE alert logs recorded during the full year of 2012 were analyzed. We evaluated risk factors for MAE alerts, including administration time, order type, medication route, the number of medication doses administered, and factors associated with nurse practices, by logistic regression analysis. A total of 2,874,539 medication dose records from 30,232 patients (882.6 patient-years) were included in 2012. We identified 35,082 MAE alerts (1.22% of total medication doses). The MAE alerts were significantly related to administration at non-standard times [odds ratio (OR) 1.559, 95% confidence interval (CI) 1.515-1.604], emergency orders (OR 1.527, 95% CI 1.464-1.594), and the number of medication doses administered (OR 0.993, 95% CI 0.992-0.993). Medication route, nurse's employment duration, and working schedule were also significantly related. The MAE alert rate was 1.22% over the 1-year observation period in the hospital examined in this study. The MAE alerts were significantly related to administration time, order type, medication route, the number of medication doses administered, nurse's employment duration, and working schedule. The real-time closed-loop medication administration system contributed to improving patient safety by preventing potential MAEs.
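
    The odds ratios quoted above are exponentiated logistic regression coefficients; a minimal sketch with statsmodels on synthetic data (the predictor names and effect sizes are illustrative):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 5000
      X = np.column_stack([
          rng.integers(0, 2, n),    # non-standard administration time (0/1)
          rng.integers(0, 2, n),    # emergency order (0/1)
      ])
      log_odds = -3.0 + 0.44 * X[:, 0] + 0.42 * X[:, 1]    # exp(0.44) is an OR of ~1.55
      y = (rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))).astype(float)

      fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
      print(np.exp(fit.params[1:]))    # fitted odds ratios for the two predictors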

  18. WISC-R Examiner Errors: Cause for Concern.

    ERIC Educational Resources Information Center

    Slate, John R.; Chick, David

    1989-01-01

    Clinical psychology graduate students (N=14) administered Wechsler Intelligence Scale for Children-Revised. Found numerous scoring and mechanical errors that influenced full-scale intelligence quotient scores on two-thirds of protocols. Particularly prone to error were Verbal subtests of Vocabulary, Comprehension, and Similarities. Noted specific…

  19. The effect of short ground vegetation on terrestrial laser scans at a local scale

    NASA Astrophysics Data System (ADS)

    Fan, Lei; Powrie, William; Smethurst, Joel; Atkinson, Peter M.; Einstein, Herbert

    2014-09-01

    Terrestrial laser scanning (TLS) can record a large amount of accurate topographical information with a high spatial accuracy over a relatively short period of time. These features suggest it is a useful tool for topographical survey and surface deformation detection. However, the use of TLS to survey a terrain surface is still challenging in the presence of dense ground vegetation. The bare ground surface may not be illuminated due to signal occlusion caused by vegetation. This paper investigates vegetation-induced elevation error in TLS surveys at a local scale and its spatial pattern. An open, relatively flat area vegetated with dense grass was surveyed repeatedly under several scan conditions. A total station was used to establish an accurate representation of the bare ground surface. Local-highest-point and local-lowest-point filters were applied to the point clouds acquired for deriving vegetation height and vegetation-induced elevation error, respectively. The effects of various factors (for example, vegetation height, edge effects, incidence angle, scan resolution and location) on the error caused by vegetation are discussed. The results are of use in the planning and interpretation of TLS surveys of vegetated areas.
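
    A minimal sketch of the local-lowest-point filter used to approximate the bare ground surface: bin the point cloud into plan-view grid cells and keep the lowest return in each cell (the cell size is illustrative; the local-highest-point filter used for vegetation height is the same with the comparison reversed):

      import numpy as np

      def lowest_point_filter(points: np.ndarray, cell: float = 0.5) -> np.ndarray:
          """Keep the lowest-z point per (cell x cell) plan-view bin.

          points : array of shape (n, 3) with columns x, y, z
          """
          keys = np.floor(points[:, :2] / cell).astype(int)
          ground = {}
          for key, row in zip(map(tuple, keys), points):
              if key not in ground or row[2] < ground[key][2]:
                  ground[key] = row
          return np.array(list(ground.values()))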

  1. On scaling cosmogenic nuclide production rates for altitude and latitude using cosmic-ray measurements

    NASA Astrophysics Data System (ADS)

    Desilets, Darin; Zreda, Marek

    2001-11-01

    The wide use of cosmogenic nuclides for dating terrestrial landforms has prompted a renewed interest in characterizing the spatial distribution of terrestrial cosmic rays. Cosmic-ray measurements from neutron monitors, nuclear emulsions and cloud chambers have played an important role in developing new models for scaling cosmic-ray neutron intensities and, indirectly, cosmogenic production rates. Unfortunately, current scaling models overlook or misinterpret many of these data. In this paper, we describe factors that must be considered when using neutron measurements to determine scaling formulations for production rates of cosmogenic nuclides. Over the past 50 years, the overwhelming majority of nucleon flux measurements have been taken with neutron monitors. However, in order to use these data for scaling spallation reactions, the following factors must be considered: (1) sensitivity of instruments to muons and to background, (2) instrumental biases in energy sensitivity, (3) solar activity, and (4) the way of ordering cosmic-ray data in the geomagnetic field. Failure to account for these factors can result in discrepancies of as much as 7% in neutron attenuation lengths measured at the same location. This magnitude of deviation can result in an error on the order of 20% in cosmogenic production rates scaled from 4300 m to sea level. The shapes of latitude curves of nucleon flux also depend on these factors to a measurable extent, thereby causing additional uncertainties in cosmogenic production rates. The corrections proposed herein significantly improve our ability to transfer scaling formulations based on neutron measurements to scaling formulations applicable to spallation reactions, and, therefore, constitute an important advance in cosmogenic dating methodology.
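
    The quoted sensitivity can be checked with the standard exponential altitude scaling f = exp((x_sl − x_alt)/Λ), where x is atmospheric depth in g/cm² and Λ the attenuation length; a sketch with round typical values (the depths and Λ below are assumptions, not the paper's numbers):

      import math

      x_sl, x_alt = 1033.0, 610.0   # rough atmospheric depths: sea level vs ~4300 m (g/cm^2)

      def scaling_factor(attenuation_length: float) -> float:
          return math.exp((x_sl - x_alt) / attenuation_length)

      f0, f1 = scaling_factor(140.0), scaling_factor(140.0 * 1.07)  # 7% disagreement
      print(f"scaled production rate shifts by {abs(f1 - f0) / f0:.0%}")  # on the order of 20%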

  2. Psychometric evaluation of the Swedish version of the pure procrastination scale, the irrational procrastination scale, and the susceptibility to temptation scale in a clinical population.

    PubMed

    Rozental, Alexander; Forsell, Erik; Svensson, Andreas; Forsström, David; Andersson, Gerhard; Carlbring, Per

    2014-01-01

    Procrastination is a prevalent self-regulatory failure associated with stress and anxiety, decreased well-being, and poorer performance in school as well as work. One-fifth of the adult population and half of the student population describe themselves as chronic and severe procrastinators. However, despite the fact that it can become a debilitating condition, valid and reliable self-report measures for assessing the occurrence and severity of procrastination are lacking, particularly for use in a clinical context. The current study explored the usefulness of the Swedish versions of three Internet-administered self-report measures for evaluating procrastination: the Pure Procrastination Scale, the Irrational Procrastination Scale, and the Susceptibility to Temptation Scale, all of which have good psychometric properties in English. In total, 710 participants were recruited for a clinical trial of Internet-based cognitive behavior therapy for procrastination. All of the participants completed the scales as well as self-report measures of depression, anxiety, and quality of life. Principal Component Analysis was performed to assess the factor validity of the scales, and internal consistency and correlations between the scales were also determined. The Intraclass Correlation Coefficient, Minimal Detectable Change, and Standard Error of Measurement were calculated for the Irrational Procrastination Scale. The Swedish versions of the scales have a factor structure similar to that of the English versions, generated good internal consistencies, with Cronbach's α ranging from .76 to .87, and were moderately to highly intercorrelated. The Irrational Procrastination Scale had an Intraclass Correlation Coefficient of .83, indicating excellent reliability. Furthermore, the Standard Error of Measurement was 1.61, and the Minimal Detectable Change was 4.47, suggesting that a change of almost five points on the scale is necessary to determine a reliable change in self-reported procrastination severity. The current study revealed that the Pure Procrastination Scale, the Irrational Procrastination Scale, and the Susceptibility to Temptation Scale are both valid and reliable from a psychometric perspective, and that they might be used for assessing the occurrence and severity of procrastination via the Internet. The current study is part of a clinical trial assessing the efficacy of Internet-based cognitive behavior therapy for procrastination, which was registered on 04/22/2013 on ClinicalTrials.gov (NCT01842945).

  3. Cross-cultural adaptation and psychometric evaluations of the Turkish version of Parkinson Fatigue Scale.

    PubMed

    Ozturk, Erhan Arif; Kocer, Bilge Gonenli; Umay, Ebru; Cakci, Aytul

    2018-06-07

    The objectives of the present study were to translate and cross-culturally adapt the English version of the Parkinson Fatigue Scale into Turkish, to evaluate its psychometric properties, and to compare them with those of other language versions. A total of 144 patients with idiopathic Parkinson disease were included in the study. The Turkish version of the Parkinson Fatigue Scale was evaluated for data quality, scaling assumptions, acceptability, reliability, and validity. The questionnaire response rate was 100% for both test and retest. The percentage of missing data was zero for all items, and the percentage of computable scores was full. Floor and ceiling effects were absent. The Parkinson Fatigue Scale provides acceptable internal consistency (Cronbach's alpha was 0.974 for the first test and 0.964 for the retest, and corrected item-to-total correlations ranged from 0.715 to 0.906) and test-retest reliability (Cohen's kappa coefficients ranged from 0.632 to 0.786 for individual items, and the intraclass correlation coefficient was 0.887 for the overall Parkinson Fatigue Scale score). An exploratory factor analysis of the items revealed a single factor explaining 71.7% of the variance. The goodness-of-fit statistics for the one-factor confirmatory factor analysis were Tucker-Lewis index = 0.961, comparative fit index = 0.971, and root mean square error of approximation = 0.077 for a single factor. The average Parkinson Fatigue Scale score was correlated significantly with sociodemographic data, clinical characteristics, and scores on rating scales. The Turkish version of the Parkinson Fatigue Scale seems to be culturally well adapted and to have good psychometric properties. The scale can be used in further studies to assess fatigue in patients with Parkinson's disease.

  4. The self-transcendence scale: an investigation of the factor structure among nursing home patients.

    PubMed

    Haugan, Gørill; Rannestad, Toril; Garåsen, Helge; Hammervold, Randi; Espnes, Geir Arild

    2012-09-01

    Self-transcendence, the ability to expand personal boundaries in multiple ways, has been found to provide well-being. The purpose of this study was to examine the dimensionality of the Norwegian version of the Self-Transcendence Scale, which comprises 15 items. Reed's empirical nursing theory of self-transcendence provided the theoretical framework; self-transcendence includes an interpersonal, intrapersonal, transpersonal, and temporal dimension. Cross-sectional data were obtained from a sample of 202 cognitively intact elderly patients in 44 Norwegian nursing homes. Exploratory factor analysis revealed two and four internally consistent dimensions of self-transcendence, explaining 35.3% (two factors) and 50.7% (four factors) of the variance, respectively. Confirmatory factor analysis indicated that the hypothesized two- and four-factor models fitted better than the one-factor model (χ², root mean square error of approximation, standardized root mean square residual, normed fit index, nonnormed fit index, comparative fit index, goodness-of-fit index, and adjusted goodness-of-fit index). The findings indicate self-transcendence as a multifactorial construct; at present, we conclude that the two-factor model might be the most accurate and reasonable measure of self-transcendence. This research generates insights into the application of the widely used Self-Transcendence Scale by investigating its psychometric properties through a confirmatory factor analysis. It also generates new research questions on the associations between self-transcendence and well-being.

  5. Ensemble Kalman filters for dynamical systems with unresolved turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.

    Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called ‘representation’ or ‘representativeness’ error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgridscale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small scale turbulence: a shallow energy spectrum proportional to k^(−5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics. Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.

  6. Melodic interval perception by normal-hearing listeners and cochlear implant users

    PubMed Central

    Luo, Xin; Masterson, Megan E.; Wu, Ching-Chih

    2014-01-01

    The perception of melodic intervals (sequential pitch differences) is essential to music perception. This study tested melodic interval perception in normal-hearing (NH) listeners and cochlear implant (CI) users. Melodic interval ranking was tested using an adaptive procedure. CI users had slightly higher interval ranking thresholds than NH listeners. Both groups' interval ranking thresholds, although not affected by root note, significantly increased with standard interval size and were higher for descending intervals than for ascending intervals. The pitch direction effect may be due to a procedural artifact or a difference in central processing. In another test, familiar melodies were played with all the intervals scaled by a single factor. Subjects rated how in tune the melodies were and adjusted the scaling factor until the melodies sounded the most in tune. CI users had lower final interval ratings and less change in interval rating as a function of scaling factor than NH listeners. For CI users, the root-mean-square error of the final scaling factors and the width of the interval rating function were significantly correlated with the average ranking threshold for ascending rather than descending intervals, suggesting that CI users may have focused on ascending intervals when rating and adjusting the melodies. PMID:25324084

  7. The Shame and Guilt Scales of the Test of Self-Conscious Affect-Adolescent (TOSCA-A): Factor Structure, Concurrent and Discriminant Validity, and Measurement and Structural Invariance Across Ratings of Males and Females.

    PubMed

    Watson, Shaun; Gomez, Rapson; Gullone, Eleonora

    2017-06-01

    This study examined various psychometric properties of the items comprising the shame and guilt scales of the Test of Self-Conscious Affect-Adolescent. A total of 563 adolescents (321 females and 242 males) completed these scales, and also measures of depression and empathy. Confirmatory factor analysis provided support for an oblique two-factor model, with the originally proposed shame and guilt items comprising shame and guilt factors, respectively. Also, shame correlated with depression positively and had no relation with empathy. Guilt correlated with depression negatively and with empathy positively. Thus, there was support for the convergent and discriminant validity of the shame and guilt factors. Multiple-group confirmatory factor analysis comparing females and males, based on the chi-square difference test, supported full metric invariance, the intercept invariance of 26 of the 30 shame and guilt items, and higher latent mean scores among females for both shame and guilt. Comparisons based on the difference in root mean squared error of approximation values supported full measurement invariance and no gender difference for latent mean scores. The psychometric and practical implications of the findings are discussed.

  8. Measuring Alexithymia via Trait Approach-I: A Alexithymia Scale Item Selection and Formation of Factor Structure

    PubMed Central

    TATAR, Arkun; SALTUKOĞLU, Gaye; ALİOĞLU, Seda; ÇİMEN, Sümeyye; GÜVEN, Hülya; AY, Çağla Ebru

    2017-01-01

    Introduction It is not clear in the literature whether available instruments are sufficient to measure alexithymia because of its theoretical structure. Moreover, it has been reported that several measuring instruments are needed to measure this construct, and all the instruments have different error sources. The old and the new forms of the Toronto Alexithymia Scale are the only instruments available in Turkish. Thus, the purpose of this study was to develop a new scale to measure alexithymia, selecting items and constructing the factor structure. Methods A total of 1117 patients aged from 19 to 82 years (mean = 35.05 years) were included. A 100-item pool was prepared and applied to 628 women and 489 men. Data were analyzed using Exploratory Factor Analysis, Confirmatory Factor Analysis, and Item Response Theory, and 28 items were selected. The new form of 28 items was applied to 415 university students, including 271 women and 144 men aged from 18 to 30 (mean = 21.44). Results The results of the Exploratory Factor Analysis revealed a five-factor construct of "Solving and Expressing Affective Experiences," "External Locused Cognitive Style," "Tendency to Somatize Affections," "Imaginary Life and Visualization," and "Acting Impulsively," along with a two-factor construct representing the "Affective" and "Cognitive" components. All the components of the construct showed good model fit and high internal consistency. The new form was tested in terms of internal consistency, test-retest reliability, and concurrent validity using the Toronto Alexithymia Scale as the criterion, and discriminative validity using the Five-Factor Personality Inventory Short Form. Conclusion The results showed that the new scale met the basic psychometric requirements. Results have been discussed in line with related studies. PMID:29033633

  9. DEKFIS user's guide: Discrete Extended Kalman Filter/Smoother program for aircraft and rotorcraft data consistency

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program DEKFIS (discrete extended Kalman filter/smoother), formulated for aircraft and helicopter state estimation and data consistency, is described. DEKFIS is set up to pre-process raw test data by removing biases, correcting scale factor errors and providing consistency with the aircraft inertial kinematic equations. The program implements an extended Kalman filter/smoother using the Friedland-Duffy formulation.
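
    The per-channel correction such pre-processing applies is, in its simplest form, the inversion of the measurement model z = (1 + s)·x + b; a minimal sketch (DEKFIS itself estimates s and b with the extended Kalman filter/smoother rather than taking them as given):

      import numpy as np

      def correct_channel(z, scale_factor_error: float, bias: float):
          """Invert z = (1 + s) * x + b for one sensor channel."""
          return (np.asarray(z) - bias) / (1.0 + scale_factor_error)

      raw = np.array([1.02, 2.01, 3.05])   # hypothetical raw sensor samples
      print(correct_channel(raw, scale_factor_error=0.01, bias=0.02))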

  10. Safety climate and its association with office type and team involvement in primary care.

    PubMed

    Gehring, Katrin; Schwappach, David L B; Battaglia, Markus; Buff, Roman; Huber, Felix; Sauter, Peter; Wieser, Markus

    2013-09-01

    To assess differences in safety climate perceptions between occupational groups and types of office organization in primary care. Primary care physicians and nurses working in outpatient offices were surveyed about safety climate. Explorative factor analysis was performed to determine the factorial structure. Differences in mean climate scores between staff groups and types of office were tested. Logistic regression analysis was conducted to determine predictors for a 'favorable' safety climate. 630 individuals returned the survey (response rate, 50%). Differences between occupational groups were observed in the means of the 'team-based error prevention'-scale (physician 4.0 vs. nurse 3.8, P < 0.001). Medical centers scored higher compared with single-handed offices and joint practices on the 'team-based error prevention'-scale (4.3 vs. 3.8 vs. 3.9, P < 0.001) but less favorable on the 'rules and risks'-scale (3.5 vs. 3.9 vs. 3.7, P < 0.001). Characteristics on the individual and office level predicted favorable 'team-based error prevention'-scores. Physicians (OR = 0.4, P = 0.01) and less experienced staff (OR 0.52, P = 0.04) were less likely to provide favorable scores. Individuals working at medical centers were more likely to provide positive scores compared with single-handed offices (OR 3.33, P = 0.001). The largest positive effect was associated with at least monthly team meetings (OR 6.2, P < 0.001) and participation in quality circles (OR 4.49, P < 0.001). Results indicate that frequent quality circle participation and team meetings involving all team members are effective ways to strengthen safety climate in terms of team-based strategies and activities in error prevention.

  11. AQMEII3 evaluation of regional NA/EU simulations and ...

    EPA Pesticide Factsheets

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition, and time series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impac
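
    A standard way to apportion mean squared error into bias, variance, and covariance parts, consistent with the split described above (this is the classical decomposition, MSE = (mean_m − mean_o)² + (σ_m − σ_o)² + 2·σ_m·σ_o·(1 − r), not necessarily AQMEII's exact bookkeeping):

      import numpy as np

      def mse_components(model, obs):
          """Split MSE into bias^2, variance, and covariance terms."""
          m, o = np.asarray(model, float), np.asarray(obs, float)
          bias2 = (m.mean() - o.mean()) ** 2
          variance = (m.std() - o.std()) ** 2
          covariance = 2.0 * m.std() * o.std() * (1.0 - np.corrcoef(m, o)[0, 1])
          return bias2, variance, covariance

      m = np.array([1.0, 2.5, 3.0, 4.2, 5.1])
      o = np.array([1.2, 2.0, 3.3, 4.0, 4.8])
      print(mse_components(m, o), np.mean((m - o) ** 2))  # the three parts sum to the MSE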

  12. Assessing the construct validity and reliability of the Parental Perception on Antibiotics (PAPA) scales.

    PubMed

    Alumran, Arwa; Hou, Xiang-Yu; Sun, Jiandong; Yousef, Abdullah A; Hurst, Cameron

    2014-01-23

    The overuse of antibiotics is becoming an increasing concern. Antibiotic resistance, which increases both the burden of disease, and the cost of health services, is perhaps the most profound impact of antibiotics overuse. Attempts have been made to develop instruments to measure the psychosocial constructs underlying antibiotics use, however, none of these instruments have undergone thorough psychometric validation. This study evaluates the psychometric properties of the Parental Perceptions on Antibiotics (PAPA) scales. The PAPA scales attempt to measure the factors influencing parental use of antibiotics in children. 1111 parents of children younger than 12 years old were recruited from primary schools' parental meetings in the Eastern Province of Saudi Arabia from September 2012 to January 2013. The structure of the PAPA instrument was validated using Confirmatory Factor Analysis (CFA) with measurement model fit evaluated using the raw and scaled χ2, Goodness of Fit Index, and Root Mean Square Error of Approximation. A five-factor model was confirmed with the model showing good fit. Constructs in the model include: Knowledge and Beliefs, Behaviors, Sources of information, Adherence, and Awareness about antibiotics resistance. The instrument was shown to have good internal consistency, and good discriminant and convergent validity. The availability of an instrument able to measure the psychosocial factors underlying antibiotics usage allows the risk factors underlying antibiotic use and overuse to now be investigated.

  13. A novel artificial fish swarm algorithm for recalibration of fiber optic gyroscope error parameters.

    PubMed

    Gao, Yanbin; Guan, Lianwu; Wang, Tingjun; Sun, Yunlong

    2015-05-05

    The artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, widely utilized for optimization purposes. Fiber optic gyroscope (FOG) error parameters such as scale factors, biases, and misalignment errors are relatively unstable, especially with environmental disturbances and the aging of fiber coils. These uncalibrated error parameters are the main reason that the precision of a FOG-based strapdown inertial navigation system (SINS) degrades. This research focuses on the application of a novel artificial fish swarm algorithm (NAFSA) to FOG error coefficient recalibration/identification. First, the NAFSA avoids the demerits (e.g., lack of use of artificial fishes' previous experiences, lack of balance between exploration and exploitation, and high computational cost) of the standard AFSA during the optimization process. To address these weak points, the functional behaviors and overall procedures of AFSA have been improved, with some parameters eliminated and several supplementary parameters added. Second, a hybrid FOG error coefficient recalibration algorithm has been proposed based on NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for FOG error coefficient recalibration. After that, the NAFSA is verified with simulations and experiments, and its performance is compared with that of the conventional calibration method and the optimal AFSA. Results demonstrate the high efficiency of the NAFSA for FOG error coefficient recalibration.
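
    The error model being recalibrated is conventionally written ω_meas = (I + S + M)·ω_true + b, with S the diagonal scale-factor errors, M the off-diagonal misalignments, and b the biases; a sketch of the least-squares residual a swarm optimizer such as NAFSA would minimize over turntable poses (the parameter layout is an assumption for illustration):

      import numpy as np

      def calibration_residual(params, omega_true, omega_meas):
          """Sum of squared residuals of the FOG error model over calibration poses.

          params : [s1, s2, s3, m12, m13, m21, m23, m31, m32, b1, b2, b3]
          omega_true, omega_meas : arrays of shape (n_poses, 3)
          """
          s, m, b = params[:3], params[3:9], params[9:12]
          A = np.eye(3) + np.diag(s)
          A[0, 1], A[0, 2] = m[0], m[1]
          A[1, 0], A[1, 2] = m[2], m[3]
          A[2, 0], A[2, 1] = m[4], m[5]
          predicted = omega_true @ A.T + b
          return float(np.sum((predicted - omega_meas) ** 2))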

  14. Association of resident fatigue and distress with perceived medical errors.

    PubMed

    West, Colin P; Tan, Angelina D; Habermann, Thomas M; Sloan, Jeff A; Shanafelt, Tait D

    2009-09-23

    Fatigue and distress have been separately shown to be associated with medical errors. The contribution of each factor when assessed simultaneously is unknown. To determine the association of fatigue and distress with self-perceived major medical errors among resident physicians using validated metrics. Prospective longitudinal cohort study of categorical and preliminary internal medicine residents at Mayo Clinic, Rochester, Minnesota. Data were provided by 380 of 430 eligible residents (88.3%). Participants began training from 2003 to 2008 and completed surveys quarterly through February 2009. Surveys included self-assessment of medical errors, linear analog self-assessment of overall quality of life (QOL) and fatigue, the Maslach Burnout Inventory, the PRIME-MD depression screening instrument, and the Epworth Sleepiness Scale. Frequency of self-perceived, self-defined major medical errors was recorded. Associations of fatigue, QOL, burnout, and symptoms of depression with a subsequently reported major medical error were determined using generalized estimating equations for repeated measures. The mean response rate to individual surveys was 67.5%. Of the 356 participants providing error data (93.7%), 139 (39%) reported making at least 1 major medical error during the study period. In univariate analyses, there was an association of subsequent self-reported error with the Epworth Sleepiness Scale score (odds ratio [OR], 1.10 per unit increase; 95% confidence interval [CI], 1.03-1.16; P = .002) and fatigue score (OR, 1.14 per unit increase; 95% CI, 1.08-1.21; P < .001). Subsequent error was also associated with burnout (ORs per 1-unit change: depersonalization OR, 1.09; 95% CI, 1.05-1.12; P < .001; emotional exhaustion OR, 1.06; 95% CI, 1.04-1.08; P < .001; lower personal accomplishment OR, 0.94; 95% CI, 0.92-0.97; P < .001), a positive depression screen (OR, 2.56; 95% CI, 1.76-3.72; P < .001), and overall QOL (OR, 0.84 per unit increase; 95% CI, 0.79-0.91; P < .001). Fatigue and distress variables remained statistically significant when modeled together with little change in the point estimates of effect. Sleepiness and distress, when modeled together, showed little change in point estimates of effect, but sleepiness no longer had a statistically significant association with errors when adjusted for burnout or depression. Among internal medicine residents, higher levels of fatigue and distress are independently associated with self-perceived medical errors.

  15. Conditional Standard Errors of Measurement for Scale Scores.

    ERIC Educational Resources Information Center

    Kolen, Michael J.; And Others

    1992-01-01

    A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)

  16. A First Look at the Navigation Design and Analysis for the Orion Exploration Mission 2

    NASA Technical Reports Server (NTRS)

    D'Souza, Chris D.; Zanetti, Renato

    2017-01-01

    This paper details the navigation and dispersion design and analysis for the first crewed Orion mission. The optical navigation measurement model is described. The vehicle noise includes residual acceleration from attitude deadbanding, attitude maneuvers, CO2 venting, wastewater venting, ammonia sublimator venting, and solar radiation pressure. The maneuver execution errors account for the contribution of accelerometer scale-factor error to the accuracy of the maneuver execution. Linear covariance techniques are used to obtain the navigation errors and the trajectory dispersions, as well as the ΔV performance. Particular attention is paid to the accuracy of the delivery at Earth Entry Interface and at the Lunar Flyby.

  17. The hubble constant.

    PubMed

    Huchra, J P

    1992-04-17

    The Hubble constant is the constant of proportionality between recession velocity and distance in the expanding universe. It is a fundamental property of cosmology that sets both the scale and the expansion age of the universe. It is determined by measurement of galaxy recession velocities and distances. Despite the development of new techniques for the measurement of galaxy distances, both calibration uncertainties and debates over systematic errors remain. Current determinations still range over nearly a factor of 2; the higher values favored by most local measurements are not consistent with many theories of the origin of large-scale structure and stellar evolution.
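
    For orientation, the two standard relations at stake (textbook forms, not from the paper) are

        v = H_0 d, \qquad t_H = \frac{1}{H_0} \approx 9.78 \, h^{-1}\ \mathrm{Gyr}, \qquad h \equiv \frac{H_0}{100\ \mathrm{km\ s^{-1}\ Mpc^{-1}}}

    so a factor-of-2 disagreement in H_0 propagates directly into a factor-of-2 disagreement in the inferred expansion age.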

  18. Collision geometry scaling of Au+Au pseudorapidity density from √(s_NN) = 19.6 to 200 GeV

    NASA Astrophysics Data System (ADS)

    Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Budzanowski, A.; Busza, W.; Carroll, A.; Decowski, M. P.; García, E.; George, N.; Gulbrandsen, K.; Gushue, S.; Halliwell, C.; Hamblen, J.; Heintzelman, G. A.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Hołyński, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Katzy, J.; Khan, N.; Kucewicz, W.; Kulinich, P.; Kuo, C. M.; Lin, W. T.; Manly, S.; McLeod, D.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Park, I. C.; Pernegger, H.; Reed, C.; Remsberg, L. P.; Reuter, M.; Roland, C.; Roland, G.; Rosenberg, L.; Sagerer, J.; Sarin, P.; Sawicki, P.; Skulski, W.; Steinberg, P.; Stephans, G. S.; Sukhanov, A.; Tonjes, M. B.; Tang, J.-L.; Trzupek, A.; Vale, C.; van Nieuwenhuizen, G. J.; Verdier, R.; Wolfs, F. L.; Wosiek, B.; Woźniak, K.; Wuosmaa, A. H.; Wysłouch, B.

    2004-08-01

    The centrality dependence of the midrapidity charged particle multiplicity in Au+Au heavy-ion collisions at √(s_NN) = 19.6 and 200 GeV is presented. Within a simple model, the fraction of hard (scaling with number of binary collisions) to soft (scaling with number of participant pairs) interactions is consistent with a value of x = 0.13 ± 0.01 (stat) ± 0.05 (syst) at both energies. The experimental results at both energies, scaled by inelastic p(p̄)+p collision data, agree within systematic errors. The ratio of the data was found not to depend on centrality over the studied range and yields a simple linear scale factor of R_200/19.6 = 2.03 ± 0.02 (stat) ± 0.05 (syst).
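
    The "simple model" here is presumably the usual two-component parametrization (notation mine):

        \frac{dN_{ch}}{d\eta} = n_{pp} \left[ (1-x) \, \frac{\langle N_{part} \rangle}{2} + x \, \langle N_{coll} \rangle \right]

    where n_pp is the midrapidity yield in inelastic p(p̄)+p collisions and x ≈ 0.13 is the hard fraction quoted above.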

  19. Measurement Noninvariance of Safer Sex Self-Efficacy Between Heterosexual and Sexual Minority Black Youth.

    PubMed

    Gerke, Donald; Budd, Elizabeth L; Plax, Kathryn

    2016-01-01

    Black and lesbian, gay, bisexual, or questioning (LGBQ) youth in the United States are disproportionately affected by HIV and other sexually transmitted diseases (STDs). Although self-efficacy is strongly, positively associated with safer sex behaviors, no studies have examined the validity of a safer sex self-efficacy scale used by many federally funded HIV/STD prevention programs. This study aims to test factor validity of the Sexual Self-Efficacy Scale by using confirmatory factor analysis (CFA) to determine if scale validity varies between heterosexual and LGBQ Black youth. The study uses cross-sectional data collected through baseline surveys with 226 Black youth (15 to 24 years) enrolled in community-based HIV-prevention programs. Participants use a 4-point Likert-type scale to report their confidence in performing 6 healthy sexual behaviors. CFAs are conducted on 2 factor structures of the scale. Using the best-fitting model, the scale is tested for measurement invariance between the 2 groups. A single-factor model with correlated errors of condom-specific items fits the sample well and, when tested with the heterosexual group, the model demonstrates good fit. However, when tested with the LGBQ group, the same model yields poor fit, indicating factorial noninvariance between the groups. The Sexual Self-Efficacy Scale does not perform equally well among Black heterosexual and LGBQ youth. Study findings suggest additional research is needed to inform development of measures for safer sex self-efficacy among Black LGBQ youth to ensure validity of conceptual understanding and to accurately assess effectiveness of HIV/STD prevention interventions among this population.

  20. Dynamical Mass Measurements of Contaminated Galaxy Clusters Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Ntampaka, M.; Trac, H.; Sutherland, D. J.; Fromenteau, S.; Póczos, B.; Schneider, J.

    2016-11-01

    We study dynamical mass measurements of galaxy clusters contaminated by interlopers and show that a modern machine learning algorithm can predict masses by better than a factor of two compared to a standard scaling relation approach. We create two mock catalogs from Multidark’s publicly available N-body MDPL1 simulation, one with perfect galaxy cluster membership information and the other where a simple cylindrical cut around the cluster center allows interlopers to contaminate the clusters. In the standard approach, we use a power-law scaling relation to infer cluster mass from galaxy line-of-sight (LOS) velocity dispersion. Assuming perfect membership knowledge, this unrealistic case produces a wide fractional mass error distribution, with a width of Δε ≈ 0.87. Interlopers introduce additional scatter, significantly widening the error distribution further (Δε ≈ 2.13). We employ the support distribution machine (SDM) class of algorithms to learn from distributions of data to predict single values. Applied to distributions of galaxy observables such as LOS velocity and projected distance from the cluster center, SDM yields better than a factor-of-two improvement (Δε ≈ 0.67) for the contaminated case. Remarkably, SDM applied to contaminated clusters is better able to recover masses than even the scaling relation approach applied to uncontaminated clusters. We show that the SDM method more accurately reproduces the cluster mass function, making it a valuable tool for employing cluster observations to evaluate cosmological models.
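
    The baseline "standard approach" is a power-law scaling relation fitted in log space; a minimal sketch in Python (the mock numbers are placeholders, and defining the error ε as a log-mass residual is an assumption, not the paper's exact definition):

        # Baseline power-law scaling relation M ~ sigma^alpha, fit in log space,
        # standing in for the approach the SDM method is compared against.
        import numpy as np

        rng = np.random.default_rng(0)
        log_m_true = rng.uniform(14.0, 15.0, 500)        # log10 cluster mass (mock)
        log_sigma = 0.33 * (log_m_true - 15.0) + 3.0 \
                    + rng.normal(0.0, 0.05, 500)         # mock LOS velocity dispersion

        # Fit log10 M = a * log10 sigma + b
        a, b = np.polyfit(log_sigma, log_m_true, 1)
        log_m_pred = a * log_sigma + b

        # Error distribution, here epsilon = log10(M_pred / M_true)
        eps = log_m_pred - log_m_true
        print(f"width of error distribution: {eps.std():.3f} dex")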

  1. Erratum: ``Structure and Colors of Diffuse Emission in the Spitzer Galactic First Look Survey'' (ApJS, 154, 281 [2004])

    NASA Astrophysics Data System (ADS)

    Ingalls, James G.; Miville-Deschênes, M.-A.; Reach, William T.; Noriega-Crespo, A.; Carey, Sean J.; Boulanger, F.; Stolovy, S. R.; Padgett, Deborah L.; Burgdorf, M. J.; Fajardo-Acosta, S. B.; Glaccum, W. J.; Helou, G.; Hoard, D. W.; Karr, J.; O'Linger, J.; Rebull, L. M.; Rho, J.; Stauffer, J. R.; Wachter, S.

    2006-05-01

    We have discovered an error in the scaling of our IRAC 8 μm and MIPS 70 μm data, which affected the caption for Figure 1 and the vertical axis scales for Figure 2. The original units in the images displayed in Figure 1 were MJy sr⁻¹ for 8 μm and μJy arcsec⁻² for MIPS 24 and 70 μm. We incorrectly multiplied our IRAC data by 0.0425 (the conversion from μJy arcsec⁻² to MJy sr⁻¹), but neglected to multiply our MIPS 70 μm data by that factor. (MIPS 24 μm data were scaled correctly.) Thus, contrary to the caption of Figure 1, the gray levels for panels (a) and (b) actually range from 6.7 to 7.8 MJy sr⁻¹, and the gray levels for panels (e) and (f) actually range from 0.85 to 20.8 MJy sr⁻¹. The power spectra in Figure 2 should have been normalized such that the integral over the spectrum equals the mean square image surface brightness. In the original paper, however, the IRAC power spectrum was incorrectly multiplied by (0.0425)², whereas the MIPS 70 μm spectrum should have been multiplied by this factor but was not. We correct this in a revised version of Figure 2 included here. We thank Rick Arendt for calling our attention to this error.
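
    The 0.0425 conversion factor is easy to verify independently (a quick check, not part of the erratum):

        # Verify the muJy/arcsec^2 -> MJy/sr conversion factor quoted in the erratum.
        import math

        arcsec_in_rad = math.pi / (180.0 * 3600.0)
        sr_per_arcsec2 = arcsec_in_rad ** 2        # ~2.35e-11 sr per arcsec^2
        mjy_per_mujy = 1e-12                       # 1 muJy = 1e-12 MJy

        factor = mjy_per_mujy / sr_per_arcsec2
        print(f"{factor:.4f}")                     # -> 0.0425 MJy/sr per muJy/arcsec^2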

  2. Developments in the realization of diffuse reflectance scales at NPL

    NASA Astrophysics Data System (ADS)

    Chunnilall, Christopher J.; Clarke, Frank J. J.; Shaw, Michael J.

    2005-08-01

    The United Kingdom scales for diffuse reflectance are realized using two primary instruments. In the 360 nm to 2.5 μm spectral region the National Reference Reflectometer (NRR) realizes absolute measurement of reflectance and radiance factor by goniometric measurements. Hemispherical reflectance scales are obtained through the spatial integration of these goniometric measurements. In the mid-infrared region (2.5 μm - 55 μm) the hemispherical reflectance scale is realized by the Absolute Hemispherical Reflectometer (AHR). This paper describes some of the uncertainties resulting from errors in aligning the NRR and non-ideality in sample topography, together with its use to carry out measurements in the 1 - 1.6 μm region. The AHR has previously been used with grating spectrometers, and has now been coupled to a Fourier transform spectrometer.

  3. Evaluation of Argos Telemetry Accuracy in the High-Arctic and Implications for the Estimation of Home-Range Size

    PubMed Central

    Christin, Sylvain; St-Laurent, Martin-Hugues; Berteaux, Dominique

    2015-01-01

    Animal tracking through Argos satellite telemetry has enormous potential to test hypotheses in animal behavior, evolutionary ecology, or conservation biology. Yet the applicability of this technique cannot be fully assessed because no clear picture exists as to the conditions influencing the accuracy of Argos locations. Latitude, type of environment, and transmitter movement are among the main candidate factors affecting accuracy. A posteriori data filtering can remove “bad” locations, but again testing is still needed to refine filters. First, we evaluate experimentally the accuracy of Argos locations in a polar terrestrial environment (Nunavut, Canada), with both static and mobile transmitters transported by humans and coupled to GPS transmitters. We report static errors among the lowest published. However, the 68th error percentiles of mobile transmitters were 1.7 to 3.8 times greater than those of static transmitters. Second, we test how different filtering methods influence the quality of Argos location datasets. Accuracy of location datasets was best improved when filtering in locations of the best classes (LC3 and 2), whereas the Douglas Argos filter and a homemade speed filter yielded similar performance while retaining more locations. All filters effectively reduced the 68th error percentiles. Finally, we assess how location error impacted, at six spatial scales, two common estimators of home-range size (a proxy of animal space use behavior synthesizing movements), the minimum convex polygon and the fixed kernel estimator. Location error led to a sometimes dramatic overestimation of home-range size, especially at very local scales. We conclude that Argos telemetry is appropriate to study medium-size terrestrial animals in polar environments, but recommend that location errors are always measured and evaluated against research hypotheses, and that data are always filtered before analysis. How movement speed of transmitters affects location error needs additional research. PMID:26545245

  4. Improvement of gray-scale representation of horizontally scanning holographic display using error diffusion.

    PubMed

    Matsumoto, Yuji; Takaki, Yasuhiro

    2014-06-15

    Horizontally scanning holography can enlarge both screen size and viewing zone angle. A microelectromechanical-system spatial light modulator, which can generate only binary images, is used to generate hologram patterns. Thus, techniques to improve gray-scale representation in reconstructed images should be developed. In this study, the error diffusion technique was used for the binarization of holograms. When the Floyd-Steinberg error diffusion coefficients were used, gray-scale representation was improved. However, the linearity in the gray-scale representation was not satisfactory. We proposed the use of a correction table and showed that the linearity was greatly improved.
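
    As an illustration of the binarization step, a minimal Floyd-Steinberg error-diffusion sketch in Python (generic dithering code, not the authors' implementation; the correction table they propose is not included):

        # Floyd-Steinberg error diffusion: binarize a gray-scale image while
        # pushing the quantization error onto not-yet-processed neighbors.
        import numpy as np

        def floyd_steinberg_binarize(img):
            """img: 2-D float array in [0, 1]; returns a 0/1 float array."""
            out = img.astype(float).copy()
            h, w = out.shape
            for y in range(h):
                for x in range(w):
                    old = out[y, x]
                    new = 1.0 if old >= 0.5 else 0.0
                    out[y, x] = new
                    err = old - new
                    # Classic Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16
                    if x + 1 < w:
                        out[y, x + 1] += err * 7 / 16
                    if y + 1 < h and x > 0:
                        out[y + 1, x - 1] += err * 3 / 16
                    if y + 1 < h:
                        out[y + 1, x] += err * 5 / 16
                    if y + 1 < h and x + 1 < w:
                        out[y + 1, x + 1] += err * 1 / 16
            return out

        binary = floyd_steinberg_binarize(np.random.rand(64, 64))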

  5. Study on relationship of performance shaping factor in human error probability with prevalent stress of PUSPATI TRIGA reactor operators

    NASA Astrophysics Data System (ADS)

    Rahim, Ahmad Nabil Bin Ab; Mohamed, Faizal; Farid, Mohd Fairus Abdul; Fazli Zakaria, Mohd; Sangau Ligam, Alfred; Ramli, Nurhayati Binti

    2018-01-01

    The human factor can be affected by prevalence stress, measured here using the Depression, Anxiety and Stress Scale (DASS). The respondents' feedback can be summarized as indicating that the main cause of the highest prevalence stress is working conditions that require operators to handle critical situations and make prompt critical decisions. Examining the relationship between prevalence stress and performance shaping factors, PSF_Fitness and PSF_Work Process showed positive Pearson correlations, with scores of .763 and .826 and significance levels of p = .028 and p = .012, respectively. These are positive correlations, with good significance values, between prevalence stress and the human performance shaping factors (PSFs) related to fitness and to work processes and procedures: the higher the stress level of the respondents, the higher the scores selected for these PSFs. This is because higher levels of stress lead to deteriorating physical health and worsened cognition. In addition, a lack of understanding of the work procedures can also be a factor that causes growing stress. The higher these values, the higher the probability that human error will occur. Thus, monitoring the level of stress among RTP operators is important to ensure the safety of RTP.

  6. Psychics, aliens, or experience? Using the Anomalistic Belief Scale to examine the relationship between type of belief and probabilistic reasoning.

    PubMed

    Prike, Toby; Arnold, Michelle M; Williamson, Paul

    2017-08-01

    A growing body of research has shown people who hold anomalistic (e.g., paranormal) beliefs may differ from nonbelievers in their propensity to make probabilistic reasoning errors. The current study explored the relationship between these beliefs and performance through the development of a new measure of anomalistic belief, called the Anomalistic Belief Scale (ABS). One key feature of the ABS is that it includes a balance of both experiential and theoretical belief items. Another aim of the study was to use the ABS to investigate the relationship between belief and probabilistic reasoning errors on conjunction fallacy tasks. As expected, results showed there was a relationship between anomalistic belief and propensity to commit the conjunction fallacy. Importantly, regression analyses on the factors that make up the ABS showed that the relationship between anomalistic belief and probabilistic reasoning occurred only for beliefs about having experienced anomalistic phenomena, and not for theoretical anomalistic beliefs. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Synoptic scale forecast skill and systematic errors in the MASS 2.0 model. [Mesoscale Atmospheric Simulation System

    NASA Technical Reports Server (NTRS)

    Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K. F.

    1985-01-01

    The synoptic scale performance characteristics of MASS 2.0 are determined by comparing filtered 12-24 hr model forecasts to same-case forecasts made by the National Meteorological Center's synoptic-scale Limited-area Fine Mesh model. Characteristics of the two systems are contrasted, and the analysis methodology used to determine statistical skill scores and systematic errors is described. The overall relative performance of the two models in the sample is documented, and important systematic errors uncovered are presented.

  8. On NUFFT-based gridding for non-Cartesian MRI

    NASA Astrophysics Data System (ADS)

    Fessler, Jeffrey A.

    2007-10-01

    For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.
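
    For reference, the conventional KB interpolation kernel has a simple closed form; a sketch in Python (the peak normalization and the choice of β are conventions that vary across implementations, so both are assumptions here):

        # Kaiser-Bessel interpolation kernel used in conventional gridding.
        # u: distance from the grid point in grid units; width: kernel width W;
        # beta: shape parameter, in practice chosen from W and the oversampling ratio.
        import numpy as np
        from scipy.special import i0  # modified Bessel function of order zero

        def kaiser_bessel(u, width, beta):
            u = np.asarray(u, dtype=float)
            inside = np.abs(u) <= width / 2.0
            arg = np.zeros_like(u)
            arg[inside] = np.sqrt(1.0 - (2.0 * u[inside] / width) ** 2)
            # Normalized to peak at 1; other normalizations are also common.
            return np.where(inside, i0(beta * arg) / i0(beta), 0.0)

        # Example: W = 4 grid samples; beta ~ 9 is a reasonable value for
        # roughly 2x oversampling (an assumption, not a universal constant).
        w_vals = kaiser_bessel(np.linspace(-2, 2, 9), width=4, beta=9.0)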

  9. Constructing the Japanese version of the Maslach Burnout Inventory-Student Survey: Confirmatory factor analysis.

    PubMed

    Tsubakita, Takashi; Shimazaki, Kazuyo

    2016-01-01

    To examine the factorial validity of the Maslach Burnout Inventory-Student Survey, using a sample of 2061 Japanese university students majoring in the medical and natural sciences (67.9% male, 31.8% female; mean age = 19.6 years, standard deviation = 1.5). The back-translated scale used unreversed items to assess inefficacy. The inventory's descriptive properties and Cronbach's alphas were calculated using SPSS software. The present authors compared fit indices of the null, one factor, and default three factor models via confirmatory factor analysis with maximum-likelihood estimation using AMOS software, version 21.0. Intercorrelations between exhaustion, cynicism, and inefficacy were relatively higher than in prior studies. Cronbach's alphas were 0.76, 0.85, and 0.78, respectively. Although fit indices of the hypothesized three factor model did not meet the respective criteria, the model demonstrated better fit than did the null and one factor models. The present authors added four paths between error variances within items, but the modified model did not show satisfactory fit. Subsequent analysis revealed that a bi-factor model fit the data better than did the hypothesized or modified three factor models. The Japanese version of the Maslach Burnout Inventory-Student Survey needs minor changes to improve the fit of its three factor model, but the scale as a whole can be used to adequately assess overall academic burnout in Japanese university students. Although the scale was back-translated, two items measuring exhaustion whose expressions overlapped should be modified, and all items measuring inefficacy should be reversed in order to statistically clarify the factorial difference between the scale's three factors. © 2015 The Authors. Japan Journal of Nursing Science © 2015 Japan Academy of Nursing Science.

  10. Upscaling NZ-DNDC using a regression based meta-model to estimate direct N2O emissions from New Zealand grazed pastures.

    PubMed

    Giltrap, Donna L; Ausseil, Anne-Gaëlle E

    2016-01-01

    The availability of detailed input data frequently limits the application of process-based models at large scale. In this study, we produced simplified meta-models of the simulated nitrous oxide (N2O) emission factors (EF) using NZ-DNDC. Monte Carlo simulations were performed and the results investigated using multiple regression analysis to produce simplified meta-models of EF. These meta-models were then used to estimate direct N2O emissions from grazed pastures in New Zealand. New Zealand EF maps were generated using the meta-models with data from national scale soil maps. Direct emissions of N2O from grazed pasture were calculated by multiplying the EF map by a nitrogen (N) input map. Three meta-models were considered. Model 1 included only the soil organic carbon in the top 30 cm (SOC30), Model 2 also included a clay content factor, and Model 3 added the interaction between SOC30 and clay. The median annual national direct N2O emissions from grazed pastures estimated using each model (assuming model errors were purely random) were: 9.6 Gg N (Model 1), 13.6 Gg N (Model 2), and 11.9 Gg N (Model 3). These values corresponded to an average EF of 0.53%, 0.75% and 0.63%, respectively, while the corresponding average EF using New Zealand national inventory values was 0.67%. If the model error can be assumed to be independent for each pixel, then the 95% confidence interval for the N2O emissions was of the order of ±0.4-0.7%, which is much lower than that of existing methods. However, spatial correlations in the model errors could invalidate this assumption. Under the extreme assumption that the model error for each pixel was identical, the 95% confidence interval was approximately ±100-200%. Therefore further work is needed to assess the degree of spatial correlation in the model errors. Copyright © 2015 Elsevier B.V. All rights reserved.
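
    Model 3 corresponds to a simple interaction regression; a sketch of how such a meta-model could be fitted and applied in Python with statsmodels (file and column names are hypothetical, and EF is assumed to be expressed in percent):

        # Meta-model 3: EF as a linear function of SOC30, clay, and their
        # interaction, fitted to Monte Carlo runs of the process model.
        import pandas as pd
        import statsmodels.formula.api as smf

        runs = pd.read_csv("nzdndc_monte_carlo_runs.csv")   # hypothetical output
        meta = smf.ols("EF ~ SOC30 + clay + SOC30:clay", data=runs).fit()
        print(meta.params)

        # Apply the meta-model to national soil maps (flattened to a table) and
        # scale by the nitrogen input map to get direct N2O emissions per pixel.
        soil = pd.read_csv("soil_map_pixels.csv")           # SOC30, clay, N_input
        soil["EF"] = meta.predict(soil)
        soil["N2O"] = soil["EF"] / 100.0 * soil["N_input"]  # EF assumed in percent
        print(soil["N2O"].sum())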

  11. Frequency and Distribution of Refractive Error in Adult Life: Methodology and Findings of the UK Biobank Study

    PubMed Central

    Cumberland, Phillippa M.; Bao, Yanchun; Hysi, Pirro G.; Foster, Paul J.; Hammond, Christopher J.; Rahi, Jugnoo S.

    2015-01-01

    Purpose To report the methodology and findings of a large-scale investigation of the burden and distribution of refractive error, from a contemporary and ethnically diverse study of health and disease in adults in the UK. Methods UK Biobank, a unique contemporary resource for the study of health and disease, recruited more than half a million people aged 40–69 years. A subsample of 107,452 subjects undertook an enhanced ophthalmic examination which provided autorefraction data (a measure of refractive error). Refractive error status was categorised using the mean spherical equivalent refraction measure. Information on socio-demographic factors (age, gender, ethnicity, educational qualifications and accommodation tenure) was reported at the time of recruitment by questionnaire and face-to-face interview. Results Fifty-four percent of participants aged 40–69 years had refractive error. Specifically 27% had myopia (4% high myopia), which was more common amongst younger people, those of higher socio-economic status, higher educational attainment, or of White or Chinese ethnicity. The frequency of hypermetropia increased with age (7% at 40–44 years increasing to 46% at 65–69 years), was higher in women and its severity was associated with ethnicity (moderate or high hypermetropia at least 30% less likely in non-White ethnic groups compared to White). Conclusions Refractive error is a significant public health issue for the UK and this study provides contemporary data on adults for planning services, health economic modelling and monitoring of secular trends. Further investigation of risk factors is necessary to inform strategies for prevention. There is scope to do this through the planned longitudinal extension of the UK Biobank study. PMID:26430771

  12. Quantifying drivers of wild pig movement across multiple spatial and temporal scales.

    PubMed

    Kay, Shannon L; Fischer, Justin W; Monaghan, Andrew J; Beasley, James C; Boughton, Raoul; Campbell, Tyler A; Cooper, Susan M; Ditchkoff, Stephen S; Hartley, Steve B; Kilgo, John C; Wisely, Samantha M; Wyckoff, A Christy; VerCauteren, Kurt C; Pepin, Kim M

    2017-01-01

    The movement behavior of an animal is determined by extrinsic and intrinsic factors that operate at multiple spatio-temporal scales, yet much of our knowledge of animal movement comes from studies that examine only one or two scales concurrently. Understanding the drivers of animal movement across multiple scales is crucial for understanding the fundamentals of movement ecology, predicting changes in distribution, describing disease dynamics, and identifying efficient methods of wildlife conservation and management. We obtained over 400,000 GPS locations of wild pigs from 13 different studies spanning six states in southern U.S.A., and quantified movement rates and home range size within a single analytical framework. We used a generalized additive mixed model framework to quantify the effects of five broad predictor categories on movement: individual-level attributes, geographic factors, landscape attributes, meteorological conditions, and temporal variables. We examined effects of predictors across three temporal scales: daily, monthly, and using all data during the study period. We considered both local environmental factors such as daily weather data and distance to various resources on the landscape, as well as factors acting at a broader spatial scale such as ecoregion and season. We found that meteorological variables (temperature and pressure), landscape features (distance to water sources), a broad-scale geographic factor (ecoregion), and individual-level characteristics (sex-age class) drove wild pig movement across all scales, but both the magnitude and shape of covariate relationships to movement differed across temporal scales. The analytical framework we present can be used to assess movement patterns arising from multiple data sources for a range of species while accounting for spatio-temporal correlations. Our analyses show the magnitude by which reaction norms can change based on the temporal scale of response data, illustrating the importance of appropriately defining temporal scales of both the movement response and covariates depending on the intended implications of research (e.g., predicting effects of movement due to climate change versus planning local-scale management). We argue that consideration of multiple spatial scales within the same framework (rather than comparing across separate studies post hoc) gives a more accurate quantification of cross-scale spatial effects by appropriately accounting for error correlation.

  13. Application of psychometric theory to the measurement of voice quality using rating scales.

    PubMed

    Shrivastav, Rahul; Sapienza, Christine M; Nandur, Vuday

    2005-04-01

    Rating scales are commonly used to study voice quality. However, recent research has demonstrated that perceptual measures of voice quality obtained using rating scales suffer from poor interjudge agreement and reliability, especially in the mid-range of the scale. These findings, along with those obtained using multidimensional scaling (MDS), have been interpreted to show that listeners perceive voice quality in an idiosyncratic manner. Based on psychometric theory, the present research explored an alternative explanation for the poor interlistener agreement observed in previous research. This approach suggests that poor agreement between listeners may result, in part, from measurement errors related to a variety of factors rather than true differences in the perception of voice quality. In this study, 10 listeners rated breathiness for 27 vowel stimuli using a 5-point rating scale. Each stimulus was presented to the listeners 10 times in random order. Interlistener agreement and reliability were calculated from these ratings. Agreement and reliability were observed to improve when multiple ratings of each stimulus from each listener were averaged and when standardized scores were used instead of absolute ratings. The probability of exact agreement was found to be approximately .9 when using averaged ratings and standardized scores. In contrast, the probability of exact agreement was only .4 when a single rating from each listener was used to measure agreement. These findings support the hypothesis that poor agreement reported in past research partly arises from errors in measurement rather than individual differences in the perception of voice quality.
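
    The improvement from averaging is exactly what classical test theory predicts; by the Spearman-Brown formula (a standard psychometric identity, not derived in the paper), the reliability of the mean of m independent ratings with single-rating reliability ρ₁ is

        \rho_m = \frac{m \, \rho_1}{1 + (m - 1) \, \rho_1}

    so, for example, ten averaged ratings with ρ₁ = 0.5 reach ρ₁₀ ≈ 0.91, qualitatively consistent with the jump in exact agreement from .4 to about .9 reported above.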

  14. Validation of the Family Inpatient Communication Survey.

    PubMed

    Torke, Alexia M; Monahan, Patrick; Callahan, Christopher M; Helft, Paul R; Sachs, Greg A; Wocial, Lucia D; Slaven, James E; Montz, Kianna; Inger, Lev; Burke, Emily S

    2017-01-01

    Although many family members who make surrogate decisions report problems with communication, there is no validated instrument to accurately measure surrogate/clinician communication for older adults in the acute hospital setting. The objective of this study was to validate a survey of surrogate-rated communication quality in the hospital that would be useful to clinicians, researchers, and health systems. After expert review and cognitive interviewing (n = 10 surrogates), we enrolled 350 surrogates (250 development sample and 100 validation sample) of hospitalized adults aged 65 years and older from three hospitals in one metropolitan area. The communication survey and a measure of decision quality were administered between hospital days 3 and 10. Mental health and satisfaction measures were administered six to eight weeks later. Factor analysis showed support for both one-factor (Total Communication) and two-factor models (Information and Emotional Support). Item reduction led to a final 30-item scale. For the validation sample, internal reliability (Cronbach's alpha) was 0.96 (total), 0.94 (Information), and 0.90 (Emotional Support). Confirmatory factor analysis fit statistics were adequate (one-factor model: comparative fit index = 0.981, root mean square error of approximation = 0.062, weighted root mean square residual = 1.011; two-factor model: comparative fit index = 0.984, root mean square error of approximation = 0.055, weighted root mean square residual = 0.930). Total score and subscales showed significant associations with the Decision Conflict Scale (Pearson correlation -0.43, P < 0.001 for total score). Emotional Support was associated with improved mental health outcomes at six to eight weeks, such as anxiety (-0.19, P < 0.001), and Information was associated with satisfaction with the hospital stay (0.49, P < 0.001). The survey shows high reliability and validity in measuring communication experiences for hospital surrogates. The scale has promise for measurement of communication quality and is predictive of important outcomes, such as surrogate satisfaction and well-being. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  15. Extra dimension searches at hadron colliders to next-to-leading order-QCD

    NASA Astrophysics Data System (ADS)

    Kumar, M. C.; Mathews, Prakash; Ravindran, V.

    2007-11-01

    The quantitative impact of NLO-QCD corrections for searches of large and warped extra dimensions at hadron colliders is investigated for the Drell-Yan process. The K-factors for various observables at hadron colliders are presented. The factorisation and renormalisation scale dependence, and uncertainties due to various parton distribution functions, are studied. Uncertainties arising from the error on experimental data are estimated using the MRST parton distribution functions.
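
    For context, the K-factor is the usual ratio of the NLO to the LO prediction (standard definition, written here in LaTeX):

        K(\mathcal{O}) = \frac{d\sigma^{\mathrm{NLO}}/d\mathcal{O}}{d\sigma^{\mathrm{LO}}/d\mathcal{O}}

    evaluated observable by observable, with the factorisation and renormalisation scales and the PDF set varied to estimate the theoretical uncertainty.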

  16. Investigations of interpolation errors of angle encoders for high precision angle metrology

    NASA Astrophysics Data System (ADS)

    Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa

    2018-06-01

    Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. The results of the laboratories with advanced angle metrology capabilities are presented which were acquired by the use of four different high precision angle encoders/interpolators/rotary tables. State of the art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMC) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in the uncertainty by a factor of up to 5 in addition to the precise determination of interpolation errors or their residuals (when compensated). A discussion of the results is carried out in conjunction with the equipment used.

  17. AQMEII3 evaluation of regional NA/EU simulations and analysis of scale, boundary conditions and emissions error-dependence

    EPA Science Inventory

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) hel...

  18. Evaluating the prevalence and impact of examiner errors on the Wechsler scales of intelligence: A meta-analysis.

    PubMed

    Styck, Kara M; Walsh, Shana M

    2016-01-01

    The purpose of the present investigation was to conduct a meta-analysis of the literature on examiner errors for the Wechsler scales of intelligence. Results indicate that a mean of 99.7% of protocols contained at least 1 examiner error when studies that included a failure to record examinee responses as an error were combined and a mean of 41.2% of protocols contained at least 1 examiner error when studies that ignored errors of omission were combined. Furthermore, graduate student examiners were significantly more likely to make at least 1 error on Wechsler intelligence test protocols than psychologists. However, psychologists made significantly more errors per protocol than graduate student examiners regardless of the inclusion or exclusion of failure to record examinee responses as errors. On average, 73.1% of Full-Scale IQ (FSIQ) scores changed as a result of examiner errors, whereas 15.8%-77.3% of scores on the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), and Processing Speed Index changed as a result of examiner errors. In addition, results suggest that examiners tend to overestimate FSIQ scores and underestimate VCI scores. However, no strong pattern emerged for the PRI and WMI. It can be concluded that examiner errors occur frequently and impact index and FSIQ scores. Consequently, current estimates for the standard error of measurement of popular IQ tests may not adequately capture the variance due to the examiner. (c) 2016 APA, all rights reserved.

  19. Longitudinal factorial invariance of the PedsQL 4.0 Generic Core Scales child self-report Version: one year prospective evidence from the California State Children's Health Insurance Program (SCHIP).

    PubMed

    Varni, James W; Limbers, Christine A; Newman, Daniel A; Seid, Michael

    2008-11-01

    The measurement of health-related quality of life (HRQOL) in pediatric medicine and health services research has grown significantly over the past decade. The paradigm shift toward patient-reported outcomes (PROs) has provided the opportunity to emphasize the value and critical need for pediatric patient self-report. In order for changes in HRQOL/PRO outcomes to be meaningful over time, it is essential to demonstrate longitudinal factorial invariance. This study examined the longitudinal factor structure of the PedsQL 4.0 Generic Core Scales over a one-year period for child self-report ages 5-17 in 2,887 children from a statewide evaluation of the California State Children's Health Insurance Program (SCHIP) utilizing a structural equation modeling framework. Specifying four- and five-factor measurement models, longitudinal structural equation modeling was used to compare factor structures over a one-year interval on the PedsQL 4.0 Generic Core Scales. While the four-factor conceptually-derived measurement model for the PedsQL 4.0 Generic Core Scales produced an acceptable fit, the five-factor empirically-derived measurement model from the initial field test of the PedsQL 4.0 Generic Core Scales produced a marginally superior fit in comparison to the four-factor model. For the five-factor measurement model, the best fitting model, strict factorial invariance of the PedsQL 4.0 Generic Core Scales across the two measurement occasions was supported by the stability of the comparative fit index between the unconstrained and constrained models, and several additional indices of practical fit including the root mean squared error of approximation, the non-normed fit index, and the parsimony normed fit index. The findings support an equivalent factor structure on the PedsQL 4.0 Generic Core Scales over time. Based on these data, it can be concluded that over a one-year period children in our study interpreted items on the PedsQL 4.0 Generic Core Scales in a similar manner.

  20. Design Techniques for Power-Aware Combinational Logic SER Mitigation

    NASA Astrophysics Data System (ADS)

    Mahatme, Nihaar N.

    The history of modern semiconductor devices and circuits suggests that technologists have been able to maintain scaling at the rate predicted by Moore's Law [Moor-65]. While delivering improved performance, higher speed and lower area, technology scaling has also exacerbated reliability issues such as soft errors. Soft errors are transient errors that occur in microelectronic circuits due to ionizing radiation particle strikes on reverse-biased semiconductor junctions. These radiation-induced errors at the terrestrial level are caused by strikes from (1) alpha particles emitted as decay products of packaging material, (2) cosmic rays that produce energetic protons and neutrons, and (3) thermal neutrons [Dodd-03], [Srou-88], and more recently muons and electrons [Ma-79] [Nara-08] [Siew-10] [King-10]. In the space environment, radiation-induced errors are a much bigger threat and are mainly caused by cosmic heavy ions, protons, etc. The effects of radiation exposure on circuits and measures to protect against them have been studied extensively for the past 40 years, especially for parts operating in space. Radiation particle strikes can affect memory as well as combinational logic. Typically, when these particles strike the semiconductor junctions of transistors that are part of feedback structures such as SRAM memory cells or flip-flops, the strike can lead to an inversion of the cell content. Such a failure is formally called a bit-flip or single-event upset (SEU). When such particles strike sensitive junctions that are part of combinational logic gates, they produce transient voltage spikes, or glitches, called single-event transients (SETs) that can be latched by receiving flip-flops. As circuits are clocked faster, there are more clocking edges, which increases the likelihood of latching these transients. In older technology generations, the probability of errors in flip-flops due to latched SETs was much lower than that of direct strikes on flip-flops or SRAMs leading to SEUs, mainly because operating frequencies were much lower. The Intel Pentium II, for example, was fabricated in 0.35 μm technology and operated between 200 and 330 MHz. With technology scaling, however, operating frequencies have increased tremendously, and soft errors due to latched SETs from combinational logic could account for a significant proportion of the chip-level soft error rate [Sief-12][Maha-11][Shiv02][Bu97]. Therefore there is a need to systematically characterize the problem of combinational logic single-event effects (SEE) and understand the various factors that affect the combinational logic single-event error rate. Just as scaling has led to soft errors emerging as a reliability-limiting failure mode for modern digital ICs, the problem of increasing power consumption has arguably been a bigger bane of scaling. While Moore's Law loftily states the blessing of technology scaling to be smaller and faster transistors, it fails to highlight that power density increases exponentially with every technology generation. The power density problem was partially solved in the 1970s and 1980s by moving from bipolar and GaAs technologies to full-scale silicon CMOS technologies. Following this, however, the technology miniaturization that enabled high-speed, multicore and parallel computing has steadily worsened the power density and power consumption problem.
Today, minimizing power consumption is as critical for power-hungry server farms as it is for portable devices, all-pervasive sensor networks and future eco-bio-sensors. Low power consumption is now regularly part of design philosophies for various digital products with diverse applications, from computing to communication to healthcare. Thus, designers today are left grappling with both a "power wall" and a "reliability wall". Unfortunately, when it comes to improving reliability through soft error mitigation, most approaches are invariably saddled with overheads in terms of area, speed and, more importantly, power. Thus, the cost of protecting combinational logic through the use of power-hungry mitigation approaches can disrupt the power budget significantly. Therefore there is a strong need to develop techniques that provide both power minimization and combinational logic soft error mitigation. This dissertation advances hitherto untapped opportunities to jointly reduce power consumption and deliver soft-error-resilient designs. Circuit-level as well as architectural approaches are employed to achieve this objective, and the advantages of cross-layer optimization for power and soft error reliability are emphasized.

  1. [Confirmative study of a French version of the Exercise Dependence Scale-revised with a French population].

    PubMed

    Allegre, B; Therme, P

    2008-10-01

    Since the first writings on excessive exercise, there has been an increased interest in exercise dependence. One of the major consequences of this increased interest has been the development of several definitions and measures of exercise dependence. The work of Veale [Does primary exercise dependence really exist? In: Annet J, Cripps B, Steinberg H, editors. Exercise addiction: Motivation for participation in sport and exercise. Leicester, UK: Br Psychol Soc; 1995. p. 1-5.] provides an advance for the definition and measurement of exercise dependence. These studies have adapted the DSM-IV criteria for substance dependence to measure exercise dependence. The Exercise Dependence Scale-Revised is based on these diagnostic criteria, which are: tolerance; withdrawal effects; intention effect; lack of control; time; reductions in other activities; continuance. Confirmatory factor analyses of the EDS-R have supported a measurement model in which 21 items load on seven factors (Comparative Fit Index = 0.97; Root Mean Square Error of Approximation = 0.05; Tucker-Lewis Index = 0.96). The aim of this study was to examine the psychometric properties of a French version of the EDS-R [Factorial validity and psychometric examination of the exercise dependence scale-revised. Meas Phys Educ Exerc Sci 2004;8(4):183-201.] to test the stability of the seven-factor model of the original version with a French population. A total of 516 half-marathoners, ranging in age from 17 to 74 years (mean age = 39.02 years, SD = 10.64), with 402 men (77.9%) and 114 women (22.1%), participated in the study. The principal component analysis resulted in a six-factor structure, which accounts for 68.60% of the total variance. Because the principal component analysis presented a six-factor structure differing from the original seven-factor structure, two models were tested using confirmatory factor analysis: the first model is the seven-factor model of the original version of the EDS-R, and the second is the model produced by the principal component analysis. The results of confirmatory factor analysis supported the original model (with a seven-factor structure) as a good model, and fit indices were good (χ²/df = 2.89, Root Mean Square Error of Approximation (RMSEA) = 0.061, Expected Cross Validation Index (ECVI) = 1.20, Goodness-of-Fit Index (GFI) = 0.92, Comparative Fit Index (CFI) = 0.94, Standardized Root Mean Square (SRMS) = 0.048). These results showed that the French version of the EDS-R has a factor structure identical to the original. Therefore, the French version of the EDS-R is an acceptable scale to measure exercise dependence and can be used with a French population.

  2. Superconducting quantum circuits at the surface code threshold for fault tolerance.

    PubMed

    Barends, R; Kelly, J; Megrant, A; Veitia, A; Sank, D; Jeffrey, E; White, T C; Mutus, J; Fowler, A G; Campbell, B; Chen, Y; Chen, Z; Chiaro, B; Dunsworth, A; Neill, C; O'Malley, P; Roushan, P; Vainsencher, A; Wenner, J; Korotkov, A N; Cleland, A N; Martinis, John M

    2014-04-24

    A quantum computer can solve hard problems, such as prime factoring, database searching and quantum simulation, at the cost of needing to protect fragile quantum states from error. Quantum error correction provides this protection by distributing a logical state among many physical quantum bits (qubits) by means of quantum entanglement. Superconductivity is a useful phenomenon in this regard, because it allows the construction of large quantum circuits and is compatible with microfabrication. For superconducting qubits, the surface code approach to quantum computing is a natural choice for error correction, because it uses only nearest-neighbour coupling and rapidly cycled entangling gates. The gate fidelity requirements are modest: the per-step fidelity threshold is only about 99 per cent. Here we demonstrate a universal set of logic gates in a superconducting multi-qubit processor, achieving an average single-qubit gate fidelity of 99.92 per cent and a two-qubit gate fidelity of up to 99.4 per cent. This places Josephson quantum computing at the fault-tolerance threshold for surface code error correction. Our quantum processor is a first step towards the surface code, using five qubits arranged in a linear array with nearest-neighbour coupling. As a further demonstration, we construct a five-qubit Greenberger-Horne-Zeilinger state using the complete circuit and full set of gates. The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits.

  3. Constraints on a scale-dependent bias from galaxy clustering

    NASA Astrophysics Data System (ADS)

    Amendola, L.; Menegoni, E.; Di Porto, C.; Corsi, M.; Branchini, E.

    2017-01-01

    We forecast the future constraints on scale-dependent parametrizations of galaxy bias and their impact on the estimate of cosmological parameters from the power spectrum of galaxies measured in a spectroscopic redshift survey. For the latter we assume a wide survey at relatively large redshifts, similar to the planned Euclid survey, as the baseline for future experiments. To assess the impact of the bias we perform a Fisher matrix analysis, and we adopt two different parametrizations of scale-dependent bias. The fiducial models for galaxy bias are calibrated using mock catalogs of H α emitting galaxies mimicking the expected properties of the objects that will be targeted by the Euclid survey. In our analysis we have obtained two main results. First of all, allowing for a scale-dependent bias does not significantly increase the errors on the other cosmological parameters apart from the rms amplitude of density fluctuations, σ8, and the growth index γ, whose uncertainties increase by a factor up to 2, depending on the bias model adopted. Second, we find that the accuracy in the linear bias parameter b0 can be estimated to within 1%-2% at various redshifts regardless of the fiducial model. The nonlinear bias parameters have significantly large errors that depend on the model adopted. Despite this, in the more realistic scenarios departures from the simple linear bias prescription can be detected with a ~2σ significance at each redshift explored. Finally, we use the Fisher matrix formalism to assess the impact of assuming an incorrect bias model and find that the systematic errors induced on the cosmological parameters are similar to or even larger than the statistical ones.
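
    The forecasting machinery is the standard Fisher matrix (generic Gaussian-likelihood form, notation mine, not specific to this paper): for a galaxy power spectrum P_g(k) depending on parameters θ,

        F_{ij} = \sum_k \frac{1}{\sigma^2(k)} \, \frac{\partial P_g(k)}{\partial \theta_i} \, \frac{\partial P_g(k)}{\partial \theta_j}, \qquad \sigma(\theta_i) \geq \sqrt{(F^{-1})_{ii}},

    with the bias parameters (b0 and the nonlinear terms) included in θ and marginalized over when quoting errors on the cosmological parameters.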

  4. Measurement error associated with surveys of fish abundance in Lake Michigan

    USGS Publications Warehouse

    Krause, Ann E.; Hayes, Daniel B.; Bence, James R.; Madenjian, Charles P.; Stedman, Ralph M.

    2002-01-01

    In fisheries, imprecise measurements in catch data from surveys add uncertainty to the results of fishery stock assessments. The USGS Great Lakes Science Center (GLSC) began to survey the fall fish community of Lake Michigan in 1962 with bottom trawls. The measurement error was evaluated at the level of individual tows for nine fish species collected in this survey by applying a measurement-error regression model to replicated trawl data. It was found that the estimates of measurement-error variance ranged from 0.37 (deepwater sculpin, Myoxocephalus thompsoni) to 1.23 (alewife, Alosa pseudoharengus) on a logarithmic scale, corresponding to coefficients of variation of 66% to 156%. The estimates appeared to increase with the range of temperature occupied by the fish species. This association may be a result of the variability in the fall thermal structure of the lake. The estimates may also be influenced by other factors, such as pelagic behavior and schooling. Measurement error might be reduced by surveying the fish community during other seasons and/or by using additional technologies, such as acoustics. Measurement-error estimates should be considered when interpreting results of assessments that use abundance information from USGS-GLSC surveys of Lake Michigan and could be used if the survey design was altered. This study is the first to report estimates of measurement-error variance associated with this survey.
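
    The quoted coefficients of variation follow from the standard lognormal identity (the check is mine, not the paper's): for measurement-error variance σ² on the log scale,

        \mathrm{CV} = \sqrt{e^{\sigma^2} - 1}

    which gives CV = √(e^{0.37} − 1) ≈ 66% for deepwater sculpin and √(e^{1.23} − 1) ≈ 156% for alewife, matching the range reported above.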

  5. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    NASA Astrophysics Data System (ADS)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of the localized groundwater flow problems in a large-scale aquifer has been extensively investigated under the context of cost-benefit controversy. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from the deficiency in the coupling methods, as well as from the inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at parent scale is delivered downward onto the child boundary nodes by means of the spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising to handle the multi-scale groundwater flow problems with complex stresses and heterogeneity.
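
    At its core, the one-way hand-off delivers parent-model heads to child-model boundary nodes by interpolation in time and space; a minimal sketch of the temporal part in Python (arrays and values are hypothetical):

        # Deliver parent-model heads, saved at coarse time steps, onto a child
        # boundary node that advances with a finer (possibly adaptive) step.
        import numpy as np

        t_parent = np.array([0.0, 10.0, 20.0, 30.0])    # parent time levels [d]
        h_parent = np.array([5.00, 4.80, 4.55, 4.40])   # heads at one boundary node

        def child_boundary_head(t_child):
            """Linear temporal interpolation of the parent head onto child time t."""
            return np.interp(t_child, t_parent, h_parent)

        # Child model stepping at dt = 2.5 d between parent levels:
        print(child_boundary_head(np.arange(0.0, 30.1, 2.5)))

    Spatial interpolation between parent boundary nodes works the same way, and the adaptive local time-stepping described above amounts to refining t_child where the boundary head changes fastest.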

  6. Perceived barriers to medical-error reporting: an exploratory investigation.

    PubMed

    Uribe, Claudia L; Schweikhart, Sharon B; Pathak, Dev S; Dow, Merrell; Marsh, Gail B

    2002-01-01

    Medical-error reporting is an essential component of patient safety enhancement. Unfortunately, medical errors are largely underreported across healthcare institutions. This problem can be attributed to different factors and barriers present at organizational and individual levels that ultimately prevent individuals from generating the report. This study explored the factors that affect medical-error reporting among physicians and nurses at a large academic medical center located in the midwestern United States. A nominal group session was conducted to identify the most relevant factors that act as barriers for error reporting. These factors were then used to design a questionnaire that explored the likelihood of the factors to act as barriers and their likelihood to be modified. Using these two parameters, the results were analyzed and combined into a Factor Relevance Matrix. The matrix identifies the factors for which immediate actions should be undertaken to improve medical-error reporting (immediate action factors). It also identifies factors that require long-term strategies (long-term strategy factors) as well as factors that the organization should be aware of but that are of lower priority (awareness factors). The strategies outlined in this study may assist healthcare organizations in improving medical-error reporting, as part of the efforts toward patient-safety enhancement. Although factors affecting medical-error reporting may vary between different organizations, the process used in identifying the factors and the Factor Relevance Matrix developed in this study are easily adaptable to any organizational setting.

  7. Spatio-temporal error growth in the multi-scale Lorenz'96 model

    NASA Astrophysics Data System (ADS)

    Herrera, S.; Fernández, J.; Rodríguez, M. A.; Gutiérrez, J. M.

    2010-07-01

    The influence of multiple spatio-temporal scales on the error growth and predictability of atmospheric flows is analyzed throughout the paper. To this aim, we consider the two-scale Lorenz'96 model and study the interplay of the slow and fast variables on the error growth dynamics. It is shown that when the coupling between slow and fast variables is weak the slow variables dominate the evolution of fluctuations whereas in the case of strong coupling the fast variables impose a non-trivial complex error growth pattern on the slow variables with two different regimes, before and after saturation of fast variables. This complex behavior is analyzed using the recently introduced Mean-Variance Logarithmic (MVL) diagram.
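
    For readers wanting to experiment, the two-scale Lorenz'96 system has a compact standard form (the equations are the usual textbook ones; the parameter values below are common defaults, not necessarily those used in the paper):

        # Two-scale Lorenz'96: K slow variables X, each coupled to J fast variables Y.
        # dX_k/dt = X_{k-1}(X_{k+1} - X_{k-2}) - X_k + F - (h*c/b) * sum_j Y_{j,k}
        # dY_j/dt = c*b*Y_{j+1}(Y_{j-1} - Y_{j+2}) - c*Y_j + (h*c/b) * X_{k(j)}
        import numpy as np

        K, J = 36, 10
        F, h, c, b = 8.0, 1.0, 10.0, 10.0   # common textbook values (assumption)

        def l96_two_scale(t, state):
            x, y = state[:K], state[K:].reshape(K, J)
            dx = (np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2))
                  - x + F - (h * c / b) * y.sum(axis=1))
            yf = state[K:]                  # fast variables, cyclic over all K*J
            dy = (c * b * np.roll(yf, -1) * (np.roll(yf, 1) - np.roll(yf, -2))
                  - c * yf + (h * c / b) * np.repeat(x, J))
            return np.concatenate([dx, dy])

        # Integrate with scipy if desired:
        # from scipy.integrate import solve_ivp
        # sol = solve_ivp(l96_two_scale, (0, 10), np.random.randn(K + K * J))

    Weak versus strong coupling, in this notation, corresponds to varying the coupling constant h.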

  8. Error-correcting codes on scale-free networks

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Hoon; Ko, Young-Jo

    2004-06-01

    We investigate the potential of scale-free networks as error-correcting codes. We find that irregular low-density parity-check codes with the highest performance known to date have degree distributions well fitted by a power-law function p(k) ∼ k^−γ with γ close to 2, which suggests that codes built on scale-free networks with appropriate power exponents can be good error-correcting codes, with a performance possibly approaching the Shannon limit. We demonstrate for an erasure channel that codes with a power-law degree distribution of the form p(k) = C(k+α)^−γ, with k ⩾ 2 and suitable selection of the parameters α and γ, indeed have very good error-correction capabilities.
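
    A quick sketch of drawing a degree sequence from the truncated power law p(k) = C(k+α)^−γ with k ⩾ 2 (illustration only; constructing the actual parity-check graph from the degrees is a separate step, and the default parameter values are assumptions):

        # Sample node degrees from p(k) = C (k + alpha)^(-gamma), k >= 2,
        # as suggested for scale-free LDPC code constructions.
        import numpy as np

        def sample_degrees(n, alpha=0.5, gamma=2.1, k_max=1000, seed=None):
            rng = np.random.default_rng(seed)
            k = np.arange(2, k_max + 1)
            p = (k + alpha) ** (-gamma)
            p /= p.sum()                   # normalization constant C
            return rng.choice(k, size=n, p=p)

        degrees = sample_degrees(10_000)
        print(degrees.mean(), degrees.max())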

  9. Conditional Standard Errors, Reliability and Decision Consistency of Performance Levels Using Polytomous IRT.

    ERIC Educational Resources Information Center

    Wang, Tianyou; And Others

    M. J. Kolen, B. A. Hanson, and R. L. Brennan (1992) presented a procedure for assessing the conditional standard error of measurement (CSEM) of scale scores using a strong true-score model. They also investigated the ways of using nonlinear transformation from number-correct raw score to scale score to equalize the conditional standard error along…

  10. Relations between Response Trajectories on the Continuous Performance Test and Teacher-Rated Problem Behaviors in Preschoolers

    PubMed Central

    Allan, Darcey M.; Lonigan, Christopher J.

    2014-01-01

    Although both the Continuous Performance Test (CPT) and behavior rating scales are used in both practice and research to assess inattentive and hyperactive/impulsive behaviors, the correlations between performance on the CPT and teachers' ratings are typically only small-to-moderate. This study examined trajectories of performance on a low target-frequency visual CPT in a sample of preschool children and how these trajectories were associated with teacher-ratings of problem behaviors (i.e., inattention, hyperactivity/impulsivity [H/I], and oppositional/defiant behavior). Participants included 399 preschool children (Mean age = 56 months; 49.4% female; 73.7% White/Caucasian). An ADHD-rating scale was completed by teachers, and the CPT was completed by the preschoolers. Results showed that children's performance across four temporal blocks on the CPT was not stable across the duration of the task, with error rates generally increasing from initial to later blocks. The predictive relations of teacher-rated problem behaviors to performance trajectories on the CPT were examined using growth curve models. Higher rates of teacher-reported inattention and H/I were uniquely associated with higher rates of initial omission errors and initial commission errors, respectively. Higher rates of teacher-reported overall problem behaviors were associated with increasing rates of omission but not commission errors during the CPT; however, the relation was not specific to one type of problem behavior. The results of this study indicate that the pattern of errors on the CPT in preschool samples is complex and may be determined by multiple behavioral factors. These findings have implications for the interpretation of CPT performance in young children. PMID:25419645

  11. Peak fitting and integration uncertainties for the Aerodyne Aerosol Mass Spectrometer

    NASA Astrophysics Data System (ADS)

    Corbin, J. C.; Othman, A.; Haskins, J. D.; Allan, J. D.; Sierau, B.; Worsnop, D. R.; Lohmann, U.; Mensah, A. A.

    2015-04-01

    The errors inherent in the fitting and integration of the pseudo-Gaussian ion peaks in Aerodyne High-Resolution Aerosol Mass Spectrometers (HR-AMS's) have not been previously addressed as a source of imprecision for these instruments. This manuscript evaluates the significance of these uncertainties and proposes a method for their estimation in routine data analysis. Peak-fitting uncertainties, the most complex source of integration uncertainties, are found to be dominated by errors in m/z calibration. These calibration errors comprise significant amounts of both imprecision and bias, and vary in magnitude from ion to ion. The magnitude of these m/z calibration errors is estimated for an exemplary data set, and used to construct a Monte Carlo model which reproduced well the observed trends in fits to the real data. The empirically-constrained model is used to show that the imprecision in the fitted height of isolated peaks scales linearly with the peak height (i.e., as n^1), thus contributing a constant-relative-imprecision term to the overall uncertainty. This constant relative imprecision term dominates the Poisson counting imprecision term (which scales as n^0.5) at high signals. The previous HR-AMS uncertainty model therefore underestimates the overall fitting imprecision. The constant relative imprecision in fitted peak height for isolated peaks in the exemplary data set was estimated as ~4% and the overall peak-integration imprecision was approximately 5%. We illustrate the importance of this constant relative imprecision term by performing Positive Matrix Factorization (PMF) on a synthetic HR-AMS data set with and without its inclusion. Finally, the ability of an empirically-constrained Monte Carlo approach to estimate the fitting imprecision for an arbitrary number of known overlapping peaks is demonstrated. Software is available upon request to estimate these error terms in new data sets.
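
    The scaling argument above can be made concrete: if the Poisson counting term grows as n^0.5 and the fitting term as n^1, the combined relative imprecision approaches a constant at high signal. A minimal numeric sketch, taking the ~4% relative fitting imprecision quoted for the exemplary data set and assuming the two terms add in quadrature:

    ```python
    import numpy as np

    alpha = 0.04  # constant relative fitting imprecision (~4%, from the exemplary data set)

    def peak_sigma(n):
        """1-sigma imprecision of a fitted peak of n ions: Poisson term sqrt(n)
        plus constant-relative fitting term alpha*n, combined in quadrature
        (independence of the two terms is an assumption of this sketch)."""
        return np.sqrt(n + (alpha * n) ** 2)

    for n in (1e2, 1e4, 1e6):
        s = peak_sigma(n)
        print(f"n={n:9.0f}  poisson={np.sqrt(n):8.1f}  fitting={alpha * n:8.1f}  "
              f"total={s:8.1f}  relative={s / n:.2%}")
    ```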

  12. Relations between response trajectories on the continuous performance test and teacher-rated problem behaviors in preschoolers.

    PubMed

    Allan, Darcey M; Lonigan, Christopher J

    2015-06-01

    Although both the continuous performance test (CPT) and behavior rating scales are used in both practice and research to assess inattentive and hyperactive/impulsive behaviors, the correlations between performance on the CPT and teachers' ratings are typically only small-to-moderate. This study examined trajectories of performance on a low target-frequency visual CPT in a sample of preschool children and how these trajectories were associated with teacher-ratings of problem behaviors (i.e., inattention, hyperactivity/impulsivity [H/I], and oppositional/defiant behavior). Participants included 399 preschool children (mean age = 56 months; 49.4% female; 73.7% White/Caucasian). An attention deficit/hyperactivity disorder (ADHD) rating scale was completed by teachers, and the CPT was completed by the preschoolers. Results showed that children's performance across 4 temporal blocks on the CPT was not stable across the duration of the task, with error rates generally increasing from initial to later blocks. The predictive relations of teacher-rated problem behaviors to performance trajectories on the CPT were examined using growth curve models. Higher rates of teacher-reported inattention and H/I were uniquely associated with higher rates of initial omission errors and initial commission errors, respectively. Higher rates of teacher-reported overall problem behaviors were associated with increasing rates of omission but not commission errors during the CPT; however, the relation was not specific to 1 type of problem behavior. The results of this study indicate that the pattern of errors on the CPT in preschool samples is complex and may be determined by multiple behavioral factors. These findings have implications for the interpretation of CPT performance in young children. (c) 2015 APA, all rights reserved.

  13. Trapped Proton Environment in Medium-Earth Orbit (2000-2010)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yue; Friedel, Reinhard Hans; Kippen, Richard Marc

    This report describes the method used to derive fluxes of the trapped proton belt along the GPS orbit (i.e., a Medium-Earth Orbit) during 2000–2010, a period almost covering a solar cycle. This method utilizes a newly developed empirical proton radiation-belt model, with the model output scaled by GPS in-situ measurements, to generate proton fluxes that cover a wide range of energies (50 keV to 6 MeV) and preserve temporal features as well. The new proton radiation-belt model is developed based upon CEPPAD proton measurements from the Polar mission (1996–2007). Compared to the de facto standard empirical model AP8, this model is not only based upon a new data set representative of the proton belt during the same period covered by GPS, but can also provide statistical information on flux values, such as worst cases and occurrence percentiles, instead of solely the mean values. The comparison shows quite different results from the two models and suggests that the commonly accepted error factor of 2 on the AP8 flux output over-simplifies and thus underestimates variations of the proton belt. Output fluxes from this new model along the GPS orbit are further scaled by the ns41 in-situ data so as to reflect the dynamic nature of protons in the outer radiation belt at geomagnetically active times. Derived daily proton fluxes along the GPS ns41 orbit, whose data files are delivered along with this report, are depicted to illustrate the trapped proton environment in the Medium-Earth Orbit. Uncertainties on those daily proton fluxes from two sources are evaluated: one is from the new proton-belt model, which has error factors < ~3; the other is from the in-situ measurements, for which the error factors could be ~5.

  14. Factorial invariance of child self-report across healthy and chronic health condition groups: a confirmatory factor analysis utilizing the PedsQL™ 4.0 Generic Core Scales.

    PubMed

    Limbers, Christine A; Newman, Daniel A; Varni, James W

    2008-07-01

    The objective of the present study was to examine the factorial invariance of the PedsQL 4.0 Generic Core Scales for child self-report across 11,433 children ages 5-18 with chronic health conditions and healthy children. Multigroup Confirmatory Factor Analysis was performed specifying a five-factor model. Two multigroup structural equation models, one with constrained parameters and the other with unconstrained parameters, were proposed in order to compare the factor loadings across children with chronic health conditions and healthy children. Metric invariance (i.e., equal factor loadings) was demonstrated based on stability of the Comparative Fit Index (CFI) between the two models, and several additional indices of practical fit including the root mean squared error of approximation, the Non-normed Fit Index, and the Parsimony Normed Fit Index. The findings support an equivalent five-factor structure on the PedsQL 4.0 Generic Core Scales across healthy and chronic health condition groups. These findings suggest that when differences are found across chronic health condition and healthy groups when utilizing the PedsQL, these differences are more likely real differences in self-perceived health-related quality of life, rather than differences in interpretation of the PedsQL items as a function of health status.

  15. Frequency of pediatric medication administration errors and contributing factors.

    PubMed

    Ozkan, Suzan; Kocaman, Gulseren; Ozturk, Candan; Seren, Seyda

    2011-01-01

    This study examined the frequency of pediatric medication administration errors and contributing factors. This research used the undisguised observation method and Critical Incident Technique. Errors and contributing factors were classified through the Organizational Accident Model. Errors were made in 36.5% of the 2344 doses that were observed. The most frequent errors were those associated with administration at the wrong time. According to the results of this study, errors arise from problems within the system.

  16. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    DOE PAGES

    Locatelli, R.; Bousquet, P.; Chevallier, F.; ...

    2013-10-08

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In this framework, we show that transport model errors lead to a discrepancy of 27 Tg yr⁻¹ at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr⁻¹ in North America to 7 Tg yr⁻¹ in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations seriously question the consistency of transport model errors in current inverse systems.

  17. Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval

    ERIC Educational Resources Information Center

    Beauducel, Andre

    2013-01-01

    The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…

  18. Measuring Scale Errors in a Laser Tracker’s Horizontal Angle Encoder Through Simple Length Measurement and Two-Face System Tests

    PubMed Central

    Muralikrishnan, B.; Blackburn, C.; Sawyer, D.; Phillips, S.; Bridges, R.

    2010-01-01

    In this paper, we describe a method to estimate the scale errors in the horizontal angle encoder of a laser tracker. The method does not require expensive instrumentation such as a rotary stage or even a calibrated artifact. An uncalibrated but stable length is realized between two targets mounted on stands that are at tracker height. The tracker measures the distance between these two targets from different azimuthal positions (say, in intervals of 20° over 360°). Each target is measured in both front face and back face. Low-order harmonic scale errors can be estimated from this data and may then be used to correct the encoder's error map to improve the tracker's angle measurement accuracy. We demonstrate this here for the second-order harmonic. It is important to compensate for even-order harmonics, as their influence cannot be removed by averaging front face and back face measurements, whereas odd orders can be removed by averaging. We tested six trackers from three different manufacturers. Two of those trackers are newer models introduced at the time of writing of this paper. For older trackers from two manufacturers, the length errors in a 7.75 m horizontal length placed 7 m away from a tracker were of the order of ±65 μm before correcting the error map. They reduced to less than ±25 μm after correcting the error map for second-order scale errors. Newer trackers from the same manufacturers did not show this error. An older tracker from a third manufacturer also did not show this error. PMID:27134789
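
    The correction step described above amounts to fitting low-order Fourier terms to the azimuth-dependent length errors. A minimal sketch with synthetic data (amplitudes and noise level are illustrative) that recovers the second-order harmonic by linear least squares:

    ```python
    import numpy as np

    # Synthetic length errors (micrometres) at azimuth positions every 20 degrees,
    # dominated by a second-order harmonic of the horizontal-angle encoder error.
    theta = np.deg2rad(np.arange(0, 360, 20))
    rng = np.random.default_rng(1)
    true_a2, true_b2 = 50.0, -30.0  # illustrative amplitudes (micrometres)
    err = (true_a2 * np.cos(2 * theta) + true_b2 * np.sin(2 * theta)
           + 5.0 * rng.normal(size=theta.size))

    # Fit e(theta) = a2*cos(2*theta) + b2*sin(2*theta) by linear least squares.
    A = np.column_stack([np.cos(2 * theta), np.sin(2 * theta)])
    (a2, b2), *_ = np.linalg.lstsq(A, err, rcond=None)
    print(f"fitted a2 = {a2:.1f} um, b2 = {b2:.1f} um")

    # Residual after removing the fitted second harmonic from the error map.
    print("peak residual:", np.abs(err - A @ np.array([a2, b2])).max().round(1), "um")
    ```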

  19. Structure and dating errors in the geologic time scale and periodicity in mass extinctions

    NASA Technical Reports Server (NTRS)

    Stothers, Richard B.

    1989-01-01

    Structure in the geologic time scale reflects a partly paleontological origin. As a result, ages of Cenozoic and Mesozoic stage boundaries exhibit a weak 28-Myr periodicity that is similar to the strong 26-Myr periodicity detected in mass extinctions of marine life by Raup and Sepkoski. Radiometric dating errors in the geologic time scale, to which the mass extinctions are stratigraphically tied, do not necessarily lessen the likelihood of a significant periodicity in mass extinctions, but do spread the acceptable values of the period over the range 25-27 Myr for the Harland et al. time scale or 25-30 Myr for the DNAG time scale. If the Odin time scale is adopted, acceptable periods fall between 24 and 33 Myr, but are not robust against dating errors. Some indirect evidence from independently dated flood-basalt volcanic horizons tends to favor the Odin time scale.

  20. Novel conformal technique to reduce staircasing artifacts at material boundaries for FDTD modeling of the bioheat equation.

    PubMed

    Neufeld, E; Chavannes, N; Samaras, T; Kuster, N

    2007-08-07

    The modeling of thermal effects, often based on the Pennes Bioheat Equation, is becoming increasingly popular. The FDTD technique commonly used in this context suffers considerably from staircasing errors at boundaries. A new conformal technique is proposed that can easily be integrated into existing implementations without requiring a special update scheme. It scales fluxes at interfaces with factors derived from the local surface normal. The new scheme is validated using an analytical solution, and an error analysis is performed to understand its behavior. The new scheme behaves considerably better than the standard scheme. Furthermore, in contrast to the standard scheme, it is possible to obtain with it more accurate solutions by increasing the grid resolution.
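
    For reference, the Pennes bioheat equation mentioned above is usually written as below, with tissue density ρ, heat capacity c, thermal conductivity k, blood perfusion rate ω_b (blood properties ρ_b, c_b, arterial temperature T_a), and metabolic heat generation Q_m; it is the diffusive flux term whose discretization suffers from staircasing at material boundaries.

    ```latex
    \rho c \frac{\partial T}{\partial t}
      = \nabla \cdot \left( k \nabla T \right)
      + \rho_b c_b\, \omega_b \left( T_a - T \right)
      + Q_m
    ```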

  1. Development of the Psychiatric Nurse Job Stressor Scale (PNJSS).

    PubMed

    Yada, Hironori; Abe, Hiroshi; Funakoshi, Yayoi; Omori, Hisamitsu; Matsuo, Hisae; Ishida, Yasushi; Katoh, Takahiko

    2011-10-01

    The aim of the present study was to develop a tool, the Psychiatric Nurse Job Stressor Scale (PNJSS), for measuring the stress of psychiatric nurses, and to evaluate the reliability and validity of the PNJSS. A total of 302 psychiatric nurses completed all the questions in an early version of the PNJSS, which was composed of 63 items based on the past literature on psychiatric nurses' stress. A total of 22 items from four factors, 'Psychiatric Nursing Ability', 'Attitude of Patients', 'Attitude Toward Nursing' and 'Communication', were extracted in exploratory factor analysis. With regard to scale reliability, the item-scale correlation coefficient was r = 0.265-0.570 (P < 0.01), the Cronbach alpha coefficient was 0.675-0.869, and the test-retest correlation coefficient was r = 0.439-0.771 (P < 0.01). With regard to scale validity, the convergent validity with the 'job stressor' scale was r = 0.172-0.420 (P < 0.01), and the predictive validity with the 'job reaction' scale was r = 0.201-0.453 (P < 0.01). The fit of the factor model to the data was χ²/d.f. = 1.750 (343.189/196, P < 0.01), the goodness of fit index was 0.910, the adjusted goodness of fit index was 0.883, the comparative fit index was 0.924, and the root mean square error of approximation was 0.050. The PNJSS has sufficient reliability and validity as a four-factor structure containing 22 items, and is valid as a tool for evaluating psychiatric nurse job stressors. © 2011 The Authors. Psychiatry and Clinical Neurosciences © 2011 Japanese Society of Psychiatry and Neurology.

  2. Psychometric Properties and Correlates of the Beck Hopelessness Scale in Family Caregivers of Nigerian Patients with Psychiatric Disorders in Southwestern Nigeria

    PubMed Central

    Aloba, Olutayo; Ajao, Olayinka; Alimi, Taiwo; Esan, Olufemi

    2016-01-01

    Objectives: To examine the construct and correlates of hopelessness among family caregivers of Nigerian psychiatric patients. Materials and Methods: This is a cross-sectional, descriptive study involving 264 family caregiver–patient dyads recruited from the psychiatric clinics of two university teaching hospitals in Southwestern Nigeria. Results: Exploratory factor analysis revealed a two-factor 9-item model of the Beck Hopelessness Scale (BHS) among the family caregivers. Confirmatory factor analysis of the model revealed satisfactory indices of fitness (goodness of fit index = 0.97, comparative fit index = 0.96, Chi-square/degree of freedom (CMIN/DF) = 1.60, root mean square error of approximation = 0.048, expected cross-validation index = 0.307, and standardized root mean residual = 0.005). Reliability of the scale was modestly satisfactory (Cronbach's alpha 0.72). Construct validity of the scale was supported by significant correlations with the family caregivers' scores on the Zarit Burden Interview, the Mini International Neuropsychiatric Interview suicidality module, the General Health Questionnaire-12 (GHQ-12), and the Patient Health Questionnaire-9. The greatest variance in the family caregivers' scores on the BHS was contributed by their scores on the psychological distress scale (GHQ-12). Conclusions: The BHS has adequate psychometric properties among Nigerian psychiatric patients' family caregivers. There is a need to pay attention to the psychological well-being of the family caregivers of Nigerian psychiatric patients. PMID:28163498

  3. The Role of Moist Processes in the Intrinsic Predictability of Indian Ocean Cyclones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taraphdar, Sourav; Mukhopadhyay, P.; Leung, Lai-Yung R.

    The role of moist processes and the possibility of error cascade from cloud scale processes affecting the intrinsic predictable time scale of a high resolution convection permitting model within the environment of tropical cyclones (TCs) over the Indian region are investigated. Consistent with past studies of extra-tropical cyclones, it is demonstrated that moist processes play a major role in forecast error growth, which may ultimately limit the intrinsic predictability of the TCs. Small errors in the initial conditions may grow rapidly and cascade from smaller scales to larger scales through strong diabatic heating and nonlinearities associated with moist convection. Results from a suite of twin perturbation experiments for four tropical cyclones suggest that the error growth is significantly higher in convection permitting simulations at 3.3 km resolution compared to simulations at 3.3 km and 10 km resolution with parameterized convection. Convective parameterizations with prescribed convective time scales typically longer than the model time step allow the effects of microphysical tendencies to average out, so convection responds to a smoother dynamical forcing. Without convective parameterizations, the finer-scale instabilities resolved at 3.3 km resolution and the stronger vertical motion that results from the cloud microphysical parameterizations removing super-saturation at each model time step can ultimately feed the error growth in convection permitting simulations. This implies that careful considerations and/or improvements in cloud parameterizations are needed if numerical predictions are to be improved through increased model resolution. Rapid upscale error growth from convective scales may ultimately limit the intrinsic mesoscale predictability of the TCs, which further supports the need for probabilistic forecasts of these events, even at the mesoscales.

  4. A Measure of Barriers Toward Medical Disclosure Among Health Professionals in the United Arab Emirates.

    PubMed

    Zaghloul, Ashraf Ahmad; Elsergany, Moetaz; Mosallam, Rasha

    2018-03-01

    There has been a growing awareness that patients are subject to injuries that can be prevented as a direct consequence of health care. Error disclosure is an effective technique to restore lost trust in the health care system. The current study aimed to develop a valid and reliable scale to determine the factors facilitating disclosure by health professionals in health organizations. This study had a cross-sectional design that consisted of 722 responses (response rate of 68.3%) from 1 private and 1 public hospital in Sharjah, United Arab Emirates. The data collection tool included 23 items rated on a Likert scale ranging from 5, strongly agree, to 1, strongly disagree. The internal consistency was established through calculating the split-half reliability for part 1 (12 items), which had a Cronbach coefficient of 0.65, and part 2 (11 items), which had a Cronbach coefficient of 0.62. Scale validity was assessed with the Kaiser-Meyer-Olkin measure of sampling adequacy, which had a value of 0.62, and the Bartlett test of sphericity (approximate χ² = 13012.2, P = 0.0001) supported the factorability of the correlation matrix. The varimax rotation revealed 5 components that explained 77.8% of the total variance, with 21 items loading on the following 5 factors: fear of disclosure and provider image consequences (factor 1), apology (factor 2), organizational culture toward patient safety (factor 3), professional ethics and transparency (factor 4), as well as patient and provider education (factor 5). The disclosure of medical mistakes requires preliminary considerations to effectively and compassionately disclose these events to patients. The validity and reliability of the results support the use of this scale at hospitals as part of the health care providers' disclosure processes.

  5. MICRO-SCALE CFD MODELING OF OSCILLATING FLOW IN A REGENERATOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheadle, M. J.; Nellis, G. F.; Klein, S. A.

    2010-04-09

    Regenerator models used by designers are macro-scale models that do not explicitly consider interactions between the fluid and the solid matrix. Rather, the heat transfer coefficient and pressure drop are calculated using correlations for Nusselt number and friction factor. These correlations are typically based on steady flow data. The error associated with using steady flow correlations to characterize the oscillatory flow that is actually present in the regenerator is not well understood. Oscillating flow correlations based on experimental data do exist in the literature; however, these results are often conflicting. This paper uses a micro-scale computational fluid dynamic (CFD) model of a unit-cell of a regenerator matrix to determine the conditions for which oscillating flow affects friction factor. These conditions are compared to those found in typical pulse tube regenerators to determine whether oscillatory flow is of practical importance. CFD results clearly show a transition Valensi number beyond which oscillating flow significantly increases the friction factor. This transition Valensi number increases with Reynolds number. Most practical pulse tube regenerators will operate below this Valensi transition number and therefore this study suggests that the effect of flow oscillation on pressure drop can be neglected in macro-scale regenerator models.

  6. Factor Rotation and Standard Errors in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.

    2015-01-01

    In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…

  7. A Log-Scaling Fault Tolerant Agreement Algorithm for a Fault Tolerant MPI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hursey, Joshua J; Naughton, III, Thomas J; Vallee, Geoffroy R

    The lack of fault tolerance is becoming a limiting factor for application scalability in HPC systems. The MPI does not provide standardized fault tolerance interfaces and semantics. The MPI Forum's Fault Tolerance Working Group is proposing a collective fault tolerant agreement algorithm for the next MPI standard. Such algorithms play a central role in many fault tolerant applications. This paper combines a log-scaling two-phase commit agreement algorithm with a reduction operation to provide the necessary functionality for the new collective without any additional messages. Error handling mechanisms are described that preserve the fault tolerance properties while maintaining overall scalability.
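
    To illustrate the flavor of such an algorithm (not the authors' exact protocol), the toy sketch below simulates agreement as a logical-AND reduction up a binomial tree followed by a broadcast back down, which is the O(log p) message pattern that motivates the approach; process-failure handling, the heart of the actual algorithm, is omitted.

    ```python
    # Toy single-process simulation of tree-based agreement on a commit flag.
    # Each rank votes True/False; the agreed value is the AND of all votes,
    # decided in O(log p) rounds up a binomial tree, then broadcast down.

    def tree_agree(votes):
        p = len(votes)
        vals = list(votes)
        step = 1
        while step < p:                  # reduce phase: child rank r+step sends to parent r
            for r in range(0, p, 2 * step):
                if r + step < p:
                    vals[r] = vals[r] and vals[r + step]
            step *= 2
        return [vals[0]] * p             # broadcast phase: everyone adopts the root's value

    print(tree_agree([True, True, True, True]))   # -> all True (commit)
    print(tree_agree([True, False, True, True]))  # -> all False (abort)
    ```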

  8. Perceptions and Efficacy of Flight Operational Quality Assurance (FOQA) Programs Among Small-scale Operators

    DTIC Science & Technology

    2012-01-01

    ...regressive Integrated Moving Average (ARIMA) model for the data, eliminating the need to identify an appropriate model through trial and error alone... [table of test statistics omitted; based on the asymptotic chi-square approximation] ...In general, ARIMA models address three... performance standards and measurement processes and a prevailing climate of organizational trust were important factors. Unfortunately, uneven...

  9. Signals of Opportunity Navigation Using Wi-Fi Signals

    DTIC Science & Technology

    2011-03-24

    [front-matter acronym list residue: MVM = Mean Value Method; SDM = Scaled Differential Method] ...the mean value (MVM) and scaled differential (SDM) methods. An error was logged if the UI correlation algorithm identified a packet index that did... Notable from this graph is that a window of 50 packets appears to provide zero errors for MVM and near zero errors for SDM. Also notable is that a...

  10. The artificial pancreas: evaluating risk of hypoglycaemia following errors that can be expected with prolonged at-home use.

    PubMed

    Wolpert, H; Kavanagh, M; Atakov-Castillo, A; Steil, G M

    2016-02-01

    Artificial pancreas systems show benefit in closely monitored at-home studies, but these studies may not have sufficient power to assess safety during infrequent, but expected, system or user errors. The aim of this study was to assess the safety of an artificial pancreas system emulating the β-cell when the glucose value used for control is improperly calibrated and participants forget to administer pre-meal insulin boluses. Artificial pancreas control was performed in a clinical research centre on three separate occasions, each lasting from 10 p.m. to 2 p.m. Sensor glucose values normally used for artificial pancreas control were replaced with scaled blood glucose values calculated to be 20% lower than, equal to or 33% higher than the true blood glucose. Safe control was defined as blood glucose between 3.9 and 8.3 mmol/l. Artificial pancreas control resulted in fasting scaled blood glucose values not different from target (6.67 mmol/l) at any scaling factor. Meal control with scaled blood glucose 33% higher than blood glucose resulted in supplemental carbohydrate to prevent hypoglycaemia in four of six participants during breakfast, and one participant during the night. In all instances, scaled blood glucose reported blood glucose as safe. Outpatient trials evaluating artificial pancreas performance based on sensor glucose may not detect hypoglycaemia when sensor glucose reads higher than blood glucose. Because these errors are expected to occur, in-hospital artificial pancreas studies using supplemental carbohydrate in anticipation of hypoglycaemia, which allow safety to be assessed in a controlled environment, should be considered as an alternative. Inpatient studies provide a definitive alternative to model-based computer simulations and can be conducted in parallel with closely monitored outpatient artificial pancreas studies used to assess benefit. © 2015 The Authors. Diabetic Medicine published by John Wiley & Sons Ltd on behalf of Diabetes UK.

  11. Colorimetric characterization of digital cameras with unrestricted capture settings applicable for different illumination circumstances

    NASA Astrophysics Data System (ADS)

    Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin

    2016-05-01

    With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. In order to overcome the restriction of fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method was proposed that takes capture settings into account. The method for calculating colorimetric values of a captured image involves five main steps, including conversion of RGB values to their equivalents under the training settings, using factors derived from an imaging-system model to bridge different settings, and scaling factors applied in the preparation steps for the transformation mapping, which avoid errors resulting from the nonlinearity of polynomial mapping across different ranges of illumination levels. The experimental results indicate that the prediction error of the proposed method, measured by the CIELAB color difference formula, is less than 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy for different capture settings remains at the same level as that of the conventional method for a particular lighting condition.
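
    A generic version of the core mapping step (a sketch of the overall idea, not necessarily the authors' exact formulation): scale camera RGB by an exposure factor to the training condition, then apply a least-squares polynomial mapping to tristimulus values. The function names, the second-order polynomial basis, and the simple multiplicative exposure model are illustrative assumptions.

    ```python
    import numpy as np

    def expand_poly(rgb):
        """Second-order polynomial expansion of linear RGB (one common choice)."""
        r, g, b = rgb.T
        ones = np.ones_like(r)
        return np.column_stack([ones, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b])

    def fit_mapping(rgb_train, xyz_train):
        """Least-squares fit of the polynomial RGB -> XYZ mapping matrix."""
        M, *_ = np.linalg.lstsq(expand_poly(rgb_train), xyz_train, rcond=None)
        return M

    def predict_xyz(rgb, M, exposure_scale=1.0):
        """Scale RGB to the training exposure, then map to XYZ."""
        return expand_poly(rgb * exposure_scale) @ M

    # Tiny self-consistency demo with synthetic training data (illustrative only).
    rng = np.random.default_rng(2)
    rgb = rng.uniform(0.05, 1.0, size=(50, 3))
    xyz = expand_poly(rgb) @ rng.uniform(-0.2, 1.0, size=(10, 3))
    M = fit_mapping(rgb, xyz)
    print("max abs fit error:", np.abs(predict_xyz(rgb, M) - xyz).max())
    ```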

  12. Characterization of a laboratory model of computer mouse use - applications for studying risk factors for musculoskeletal disorders.

    PubMed

    Flodgren, G; Heiden, M; Lyskov, E; Crenshaw, A G

    2007-03-01

    In the present study, we assessed the wrist kinetics (range of motion, mean position, velocity and mean power frequency in radial/ulnar deviation, flexion/extension, and pronation/supination) associated with performing a mouse-operated computerized task involving painting rectangles on a computer screen. Furthermore, we evaluated the effects of the painting task on subjective perception of fatigue and wrist position sense. The results showed that the painting task required constrained wrist movements, and repetitive movements of about the same magnitude as those performed in mouse-operated design tasks. In addition, the painting task induced a perception of muscle fatigue in the upper extremity (Borg CR-scale: 3.5, p < 0.001) and caused a reduction in the position sense accuracy of the wrist (error before: 4.6°, error after: 5.6°, p < 0.05). This standardized painting task appears suitable for studying relevant risk factors, and therefore it offers a potential for investigating the pathophysiological mechanisms behind musculoskeletal disorders related to computer mouse use.

  13. Entanglement renormalization, quantum error correction, and bulk causality

    NASA Astrophysics Data System (ADS)

    Kim, Isaac H.; Kastoryano, Michael J.

    2017-04-01

    Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively more well-protected against erasure errors at larger length scales. In particular, an approximate variant of holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.

  14. Evaluating a medical error taxonomy.

    PubMed

    Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of the focus on the medical device and the format of reporting.

  15. [Comparison of three stand-level biomass estimation methods].

    PubMed

    Dong, Li Hu; Li, Feng Ri

    2016-12-01

    At present, methods for estimating forest biomass at regional scale attract much research attention, and developing stand-level biomass models is a popular approach. Based on forestry inventory data for larch (Larix olgensis) plantations in Jilin Province, we used non-linear seemingly unrelated regression (NSUR) to estimate the parameters in two additive systems of stand-level biomass equations, i.e., stand-level biomass equations including stand variables and stand biomass equations including the biomass expansion factor (Model system 1 and Model system 2), listed the constant biomass expansion factor for larch plantations, and compared the prediction accuracy of three stand-level biomass estimation methods. The results indicated that for the two additive systems of biomass equations, the adjusted coefficient of determination (Ra²) of the total and stem equations was more than 0.95, and the root mean squared error (RMSE), the mean prediction error (MPE) and the mean absolute error (MAE) were small. The branch and foliage biomass equations performed worse than the total and stem biomass equations, with an adjusted coefficient of determination (Ra²) less than 0.95. The prediction accuracy of a constant biomass expansion factor was lower than that of Model system 1 and Model system 2. Overall, although the stand-level biomass equation including the biomass expansion factor belongs to the volume-derived biomass estimation approach and differs in essence from the stand biomass equations including stand variables, the prediction accuracy obtained by the two methods was similar. The constant biomass expansion factor had the lowest prediction accuracy and is inappropriate. In addition, to make model parameter estimation more effective, the established stand-level biomass equations should consider additivity in a system of all tree component biomass and total biomass equations.
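
    NSUR proper also estimates the cross-equation error covariance; as a simplified stand-in, the sketch below fits the component equations jointly with total biomass defined as their sum, which enforces additivity by construction (but, unlike NSUR, assumes an identity error covariance). The stand variables, power-function form, and parameter values are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(3)
    ba = rng.uniform(5, 50, 80)      # stand basal area, m^2/ha (illustrative)
    h = rng.uniform(6, 24, 80)       # mean height, m (illustrative)

    def power(p, ba, h):             # component biomass model: a * BA^b1 * H^b2
        return p[0] * ba ** p[1] * h ** p[2]

    true = {"stem": [0.08, 1.0, 1.1], "branch": [0.03, 0.9, 0.6], "foliage": [0.02, 0.8, 0.3]}
    obs = {k: power(p, ba, h) * rng.lognormal(0, 0.05, 80) for k, p in true.items()}
    obs["total"] = sum(obs[k] for k in true)

    def residuals(theta):
        stem = power(theta[0:3], ba, h)
        branch = power(theta[3:6], ba, h)
        foliage = power(theta[6:9], ba, h)
        total = stem + branch + foliage      # additivity enforced by construction
        return np.concatenate([stem - obs["stem"], branch - obs["branch"],
                               foliage - obs["foliage"], total - obs["total"]])

    fit = least_squares(residuals, x0=[0.05, 1, 1, 0.05, 1, 1, 0.05, 1, 1])
    print("estimated stem parameters:", np.round(fit.x[0:3], 3))
    ```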

  16. Risk Factors for Increased Severity of Paediatric Medication Administration Errors

    PubMed Central

    Sears, Kim; Goodman, William M.

    2012-01-01

    Patients' risks from medication errors are widely acknowledged. Yet not all errors, if they occur, have the same risks for severe consequences. Facing resource constraints, policy makers could prioritize factors having the greatest severe-outcome risks. This study assists such prioritization by identifying work-related risk factors most clearly associated with more severe consequences. Data from three Canadian paediatric centres were collected, without identifiers, on actual or potential errors that occurred. Three hundred seventy-two errors were reported, with outcome severities ranging from time delays up to fatalities. Four factors correlated significantly with increased risk for more severe outcomes: insufficient training; overtime; precepting a student; and off-service patient. Factors' impacts on severity also vary with error class: for wrong-time errors, the factors precepting a student or working overtime significantly increase severe-outcome risk. For other types, caring for an off-service patient has the greatest severity risk. To expand such research, better standardization is needed for categorizing outcome severities. PMID:23968607

  17. A propensity score approach to correction for bias due to population stratification using genetic and non-genetic factors.

    PubMed

    Zhao, Huaqing; Rebbeck, Timothy R; Mitra, Nandita

    2009-12-01

    Confounding due to population stratification (PS) arises when differences in both allele and disease frequencies exist in a population of mixed racial/ethnic subpopulations. Genomic control, structured association, principal components analysis (PCA), and multidimensional scaling (MDS) approaches have been proposed to address this bias using genetic markers. However, confounding due to PS can also be due to non-genetic factors. Propensity scores are widely used to address confounding in observational studies but have not been adapted to deal with PS in genetic association studies. We propose a genomic propensity score (GPS) approach to correct for bias due to PS that considers both genetic and non-genetic factors. We compare the GPS method with PCA and MDS using simulation studies. Our results show that GPS can adequately adjust and consistently correct for bias due to PS. Under no/mild, moderate, and severe PS, GPS yielded estimates with bias close to 0 (mean = -0.0044, standard error = 0.0087). Under moderate or severe PS, the GPS method consistently outperforms the PCA method in terms of bias, coverage probability (CP), and type I error. Under moderate PS, the GPS method consistently outperforms the MDS method in terms of CP. PCA maintains relatively high power compared to both MDS and GPS methods under the simulated situations. GPS and MDS are comparable in terms of statistical properties such as bias, type I error, and power. The GPS method provides a novel and robust tool for obtaining less-biased estimates of genetic associations that can consider both genetic and non-genetic factors. 2009 Wiley-Liss, Inc.
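
    A schematic of the GPS idea under simplifying assumptions (an observed binary subpopulation indicator and a logistic model; this is an illustration of the general strategy, not the authors' implementation): estimate each subject's propensity of subpopulation membership from genetic markers plus a non-genetic covariate, then include that score as a covariate in the association test for the candidate SNP.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 2000
    subpop = rng.binomial(1, 0.4, n)                                  # stratum indicator
    markers = rng.binomial(2, 0.3 + 0.2 * subpop[:, None], (n, 20))   # ancestry-informative SNPs
    diet = rng.normal(subpop, 1.0)                                    # non-genetic factor tied to stratum
    snp = rng.binomial(2, 0.2 + 0.3 * subpop)                         # candidate SNP, truly null
    disease = rng.binomial(1, 1.0 / (1.0 + np.exp(1.0 - subpop)))     # risk differs by stratum only

    # Step 1: propensity of stratum membership from markers + non-genetic covariate.
    X_ps = sm.add_constant(np.column_stack([markers, diet]))
    gps = sm.Logit(subpop, X_ps).fit(disp=0).predict(X_ps)

    # Step 2: association test for the SNP, adjusted for the propensity score.
    X = sm.add_constant(np.column_stack([snp, gps]))
    res = sm.Logit(disease, X).fit(disp=0)
    print("adjusted SNP log-odds ratio:", round(res.params[1], 3))    # near 0 under the null
    ```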

  18. Mismeasurement and the resonance of strong confounders: correlated errors.

    PubMed

    Marshall, J R; Hastrup, J L; Ross, J S

    1999-07-01

    Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
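
    The central claim, that correlated errors can push the apparent effect of the inconsequential factor in either direction, is easy to demonstrate by simulation. A minimal sketch with illustrative effect sizes:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000
    strong = rng.normal(size=n)                   # true strong risk factor (confounder)
    weak = 0.5 * strong + rng.normal(size=n)      # correlated but inconsequential factor
    outcome = 1.0 * strong + rng.normal(size=n)   # only the strong factor matters

    def apparent_weak_effect(err_corr, sd=1.0):
        """Regress outcome on mismeasured factors whose errors are correlated."""
        cov = [[sd**2, err_corr * sd**2], [err_corr * sd**2, sd**2]]
        e = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        X = np.column_stack([np.ones(n), strong + e[:, 0], weak + e[:, 1]])
        beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
        return beta[2]

    for rho in (-0.5, 0.0, 0.5):
        print(f"error correlation {rho:+.1f}: apparent weak-factor effect = "
              f"{apparent_weak_effect(rho):+.3f}")
    ```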

  19. The contributions of human factors on human error in Malaysia aviation maintenance industries

    NASA Astrophysics Data System (ADS)

    Padil, H.; Said, M. N.; Azizan, A.

    2018-05-01

    Aviation maintenance is a multitasking activity in which individuals perform varied tasks under constant pressure to meet deadlines as well as challenging work conditions. These situational characteristics combined with human factors can lead to various types of human related errors. The primary objective of this research is to develop a structural relationship model that incorporates human factors, organizational factors, and their impact on human errors in aviation maintenance. Towards that end, a questionnaire was developed and administered to Malaysian aviation maintenance professionals. The Structural Equation Modelling (SEM) approach was used in this study utilizing AMOS software. Results showed a significant relationship between human factors and human errors, as tested in the model. Human factors had a partial effect on organizational factors, while organizational factors had a direct and positive impact on human errors. It was also revealed that organizational factors contributed to human errors when coupled with the human factors construct. This study has contributed to the advancement of knowledge on human factors affecting safety and has provided guidelines for improving human factors performance relating to aviation maintenance activities, and it could be used as a reference for improving safety performance in Malaysian aviation maintenance companies.

  20. A Monte-Carlo Bayesian framework for urban rainfall error modelling

    NASA Astrophysics Data System (ADS)

    Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian

    2016-04-01

    Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting rainfall input requirements for urban hydrology (including increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records), rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e., errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g., radar-rain gauge differences, spatial-temporal resolution). Moreover, rainfall error models have mostly been developed for and tested at large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors in rain gauge records through urban drainage models and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models (originally developed for large scales) have been tested at urban scales [2] and have been shown to fail to capture small-scale storm dynamics well, including storm peaks, which are of utmost importance for urban runoff simulations. In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data sources (in this case radar and rain gauge estimates typically available at present), while at the same time enabling dynamic combination of these data sources (thus not only quantifying uncertainty, but also reducing it). This model generates an ensemble of merged rainfall estimates, which can then be used as input to urban drainage models in order to examine how uncertainties in rainfall estimates propagate to urban runoff estimates. The proposed model is tested using as case study a detailed rainfall and flow dataset, and a carefully verified urban drainage model, of a small (~9 km²) pilot catchment in North-East London. The model has been shown to characterise well the residual errors in rainfall data at urban scales (which remain after the merging), leading to improved runoff estimates. In fact, the majority of measured flow peaks are bounded within the uncertainty area produced by the runoff ensembles generated with the ensemble rainfall inputs. REFERENCES: [1] Ciach, G. J. & Krajewski, W. F. (1999). On the estimation of radar rainfall error variance. Advances in Water Resources, 22 (6), 585-595. [2] Rico-Ramirez, M. A., Liguori, S. & Schellart, A. N. A. (2015). Quantifying radar-rainfall uncertainties in urban drainage flow modelling. Journal of Hydrology, 528, 17-28.
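
    The ensemble-generation step can be sketched generically: perturb a merged rainfall field with spatially correlated noise drawn from an assumed residual-error covariance (here an exponential model on a small grid; the grid, covariance parameters, and stand-in merged field are all illustrative assumptions), yielding equally plausible rainfall realizations to drive an urban drainage model.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    nx = ny = 12                                   # small grid (illustrative)
    xx, yy = np.meshgrid(np.arange(nx), np.arange(ny))
    pts = np.column_stack([xx.ravel(), yy.ravel()])

    # Exponential covariance of residual rainfall errors: C(d) = sigma^2 * exp(-d/L)
    sigma, L = 0.8, 3.0                            # mm, grid cells (illustrative)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = sigma**2 * np.exp(-d / L)
    chol = np.linalg.cholesky(C + 1e-9 * np.eye(nx * ny))

    merged = rng.gamma(2.0, 1.0, size=nx * ny)     # stand-in merged radar-gauge field (mm)
    ensemble = [np.maximum(merged + chol @ rng.normal(size=nx * ny), 0.0)
                for _ in range(50)]                # 50 realizations, rainfall kept non-negative
    print("ensemble spread at pixel 0:", np.std([e[0] for e in ensemble]).round(3), "mm")
    ```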

  1. A map overlay error model based on boundary geometry

    USGS Publications Warehouse

    Gaeuman, D.; Symanzik, J.; Schmidt, J.C.

    2005-01-01

    An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for estimating errors in studies of landscape change over time. © 2005 The Ohio State University.

  2. The Infinitesimal Jackknife with Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  3. Construct validity evidence for the Male Role Norms Inventory-Short Form: A structural equation modeling approach using the bifactor model.

    PubMed

    Levant, Ronald F; Hall, Rosalie J; Weigold, Ingrid K; McCurdy, Eric R

    2016-10-01

    The construct validity of the Male Role Norms Inventory-Short Form (MRNI-SF) was assessed using a latent variable approach implemented with structural equation modeling (SEM). The MRNI-SF was specified as having a bifactor structure, and validation scales were also specified as latent variables. The latent variable approach had the advantages of separating effects of general and specific factors and controlling for some sources of measurement error. Data (N = 484) were from a diverse sample (38.8% men of color, 22.3% men of diverse sexualities) of community-dwelling and college men who responded to an online survey. The construct validity of the MRNI-SF General Traditional Masculinity Ideology factor was supported for all 4 of the proposed latent correlations with: (a) the Male Role Attitudes Scale; (b) the general factor of the Conformity to Masculine Norms Inventory-46; (c) the higher-order factor of the Gender Role Conflict Scale; and (d) the Personal Attributes Questionnaire-Masculinity Scale. Significant correlations with relevant other latent factors provided concurrent validity evidence for the MRNI-SF specific factors of Negativity toward Sexual Minorities, Importance of Sex, Restrictive Emotionality, and Toughness, with all 8 of the hypothesized relationships supported. However, 3 relationships concerning Dominance were not supported. (The construct validity of the remaining 2 MRNI-SF specific factors, Avoidance of Femininity and Self-Reliance through Mechanical Skills, was not assessed.) Comparisons were made, and meaningful differences noted, between the latent correlations emphasized in this study and their raw variable counterparts. Results are discussed in terms of the advantages of an SEM approach and the unique characteristics of the bifactor model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Defining distinct negative beliefs about uncertainty: validating the factor structure of the Intolerance of Uncertainty Scale.

    PubMed

    Sexton, Kathryn A; Dugas, Michel J

    2009-06-01

    This study examined the factor structure of the English version of the Intolerance of Uncertainty Scale (IUS; French version: M. H. Freeston, J. Rhéaume, H. Letarte, M. J. Dugas, & R. Ladouceur, 1994; English version: K. Buhr & M. J. Dugas, 2002) using a substantially larger sample than has been used in previous studies. Nonclinical undergraduate students and adults from the community (M age = 23.74 years, SD = 6.36; 73.0% female and 27.0% male) who participated in 16 studies in the Anxiety Disorders Laboratory at Concordia University in Montreal, Canada were randomly assigned to 2 datasets. Exploratory factor analysis with the 1st sample (n = 1,230) identified 2 factors: the beliefs that "uncertainty has negative behavioral and self-referent implications" and that "uncertainty is unfair and spoils everything." This 2-factor structure provided a good fit to the data (Bentler-Bonett normed fit index = .96, comparative fit index = .97, standardized root-mean residual = .05, root-mean-square error of approximation = .07) upon confirmatory factor analysis with the 2nd sample (n = 1,221). Both factors showed similarly high correlations with pathological worry, and Factor 1 showed stronger correlations with generalized anxiety disorder analogue status, trait anxiety, somatic anxiety, and depressive symptomatology. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  5. Obsessive-compulsive symptoms in a normative Chinese sample of youth: prevalence, symptom dimensions, and factor structure of the Leyton Obsessional Inventory--Child Version.

    PubMed

    Sun, Jing; Boschen, Mark J; Farrell, Lara J; Buys, Nicholas; Li, Zhan-Jiang

    2014-08-01

    Chinese adolescents face life stresses from multiple sources, with higher levels of stress predictive of adolescent mental health outcomes, including in the area of obsessive-compulsive disorder (OCD). Valid assessment of OCD among this age group is therefore a critical need in China. This study aims to standardise the Chinese version of the Leyton short version scale for adolescents of secondary schools in order to assess this condition. Adolescents were selected by stratified random sampling from four high schools located in Beijing, China. The Chinese version of the Leyton scale was administered to 3221 secondary school students aged between 12 and 18 years. A high response rate was achieved, with 3185 adolescents responding to the survey (98.5%). Exploratory factor analysis (EFA) extracted four factors from the scale: compulsive thoughts; concerns of cleanliness; lucky number; and repetitiveness and repeated checking. The four-factor structure was confirmed using confirmatory factor analysis (CFA). Overall the four-factor structure had a good model fit, high levels of reliability for each individual dimension, and reasonable content validity. Invariance analyses in unconstrained, factor loading, and error variance models demonstrated that the Leyton scale is invariant in relation to the presence or absence of OCD, age and gender. Discriminant validity analysis demonstrated that the four-factor structure scale also had excellent ability to differentiate between OCD and non-OCD students, male and female students, and age groups. The dataset was a non-clinical sample of high school students, rather than a sample of individuals with OCD. Future research may examine symptom structure in clinical populations to assess whether this structure fits both clinical and community populations. The structure derived from the Leyton short version scale in a non-clinical secondary school sample of adolescents suggests that a four-factor solution can be utilised as a screening tool to assess adolescents' psychopathological symptoms in the area of OCD in mainland Chinese non-clinical secondary school students. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Seasonal Differences in Spatial Scales of Chlorophyll-A Concentration in Lake TAIHU,CHINA

    NASA Astrophysics Data System (ADS)

    Bao, Y.; Tian, Q.; Sun, S.; Wei, H.; Tian, J.

    2012-08-01

    The spatial distribution of chlorophyll-a (chla) concentration in Lake Taihu is non-uniform and varies seasonally. Chla concentration retrieval algorithms were separately established using measured data and remote sensing images (HJ-1 CCD and MODIS data) in October 2010, March 2011, and September 2011. Then parameters of semivariance were calculated at the scales of 30 m, 250 m and 500 m for analyzing spatial heterogeneity in different seasons. Finally, based on the definitions of lumped chla (chla_L) and distributed chla (chla_D), a seasonal model of chla concentration scale error was built. The results indicated that the spatial distribution of chla concentration in spring was more uniform. In summer and autumn, chla concentration in the north of the lake, such as Meiliang Bay and Zhushan Bay, was higher than that in the south of Lake Taihu. Chla concentration on different scales showed a similar structure in the same season, while it had different structures in different seasons. Inversion of chla concentration from MODIS 500 m data had a greater scale error. The spatial scale error changed with seasons: it was higher in summer and autumn than in spring. The maximum relative error can reach 23%.
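
    The semivariance parameters referred to above come from the standard empirical estimator γ(h) = (1/2N(h)) Σ [z(x_i) - z(x_j)]² over pairs separated by approximately h. A minimal one-dimensional sketch on a synthetic transect (sample spacing and lag bins are illustrative):

    ```python
    import numpy as np

    def empirical_semivariogram(z, x, lags, tol):
        """gamma(h): mean of 0.5*(z_i - z_j)^2 over pairs with |x_i - x_j| within tol of h."""
        dij = np.abs(x[:, None] - x[None, :])
        gamma = []
        for h in lags:
            mask = np.triu(np.abs(dij - h) < tol, k=1)   # each pair counted once
            diffs = (z[:, None] - z[None, :])[mask]
            gamma.append(0.5 * np.mean(diffs ** 2) if diffs.size else np.nan)
        return np.array(gamma)

    # Synthetic chla transect sampled every 250 m (MODIS-like spacing, illustrative).
    rng = np.random.default_rng(7)
    x = np.arange(100) * 250.0                           # metres
    z = 10 + np.cumsum(rng.normal(0, 0.5, x.size))       # spatially correlated "chla"
    lags = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0])
    print(np.round(empirical_semivariogram(z, x, lags, tol=125.0), 2))
    ```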

  7. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  8. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  9. Development of a Precise Polarization Modulator for UV Spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Ishikawa, S.; Shimizu, T.; Kano, R.; Bando, T.; Ishikawa, R.; Giono, G.; Tsuneta, S.; Nakayama, S.; Tajima, T.

    2015-10-01

    We developed a polarization modulation unit (PMU) to rotate a waveplate continuously in order to observe solar magnetic fields by spectropolarimetry. The non-uniformity of the PMU rotation may cause errors in the measurement of the degree of linear polarization (scale error) and its angle (crosstalk between Stokes-Q and -U), although it does not cause an artificial linear polarization signal (spurious polarization). We rotated a waveplate with the PMU to obtain a polarization modulation curve and estimated the scale error and crosstalk caused by the rotation non-uniformity. Both the estimated scale error and the crosstalk were < 0.01%. This PMU will be used as the waveplate motor for the Chromospheric Lyman-Alpha SpectroPolarimeter (CLASP) rocket experiment. We confirm that the PMU performs sufficiently well for CLASP.

  10. Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems

    PubMed Central

    Li, Zhining; Zhang, Yingtang; Yin, Gang

    2018-01-01

    The measurement error of the differencing (i.e., using two homogeneous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, and nonorthogonality of the single magnetic sensor, as well as the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single-sensor’s system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference, by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by the nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system are estimated simultaneously. The calibrated system output is aligned with the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of the TMI accurately, effectively avoiding the “overcalibration” problem. The accuracy of the error parameters’ estimation in the simulation is close to 100%. The experimental root-mean-square errors (RMSE) of the TMI and tensor components are less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544
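
    The TMI-referenced nonlinear least-squares step can be sketched as follows, assuming a simplified 9-parameter single-sensor model (three scale factors, three small nonorthogonality angles, three biases) rather than the paper's integrated 12-parameter-per-sensor model; the data here are synthetic:

      import numpy as np
      from scipy.optimize import least_squares

      def corrected(params, raw):
          # Assumed error model: scale factors k, small nonorthogonality
          # angles u in a lower-triangular matrix, and biases b.
          k, u, b = params[0:3], params[3:6], params[6:9]
          N = np.array([[1.0, 0.0, 0.0],
                        [u[0], 1.0, 0.0],
                        [u[1], u[2], 1.0]])
          return (N @ ((raw - b) / k).T).T

      def residuals(params, raw, tmi_ref):
          # Mismatch between corrected vector magnitude and scalar TMI.
          return np.linalg.norm(corrected(params, raw), axis=1) - tmi_ref

      # Synthetic data: corrupt a known field with known error parameters.
      rng = np.random.default_rng(0)
      true_field = rng.normal(0.0, 30000.0, size=(200, 3))            # nT
      tmi_ref = np.linalg.norm(true_field, axis=1)
      k_t = np.array([1.02, 0.98, 1.01])
      u_t = np.array([0.002, -0.001, 0.003])
      b_t = np.array([50.0, -30.0, 20.0])
      N_t = np.array([[1, 0, 0], [u_t[0], 1, 0], [u_t[1], u_t[2], 1.0]])
      raw = (np.linalg.inv(N_t) @ true_field.T).T * k_t + b_t

      x0 = np.r_[np.ones(3), np.zeros(6)]
      fit = least_squares(residuals, x0, args=(raw, tmi_ref), method="lm")
      # fit.x recovers k, u, b for this noise-free synthetic case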

  11. Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Beam Maps and Window Functions

    NASA Technical Reports Server (NTRS)

    Hill, R.S.; Weiland, J.L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C.L.; Halpern, M.; Kogut, A.; Page, L.; et al.

    2008-01-01

    Cosmology and other scientific results from the WMAP mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of approximately 2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of approximately 1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of approximately 2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are approximately 0.5%.

  12. Nurses' systems thinking competency, medical error reporting, and the occurrence of adverse events: a cross-sectional study.

    PubMed

    Hwang, Jee-In; Park, Hyeoun-Ae

    2017-12-01

    Healthcare professionals' systems thinking is emphasized for patient safety. This study reports nurses' systems thinking competency and its relationship with medical error reporting and the occurrence of adverse events. A cross-sectional survey using a previously validated Systems Thinking Scale (STS) was conducted. Nurses from two teaching hospitals were invited to participate in the survey. There were 407 (60.3%) completed surveys. The mean STS score was 54.5 (SD 7.3) out of 80. Nurses with higher STS scores were more likely to report medical errors (odds ratio (OR) = 1.05; 95% confidence interval (CI) = 1.02-1.08) and were less likely to be involved in the occurrence of adverse events (OR = 0.96; 95% CI = 0.93-0.98). Nurses showed moderate systems thinking competency. Systems thinking was a significant factor associated with patient safety. Impact Statement: The findings of this study highlight the importance of enhancing nurses' systems thinking capacity to promote patient safety.

  13. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    PubMed

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
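
    The ranking step can be illustrated with a crisp (non-fuzzy) TOPSIS sketch; the scores, weights, and criteria below are invented, and the paper's fuzzy variant propagates triangular fuzzy numbers through the same normalise-weight-distance steps:

      import numpy as np

      def topsis(matrix, weights, benefit):
          # Crisp TOPSIS: normalise, weight, measure distance to the ideal
          # and anti-ideal solutions, return the closeness coefficient.
          v = matrix / np.linalg.norm(matrix, axis=0) * weights
          ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
          anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
          d_pos = np.linalg.norm(v - ideal, axis=1)
          d_neg = np.linalg.norm(v - anti, axis=1)
          return d_neg / (d_pos + d_neg)      # higher = better alternative

      # Three error factors scored on four invented criteria
      scores = np.array([[7, 5, 8, 6], [4, 6, 5, 7], [8, 8, 6, 5]], float)
      weights = np.array([0.3, 0.3, 0.2, 0.2])
      cc = topsis(scores, weights, benefit=np.array([True] * 4))
      ranking = np.argsort(-cc)               # best-to-worst factor indices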

  14. Medication errors of nurses and factors in refusal to report medication errors among nurses in a teaching medical center of Iran in 2012.

    PubMed

    Mostafaei, Davoud; Barati Marnani, Ahmad; Mosavi Esfahani, Haleh; Estebsari, Fatemeh; Shahzaidi, Shiva; Jamshidi, Ensiyeh; Aghamiri, Seyed Samad

    2014-10-01

    About one-third of reported unwanted medication consequences are due to medication errors, resulting in one-fifth of hospital injuries. The aim of this study was to determine the formal and informal medication errors of nurses and the importance of factors underlying nurses' refusal to report medication errors. This cross-sectional study was conducted on the nursing staff of Shohada Tajrish Hospital, Tehran, Iran in 2012. The data were gathered through a questionnaire developed by the researchers. The questionnaire's face and content validity were confirmed by experts, and test-retest was used to measure its reliability. The data were analyzed with descriptive statistics in SPSS. The most important factors in refusal to report medication errors were, respectively: lack of a medication error recording and reporting system in the hospital (3.3%), non-significant error reporting to hospital authorities and lack of appropriate feedback (3.1%), and lack of a clear definition for a medication error (3%). There were both formal and informal reporting of medication errors in this study. Factors pertaining to hospital management as well as fear of the consequences of reporting are two broad fields among the factors that make nurses not report their medication errors. In this regard, providing adequate education to nurses, boosting nurses' job security, management support, and revising related processes and definitions can help decrease medication errors and increase their reporting when they occur.

  15. How Angular Velocity Features and Different Gyroscope Noise Types Interact and Determine Orientation Estimation Accuracy.

    PubMed

    Pasciuto, Ilaria; Ligorio, Gabriele; Bergamini, Elena; Vannozzi, Giuseppe; Sabatini, Angelo Maria; Cappozzo, Aurelio

    2015-09-18

    In human movement analysis, 3D body segment orientation can be obtained through the numerical integration of gyroscope signals. These signals, however, are affected by errors that, for the case of micro-electro-mechanical systems, are mainly due to: constant bias, scale factor, white noise, and bias instability. The aim of this study is to assess how the orientation estimation accuracy is affected by each of these disturbances, and whether it is influenced by the angular velocity magnitude and 3D distribution across the gyroscope axes. Reference angular velocity signals, either constant or representative of human walking, were corrupted with each of the four noise types within a simulation framework. The magnitude of the angular velocity affected the error in the orientation estimation due to each noise type, except for the white noise. Additionally, the error caused by the constant bias was also influenced by the angular velocity 3D distribution. As the orientation error depends not only on the noise itself but also on the signal it is applied to, different sensor placements could enhance or mitigate the error due to each disturbance, and special attention must be paid in providing and interpreting measures of accuracy for orientation estimation algorithms.

  16. How Angular Velocity Features and Different Gyroscope Noise Types Interact and Determine Orientation Estimation Accuracy

    PubMed Central

    Pasciuto, Ilaria; Ligorio, Gabriele; Bergamini, Elena; Vannozzi, Giuseppe; Sabatini, Angelo Maria; Cappozzo, Aurelio

    2015-01-01

    In human movement analysis, 3D body segment orientation can be obtained through the numerical integration of gyroscope signals. These signals, however, are affected by errors that, for the case of micro-electro-mechanical systems, are mainly due to: constant bias, scale factor, white noise, and bias instability. The aim of this study is to assess how the orientation estimation accuracy is affected by each of these disturbances, and whether it is influenced by the angular velocity magnitude and 3D distribution across the gyroscope axes. Reference angular velocity signals, either constant or representative of human walking, were corrupted with each of the four noise types within a simulation framework. The magnitude of the angular velocity affected the error in the orientation estimation due to each noise type, except for the white noise. Additionally, the error caused by the constant bias was also influenced by the angular velocity 3D distribution. As the orientation error depends not only on the noise itself but also on the signal it is applied to, different sensor placements could enhance or mitigate the error due to each disturbance, and special attention must be paid in providing and interpreting measures of accuracy for orientation estimation algorithms. PMID:26393606
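
    A one-axis Python sketch of the corruption step in such a simulation framework (the study uses full 3D signals, and bias instability is omitted here); the noise magnitudes are illustrative assumptions:

      import numpy as np

      dt = 0.01                                      # 100 Hz sampling (assumed)
      t = np.arange(0.0, 60.0, dt)
      omega = 0.5 * np.sin(2 * np.pi * 0.8 * t)      # walking-like signal, rad/s

      rng = np.random.default_rng(0)
      bias = np.deg2rad(0.5)                         # constant bias, rad/s
      scale = 0.01                                   # 1% scale-factor error
      white_sigma = 0.005                            # white noise std, rad/s
      noisy = ((1.0 + scale) * omega + bias
               + white_sigma * rng.standard_normal(t.size))

      # Orientation error: integrate corrupted minus true angular velocity
      angle_err_rad = np.cumsum(noisy - omega) * dt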

  17. A stochastic method for computing hadronic matrix elements

    DOE PAGES

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; ...

    2014-01-24

    In this study, we present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method, offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal-to-noise ratio, suggesting that the stochastic method can be extended to large volumes, providing an efficient approach to compute hadronic matrix elements and form factors.

  18. Quantitative Estimation of Land Surface Characteristic Parameters and Actual Evapotranspiration in the Nagqu River Basin over the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Zhong, L.; Ma, Y.; Ma, W.; Zou, M.; Hu, Y.

    2016-12-01

    Actual evapotranspiration (ETa) is an important component of the water cycle in the Tibetan Plateau and is controlled by many hydrological and meteorological factors; it is therefore of great significance to estimate ETa accurately and continuously. Understanding land surface parameters and land-atmosphere water exchange processes in small watershed-scale areas is also drawing much attention from the scientific community. Based on in-situ meteorological data in the Nagqu river basin and surrounding regions, the main meteorological factors affecting the evaporation process were quantitatively analyzed and point-scale ETa estimation models for the study area were successfully built. In addition, multi-source satellite data (such as SPOT, MODIS, and FY-2C) were used to derive the surface characteristics of the river basin. A time-series processing technique was applied to remove cloud cover and reconstruct the data series. Then improved land surface albedo, improved downward shortwave radiation flux, and reconstructed normalized difference vegetation index (NDVI) were coupled into the topographically enhanced surface energy balance system to estimate ETa. The model-estimated results were compared with ETa values determined by the combinatory method. The results indicated that the model-estimated ETa agreed well with in-situ measurements, with a correlation coefficient of 0.836, a mean bias error of 0.087 mm/h, and a root mean square error of 0.140 mm/h.
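
    The three agreement statistics quoted can be computed as follows, assuming paired arrays of model-estimated and in-situ hourly ETa; the synthetic data are for illustration only:

      import numpy as np

      def verification_stats(model, obs):
          # Correlation coefficient, mean bias error, and root mean square
          # error for paired model/observation samples.
          r = np.corrcoef(model, obs)[0, 1]
          mbe = np.mean(model - obs)
          rmse = np.sqrt(np.mean((model - obs) ** 2))
          return r, mbe, rmse

      rng = np.random.default_rng(0)
      obs_eta = rng.gamma(2.0, 0.1, size=500)                  # synthetic mm/h
      model_eta = obs_eta + rng.normal(0.05, 0.12, size=500)   # biased, noisy
      r, mbe, rmse = verification_stats(model_eta, obs_eta)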

  19. The luminosity function of the CfA Redshift Survey

    NASA Technical Reports Server (NTRS)

    Marzke, R. O.; Huchra, J. P.; Geller, M. J.

    1994-01-01

    We use the CfA Redshift Survey of galaxies with m_Z ≤ 15.5 to calculate the galaxy luminosity function over the range -22 ≤ M_Z ≤ -13. The sample includes 9063 galaxies distributed over 2.1 sr. For galaxies with velocities cz ≥ 2500 km/s, where the effects of peculiar velocities are small, the luminosity function is well represented by a Schechter function with parameters φ* = 0.04 ± 0.01 Mpc^-3, M* = -18.8 ± 0.3, and α = -1.0 ± 0.2. When we include all galaxies with cz ≥ 500 km/s, the number of galaxies in the range -16 ≤ M_Z ≤ -13 exceeds the extrapolation of the Schechter function by a factor of 3.1 ± 0.5. This faint-end excess is not caused by the local peculiar velocity field but may be partially explained by small scale errors in the Zwicky magnitudes. Even with a scale error as large as 0.2 mag per mag, which is unlikely, the excess is still a factor of 1.8 ± 0.3. If real, this excess affects the interpretation of deep counts of field galaxies.
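
    For reference, a sketch evaluating the Schechter function in its standard absolute-magnitude form with the parameter values quoted above; the faint-end excess is the departure of the observed counts from this curve:

      import numpy as np

      def schechter_mag(M, phi_star=0.04, M_star=-18.8, alpha=-1.0):
          # phi(M) = 0.4 ln10 phi* x^(alpha+1) exp(-x), x = 10^(0.4 (M* - M)).
          # Defaults are the CfA values quoted above; units Mpc^-3 mag^-1.
          x = 10.0 ** (0.4 * (M_star - M))
          return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

      M = np.linspace(-22.0, -13.0, 19)
      phi = schechter_mag(M)   # observed faint-end counts exceed this by ~3x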

  20. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
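
    A minimal sketch of the generic calibration step, fitting a single-parameter exponential TTD to tracer data by convolving an assumed input history with the travel-time density; the paper's SDM and other analytical models are more elaborate:

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.arange(0.0, 60.0, 1.0)            # years since tracer input began
      dt = t[1] - t[0]
      c_in = np.interp(t, [0.0, 20.0, 40.0, 60.0],
                       [0.0, 5.0, 10.0, 8.0])  # invented tracer input history

      def exp_ttd_output(t, tau):
          # Well concentration = convolution of the input history with an
          # exponential travel-time density of mean age tau (years).
          g = np.exp(-t / tau) / tau
          g = g / (g.sum() * dt)               # renormalise on finite window
          return np.convolve(c_in, g)[: t.size] * dt

      rng = np.random.default_rng(1)
      c_obs = exp_ttd_output(t, 15.0) + 0.2 * rng.standard_normal(t.size)
      tau_fit, _ = curve_fit(exp_ttd_output, t, c_obs, p0=[10.0])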

  1. A Physiologically Based Pharmacokinetic Model to Predict the Pharmacokinetics of Highly Protein-Bound Drugs and Impact of Errors in Plasma Protein Binding

    PubMed Central

    Ye, Min; Nagar, Swati; Korzekwa, Ken

    2015-01-01

    Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data were often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding, and blood:plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for terminal elimination half-life (t1/2, 100% of drugs), peak plasma concentration (Cmax, 100%), area under the plasma concentration-time curve (AUC0–t, 95.4%), clearance (CLh, 95.4%), mean residence time (MRT, 95.4%), and steady state volume (Vss, 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. PMID:26531057
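
    The near-proportional propagation of fup errors into hepatic clearance for low-clearance compounds follows directly from the well-stirred liver model, as in this sketch (flow and intrinsic clearance values are illustrative):

      def well_stirred_clh(fup, cl_int, q_h=90.0):
          # Hepatic clearance (L/h) from the well-stirred model; q_h is
          # hepatic blood flow. All values here are illustrative.
          return q_h * fup * cl_int / (q_h + fup * cl_int)

      # A 2-fold error in fup for a low-clearance drug (fup*CLint << Qh)
      base = well_stirred_clh(fup=0.01, cl_int=50.0)
      wrong = well_stirred_clh(fup=0.02, cl_int=50.0)
      print(wrong / base)   # ~2.0: the fup error propagates almost proportionally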

  2. A vignette study to examine health care professionals' attitudes towards patient involvement in error prevention.

    PubMed

    Schwappach, David L B; Frank, Olga; Davis, Rachel E

    2013-10-01

    Various authorities recommend the participation of patients in promoting patient safety, but little is known about health care professionals' (HCPs') attitudes towards patients' involvement in safety-related behaviours. To investigate how HCPs evaluate patients' behaviours and HCP responses to patient involvement in the behaviour, relative to different aspects of the patient, the involved HCP and the potential error. Cross-sectional fractional factorial survey with seven factors embedded in two error scenarios (missed hand hygiene, medication error). Each survey included two randomized vignettes that described the potential error, a patient's reaction to that error and the HCP response to the patient. Twelve hospitals in Switzerland. A total of 1141 HCPs (response rate 45%). Approval of patients' behaviour, HCP response to the patient, anticipated effects on the patient-HCP relationship, HCPs' support for being asked the question, affective response to the vignettes. Outcomes were measured on 7-point scales. Approval of patients' safety-related interventions was generally high and largely affected by patients' behaviour and correct identification of error. Anticipated effects on the patient-HCP relationship were much less positive, little correlated with approval of patients' behaviour and were mainly determined by the HCP response to intervening patients. HCPs expressed more favourable attitudes towards patients intervening about a medication error than about hand sanitation. This study provides the first insights into predictors of HCPs' attitudes towards patient engagement in safety. Future research is however required to assess the generalizability of the findings into practice before training can be designed to address critical issues. © 2012 John Wiley & Sons Ltd.

  3. Isobaric Reconstruction of the Baryonic Acoustic Oscillation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Yu, Hao-Ran; Zhu, Hong-Ming; Yu, Yu; Pan, Qiaoyin; Pen, Ue-Li

    2017-06-01

    In this Letter, we report a significant recovery of the linear baryonic acoustic oscillation (BAO) signature by applying the isobaric reconstruction algorithm to the nonlinear matter density field. Assuming only the longitudinal component of the displacement is cosmologically relevant, this algorithm iteratively solves the coordinate transform between the Lagrangian and Eulerian frames without requiring any specific knowledge of the dynamics. For the dark matter field, it produces the nonlinear displacement potential with very high fidelity. The reconstruction error at the pixel level is within a few percent and is caused only by the emergence of the transverse component after shell-crossing. As it circumvents the strongest nonlinearity of the density evolution, the reconstructed field is well described by linear theory and immune from the bulk-flow smearing of the BAO signature. Therefore, this algorithm could significantly improve the measurement accuracy of the sound horizon scale s. For a perfect large-scale structure survey at redshift zero without Poisson or instrumental noise, the fractional error Δs/s is reduced by a factor of ~2.7, very close to the ideal limit with the linear power spectrum and Gaussian covariance matrix.

  4. Implementation of a flow-dependent background error correlation length scale formulation in the NEMOVAR OSTIA system

    NASA Astrophysics Data System (ADS)

    Fiedler, Emma; Mao, Chongyuan; Good, Simon; Waters, Jennifer; Martin, Matthew

    2017-04-01

    OSTIA is the Met Office's Operational Sea Surface Temperature (SST) and Ice Analysis system, which produces L4 (globally complete, gridded) analyses on a daily basis. Work is currently being undertaken to replace the original OI (Optimal Interpolation) data assimilation scheme with NEMOVAR, a 3D-Var data assimilation method developed for use with the NEMO ocean model. A dual background error correlation length scale formulation is used for SST in OSTIA, as implemented in NEMOVAR. Short and long length scales are combined according to the ratio of the decomposition of the background error variances into short and long spatial correlations. The pre-defined background error variances vary spatially and seasonally, but not on shorter time-scales. If the derived length scales applied to the daily analysis are too long, SST features may be smoothed out. Therefore a flow-dependent component to determining the effective length scale has also been developed. The total horizontal gradient of the background SST field is used to identify regions where the length scale should be shortened. These methods together have led to an improvement in the resolution of SST features compared to the previous OI analysis system, without the introduction of spurious noise. This presentation will show validation results for feature resolution in OSTIA using the OI scheme, the dual length scale NEMOVAR scheme, and the flow-dependent implementation.
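
    A schematic sketch of the dual-length-scale idea with a gradient-based shortening; the weighting form, length scales, and threshold are assumptions for illustration, not the NEMOVAR implementation:

      import numpy as np

      def effective_length_scale(var_short, var_long, grad_sst,
                                 L_short=15.0, L_long=150.0, grad_crit=0.02):
          # Blend short/long scales (km) by the error-variance partition,
          # then shorten wherever the horizontal SST gradient (K/km)
          # exceeds a threshold.
          w = var_short / (var_short + var_long)
          L = w * L_short + (1.0 - w) * L_long
          return np.where(grad_sst > grad_crit, L_short, L)

      L = effective_length_scale(var_short=0.04, var_long=0.08,
                                 grad_sst=np.array([0.005, 0.05]))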

  5. Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.

    2015-12-01

    Our study describes complications introduced by angular direct ionization events into space error rate predictions. In particular, the prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.

  6. Simulation of wave propagation in three-dimensional random media

    NASA Astrophysics Data System (ADS)

    Coles, Wm. A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.

    1995-04-01

    Quantitative error analyses for the simulation of wave propagation in three-dimensional random media, when narrow angular scattering is assumed, are presented for plane-wave and spherical-wave geometry. This includes the errors that result from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive indices of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared with the spatial spectra of

  7. On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)

    NASA Astrophysics Data System (ADS)

    Huffman, G. J.

    2013-12-01

    Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.

  8. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define system; Identify human-machine; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  9. Dimensionality and measurement invariance in the Satisfaction with Life Scale in Norway.

    PubMed

    Clench-Aas, Jocelyne; Nes, Ragnhild Bang; Dalgard, Odd Steffen; Aarø, Leif Edvard

    2011-10-01

    Results from previous studies examining the dimensionality and factorial invariance of the Satisfaction with Life Scale (SWLS) are inconsistent and often based on small samples. This study examines the factorial structure and factorial invariance of the SWLS in a Norwegian sample. Confirmatory factor analysis (AMOS) was conducted to explore dimensionality and test for measurement invariance in factor structure, factor loadings, intercepts, and residual variance across gender and four age groups in a large (N = 4,984), nationally representative sample of Norwegian men and women (15-79 years). The data supported a modified unidimensional structure. Factor loadings could be constrained to equality between the sexes, indicating metric invariance between genders. Further testing indicated invariance also at the strong and strict levels, thus allowing analyses involving group means. The SWLS was shown, however, to be sensitive to age at the strong and strict levels of invariance testing. In conclusion, the results of this Norwegian study seem to confirm that a unidimensional structure is acceptable, but that a modified single-factor model with correlations between the error terms of items 4 and 5 is preferred. Additionally, comparisons may be made between the genders. Caution must be exercised when comparing age groups.

  10. For how long can we predict the weather? - Insights into atmospheric predictability from global convection-allowing simulations

    NASA Astrophysics Data System (ADS)

    Judt, Falko

    2017-04-01

    A tremendous increase in computing power has facilitated the advent of global convection-resolving numerical weather prediction (NWP) models. Although this technological breakthrough allows for the seamless prediction of weather from local to global scales, the predictability of multiscale weather phenomena in these models is not very well known. To address this issue, we conducted a global high-resolution (4-km) predictability experiment using the Model for Prediction Across Scales (MPAS), a state-of-the-art global NWP model developed at the National Center for Atmospheric Research. The goals of this experiment are to investigate error growth from convective to planetary scales and to quantify the intrinsic, scale-dependent predictability limits of atmospheric motions. The globally uniform resolution of 4 km allows for the explicit treatment of organized deep moist convection, alleviating grave limitations of previous predictability studies that either used high-resolution limited-area models or global simulations with coarser grids and cumulus parameterization. Error growth is analyzed within the context of an "identical twin" experiment setup: the error is defined as the difference between a 20-day long "nature run" and a simulation that was perturbed with small-amplitude noise, but is otherwise identical. It is found that in convectively active regions, errors grow by several orders of magnitude within the first 24 h ("super-exponential growth"). The errors then spread to larger scales and begin a phase of exponential growth after 2-3 days when contaminating the baroclinic zones. After 16 days, the globally averaged error saturates—suggesting that the intrinsic limit of atmospheric predictability (in a general sense) is about two weeks, which is in line with earlier estimates. However, error growth rates differ between the tropics and mid-latitudes as well as between the troposphere and stratosphere, highlighting that atmospheric predictability is a complex problem. The comparatively slower error growth in the tropics and in the stratosphere indicates that certain weather phenomena could potentially have longer predictability than currently thought.

  11. Small Scale Mass Flow Plug Calibration

    NASA Technical Reports Server (NTRS)

    Sasson, Jonathan

    2015-01-01

    A simple control volume model has been developed to calculate the discharge coefficient through a mass flow plug (MFP) and validated with a calibration experiment. The maximum error of the model in the operating region of the MFP is 0.54%. The model uses the MFP geometry and operating pressure and temperature to couple continuity, momentum, energy, an equation of state, and wall shear. Effects of boundary layer growth and the reduction in cross-sectional flow area are calculated using an integral method. A CFD calibration is shown to be of lower accuracy, with a maximum error of 1.35%, and slower by a factor of 100. Effects of total pressure distortion are taken into account in the experiment. Distortion creates a loss in flow rate and can be characterized by two different distortion descriptors.
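
    The discharge coefficient itself is the measured mass flow divided by the ideal isentropic value; a sketch using the standard choked-flow relation, with invented geometry and conditions:

      import numpy as np

      def ideal_choked_mdot(area, p0, T0, gamma=1.4, R=287.05):
          # Ideal (isentropic, choked) mass flow through the throat, kg/s.
          term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0)
                                           / (2.0 * (gamma - 1.0)))
          return area * p0 * np.sqrt(gamma / (R * T0)) * term

      mdot_measured = 4.6                  # kg/s, invented calibration reading
      cd = mdot_measured / ideal_choked_mdot(area=0.01, p0=2.0e5, T0=288.15)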

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, M.J.; Bourke, W.; Browning, G.L.

    The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features, and also for large scales after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.

  13. Validity of the two-level model for Viterbi decoder gap-cycle performance

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Arnold, S.

    1990-01-01

    A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.

  14. The Specific Level of Functioning Scale: construct validity, internal consistency and factor structure in a large Italian sample of people with schizophrenia living in the community.

    PubMed

    Mucci, Armida; Rucci, Paola; Rocca, Paola; Bucci, Paola; Gibertoni, Dino; Merlotti, Eleonora; Galderisi, Silvana; Maj, Mario

    2014-10-01

    The study aimed to assess the construct validity, internal consistency and factor structure of the Specific Levels of Functioning Scale (SLOF), a multidimensional instrument assessing real life functioning. The study was carried out in 895 Italian people with schizophrenia, all living in the community and attending the outpatient units of 26 university psychiatric clinics and/or community mental health departments. The construct validity of the SLOF was analyzed by means of the multitrait-multimethod approach, using the Personal and Social Performance (PSP) Scale as the gold standard. The factor structure of the SLOF was examined using both an exploratory principal component analysis and a confirmatory factor analysis. The six factors identified using exploratory principal component analysis explained 57.1% of the item variance. The examination of the multitrait-multimethod matrix revealed that the SLOF factors had high correlations with PSP factors measuring the same constructs and low correlations with PSP factors measuring different constructs. The confirmatory factor analysis (CFA) corroborated the 6-factor structure reported in the original validation study. Loadings were all significant and ranged from a minimum of 0.299 to a maximum of 0.803. The CFA model was adequately powered and had satisfactory goodness of fit indices (comparative fit index=0.927, Tucker-Lewis index=0.920 and root mean square error of approximation=0.047, 95% CI 0.045-0.049). The present study confirms, in a large sample of Italian people with schizophrenia living in the community, that the SLOF is a reliable and valid instrument for the assessment of social functioning. It has good construct validity and internal consistency, and a well-defined factor structure. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Modeling aboveground tree woody biomass using national-scale allometric methods and airborne lidar

    NASA Astrophysics Data System (ADS)

    Chen, Qi

    2015-08-01

    Estimating tree aboveground biomass (AGB) and carbon (C) stocks using remote sensing is a critical component for understanding the global C cycle and mitigating climate change. However, the importance of allometry for remote sensing of AGB has not been recognized until recently. The overarching goals of this study are to understand the differences and relationships among three national-scale allometric methods (CRM, Jenkins, and the regional models) of the Forest Inventory and Analysis (FIA) program in the U.S. and to examine the impacts of using alternative allometry on the fitting statistics of remote sensing-based woody AGB models. Airborne lidar data from three study sites in the Pacific Northwest, USA were used to predict woody AGB estimated from the different allometric methods. It was found that the CRM and Jenkins estimates of woody AGB are related via the CRM adjustment factor. In terms of lidar-biomass modeling, CRM had the smallest model errors, while the Jenkins method had the largest ones and the regional method was between. The best model fitting from CRM is attributed to its inclusion of tree height in calculating merchantable stem volume and the strong dependence of non-merchantable stem biomass on merchantable stem biomass. This study also argues that it is important to characterize the allometric model errors for gaining a complete understanding of the remotely-sensed AGB prediction errors.

  16. Composite Reliability and Standard Errors of Measurement for a Seven-Subtest Short Form of the Wechsler Adult Intelligence Scale-Revised.

    ERIC Educational Resources Information Center

    Schretlen, David; And Others

    1994-01-01

    Composite reliability and standard errors of measurement were computed for prorated Verbal, Performance, and Full-Scale intelligence quotient (IQ) scores from a seven-subtest short form of the Wechsler Adult Intelligence Scale-Revised. Results with 1,880 adults (standardization sample) indicate that this form is as reliable as the complete test.
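
    One classical way to obtain the reliability of such a prorated composite is Mosier's formula for a weighted sum of subtests, sketched below; this is a standard textbook approach and not necessarily the exact method used in the study:

      import numpy as np

      def composite_reliability(weights, sds, reliabilities, corr):
          # Mosier: 1 - (weighted error variance) / (composite variance).
          cov = corr * np.outer(sds, sds)
          var_comp = weights @ cov @ weights
          err_var = np.sum(weights ** 2 * sds ** 2 * (1.0 - reliabilities))
          return 1.0 - err_var / var_comp

      # Three invented subtests with unit weights
      w = np.ones(3)
      sd = np.array([3.0, 3.2, 2.8])
      rel = np.array([0.85, 0.88, 0.82])
      rho = np.array([[1.0, 0.6, 0.5], [0.6, 1.0, 0.55], [0.5, 0.55, 1.0]])
      print(composite_reliability(w, sd, rel, rho))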

  17. Associations between errors and contributing factors in aircraft maintenance

    NASA Technical Reports Server (NTRS)

    Hobbs, Alan; Williamson, Ann

    2003-01-01

    In recent years cognitive error models have provided insights into the unsafe acts that lead to many accidents in safety-critical environments. Most models of accident causation are based on the notion that human errors occur in the context of contributing factors. However, there is a lack of published information on possible links between specific errors and contributing factors. A total of 619 safety occurrences involving aircraft maintenance were reported using a self-completed questionnaire. Of these occurrences, 96% were related to the actions of maintenance personnel. The types of errors that were involved, and the contributing factors associated with those actions, were determined. Each type of error was associated with a particular set of contributing factors and with specific occurrence outcomes. Among the associations were links between memory lapses and fatigue and between rule violations and time pressure. Potential applications of this research include assisting with the design of accident prevention strategies, the estimation of human error probabilities, and the monitoring of organizational safety performance.

  18. Analysis of a Split-Plot Experimental Design Applied to a Low-Speed Wind Tunnel Investigation

    NASA Technical Reports Server (NTRS)

    Erickson, Gary E.

    2013-01-01

    A procedure to analyze a split-plot experimental design featuring two input factors, two levels of randomization, and two error structures in a low-speed wind tunnel investigation of a small-scale model of a fighter airplane configuration is described in this report. Standard commercially-available statistical software was used to analyze the test results obtained in a randomization-restricted environment often encountered in wind tunnel testing. The input factors were differential horizontal stabilizer incidence and the angle of attack. The response variables were the aerodynamic coefficients of lift, drag, and pitching moment. Using split-plot terminology, the whole plot, or difficult-to-change, factor was the differential horizontal stabilizer incidence, and the subplot, or easy-to-change, factor was the angle of attack. The whole plot and subplot factors were both tested at three levels. Degrees of freedom for the whole plot error were provided by replication in the form of three blocks, or replicates, which were intended to simulate three consecutive days of wind tunnel facility operation. The analysis was conducted in three stages, which yielded the estimated mean squares, multiple regression function coefficients, and corresponding tests of significance for all individual terms at the whole plot and subplot levels for the three aerodynamic response variables. The estimated regression functions included main effects and two-factor interaction for the lift coefficient, main effects, two-factor interaction, and quadratic effects for the drag coefficient, and only main effects for the pitching moment coefficient.

  19. Factorial invariance of pediatric patient self-reported fatigue across age and gender: a multigroup confirmatory factor analysis approach utilizing the PedsQL™ Multidimensional Fatigue Scale.

    PubMed

    Varni, James W; Beaujean, A Alexander; Limbers, Christine A

    2013-11-01

    In order to compare multidimensional fatigue research findings across age and gender subpopulations, it is important to demonstrate measurement invariance, that is, that the items from an instrument have equivalent meaning across the groups studied. This study examined the factorial invariance of the 18-item PedsQL™ Multidimensional Fatigue Scale items across age and gender and tested a bifactor model. Multigroup confirmatory factor analysis (MG-CFA) was performed specifying a three-factor model across three age groups (5-7, 8-12, and 13-18 years) and gender. MG-CFA models were proposed in order to compare the factor structure, metric, scalar, and error variance across age groups and gender. The analyses were based on 837 children and adolescents recruited from general pediatric clinics, subspecialty clinics, and hospitals in which children were being seen for well-child checks, mild acute illness, or chronic illness care. A bifactor model of the items with one general factor influencing all the items and three domain-specific factors representing the General, Sleep/Rest, and Cognitive Fatigue domains fit the data better than oblique factor models. Based on the multiple measures of model fit, configural, metric, and scalar invariance were found for almost all items across the age and gender groups, as was invariance in the factor covariances. The PedsQL™ Multidimensional Fatigue Scale demonstrated strict factorial invariance for child and adolescent self-report across gender and strong factorial invariance across age subpopulations. The findings support an equivalent three-factor structure across the age and gender groups studied. Based on these data, it can be concluded that pediatric patients across the groups interpreted the items in a similar manner regardless of their age or gender, supporting the multidimensional factor structure interpretation of the PedsQL™ Multidimensional Fatigue Scale.

  20. An Empirically Derived Taxonomy of Factors Affecting Physicians' Willingness to Disclose Medical Errors

    PubMed Central

    Kaldjian, Lauris C; Jones, Elizabeth W; Rosenthal, Gary E; Tripp-Reimer, Toni; Hillis, Stephen L

    2006-01-01

    BACKGROUND Physician disclosure of medical errors to institutions, patients, and colleagues is important for patient safety, patient care, and professional education. However, the variables that may facilitate or impede disclosure are diverse and lack conceptual organization. OBJECTIVE To develop an empirically derived, comprehensive taxonomy of factors that affects voluntary disclosure of errors by physicians. DESIGN A mixed-methods study using qualitative data collection (structured literature search and exploratory focus groups), quantitative data transformation (sorting and hierarchical cluster analysis), and validation procedures (confirmatory focus groups and expert review). RESULTS Full-text review of 316 articles identified 91 impeding or facilitating factors affecting physicians' willingness to disclose errors. Exploratory focus groups identified an additional 27 factors. Sorting and hierarchical cluster analysis organized factors into 8 domains. Confirmatory focus groups and expert review relocated 6 factors, removed 2 factors, and modified 4 domain names. The final taxonomy contained 4 domains of facilitating factors (responsibility to patient, responsibility to self, responsibility to profession, responsibility to community), and 4 domains of impeding factors (attitudinal barriers, uncertainties, helplessness, fears and anxieties). CONCLUSIONS A taxonomy of facilitating and impeding factors provides a conceptual framework for a complex field of variables that affects physicians' willingness to disclose errors to institutions, patients, and colleagues. This taxonomy can be used to guide the design of studies to measure the impact of different factors on disclosure, to assist in the design of error-reporting systems, and to inform educational interventions to promote the disclosure of errors to patients. PMID:16918739

  1. Drought Persistence Errors in Global Climate Models

    NASA Astrophysics Data System (ADS)

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM simulations to observation-based data sets. To do so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates of drought persistence, where a dry status is defined as a negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at the global scale. Subsequently, we analyzed the degree to which (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at the monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At the annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
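
    The persistence estimate used here is a conditional probability that can be computed directly from a precipitation series, as in this sketch (the study defines anomalies against observation-based climatologies rather than the series mean):

      import numpy as np

      def dry_to_dry_probability(precip):
          # P(dry at t+1 | dry at t), "dry" = negative anomaly about the mean.
          dry = (precip - precip.mean()) < 0.0
          both = dry[:-1] & dry[1:]
          return both.sum() / max(dry[:-1].sum(), 1)

      rng = np.random.default_rng(0)
      monthly = rng.gamma(2.0, 50.0, size=1200)   # 100 years, synthetic mm/month
      p_dd = dry_to_dry_probability(monthly)      # monthly-scale persistence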

  2. Psychometric Properties of the Procrastination Assessment Scale-Student (PASS) in a Student Sample of Sabzevar University of Medical Sciences.

    PubMed

    Mortazavi, Forough; Mortazavi, Saideh S; Khosrorad, Razieh

    2015-09-01

    Procrastination is a common behavior which affects different aspects of life. The procrastination assessment scale-student (PASS) evaluates academic procrastination in terms of its frequency and reasons. The aims of the present study were to translate, culturally adapt, and validate the Farsi version of the PASS in a sample of Iranian medical students. In this cross-sectional study, the PASS was translated into Farsi through the forward-backward method, and its content validity was thereafter assessed by a panel of 10 experts. The Farsi version of the PASS was subsequently distributed among 423 medical students. The internal reliability of the PASS was assessed using Cronbach's alpha. An exploratory factor analysis (EFA) was conducted first on 18 items and then on 28 items of the scale to find new models. The construct validity of the scale was assessed using both EFA and confirmatory factor analysis. The predictive validity of the scale was evaluated by calculating the correlation between the academic procrastination scores and the students' average scores in the previous semester. The reliability of the first and second parts of the scale was 0.781 and 0.861, respectively. An EFA on 18 items of the scale found 4 factors which jointly explained 53.2% of the variance; the model was marginally acceptable (root mean square error of approximation [RMSEA] = 0.098, standardized root mean square residual [SRMR] = 0.076, χ²/df = 4.8, comparative fit index [CFI] = 0.83). An EFA on 28 items of the scale found 4 factors which altogether explained 42.62% of the variance; the model was acceptable (RMSEA = 0.07, SRMR = 0.07, χ²/df = 2.8, incremental fit index = 0.90, CFI = 0.90). There was a negative correlation between the procrastination scores and the students' average scores (r = -0.131, P = 0.02). The Farsi version of the PASS is a valid and reliable tool to measure academic procrastination in Iranian undergraduate medical students.

  3. Tracking and shape errors measurement of concentrating heliostats

    NASA Astrophysics Data System (ADS)

    Coquand, Mathieu; Caliot, Cyril; Hénault, François

    2017-09-01

    In solar tower power plants, factors such as tracking accuracy, facet misalignment, and surface shape errors of concentrating heliostats are of prime importance to the efficiency of the system. At industrial scale, one critical issue is the time and effort required to adjust the different mirrors of the faceted heliostats, which could take several months using current techniques. Thus, methods enabling quick adjustment of a field with a huge number of heliostats are essential for the rise of solar tower technology. This communication describes a new method for heliostat characterization that makes use of four cameras located near the solar receiver, simultaneously recording images of the sun reflected by the optical surfaces. From knowledge of a measured sun profile, data processing of the acquired images allows reconstruction of the slope and shape errors of the heliostats, including tracking and canting errors. The mathematical basis of this shape reconstruction process is explained comprehensively. Numerical simulations demonstrate that the measurement accuracy of this "backward-gazing method" is compliant with the requirements of solar concentrating optics. Finally, we present our first experimental results obtained at the THEMIS experimental solar tower plant in Targasonne, France.

  4. Cross-cultural adaptation and validation of the Turkish version of the pain catastrophizing scale among patients with ankylosing spondylitis

    PubMed Central

    İlçin, Nursen; Gürpınar, Barış; Bayraktar, Deniz; Savcı, Sema; Çetin, Pınar; Sarı, İsmail; Akkoç, Nurullah

    2016-01-01

    [Purpose] This study describes the cultural adaptation, validation, and reliability of the Turkish version of the Pain Catastrophizing Scale in patients with ankylosing spondylitis. [Methods] The validity of the Turkish version of the Pain Catastrophizing Scale was assessed by evaluating data quality (missing data and floor and ceiling effects), principal components analysis, internal consistency (Cronbach’s alpha), and construct validity (Spearman’s rho). Reproducibility analyses included standard measurement error, minimum detectable change, limits of agreement, and intraclass correlation coefficients. [Results] Sixty-four adult patients with ankylosing spondylitis with a mean age of 42.2 years completed the study. Factor analysis revealed that all questionnaire items could be grouped into two factors. Excellent internal consistency was found, with a Cronbach’s alpha value of 0.95. Reliability analyses showed an intraclass correlation coefficient (95% confidence interval) of 0.96 for the total score. There was a low correlation coefficient between the Turkish version of the Pain Catastrophizing Scale and body mass index, pain levels at rest and during activity, health-related quality of life, and fear and avoidance behaviors. [Conclusion] The results of this study indicate that the Turkish version of the Pain Catastrophizing Scale is a valid and reliable clinical and research tool for patients with ankylosing spondylitis. PMID:26957778
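
    Cronbach's alpha, used here for internal consistency, is simple to compute from an item-score matrix. A minimal sketch with simulated data; the 13-item count matches the Pain Catastrophizing Scale, but the score-generating model below is invented purely for illustration:

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

        rng = np.random.default_rng(0)
        latent = rng.normal(size=(64, 1))                  # 64 respondents, as in the study
        scores = latent + 0.5 * rng.normal(size=(64, 13))  # 13 strongly related items
        print(round(cronbach_alpha(scores), 2))            # high alpha, as expected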

  5. The struggle with employee engagement: Measures and construct clarification using five samples.

    PubMed

    Byrne, Zinta S; Peters, Janet M; Weston, James W

    2016-09-01

    Among scholarly researchers, the Utrecht Work Engagement Scale (UWES) is a popular scale for assessing employee or work engagement. However, challenges to the scale's validity have raised major concerns about the measurement and conceptualization of engagement as a construct. Across 4 field samples, we examined 2 measures of engagement, the UWES and the Job Engagement Scale (JES), in both factor structure and patterns of relationships with theoretically hypothesized antecedents and consequences. In a fifth field sample, we examined the construct-level relationships between engagement and related variables, while controlling for sources of measurement error (i.e., item-specific factor, scale-specific factor, random response, and transient). By examining 2 measures, each derived from different theoretical bases, we provide unique insight into the measurement and construct of engagement. Our results show that, although correlated, the JES and UWES are not interchangeable. The UWES, more so than the JES, assesses engagement with overlap from other job attitudes, requiring improvement in the measurement of engagement. We offer guidance as to when to use each measure. Furthermore, by isolating the construct versus measurement of engagement relative to burnout, commitment, stress, and psychological meaningfulness and availability, we determined (a) the engagement construct is not the same as the opposite of burnout, warranting a reevaluation of the opposite-of-burnout conceptualization of engagement; and (b) psychological meaningfulness and engagement are highly correlated and likely reciprocally related, necessitating a modification to the self-role-expression conceptualization of engagement.

  6. Evaluation of depressive symptoms in patients with coronary artery disease using the Montgomery Åsberg Depression Rating Scale.

    PubMed

    Bunevicius, Adomas; Staniute, Margarita; Brozaitiene, Julija; Pommer, Antoinette M; Pop, Victor J M; Montgomery, Stuart A; Bunevicius, Robertas

    2012-09-01

    The aim of this study was to evaluate, in patients with coronary artery disease (CAD), the factor structure and psychometric properties of the Montgomery Åsberg Depression Rating Scale (MADRS) to identify patients with a current major depressive episode (MDE). The construct validity of the MADRS against self-rating scales was also evaluated. A total of 522 consecutive CAD patients at admission to a cardiac rehabilitation program were interviewed for the severity of depressive symptoms using the MADRS and for current MDE using the structured MINI International Neuropsychiatric Interview. In addition, all patients completed the Hospital Anxiety and Depression Scale and the Beck Depression Inventory-II. The MADRS had a one-factor structure and high internal consistency (Cronbach's coefficient α = 0.82). Confirmatory factor analysis indicated an adequate fit: comparative fit index = 0.95, normed fit index = 0.91, and root mean square error of approximation = 0.07. At a cut-off value of 10 or higher, the MADRS had good psychometric properties for the identification of current MDE (positive predictive value = 42%, with sensitivity = 88% and specificity = 85%). There was also a moderate to strong correlation of MADRS scores with scores on self-rating depression scales. In sum, in CAD patients undergoing rehabilitation, the MADRS is a unidimensional instrument with high internal consistency and can be used for the identification of depressed CAD patients. The association between MADRS and self-rating depression scores is moderate to strong.
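
    The reported positive predictive value follows from sensitivity, specificity and prevalence via Bayes' rule. A minimal sketch; the ~11% MDE prevalence below is an assumption chosen to reproduce the reported figures, not a number taken from the abstract:

        def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
            """Positive predictive value via Bayes' rule."""
            true_pos = sensitivity * prevalence
            false_pos = (1.0 - specificity) * (1.0 - prevalence)
            return true_pos / (true_pos + false_pos)

        print(round(ppv(0.88, 0.85, 0.11), 2))  # ~0.42, matching the reported PPV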

  7. Towards the 1 mm/y stability of the radial orbit error at regional scales

    NASA Astrophysics Data System (ADS)

    Couhert, Alexandre; Cerri, Luca; Legeais, Jean-François; Ablain, Michael; Zelensky, Nikita P.; Haines, Bruce J.; Lemoine, Frank G.; Bertiger, William I.; Desai, Shailen D.; Otten, Michiel

    2015-01-01

    An estimated orbit error budget for the Jason-1 and Jason-2 GDR-D solutions is constructed, using several measures of orbit error. The focus is on the long-term stability of the orbit time series for mean sea level applications on a regional scale. We discuss various issues related to the assessment of radial orbit error trends; in particular this study reviews orbit errors dependent on the tracking technique, with an aim to monitoring the long-term stability of all available tracking systems operating on Jason-1 and Jason-2 (GPS, DORIS, SLR). The reference frame accuracy and its effect on Jason orbit is assessed. We also examine the impact of analysis method on the inference of Geographically Correlated Errors as well as the significance of estimated radial orbit error trends versus the time span of the analysis. Thus a long-term error budget of the 10-year Jason-1 and Envisat GDR-D orbit time series is provided for two time scales: interannual and decadal. As the temporal variations of the geopotential remain one of the primary limitations in the Precision Orbit Determination modeling, the overall accuracy of the Jason-1 and Jason-2 GDR-D solutions is evaluated through comparison with external orbits based on different time-variable gravity models. This contribution is limited to an East-West “order-1” pattern at the 2 mm/y level (secular) and 4 mm level (seasonal), over the Jason-2 lifetime. The possibility of achieving sub-mm/y radial orbit stability over interannual and decadal periods at regional scales and the challenge of evaluating such an improvement using in situ independent data is discussed.

  8. Towards the 1 mm/y Stability of the Radial Orbit Error at Regional Scales

    NASA Technical Reports Server (NTRS)

    Couhert, Alexandre; Cerri, Luca; Legeais, Jean-Francois; Ablain, Michael; Zelensky, Nikita P.; Haines, Bruce J.; Lemoine, Frank G.; Bertiger, William I.; Desai, Shailen D.; Otten, Michiel

    2015-01-01

    An estimated orbit error budget for the Jason-1 and Jason-2 GDR-D solutions is constructed, using several measures of orbit error. The focus is on the long-term stability of the orbit time series for mean sea level applications on a regional scale. We discuss various issues related to the assessment of radial orbit error trends; in particular this study reviews orbit errors dependent on the tracking technique, with an aim to monitoring the long-term stability of all available tracking systems operating on Jason-1 and Jason-2 (GPS, DORIS, SLR). The reference frame accuracy and its effect on Jason orbit is assessed. We also examine the impact of analysis method on the inference of Geographically Correlated Errors as well as the significance of estimated radial orbit error trends versus the time span of the analysis. Thus a long-term error budget of the 10-year Jason-1 and Envisat GDR-D orbit time series is provided for two time scales: interannual and decadal. As the temporal variations of the geopotential remain one of the primary limitations in the Precision Orbit Determination modeling, the overall accuracy of the Jason-1 and Jason-2 GDR-D solutions is evaluated through comparison with external orbits based on different time-variable gravity models. This contribution is limited to an East-West "order-1" pattern at the 2 mm/y level (secular) and 4 mm level (seasonal), over the Jason-2 lifetime. The possibility of achieving sub-mm/y radial orbit stability over interannual and decadal periods at regional scales and the challenge of evaluating such an improvement using in situ independent data is discussed.

  9. Towards the 1 mm/y Stability of the Radial Orbit Error at Regional Scales

    NASA Technical Reports Server (NTRS)

    Couhert, Alexandre; Cerri, Luca; Legeais, Jean-Francois; Ablain, Michael; Zelensky, Nikita P.; Haines, Bruce J.; Lemoine, Frank G.; Bertiger, William I.; Desai, Shailen D.; Otten, Michiel

    2014-01-01

    An estimated orbit error budget for the Jason-1 and Jason-2 GDR-D solutions is constructed, using several measures of orbit error. The focus is on the long-term stability of the orbit time series for mean sea level applications on a regional scale. We discuss various issues related to the assessment of radial orbit error trends; in particular this study reviews orbit errors dependent on the tracking technique, with an aim to monitoring the long-term stability of all available tracking systems operating on Jason-1 and Jason-2 (GPS, DORIS, SLR). The reference frame accuracy and its effect on Jason orbit is assessed. We also examine the impact of analysis method on the inference of Geographically Correlated Errors as well as the significance of estimated radial orbit error trends versus the time span of the analysis. Thus a long-term error budget of the 10-year Jason-1 and Envisat GDR-D orbit time series is provided for two time scales: interannual and decadal. As the temporal variations of the geopotential remain one of the primary limitations in the Precision Orbit Determination modeling, the overall accuracy of the Jason-1 and Jason-2 GDR-D solutions is evaluated through comparison with external orbits based on different time-variable gravity models. This contribution is limited to an East-West "order-1" pattern at the 2 mm/y level (secular) and 4 mm level (seasonal), over the Jason-2 lifetime. The possibility of achieving sub-mm/y radial orbit stability over interannual and decadal periods at regional scales and the challenge of evaluating such an improvement using in situ independent data is discussed.

  10. Global distortion of GPS networks associated with satellite antenna model errors

    NASA Astrophysics Data System (ADS)

    Cardellach, E.; Elósegui, P.; Davis, J. L.

    2007-07-01

    Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by ~1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing the satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position, that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned leads to effects of PCO errors that vary with time and introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm/yr level, which will impact high-precision crustal deformation studies.
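
    The link between a common radial error and apparent network scale is purely geometric: if every site appears displaced radially by the same amount, the network appears rescaled by that amount divided by the Earth radius. A minimal sketch; the 10 mm absorbed radial shift is a hypothetical illustration, not a value from the study:

        R_EARTH_M = 6.371e6

        def scale_change_ppb(common_radial_error_m: float) -> float:
            """Apparent network scale change, in parts per billion, from a
            common radial position error absorbed at every site."""
            return common_radial_error_m / R_EARTH_M * 1e9

        print(round(scale_change_ppb(0.010), 2))  # 10 mm radial error -> ~1.57 ppb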

  11. Global Distortion of GPS Networks Associated with Satellite Antenna Model Errors

    NASA Technical Reports Server (NTRS)

    Cardellach, E.; Elósegui, P.; Davis, J. L.

    2007-01-01

    Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by approx. 1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing the satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position, that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned leads to effects of PCO errors that vary with time and introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm/yr level, which will impact high-precision crustal deformation studies.

  12. Error disclosure: a new domain for safety culture assessment.

    PubMed

    Etchegaray, Jason M; Gallagher, Thomas H; Bell, Sigall K; Dunlap, Ben; Thomas, Eric J

    2012-07-01

    To (1) develop and test survey items that measure error disclosure culture, (2) examine relationships among error disclosure culture, teamwork culture and safety culture and (3) establish predictive validity for survey items measuring error disclosure culture. All clinical faculty from six health institutions (four medical schools, one cancer centre and one health science centre) in The University of Texas System were invited to anonymously complete an electronic survey containing questions about safety culture and error disclosure. The authors found two factors to measure error disclosure culture: one factor is focused on the general culture of error disclosure and the second factor is focused on trust. Both error disclosure culture factors were unique from safety culture and teamwork culture (correlations were less than r=0.85). Also, error disclosure general culture and error disclosure trust culture predicted intent to disclose a hypothetical error to a patient (r=0.25, p<0.001 and r=0.16, p<0.001, respectively) while teamwork and safety culture did not predict such an intent (r=0.09, p=NS and r=0.12, p=NS). Those who received prior error disclosure training reported significantly higher levels of error disclosure general culture (t=3.7, p<0.05) and error disclosure trust culture (t=2.9, p<0.05). The authors created and validated a new measure of error disclosure culture that predicts intent to disclose an error better than other measures of healthcare culture. This measure fills an existing gap in organisational assessments by assessing transparent communication after medical error, an important aspect of culture.

  13. Differences among Job Positions Related to Communication Errors at Construction Sites

    NASA Astrophysics Data System (ADS)

    Takahashi, Akiko; Ishida, Toshiro

    In a previous study, we classified the communication errors at construction sites as faulty intention and message pattern, inadequate channel pattern, and faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in their perception of communication error risk. Questionnaires based on the previous study were administered to construction workers (n=811; 149 administrators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns were inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.

  14. A national physician survey of diagnostic error in paediatrics.

    PubMed

    Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B

    2016-10-01

    This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54 % (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0 %) respondents. Consultants reported significantly fewer diagnostic errors compared to trainees (p value = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked, clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions on the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15 %. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.

  15. Medication errors as malpractice-a qualitative content analysis of 585 medication errors by nurses in Sweden.

    PubMed

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on nurses' work experience in relation to the risk and type of medication errors. All medication errors where a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to the type and the individual and system contributory factors was made. To test for possible differences related to nurses' work experience, and for associations within and between the errors and contributory factors, Fisher's exact test was used; Cohen's kappa (κ) was used to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found; the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found; the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common among less experienced nurses. The experienced nurses were more prone to "Practice beyond scope of practice" and to make errors in spite of "Lack of adequate access to guidelines or unclear organisational routines". Medication errors regarded as malpractice in Sweden were of the same character as medication errors worldwide. A complex interplay between individual and system factors often contributed to the errors.
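
    Both named tests are available in standard Python libraries. A minimal sketch with invented counts and labels; the 2x2 table and coder data below are hypothetical stand-ins for the study's error-by-experience comparisons, and kappa is shown in its common inter-coder-agreement role rather than the paper's exact usage:

        from scipy.stats import fisher_exact
        from sklearn.metrics import cohen_kappa_score

        # Hypothetical 2x2 table: rows = less/more experienced nurses,
        # columns = "wrong patient" error present/absent.
        table = [[30, 170],
                 [12, 188]]
        odds_ratio, p_value = fisher_exact(table)
        print(odds_ratio, p_value)

        # Cohen's kappa for agreement between two coders classifying the same cases.
        coder_a = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1]
        coder_b = [0, 1, 0, 0, 1, 0, 1, 1, 1, 1]
        print(cohen_kappa_score(coder_a, coder_b))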

  16. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations

    PubMed Central

    Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank

    2016-01-01

    We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range or larger than the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028

  17. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations.

    PubMed

    Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P; Rötter, Reimund P; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank

    2016-01-01

    We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range or larger than the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations.
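
    The rMAE reported in both versions of this study compares yields simulated with aggregated inputs against a high-resolution baseline. A minimal sketch, assuming the common definition of relative mean absolute error; the yield values are invented for illustration:

        import numpy as np

        def rmae(simulated: np.ndarray, reference: np.ndarray) -> float:
            """Relative mean absolute error of aggregated-input yields against
            the high-resolution baseline, as a fraction of the baseline mean."""
            return np.mean(np.abs(simulated - reference)) / np.mean(reference)

        reference = np.array([7.9, 8.4, 6.8, 7.2])   # t/ha with 1 km inputs (hypothetical)
        aggregated = np.array([7.1, 8.9, 7.5, 6.5])  # same sites with 100 km inputs
        print(f"rMAE = {rmae(aggregated, reference):.1%}")   # ~8.9%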

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
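
    Several of the listed metrics are simple to state. A minimal plain NumPy sketch of two of them, following the common forecast-verification conventions rather than the package's own API (which is not reproduced here):

        import numpy as np

        def mean_absolute_scaled_error(obs, pred):
            """MASE: forecast MAE scaled by the MAE of a persistence forecast."""
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            naive_mae = np.mean(np.abs(np.diff(obs)))
            return np.mean(np.abs(pred - obs)) / naive_mae

        def pod_far(hits, misses, false_alarms):
            """Probability of detection and false alarm ratio from a 2x2 table."""
            return hits / (hits + misses), false_alarms / (hits + false_alarms)

        print(mean_absolute_scaled_error([3.1, 2.9, 3.4, 4.0, 3.6],
                                         [3.0, 3.1, 3.3, 3.8, 3.9]))
        print(pod_far(hits=42, misses=8, false_alarms=14))  # (0.84, 0.25)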

  19. Evidence of structural invariance across three groups of Meehlian schizotypes

    PubMed Central

    Chan, Raymond CK; Gooding, Diane C; Shi, Hai-song; Geng, Fu-lei; Xie, Dong-jie; Yang, Zhuo-Ya; Liu, Wen-hua; Wang, Yi; Yan, Chao; Shi, Chuan; Lui, Simon SY; Cheung, Eric FC

    2016-01-01

    According to Meehl’s model of schizotypy, there is a latent personality organization associated with the diathesis for schizophrenia that can be identified in several ways. We sought to examine the structural invariance of four Chapman psychosis-proneness scales (CPPS) across three groups of putative schizotypes, namely, clinically-, biologically-, and psychometrically-identified schizotypes. We examined the factor structure of the Perceptual Aberration (PER), Magical Ideation (MIS), Revised Social Anhedonia (RSAS), and Revised Physical Anhedonia (RPAS) scales in 196 schizophrenia patients, 197 non-psychotic first-degree relatives, and 1,724 non-clinical young adults. The confirmatory factor analyses indicated that the best-fitting model was a two-factor model with negative schizotypy (RSAS and RPAS) and positive schizotypy (PER and MIS). All three samples fit the model well, with Comparative Fit Indices > 0.95 and Tucker Lewis Indices > 0.90. The root mean square errors of approximation were all small (P values ⩽ 0.01). We also observed that for both anhedonia scales, the groups’ mean scale scores varied in the hypothesized direction, as predicted by Meehl’s model of schizotypy. All three Chinese samples, namely, the patients (clinical schizotypes), relatives (biologically-identified schizotypes), and non-clinical young adults (containing psychometrically-identified schizotypes), showed the same factorial structure. This finding supports the suitability of the CPPS for cross-cultural and/or genetic investigations of schizotypy. PMID:27336057

  20. Individual Differences in Numeracy and Cognitive Reflection, with Implications for Biases and Fallacies in Probability Judgment.

    PubMed

    Liberali, Jordana M; Reyna, Valerie F; Furlan, Sarah; Stein, Lilian M; Pardo, Seth T

    2012-10-01

    Despite evidence that individual differences in numeracy affect judgment and decision making, the precise mechanisms underlying how such differences produce biases and fallacies remain unclear. Numeracy scales have been developed without sufficient theoretical grounding, and their relation to other cognitive tasks that assess numerical reasoning, such as the Cognitive Reflection Test (CRT), has been debated. In studies conducted in Brazil and in the USA, we administered an objective Numeracy Scale (NS), Subjective Numeracy Scale (SNS), and the CRT to assess whether they measured similar constructs. The Rational-Experiential Inventory, inhibition (go/no-go task), and intelligence were also investigated. By examining factor solutions along with frequent errors for questions that loaded on each factor, we characterized different types of processing captured by different items on these scales. We also tested the predictive power of these factors to account for biases and fallacies in probability judgments. In the first study, 259 Brazilian undergraduates were tested on the conjunction and disjunction fallacies. In the second study, 190 American undergraduates responded to a ratio-bias task. Across the different samples, the results were remarkably similar. The results indicated that the CRT is not just another numeracy scale, that objective and subjective numeracy scales do not measure an identical construct, and that different aspects of numeracy predict different biases and fallacies. Dimensions of numeracy included computational skills such as multiplying, proportional reasoning, mindless or verbatim matching, metacognitive monitoring, and understanding the gist of relative magnitude, consistent with dual-process theories such as fuzzy-trace theory.

  1. Individual Differences in Numeracy and Cognitive Reflection, with Implications for Biases and Fallacies in Probability Judgment

    PubMed Central

    Liberali, Jordana M.; Reyna, Valerie F.; Furlan, Sarah; Stein, Lilian M.; Pardo, Seth T.

    2013-01-01

    Despite evidence that individual differences in numeracy affect judgment and decision making, the precise mechanisms underlying how such differences produce biases and fallacies remain unclear. Numeracy scales have been developed without sufficient theoretical grounding, and their relation to other cognitive tasks that assess numerical reasoning, such as the Cognitive Reflection Test (CRT), has been debated. In studies conducted in Brazil and in the USA, we administered an objective Numeracy Scale (NS), Subjective Numeracy Scale (SNS), and the CRT to assess whether they measured similar constructs. The Rational–Experiential Inventory, inhibition (go/no-go task), and intelligence were also investigated. By examining factor solutions along with frequent errors for questions that loaded on each factor, we characterized different types of processing captured by different items on these scales. We also tested the predictive power of these factors to account for biases and fallacies in probability judgments. In the first study, 259 Brazilian undergraduates were tested on the conjunction and disjunction fallacies. In the second study, 190 American undergraduates responded to a ratio-bias task. Across the different samples, the results were remarkably similar. The results indicated that the CRT is not just another numeracy scale, that objective and subjective numeracy scales do not measure an identical construct, and that different aspects of numeracy predict different biases and fallacies. Dimensions of numeracy included computational skills such as multiplying, proportional reasoning, mindless or verbatim matching, metacognitive monitoring, and understanding the gist of relative magnitude, consistent with dual-process theories such as fuzzy-trace theory. PMID:23878413

  2. Analysis on the dynamic error for optoelectronic scanning coordinate measurement network

    NASA Astrophysics Data System (ADS)

    Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie

    2018-01-01

    Large-scale dynamic three-dimensional coordinate measurement is in strong demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has attracted close attention. It is widely used in large-component joining, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks is focused on static measurement capability, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts applications. The workshop measurement and positioning system is a representative system which can, in theory, realize dynamic measurement. In this paper we examine the sources of dynamic error in depth and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this model, simulations of the dynamic error are carried out. The dynamic error is quantified, and patterns of volatility and periodicity are found. Dynamic error characteristics are shown in detail. These results lay the foundation for further accuracy improvement.

  3. Implementation and verification of a four-probe motion error measurement system for a large-scale roll lathe used in hybrid manufacturing

    NASA Astrophysics Data System (ADS)

    Chen, Yuan-Liu; Niu, Zengyuan; Matsuura, Daiki; Lee, Jung Chul; Shimizu, Yuki; Gao, Wei; Oh, Jeong Seok; Park, Chun Hong

    2017-10-01

    In this paper, a four-probe measurement system is implemented and verified for the carriage slide motion error measurement of a large-scale roll lathe used in hybrid manufacturing where a laser machining probe and a diamond cutting tool are placed on two sides of a roll workpiece for manufacturing. The motion error of the carriage slide of the roll lathe is composed of two straightness motion error components and two parallelism motion error components in the vertical and horizontal planes. Four displacement measurement probes, which are mounted on the carriage slide with respect to four opposing sides of the roll workpiece, are employed for the measurement. Firstly, based on the reversal technique, the four probes are moved by the carriage slide to scan the roll workpiece before and after a 180-degree rotation of the roll workpiece. Taking into consideration the fact that the machining accuracy of the lathe is influenced by not only the carriage slide motion error but also the gravity deformation of the large-scale roll workpiece due to its heavy weight, the vertical motion error is thus characterized relating to the deformed axis of the roll workpiece. The horizontal straightness motion error can also be synchronously obtained based on the reversal technique. In addition, based on an error separation algorithm, the vertical and horizontal parallelism motion error components are identified by scanning the rotating roll workpiece at the start and the end positions of the carriage slide, respectively. The feasibility and reliability of the proposed motion error measurement system are demonstrated by the experimental results and the measurement uncertainty analysis.
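
    The reversal step at the heart of this kind of method separates the slide error from the workpiece profile because rotating the workpiece by 180 degrees flips the sign of one term but not the other. A minimal sketch, with sign conventions and error shapes assumed for illustration:

        import numpy as np

        x = np.linspace(0.0, 2.0, 50)           # carriage position, m
        slide = 5e-6 * np.sin(np.pi * x)        # hypothetical slide straightness error, m
        part = 2e-6 * np.cos(2 * np.pi * x)     # hypothetical workpiece profile error, m

        m1 = slide + part                       # probe reading before reversal
        m2 = slide - part                       # probe reading after 180-degree rotation

        slide_est = (m1 + m2) / 2.0             # reversal separates the two terms
        part_est = (m1 - m2) / 2.0
        print(np.allclose(slide_est, slide), np.allclose(part_est, part))  # True True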

  4. Refinement and evaluation of the Massachusetts firm-yield estimator model version 2.0

    USGS Publications Warehouse

    Levin, Sara B.; Archfield, Stacey A.; Massey, Andrew J.

    2011-01-01

    The firm yield is the maximum average daily withdrawal that can be extracted from a reservoir without risk of failure during an extended drought period. Previously developed procedures for determining the firm yield of a reservoir were refined and applied to 38 reservoir systems in Massachusetts, including 25 single- and multiple-reservoir systems that were examined during previous studies and 13 additional reservoir systems. Changes to the firm-yield model include refinements to the simulation methods and input data, as well as the addition of several scenario-testing capabilities. The simulation procedure was adapted to run at a daily time step over a 44-year simulation period, and daily streamflow and meteorological data were compiled for all the reservoirs for input to the model. Another change to the model-simulation methods is the adjustment of the scaling factor used in estimating groundwater contributions to the reservoir. The scaling factor is used to convert the daily groundwater-flow rate into a volume by multiplying the rate by the length of reservoir shoreline that is hydrologically connected to the aquifer. Previous firm-yield analyses used a constant scaling factor that was estimated from the reservoir surface area at full pool. The use of a constant scaling factor caused groundwater flows during periods when the reservoir stage was very low to be overestimated. The constant groundwater scaling factor used in previous analyses was replaced with a variable scaling factor that is based on daily reservoir stage. This change reduced instability in the groundwater-flow algorithms and produced more realistic groundwater-flow contributions during periods of low storage. Uncertainty in the firm-yield model arises from many sources, including errors in input data. The sensitivity of the model to uncertainty in streamflow input data and uncertainty in the stage-storage relation was examined. A series of Monte Carlo simulations were performed on 22 reservoirs to assess the sensitivity of firm-yield estimates to errors in daily-streamflow input data. Results of the Monte Carlo simulations indicate that underestimation in the lowest stream inflows can cause firm yields to be underestimated by an average of 1 to 10 percent. Errors in the stage-storage relation can arise when the point density of bathymetric survey measurements is too low. Existing bathymetric surfaces were resampled using hypothetical transects of varying patterns and point densities in order to quantify the uncertainty in stage-storage relations. Reservoir-volume calculations and resulting firm yields were accurate to within 5 percent when point densities were greater than 20 points per acre of reservoir surface. Methods for incorporating summer water-demand-reduction scenarios into the firm-yield model were developed as well as the ability to relax the no-fail reliability criterion. Although the original firm-yield model allowed monthly reservoir releases to be specified, there have been no previous studies examining the feasibility of controlled releases for downstream flows from Massachusetts reservoirs. Two controlled-release scenarios were tested—with and without a summer water-demand-reduction scenario—for a scenario with a no-fail criterion and a scenario that allows for a 1-percent failure rate over the entire simulation period. Based on these scenarios, about one-third of the reservoir systems were able to support the flow-release scenarios at their 2000–2004 usage rates. 
Reservoirs with higher storage ratios (reservoir storage capacity to mean annual streamflow) and lower demand ratios (mean annual water demand to annual firm yield) were capable of higher downstream release rates. For the purposes of this research, all reservoir systems were assumed to have structures which enable controlled releases, although this assumption may not be true for many of the reservoirs studied.
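
    The variable groundwater scaling factor described above amounts to making the shoreline length a function of daily stage before multiplying by the per-unit-shoreline flow rate. A minimal sketch; the stage-to-shoreline relation and all numbers are hypothetical, standing in for values the report derives from reservoir geometry:

        def groundwater_volume_m3(flow_rate_m2_per_day: float,
                                  shoreline_length_m: float) -> float:
            """Daily groundwater volume: flow rate per unit shoreline times the
            hydrologically connected shoreline length."""
            return flow_rate_m2_per_day * shoreline_length_m

        def shoreline_from_stage(stage_m: float, full_pool_stage_m: float = 10.0,
                                 full_pool_shoreline_m: float = 4000.0) -> float:
            """Hypothetical stage-based shoreline, shrinking as the pool draws down."""
            frac = min(max(stage_m / full_pool_stage_m, 0.0), 1.0)
            return full_pool_shoreline_m * frac

        print(groundwater_volume_m3(0.05, shoreline_from_stage(3.5)))  # m3/day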

  5. Density scaling on n  =  1 error field penetration in ohmically heated discharges in EAST

    NASA Astrophysics Data System (ADS)

    Wang, Hui-Hui; Sun, You-Wen; Shi, Tong-Hui; Zang, Qing; Liu, Yue-Qiang; Yang, Xu; Gu, Shuai; He, Kai-Yang; Gu, Xiang; Qian, Jin-Ping; Shen, Biao; Luo, Zheng-Ping; Chu, Nan; Jia, Man-Ni; Sheng, Zhi-Cai; Liu, Hai-Qing; Gong, Xian-Zu; Wan, Bao-Nian; Contributors, EAST

    2018-05-01

    Density scaling of error field penetration in EAST is investigated with different n = 1 magnetic perturbation coil configurations in ohmically heated discharges. The density scalings of error field penetration thresholds under two magnetic perturbation spectra are b_r ∝ n_e^0.5 and b_r ∝ n_e^0.6, where b_r is the error field and n_e is the line averaged electron density. One difficulty in understanding the density scaling is that key parameters other than density in determining the field penetration process may also change when the plasma density changes. Therefore, they should be determined from experiments. The estimated theoretical scaling (b_r ∝ n_e^0.54 in the lower density region and b_r ∝ n_e^0.40 in the higher density region), using the density dependence of viscosity diffusion time, electron temperature and mode frequency measured from the experiments, is consistent with the observed scaling. One of the key points in reproducing the observed scaling in EAST is that the viscosity diffusion time estimated from the energy confinement time is almost constant. This means that the plasma confinement lies in the saturated ohmic confinement regime rather than the linear Neo-Alcator regime that caused the weak density dependence in previous theoretical studies.

  6. Stabilizing Conditional Standard Errors of Measurement in Scale Score Transformations

    ERIC Educational Resources Information Center

    Moses, Tim; Kim, YoungKoung

    2017-01-01

    The focus of this article is on scale score transformations that can be used to stabilize conditional standard errors of measurement (CSEMs). Three transformations for stabilizing the estimated CSEMs are reviewed, including the traditional arcsine transformation, a recently developed general variance stabilization transformation, and a new method…

  7. Evaluation and error apportionment of an ensemble of atmospheric chemistry transport modeling systems: multivariable temporal and spatial breakdown

    EPA Science Inventory

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) hel...

  8. Impact of transport and modelling errors on the estimation of methane sources and sinks by inverse modelling

    NASA Astrophysics Data System (ADS)

    Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric

    2013-04-01

    Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty on the present and the future evolution of the CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry, ...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling directly convert into flux changes when assuming perfect transport in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated in a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers which drive it) used to create the pseudo-observations varies among inversions. Consequently, comparing the nine inverted methane flux estimates obtained for 2005 gives a good order of magnitude of the impact of transport and modelling errors on the estimated fluxes with current and future networks. It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at the global scale, representing 5% of the total methane emissions for 2005. At the continental scale, transport and modelling errors have bigger impacts in proportion to the area of the regions, ranging from 36 TgCH4 in North America to 7 TgCH4 in Boreal Eurasia, with a percentage range from 23% to 48%. Thus, the contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large considering the present questions on the methane budget. Moreover, diagnostics of the error statistics included in our inversions have been computed. They show that the errors contained in the measurement error covariance matrix are under-estimated in current inversions, suggesting that transport and modelling errors should be included more properly in future inversions.
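
    In the variational framework referred to above, the estimated fluxes minimize a cost function whose standard form is the following (textbook 4D-Var notation, not a PYVAR-specific expression):

        J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}\bigl(H(\mathbf{x})-\mathbf{y}\bigr)^{\mathsf{T}}\mathbf{R}^{-1}\bigl(H(\mathbf{x})-\mathbf{y}\bigr)

    where x is the flux state vector, x_b the prior fluxes, B the prior error covariance, H the CTM-based observation operator, y the (here synthetic) observations, and R the observation error covariance, which is where transport and modelling errors must be represented.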

  9. Foul WX Underground: The Dynamics of Resistance and the Analog Logic of Communication during a Digital Blackout

    DTIC Science & Technology

    2009-05-21

    [Record text garbled in extraction; the recoverable bibliographic fragments cite Sigmund Freud, Civilization and Its Discontents (New York: W.W. Norton, 1962); Abraham Maslow, "Theory of Human Motivation"; and Recognizing and Avoiding Error in Complex Situations (New York: Basic Books, 1996).]

  10. Non-GPS Navigation Using Vision-Aiding and Active Radio Range Measurements

    DTIC Science & Technology

    2011-03-01

    [Record text garbled in extraction; the recoverable technical fragment describes unbalanced compliance of the gyroscope's float assembly along the input and spin axes, with the remaining three error sources being the scale-factor errors. The rest of the extracted text is distribution-statement and acknowledgment boilerplate.]

  11. Characterization and limits of a cold-atom Sagnac interferometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauguet, A.; Canuel, B.; Leveque, T.

    2009-12-15

    We present the full evaluation of a cold-atom gyroscope based on atom interferometry. We have performed extensive studies to determine the systematic errors, scale factor and sensitivity. We demonstrate that the acceleration noise can be efficiently removed from the rotation signal, allowing us to reach the fundamental limit of the quantum projection noise for short term measurements. The technical limits to the long term sensitivity and accuracy have been identified, clearing the way for the next generation of ultrasensitive atom gyroscopes.
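
    For a three-pulse light-pulse atom interferometer, the rotation response underlying such a gyroscope's scale factor takes a standard textbook form. A minimal sketch; the laser and trajectory parameters are hypothetical, not the values of this apparatus:

        import math

        def rotation_scale_factor(k_eff: float, v: float, T: float) -> float:
            """Rotation phase per unit rotation rate, 2 * k_eff * v * T**2, for a
            three-pulse atom interferometer (standard Sagnac response)."""
            return 2.0 * k_eff * v * T**2

        k_eff = 2.0 * (2.0 * math.pi / 780e-9)  # two-photon Raman wave number, ~Rb-87
        v = 0.3                                  # transverse launch velocity, m/s
        T = 0.04                                 # pulse separation time, s
        print(f"{rotation_scale_factor(k_eff, v, T):.3e} rad per (rad/s)")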

  12. Improving Predictions with Reliable Extrapolation Schemes and Better Understanding of Factorization

    NASA Astrophysics Data System (ADS)

    More, Sushant N.

    New insights into the inter-nucleon interactions, developments in many-body technology, and the surge in computational capabilities have led to phenomenal progress in low-energy nuclear physics in the past few years. Nonetheless, many calculations still lack a robust uncertainty quantification which is essential for making reliable predictions. In this work we investigate two distinct sources of uncertainty and develop ways to account for them. Harmonic oscillator basis expansions are widely used in ab-initio nuclear structure calculations. Finite computational resources usually require that the basis be truncated before observables are fully converged, necessitating reliable extrapolation schemes. It has been demonstrated recently that errors introduced from basis truncation can be taken into account by focusing on the infrared and ultraviolet cutoffs induced by a truncated basis. We show that a finite oscillator basis effectively imposes a hard-wall boundary condition in coordinate space. We accurately determine the position of the hard wall as a function of oscillator space parameters, derive infrared extrapolation formulas for the energy and other observables, and discuss the extension of this approach to higher angular momentum and to other localized bases. We exploit the duality of the harmonic oscillator to account for the errors introduced by a finite ultraviolet cutoff. Nucleon knockout reactions have been widely used to study and understand nuclear properties. Such an analysis implicitly assumes that the effects of the probe can be separated from the physics of the target nucleus. This factorization between nuclear structure and reaction components depends on the renormalization scale and scheme, and has not been well understood. But it is potentially critical for interpreting experiments and for extracting process-independent nuclear properties. We use a class of unitary transformations called the similarity renormalization group (SRG) transformations to systematically study the scale dependence of factorization for the simplest knockout process of deuteron electrodisintegration. We find that the extent of scale dependence depends strongly on kinematics, but in a systematic way. We find a relatively weak scale dependence at the quasi-free kinematics that gets progressively stronger as one moves away from the quasi-free region. Based on examination of the relevant overlap matrix elements, we are able to qualitatively explain and even predict the nature of scale dependence based on the kinematics under consideration.
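
    Infrared extrapolations of this kind are commonly applied by fitting an exponential correction in the effective hard-wall radius L. A minimal sketch, assuming the functional form E(L) = E_inf + a*exp(-2*k_inf*L) discussed in this line of work; the energy values below are invented for illustration:

        import numpy as np
        from scipy.optimize import curve_fit

        def ir_energy(L, E_inf, a, k_inf):
            """Infrared extrapolation form E(L) = E_inf + a * exp(-2 * k_inf * L)."""
            return E_inf + a * np.exp(-2.0 * k_inf * L)

        L = np.array([8.0, 9.0, 10.0, 11.0, 12.0])              # hard-wall radii, fm
        E = np.array([-27.10, -27.97, -28.49, -28.81, -29.00])  # energies, MeV (hypothetical)
        params, _ = curve_fit(ir_energy, L, E, p0=(-29.0, 100.0, 0.3))
        print(f"E_inf = {params[0]:.2f} MeV")                   # infinite-basis estimate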

  13. Measuring positive mental health in Canada: construct validation of the Mental Health Continuum—Short Form

    PubMed Central

    Orpana, Heather; Vachon, Julie; Dykxhoorn, Jennifer; Jayaraman, Gayatri

    2017-01-01

    Introduction: Positive mental health is increasingly recognized as an important focus for public health policies and programs. In Canada, the Mental Health Continuum—Short Form (MHC-SF) was identified as a promising measure to include on population surveys to measure positive mental health. It proposes to measure a three-factor model of positive mental health including emotional, social and psychological well-being. The purpose of this study was to examine whether the MHC-SF is an adequate measure of positive mental health for Canadian adults. Methods: We conducted confirmatory factor analysis (CFA) using data from the 2012 Canadian Community Health Survey (CCHS)—Mental Health Component (CCHS-MH), and cross-validated the model using data from the CCHS 2011–2012 annual cycle. We examined criterion-related validity through correlations of MHC-SF subscale scores with positively and negatively associated concepts (e.g. life satisfaction and psychological distress, respectively). Results: We confirmed the validity of the three-factor model of emotional, social and psychological well-being through CFA on two independent samples, once four correlated errors between items on the social well-being scale were added. We observed significant correlations in the anticipated direction between emotional, psychological and social well-being scores and related concepts. Cronbach’s alpha for both emotional and psychological well-being subscales was 0.82; for social well-being it was 0.77. Conclusion: Our study suggests that the MHC-SF measures a three-factor model of positive mental health in the Canadian population. However, caution is warranted when using the social well-being scale, which did not function as well as the other factors, as evidenced by the need to add several correlated error terms to obtain adequate model fit, a higher level of missing data on these questions and weaker correlations with related constructs. Social well-being is important in a comprehensive measure of positive mental health, and further research is recommended. PMID:28402801

  14. Measuring positive mental health in Canada: construct validation of the Mental Health Continuum-Short Form.

    PubMed

    Orpana, Heather; Vachon, Julie; Dykxhoorn, Jennifer; Jayaraman, Gayatri

    2017-04-01

    Positive mental health is increasingly recognized as an important focus for public health policies and programs. In Canada, the Mental Health Continuum-Short Form (MHC-SF) was identified as a promising measure to include on population surveys to measure positive mental health. It proposes to measure a three-factor model of positive mental health including emotional, social and psychological well-being. The purpose of this study was to examine whether the MHC-SF is an adequate measure of positive mental health for Canadian adults. We conducted confirmatory factor analysis (CFA) using data from the 2012 Canadian Community Health Survey (CCHS)-Mental Health Component (CCHS-MH), and cross-validated the model using data from the CCHS 2011-2012 annual cycle. We examined criterion-related validity through correlations of MHC-SF subscale scores with positively and negatively associated concepts (e.g. life satisfaction and psychological distress, respectively). We confirmed the validity of the three-factor model of emotional, social and psychological well-being through CFA on two independent samples, once four correlated errors between items on the social well-being scale were added. We observed significant correlations in the anticipated direction between emotional, psychological and social well-being scores and related concepts. Cronbach's alpha for both emotional and psychological well-being subscales was 0.82; for social well-being it was 0.77. Our study suggests that the MHC-SF measures a three-factor model of positive mental health in the Canadian population. However, caution is warranted when using the social well-being scale, which did not function as well as the other factors, as evidenced by the need to add several correlated error terms to obtain adequate model fit, a higher level of missing data on these questions and weaker correlations with related constructs. Social well-being is important in a comprehensive measure of positive mental health, and further research is recommended.

  15. Effect of thematic map misclassification on landscape multi-metric assessment.

    PubMed

    Kleindl, William J; Powell, Scott L; Hauer, F Richard

    2015-06-01

    Advancements in remote sensing and computational tools have increased our awareness of large-scale environmental problems, thereby creating a need for monitoring, assessment, and management at these scales. Over the last decade, several watershed and regional multi-metric indices have been developed to assist decision-makers with planning actions at these scales. However, these tools use remote-sensing products that are subject to land-cover misclassification, and these errors are rarely incorporated in the assessment results. Here, we examined the sensitivity of a landscape-scale multi-metric index (MMI) to error from thematic land-cover misclassification and the implications of this uncertainty for resource management decisions. Through a case study, we used a simplified floodplain MMI assessment tool, whose metrics were derived from Landsat thematic maps, to initially provide results that were naive to thematic misclassification error. Using a Monte Carlo simulation model, we then incorporated map misclassification error into our MMI, resulting in four important conclusions: (1) each metric had a different sensitivity to error; (2) within each metric, the bias between the error-naive metric scores and simulated scores that incorporate potential error varied in magnitude and direction depending on the underlying land cover at each assessment site; (3) collectively, when the metrics were combined into a multi-metric index, the effects were attenuated; and (4) the index bias indicated that our naive assessment model may overestimate the floodplain condition of sites with limited human impacts and, to a lesser extent, either over- or underestimate the condition of sites with mixed land use.
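    A hedged sketch of the core simulation idea: sample a plausible "true" class for each pixel from a hypothetical confusion matrix, recompute a toy metric, and compare against the error-naive value. The matrix, map and metric below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical confusion matrix P[i, j] = Pr(true class j | mapped class i);
# rows might be mapped {forest, agriculture, urban}.
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.85, 0.05],
              [0.05, 0.10, 0.85]])

mapped = rng.integers(0, 3, size=10_000)          # a toy thematic map
naive_metric = np.mean(mapped == 0)               # e.g. fraction of forest

sims = []
for _ in range(1000):                             # Monte Carlo realizations
    u = rng.random(mapped.size)
    cdf = P[mapped].cumsum(axis=1)
    true = (u[:, None] > cdf).sum(axis=1)         # sample true class per pixel
    sims.append(np.mean(true == 0))

sims = np.array(sims)
print(f"naive={naive_metric:.3f}  simulated={sims.mean():.3f} +/- {sims.std():.3f}")
```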

  16. Error-Analysis for Correctness, Effectiveness, and Composing Procedure.

    ERIC Educational Resources Information Center

    Ewald, Helen Rothschild

    The assumptions underpinning grammatical mistakes can often be detected by looking for patterns of errors in a student's work. Assumptions that negatively influence rhetorical effectiveness can similarly be detected through error analysis. On a smaller scale, error analysis can also reveal assumptions affecting rhetorical choice. Snags in the…

  17. Multiscale measurement error models for aggregated small area health data.

    PubMed

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-08-01

    Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models within the multiscale framework. To assess the benefit of using multiscale measurement error models, we compared the performance of multiscale models with and without measurement error on both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. © The Author(s) 2016.
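    The underestimation the authors report is the classical errors-in-variables attenuation bias. A small simulation (plain ordinary least squares rather than the paper's Bayesian multiscale models) makes the direction of the bias concrete:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
x_true = rng.normal(size=n)                      # fine-level predictor
y = 2.0 * x_true + rng.normal(scale=0.5, size=n)

x_obs = x_true + rng.normal(scale=0.7, size=n)   # aggregated, error-prone predictor

def ols_slope(x, y):
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

naive = ols_slope(x_obs, y)                      # biased toward zero
reliability = 1.0 / (1.0 + 0.7**2)               # Var(x_true) / Var(x_obs), known here
corrected = naive / reliability                  # classical measurement-error fix
print(round(naive, 2), round(corrected, 2))      # ~1.34 vs ~2.0
```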

  18. Effects of learning climate and registered nurse staffing on medication errors.

    PubMed

    Chang, Yunkyung; Mark, Barbara

    2011-01-01

    Despite increasing recognition of the significance of learning from errors, little is known about how learning climate contributes to error reduction. The purpose of this study was to investigate whether learning climate moderates the relationship between error-producing conditions and medication errors. A cross-sectional descriptive study was done using data from 279 nursing units in 146 randomly selected hospitals in the United States. Error-producing conditions included work environment factors (work dynamics and nurse mix), team factors (communication with physicians and nurses' expertise), personal factors (nurses' education and experience), patient factors (age, health status, and previous hospitalization), and medication-related support services. Poisson models with random effects were used with the nursing unit as the unit of analysis. A significant negative relationship was found between learning climate and medication errors. It also moderated the relationship between nurse mix and medication errors: When learning climate was negative, having more registered nurses was associated with fewer medication errors. However, no relationship was found between nurse mix and medication errors at either positive or average levels of learning climate. Learning climate did not moderate the relationship between work dynamics and medication errors. The way nurse mix affects medication errors depends on the level of learning climate. Nursing units with fewer registered nurses and frequent medication errors should examine their learning climate. Future research should be focused on the role of learning climate as related to the relationships between nurse mix and medication errors.
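    The moderation analysis can be sketched as a Poisson regression with an interaction term and an exposure offset. The sketch below uses statsmodels with simulated data sized like the paper's 279 nursing units; it omits the paper's random effects, and all coefficients are invented:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 279                                  # one row per nursing unit
climate = rng.normal(size=n)             # learning climate (standardized)
rn_mix = rng.normal(size=n)              # registered-nurse mix (standardized)
exposure = rng.uniform(500, 2000, n)     # doses administered per unit

# Moderation: the RN-mix effect on the error rate depends on learning climate.
eta = -6.0 - 0.3 * climate - 0.2 * rn_mix + 0.15 * climate * rn_mix
y = rng.poisson(np.exp(eta) * exposure)

X = sm.add_constant(np.column_stack([climate, rn_mix, climate * rn_mix]))
fit = sm.GLM(y, X, family=sm.families.Poisson(), exposure=exposure).fit()
print(fit.params)                        # approximately recovers the coefficients
```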

  19. Factors affecting nursing students' intention to report medication errors: An application of the theory of planned behavior.

    PubMed

    Ben Natan, Merav; Sharon, Ira; Mahajna, Marlen; Mahajna, Sara

    2017-11-01

    Medication errors are common among nursing students. Nonetheless, these errors are often underreported. The aims of this study were to examine factors related to nursing students' intention to report medication errors, using the Theory of Planned Behavior, and to examine whether the theory is useful in predicting students' intention to report errors. This study has a descriptive cross-sectional design. The study population was recruited at a university and a large nursing school in central and northern Israel. A convenience sample of 250 nursing students took part in the study. The students completed a self-report questionnaire based on the Theory of Planned Behavior. The findings indicate that students' intention to report medication errors was high. The Theory of Planned Behavior constructs explained 38% of the variance in students' intention to report medication errors. The constructs of behavioral beliefs, subjective norms, and perceived behavioral control were found to affect this intention, with behavioral beliefs the most significant factor. The findings also reveal that students' fear of the reaction to disclosure of the error from superiors and colleagues may impede them from reporting the error. Understanding factors related to reporting medication errors is crucial to designing interventions that foster error reporting. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Accuracy of free energies of hydration using CM1 and CM3 atomic charges.

    PubMed

    Udier-Blagović, Marina; Morales De Tirado, Patricia; Pearlman, Shoshannah A; Jorgensen, William L

    2004-08-01

    Absolute free energies of hydration (ΔGhyd) have been computed for 25 diverse organic molecules using partial atomic charges derived from AM1 and PM3 wave functions via the CM1 and CM3 procedures of Cramer, Truhlar, and coworkers. Comparisons are made with results using charges fit to the electrostatic potential surface (EPS) from ab initio 6-31G* wave functions and from the OPLS-AA force field. OPLS Lennard-Jones parameters for the organic molecules were used together with the TIP4P water model in Monte Carlo simulations with free energy perturbation theory. Absolute free energies of hydration were computed for OPLS united-atom and all-atom methane by annihilating the solutes in water and in the gas phase, and absolute ΔGhyd values for all other molecules were computed via transformation to one of these references. Optimal charge scaling factors were determined by minimizing the unsigned average error between experimental and calculated hydration free energies. The PM3-based charge models do not lead to lower average errors than obtained with the EPS charges for the subset of 13 molecules in the original study. However, improvement is obtained by scaling the CM1A partial charges by 1.14 and the CM3A charges by 1.15, which leads to average errors of 1.0 and 1.1 kcal/mol for the full set of 25 molecules. The scaled CM1A charges also yield the best results for the hydration of amides including the E/Z free-energy difference for N-methylacetamide in water. Copyright 2004 Wiley Periodicals, Inc.
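    A minimal sketch of the scale-factor optimization step, assuming (purely for illustration) that each molecule's hydration free energy splits into a nonpolar term plus an electrostatic term that scales quadratically with a uniform charge scale factor; every number below is hypothetical:

```python
import numpy as np

# Hypothetical decomposition: dG_hyd(s) ~ dG_nonpolar + s**2 * dG_elec,
# a linear-response assumption for a uniform charge scale factor s.
dG_nonpolar = np.array([1.8, 2.0, 1.5, 2.2])        # kcal/mol, illustrative
dG_elec     = np.array([-8.0, -5.5, -10.2, -3.9])   # at s = 1, illustrative
dG_exp      = np.array([-8.4, -5.0, -11.6, -2.3])   # "experimental" targets

scales = np.linspace(0.9, 1.3, 401)
mue = [np.abs(dG_nonpolar + s**2 * dG_elec - dG_exp).mean() for s in scales]
best = scales[int(np.argmin(mue))]
print(f"optimal scale factor ~ {best:.2f}, MUE = {min(mue):.2f} kcal/mol")
```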

  1. Sunyaev-Zel'dovich Effect and X-ray Scaling Relations from Weak-Lensing Mass Calibration of 32 SPT Selected Galaxy Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dietrich, J.P.; et al.

    Uncertainty in the mass-observable scaling relations is currently the limiting factor for galaxy cluster based cosmology. Weak gravitational lensing can provide a direct mass calibration and reduce the mass uncertainty. We present new ground-based weak lensing observations of 19 South Pole Telescope (SPT) selected clusters and combine them with previously reported space-based observations of 13 galaxy clusters to constrain the cluster mass scaling relations with the Sunyaev-Zel'dovich effect (SZE), the cluster gas mass $M_\mathrm{gas}$, and $Y_\mathrm{X}$, the product of $M_\mathrm{gas}$ and X-ray temperature. We extend a previously used framework for the analysis of scaling relations and cosmological constraints obtained from SPT-selected clusters to make use of weak lensing information. We introduce a new approach to estimate the effective average redshift distribution of background galaxies and quantify a number of systematic errors affecting the weak lensing modelling. These errors include a calibration of the bias incurred by fitting a Navarro-Frenk-White profile to the reduced shear using $N$-body simulations. We blind the analysis to avoid confirmation bias. We are able to limit the systematic uncertainties to 6.4% in cluster mass (68% confidence). Our constraints on the mass-X-ray observable scaling relations parameters are consistent with those obtained by earlier studies, and our constraints for the mass-SZE scaling relation are consistent with the simulation-based prior used in the most recent SPT-SZ cosmology analysis. We can now replace the external mass calibration priors used in previous SPT-SZ cosmology studies with a direct, internal calibration obtained on the same clusters.

  2. Independent validation of the scales for outcomes in Parkinson's disease-autonomic (SCOPA-AUT).

    PubMed

    Rodriguez-Blazquez, C; Forjaz, M J; Frades-Payo, B; de Pedro-Cuesta, J; Martinez-Martin, P

    2010-02-01

    Autonomic dysfunction is common in Parkinson's disease (PD) and has a great impact on the health-related quality of life (HRQL) and functional status of patients. This study is the first independent validation of the Scales for Outcomes in PD-Autonomic (SCOPA-AUT). In an observational, cross-sectional study (ELEP Study), 387 PD patients were assessed using, in addition to the SCOPA-AUT, the Hoehn and Yahr staging, SCOPA-Motor, SCOPA-Cognition, Cumulative Illness Rating Scale-Geriatrics, modified Parkinson Psychosis Rating Scale, Clinical Impression of Severity Index for PD, Hospital Anxiety and Depression Scale, SCOPA-Sleep, SCOPA-Psychosocial, pain and fatigue visual analogue scales, and EQ-5D. SCOPA-AUT acceptability, internal consistency, construct validity, and precision were explored. Data quality was satisfactory (97%). The SCOPA-AUT total score did not show floor or ceiling effects, and skewness was 0.40. Cronbach's alpha coefficients ranged from 0.64 (Cardiovascular and Thermoregulatory subscales) to 0.95 (Sexual dysfunction, women). The item homogeneity index was low (0.24) for the Gastrointestinal subscale. Factor analysis identified eight factors for men (68% of the variance) and seven factors for women (65% of the variance). The SCOPA-AUT correlated at a high level with specific HRQL and functional measures (r(S) = 0.52-0.56). SCOPA-AUT scores were higher for older patients, for more advanced disease, and for patients treated only with levodopa (Kruskal-Wallis test, P < 0.01). The standard error of measurement for the SCOPA-AUT subscales ranged from 0.81 (Sexual dysfunction, men) to 2.26 (Gastrointestinal). Despite its heterogeneous content, which determines some weaknesses in the psychometric attributes of its subscales, the SCOPA-AUT is an acceptable, consistent, valid and precise scale.

  3. Cross-cultural adaptation and measurement properties testing of the Iconographical Falls Efficacy Scale (Icon-FES).

    PubMed

    Franco, Marcia Rodrigues; Pinto, Rafael Zambelli; Delbaere, Kim; Eto, Bianca Yumie; Faria, Maíra Sgobbi; Aoyagi, Giovana Ayumi; Steffens, Daniel; Pastre, Carlos Marcelo

    2018-02-14

    The Iconographical Falls Efficacy Scale (Icon-FES) is an innovative tool to assess concern about falling that uses pictures as visual cues to provide more complete environmental contexts. Advantages of the Icon-FES over previous scales include the addition of more demanding balance-related activities, the ability to assess concern about falling in highly functioning older people, and its normal distribution. To perform a cross-cultural adaptation and to assess the measurement properties of the 30-item and 10-item Icon-FES in a community-dwelling Brazilian older population. The cross-cultural adaptation followed the recommendations of international guidelines. We evaluated the measurement properties (i.e. internal consistency, test-retest reproducibility, standard error of the measurement, minimal detectable change, construct validity, ceiling/floor effect, data distribution and discriminative validity) in 100 community-dwelling people aged ≥60 years. The 30-item and 10-item Icon-FES-Brazil showed good internal consistency (alpha and omega >0.70) and excellent intra-rater reproducibility (ICC(2,1) = 0.96 and 0.93, respectively). According to the standard error of the measurement and minimal detectable change, the magnitudes of change needed to exceed the measurement error and variability were 7.2 and 3.4 points for the 30-item and 10-item Icon-FES, respectively. We observed an excellent correlation between both versions of the Icon-FES and the Falls Efficacy Scale - International (rho = 0.83, p < 0.001 [30-item version]; 0.76, p < 0.001 [10-item version]). The Icon-FES versions showed normal distribution, no floor/ceiling effects and were able to discriminate between groups relating to fall risk factors. The Icon-FES-Brazil is a semantically and linguistically appropriate tool with acceptable measurement properties to evaluate concern about falling among the community-dwelling older population. Copyright © 2018 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Published by Elsevier Editora Ltda. All rights reserved.
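    For readers unfamiliar with the statistics reported here, a numpy sketch of ICC(2,1) (Shrout-Fleiss two-way random effects, absolute agreement) together with the SEM and MDC it implies; the test-retest data are simulated:

```python
import numpy as np

def icc_2_1(Y: np.ndarray) -> float:
    """Shrout-Fleiss ICC(2,1): rows = subjects, columns = test/retest sessions."""
    n, k = Y.shape
    grand = Y.mean()
    ms_r = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_c = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # sessions
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + grand
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

rng = np.random.default_rng(4)
trait = rng.normal(scale=3.0, size=(100, 1))             # stable trait
Y = trait + rng.normal(scale=0.6, size=(100, 2))         # two sessions
icc = icc_2_1(Y)
sem = Y.std(ddof=1) * np.sqrt(1 - icc)                   # standard error of measurement
mdc95 = 1.96 * np.sqrt(2) * sem                          # minimal detectable change
print(round(icc, 2), round(sem, 2), round(mdc95, 2))     # ICC ~0.96 here
```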

  4. Medication Errors in Vietnamese Hospitals: Prevalence, Potential Outcome and Associated Factors

    PubMed Central

    Nguyen, Huong-Thao; Nguyen, Tuan-Dung; van den Heuvel, Edwin R.; Haaijer-Ruskamp, Flora M.; Taxis, Katja

    2015-01-01

    Background: Evidence from developed countries showed that medication errors are common and harmful. Little is known about medication errors in resource-restricted settings, including Vietnam. Objectives: To determine the prevalence and potential clinical outcome of medication preparation and administration errors, and to identify factors associated with errors. Methods: This was a prospective study conducted on six wards in two urban public hospitals in Vietnam. Data on preparation and administration errors of oral and intravenous medications were collected by direct observation, 12 hours per day on 7 consecutive days, on each ward. Multivariable logistic regression was applied to identify factors contributing to errors. Results: In total, 2060 out of 5271 doses had at least one error. The error rate was 39.1% (95% confidence interval 37.8%-40.4%). Experts judged potential clinical outcomes as minor, moderate, and severe in 72 (1.4%), 1806 (34.2%) and 182 (3.5%) doses, respectively. Factors associated with errors were drug characteristics (administration route, complexity of preparation, drug class; all p values < 0.001), and administration time (drug round, p = 0.023; day of the week, p = 0.024). Several interactions between these factors were also significant. Nurse experience was not significant. Higher error rates were observed for intravenous medications involving complex preparation procedures and for anti-infective drugs. Slightly lower medication error rates were observed during afternoon rounds compared to other rounds. Conclusions: Potentially clinically relevant errors occurred in more than a third of all medications in this large study conducted in a resource-restricted setting. Educational interventions, focusing on intravenous medications with complex preparation procedures, particularly antibiotics, are likely to improve patient safety. PMID:26383873
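    The headline error rate and its confidence interval can be reproduced from the reported counts with a standard normal-approximation interval:

```python
import math

errors, doses = 2060, 5271
p = errors / doses
se = math.sqrt(p * (1 - p) / doses)          # normal-approximation standard error
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"{100*p:.1f}% (95% CI {100*lo:.1f}%-{100*hi:.1f}%)")
# -> 39.1% (95% CI 37.8%-40.4%), matching the reported interval
```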

  5. Scaling fixed-field alternating gradient accelerators with a small orbit excursion.

    PubMed

    Machida, Shinji

    2009-10-16

    A novel scaling type of fixed-field alternating gradient (FFAG) accelerator is proposed that solves the major problems of conventional scaling and nonscaling types. This scaling FFAG accelerator can achieve a much smaller orbit excursion by taking a larger field index k. A triplet focusing structure makes it possible to set the operating point in the second stability region of Hill's equation with a reasonable sensitivity to various errors. The orbit excursion is about 5 times smaller than in a conventional scaling FFAG accelerator and the beam size growth due to typical errors is at most 10%.

  6. A global perspective of the limits of prediction skill based on the ECMWF ensemble

    NASA Astrophysics Data System (ADS)

    Zagar, Nedjeljka

    2016-04-01

    This talk presents a new model of global forecast error growth, applied to the forecast errors simulated by the ensemble prediction system (ENS) of the ECMWF. The proxy for forecast errors is the total spread of the ECMWF operational ensemble forecasts obtained by the decomposition of the wind and geopotential fields in the normal-mode functions. In this way, the ensemble spread can be quantified separately for the balanced and inertio-gravity (IG) modes for every forecast range. Ensemble reliability is defined for the balanced and IG modes by comparing the ensemble spread with the control analysis in each scale. The results show that initial uncertainties in the ECMWF ENS are largest in the tropical large-scale modes and their spatial distribution is similar to the distribution of the short-range forecast errors. Initially the ensemble spread grows most in the smallest scales and in the synoptic range of the IG modes, but the overall growth is dominated by the increase of spread in balanced modes in synoptic and planetary scales in the midlatitudes. During the forecasts, the distribution of spread in the balanced and IG modes grows towards the climatological spread distribution characteristic of the analyses. The ENS system is found to be somewhat under-dispersive, which is associated with the lack of tropical variability, primarily the Kelvin waves. The new model of forecast error growth has three fitting parameters to parameterize the initial fast growth and a slower exponential error growth later on. The asymptotic values of forecast errors are independent of the exponential growth rate. It is found that the errors due to unbalanced dynamics reach their asymptotic values in around 10 days, while the balanced and total errors saturate in 3 to 4 weeks. Reference: Žagar, N., R. Buizza, and J. Tribbia, 2015: A three-dimensional multivariate modal analysis of atmospheric predictability with application to the ECMWF ensemble. J. Atmos. Sci., 72, 4423-4444.
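    The abstract does not give the talk's exact three-parameter model; a common choice with the same structure (fast initial growth from a source term, then exponential growth saturating at an asymptote) is a Dalcher-Kalnay-type law, sketched here with synthetic spread data:

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def growth(t, a, s, e_max):
    """Dalcher-Kalnay-type error growth: dE/dt = (a*E + s) * (1 - E/e_max)."""
    rhs = lambda e, _t: (a * e + s) * (1.0 - e / e_max)
    return odeint(rhs, 0.05, t)[:, 0]

t = np.arange(0.0, 30.0, 0.5)                      # forecast lead time (days)
rng = np.random.default_rng(5)
spread = growth(t, 0.45, 0.08, 1.0) + rng.normal(scale=0.01, size=t.size)

popt, _ = curve_fit(growth, t, spread, p0=(0.3, 0.05, 0.9),
                    bounds=(1e-6, [2.0, 1.0, 2.0]))
print(popt)  # recovered (a, s, E_max); the asymptote E_max is independent of a
```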

  7. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
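    The paper's modified least squares method is not reproduced here, but Deming regression is the classical estimator built on the same ingredient, a known variance ratio between response error and factor measurement error; a numpy sketch with simulated calibration data:

```python
import numpy as np

def deming_slope(x, y, delta):
    """Deming regression slope; delta = Var(response error) / Var(x error)."""
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    d = syy - delta * sxx
    return (d + np.sqrt(d * d + 4.0 * delta * sxy * sxy)) / (2.0 * sxy)

rng = np.random.default_rng(6)
f_true = rng.uniform(-1, 1, 400)                      # factor settings
x = f_true + rng.normal(scale=0.20, size=400)         # factor measured with error
y = 3.0 * f_true + rng.normal(scale=0.05, size=400)   # response

delta = 0.05**2 / 0.20**2                             # known variance ratio
print(np.polyfit(x, y, 1)[0], deming_slope(x, y, delta))  # OLS ~2.7 vs ~3.0
```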

  8. Statistical Analysis of Instantaneous Frequency Scaling Factor as Derived From Optical Disdrometer Measurements At KQ Bands

    NASA Technical Reports Server (NTRS)

    Zemba, Michael; Nessel, James; Houts, Jacquelynne; Luini, Lorenzo; Riva, Carlo

    2016-01-01

    The rain rate data and statistics of a location are often used in conjunction with models to predict rain attenuation. However, the true attenuation is a function not only of rain rate, but also of the drop size distribution (DSD). Generally, models utilize an average drop size distribution (Laws and Parsons or Marshall and Palmer [1]). However, individual rain events may deviate from these models significantly if their DSD is not well approximated by the average. Therefore, characterizing the relationship between the DSD and attenuation is valuable in improving modeled predictions of rain attenuation statistics. The DSD may also be used to derive the instantaneous frequency scaling factor and thus validate frequency scaling models. Since June of 2014, NASA Glenn Research Center (GRC) and the Politecnico di Milano (POLIMI) have jointly conducted a propagation study in Milan, Italy utilizing the 20 and 40 GHz beacon signals of the Alphasat TDP#5 Aldo Paraboni payload. The Ka- and Q-band beacon receivers provide a direct measurement of the signal attenuation while concurrent weather instrumentation provides measurements of the atmospheric conditions at the receiver. Among these instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which yields droplet size distributions (DSD); this DSD information can be used to derive a scaling factor that scales the measured 20 GHz data to expected 40 GHz attenuation. Given the capability to both predict and directly observe 40 GHz attenuation, this site is uniquely situated to assess and characterize such predictions. Previous work using this data has examined the relationship between the measured drop-size distribution and the measured attenuation of the link [2]. The focus of this paper now turns to a deeper analysis of the scaling factor, including the prediction error as a function of attenuation level, correlation between the scaling factor and the rain rate, and the temporal variability of the drop size distribution both within a given rain event and across different varieties of rain events. Index Terms: drop size distribution, frequency scaling, propagation losses, radiowave propagation.
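    A toy illustration of an instantaneous frequency scaling factor and why it varies with rain rate, using invented k*R^alpha attenuation coefficients (deliberately not ITU-R P.838 values) in place of DSD-derived attenuations:

```python
import numpy as np

rng = np.random.default_rng(7)
rain = rng.gamma(shape=1.5, scale=4.0, size=2000)   # rain rate (mm/h), synthetic

# Illustrative k*R**alpha specific-attenuation laws (NOT ITU-R P.838 values):
a20 = 0.07 * rain**1.10                             # 20 GHz attenuation (dB)
a40 = 0.35 * rain**0.95                             # 40 GHz attenuation (dB)

sf = a40 / a20                                      # instantaneous scaling factor
pred40 = sf.mean() * a20                            # fixed-average-factor prediction
rms_err = np.sqrt(np.mean((pred40 - a40) ** 2))
print(f"mean scaling factor {sf.mean():.2f}, "
      f"corr(sf, rain) = {np.corrcoef(sf, rain)[0, 1]:+.2f}, "
      f"fixed-factor RMS error {rms_err:.2f} dB")
```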

  10. Medication errors: an overview for clinicians.

    PubMed

    Wittich, Christopher M; Burkle, Christopher M; Lanier, William L

    2014-08-01

    Medication error is an important cause of patient morbidity and mortality, yet it can be a confusing and underappreciated concept. This article provides a review for practicing physicians that focuses on medication error (1) terminology and definitions, (2) incidence, (3) risk factors, (4) avoidance strategies, and (5) disclosure and legal consequences. A medication error is any error that occurs at any point in the medication use process. It has been estimated by the Institute of Medicine that medication errors cause 1 of 131 outpatient and 1 of 854 inpatient deaths. Medication factors (eg, similar sounding names, low therapeutic index), patient factors (eg, poor renal or hepatic function, impaired cognition, polypharmacy), and health care professional factors (eg, use of abbreviations in prescriptions and other communications, cognitive biases) can precipitate medication errors. Consequences faced by physicians after medication errors can include loss of patient trust, civil actions, criminal charges, and medical board discipline. Methods to prevent medication errors from occurring (eg, use of information technology, better drug labeling, and medication reconciliation) have been used with varying success. When an error is discovered, patients expect disclosure that is timely, given in person, and accompanied with an apology and communication of efforts to prevent future errors. Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Copyright © 2014 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  11. Evaluation of a moderate resolution, satellite-based impervious surface map using an independent, high-resolution validation data set

    USGS Publications Warehouse

    Jones, J.W.; Jarnagin, T.

    2009-01-01

    Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products, high quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (satellite predicted area - "reference area") and relative error [(satellite predicted area - "reference area")/"reference area"] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment provided for evaluation of both validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. © 2009 ASCE.

  12. Closing the Seasonal Ocean Surface Temperature Balance in the Eastern Tropical Oceans from Remote Sensing and Model Reanalyses

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Clayson, C. A.

    2012-01-01

    The residual forcing necessary to close the mixed layer temperature balance (MLTB) on seasonal time scales is largest in regions of strongest surface heat flux forcing. Identifying the dominant source of error (surface heat flux error, mixed layer depth estimation, or ocean dynamical forcing) remains a challenge in the eastern tropical oceans, where ocean processes are very active. Improved sub-surface observations are necessary to better constrain errors. 1. Mixed layer depth evolution is critical to the seasonal evolution of mixed layer temperatures: it determines the inertia of the mixed layer and scales the sensitivity of the MLTB to errors in surface heat flux and ocean dynamical forcing, so that mixed layer depth errors shift the timing of errors in SST prediction. 2. Errors in the MLTB are larger than the historical 10 W m-2 target accuracy. In some regions a larger error can be tolerated if the goal is to resolve the seasonal SST cycle.
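    A back-of-the-envelope version of the mixed layer temperature balance shows how mixed layer depth scales the sensitivity to surface flux errors (the constants are nominal seawater values; the flux numbers are illustrative):

```python
rho, cp = 1025.0, 3990.0            # seawater density (kg/m^3), heat capacity (J/kg/K)

def mlt_tendency(q_net_w_m2, h_m):
    """Mixed layer temperature tendency (K/s) from net surface heat flux alone."""
    return q_net_w_m2 / (rho * cp * h_m)

for h in (20.0, 50.0):              # mixed layer depth (m)
    dT = mlt_tendency(80.0, h) * 30 * 86400
    print(f"h = {h:>4.0f} m: {dT:.2f} K/month for an 80 W/m^2 net flux")
# A 10 W/m^2 flux error alone maps to ~0.32 K/month at h = 20 m but only
# ~0.13 K/month at h = 50 m: shallow mixed layers amplify surface-flux errors.
```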

  13. Error due to unresolved scales in estimation problems for atmospheric data assimilation

    NASA Astrophysics Data System (ADS)

    Janjic, Tijana

    The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only modeling of the covariance matrix obtained by evaluating the covariance function at the observation points. We first assumed that this covariance matrix is stationary and that the unresolved scales are not correlated between the observation points, i.e., the matrix is diagonal, and that the values along the diagonal are constant. Tests with these assumptions were unsuccessful, indicating that a more sophisticated model of the covariance is needed for assimilation of data with nonstationary spectrum. A new method for modeling the covariance matrix based on an extended set of modeling assumptions is proposed. First, it is assumed that the covariance matrix is diagonal, that is, that the unresolved scales are not correlated between the observation points. It is postulated that the values on the diagonal depend on a wavenumber that is characteristic for the unresolved part of the spectrum. It is further postulated that this characteristic wavenumber can be diagnosed from the observations and from the estimate of the projection of the state that is being estimated. It is demonstrated that the new method successfully overcomes previously encountered difficulties.
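    In the "traditional filter" described here, the unresolved-scale error enters the analysis simply as extra observation-error variance at the observation points. A scalar caricature of that update (all numbers illustrative):

```python
def kalman_update(x_b, p_b, y, r_instr, r_unres):
    """Scalar analysis step; unresolved-scale error inflates the obs variance."""
    r = r_instr + r_unres            # diagonal model of representation error
    k = p_b / (p_b + r)              # Kalman gain
    return x_b + k * (y - x_b), (1.0 - k) * p_b

x_b, p_b, y_obs = 1.0, 0.5, 1.6      # background, its variance, point observation
print(kalman_update(x_b, p_b, y_obs, r_instr=0.1, r_unres=0.0))  # overconfident
print(kalman_update(x_b, p_b, y_obs, r_instr=0.1, r_unres=0.3))  # accounts for it
```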

  14. A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding

    NASA Astrophysics Data System (ADS)

    Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui

    2016-02-01

    In order to control and monitor the quality of middle frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of the welding current up to 100 kA is required, for which Rogowski coils are the only viable current transducers at present. Thus, a highly accurate analogue integrator is the key to restoring the converted signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor that influences the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-time pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed to fabricate a precision analogue integrator system for MFDC RSW. A voltage holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system constrains both drift and capacitive leakage error, and its effect is robust to different voltage levels of output signals. The total integration error is limited to within ±0.09 mV s-1 (0.005% s-1 of full scale) at a 95% confidence level, which makes it possible to achieve precision measurement of the welding current of MFDC RSW with Rogowski coils of 0.1% accuracy class.

  15. Measuring Moral Distress Among Critical Care Clinicians: Validation and Psychometric Properties of the Italian Moral Distress Scale-Revised.

    PubMed

    Lamiani, Giulia; Setti, Ilaria; Barlascini, Luca; Vegni, Elena; Argentero, Piergiorgio

    2017-03-01

    Moral distress is a common experience among critical care professionals, leading to frustration, withdrawal from patient care, and job abandonment. Most of the studies on moral distress have used the Moral Distress Scale or its revised version (Moral Distress Scale-Revised). However, these scales have never been validated through factor analysis. This article aims to explore the factorial structure of the Moral Distress Scale-Revised and develop a valid and reliable scale through factor analysis. Validation study using a survey design. Eight medical-surgical ICUs in the north of Italy. A total of 184 clinicians (64 physicians, 94 nurses, and 14 residents). The Moral Distress Scale-Revised was translated into Italian and administered along with a measure of depression (Beck Depression Inventory-Second Edition) to establish convergent validity. Exploratory factor analysis was conducted to explore the Moral Distress Scale-Revised factorial structure. Items with low (less than or equal to 0.350) or multiple loadings were removed. The resulting model was tested through confirmatory factor analysis. The Italian Moral Distress Scale-Revised is composed of 14 items referring to four factors: futile care, poor teamwork, deceptive communication, and ethical misconduct. This model accounts for 59% of the total variance and presents a good fit with the data (root mean square error of approximation = 0.06; comparative fit index = 0.95; Tucker-Lewis index = 0.94; weighted root mean square residual = 0.65). The Italian Moral Distress Scale-Revised evinces good reliability (α = 0.81) and moderately correlates with the Beck Depression Inventory-Second Edition (r = 0.293; p < 0.001). No significant differences were found in the moral distress total score between physicians and nurses. However, nurses scored higher on futile care than physicians (t = 2.051; p = 0.042), whereas physicians scored higher on deceptive communication than nurses (t = 3.617; p < 0.001). Moral distress was higher for those clinicians considering giving up their position (t = 2.778; p = 0.006). The Italian Moral Distress Scale-Revised is a valid and reliable instrument to assess moral distress among critical care clinicians and develop tailored interventions addressing its different components. Further research could test the generalizability of its factorial structure in other cultures.

  16. Overcoming spatio-temporal limitations using dynamically scaled in vitro PC-MRI - A flow field comparison to true-scale computer simulations of idealized, stented and patient-specific left main bifurcations.

    PubMed

    Beier, Susann; Ormiston, John; Webster, Mark; Cater, John; Norris, Stuart; Medrano-Gracia, Pau; Young, Alistair; Gilbert, Kathleen; Cowan, Brett

    2016-08-01

    The majority of patients with angina or heart failure have coronary artery disease. Left main bifurcations are particularly susceptible to pathological narrowing. Flow is a major factor of atheroma development, but limitations in imaging technology such as spatio-temporal resolution, signal-to-noise ratio (SNRv), and imaging artefacts prevent in vivo investigations. Computational fluid dynamics (CFD) modelling is a common numerical approach to study flow, but it requires a cautious and rigorous application for meaningful results. Left main bifurcation angles of 40°, 80° and 110° were found to represent the spread of an atlas of 100 computed tomography angiograms. Three left mains with these bifurcation angles were reconstructed with 1) idealized, 2) stented, and 3) patient-specific geometry. These were then scaled up approximately 7× and 3D printed as large phantoms. Their flow was reproduced using a blood-analogous, dynamically scaled steady flow circuit, enabling in vitro phase-contrast magnetic resonance (PC-MRI) measurements. After threshold segmentation the image data was registered to true-scale CFD of the same coronary geometry using a coherent point drift algorithm, yielding a small covariance error (σ² < 5.8×10⁻⁴). Natural-neighbour interpolation of the CFD data onto the PC-MRI grid enabled direct flow field comparison, showing very good agreement in magnitude (error 2-12%) and directional changes (r² = 0.87-0.91), and stent-induced flow alterations were measurable for the first time. PC-MRI over-estimated velocities close to the wall, possibly due to partial voluming. Bifurcation shape determined the development of slow flow regions, which created lower SNRv regions and increased discrepancies. These can likely be minimised in future by testing different similarity parameters to reduce acquisition error and improve correlation further. It was demonstrated that in vitro large phantom acquisition correlates to true-scale coronary flow simulations when dynamically scaled, and thus can overcome current PC-MRI's spatio-temporal limitations. This novel method enables experimental assessment of stent-induced flow alterations, and in future may enhance CFD coronary flow simulations by providing sophisticated boundary conditions, and enable investigations of stenosis phantoms.

  17. The validity and reliability of the type 2 diabetes and health promotion scale Turkish version: a methodological study.

    PubMed

    Yildiz, Esra; Kavuran, Esin

    2018-03-01

    Health promotion is important for maintaining health and preventing complications in patients with type 2 diabetes. The aim of the present study was to examine the psychometrics of a recently developed tool that can be used to screen for a health-promoting lifestyle in patients with type 2 diabetes. Data were collected from outpatients attending diabetes clinics. The Type 2 Diabetes and Health Promotion Scale (T2DHPS) and a demographic questionnaire were administered to 295 participants. Forward-backward translation of the original English version was used to develop a Turkish version. Internal consistency of the scale was assessed by Cronbach's alpha. Exploratory factor analysis and confirmatory factor analysis were used to assess the validity of the Type 2 Diabetes and Health Promotion Scale - Turkish version. Kaiser-Meyer-Olkin (KMO) and Bartlett's sphericity tests showed that the sample met the criteria required for factor analysis. The reliability coefficient for the total scale was 0.84, and alpha coefficients for the subscales ranged from 0.57 to 0.92. A six-factor solution was obtained that explained 59.3% of the total variance. The ratio of the chi-square statistic to degrees of freedom (χ²/df) was 3.30 (χ² = 1157.48, df = 350); the root mean square error of approximation (RMSEA) was 0.061; and goodness-of-fit index (GFI) and comparative fit index (CFI) values of 0.91 were obtained. The Turkish version of the T2DHPS is a valid and reliable tool that can be used to assess patients' health-promoting lifestyle behaviours. Validity and reliability studies in different cultures and regions are recommended. © 2017 Nordic College of Caring Science.

  18. A physiologically based pharmacokinetic model to predict the pharmacokinetics of highly protein-bound drugs and the impact of errors in plasma protein binding.

    PubMed

    Ye, Min; Nagar, Swati; Korzekwa, Ken

    2016-04-01

    Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data were often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding and the blood:plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate the model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for the terminal elimination half-life (t1/2, 100% of drugs), peak plasma concentration (Cmax, 100%), area under the plasma concentration-time curve (AUC0-t, 95.4%), clearance (CLh, 95.4%), mean residence time (MRT, 95.4%) and steady state volume (Vss, 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. Copyright © 2016 John Wiley & Sons, Ltd.
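    The fup error-propagation result follows directly from the well-stirred liver model named in the abstract. A short sketch (nominal human hepatic blood flow; the fu and CLint values are invented) showing the proportional low-clearance case and the cancelling high-clearance case:

```python
Q_H = 90.0                                   # hepatic blood flow (L/h), nominal human

def cl_h(fu, cl_int):
    """Well-stirred liver model: CLh = Q*fu*CLint / (Q + fu*CLint)."""
    return Q_H * fu * cl_int / (Q_H + fu * cl_int)

for fu, cl_int, label in [(0.01, 50.0, "low clearance"),
                          (0.01, 50000.0, "high clearance")]:
    base, pert = cl_h(fu, cl_int), cl_h(fu * 1.5, cl_int)   # a 50% error in fu(p)
    print(f"{label}: CLh {base:6.2f} -> {pert:6.2f} L/h "
          f"({100 * (pert / base - 1):+.0f}%)")
# Low clearance: CLh ~ fu*CLint, so the fu error propagates almost proportionally;
# high clearance: CLh -> Q_H, so the fu error largely cancels.
```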

  19. Barriers to medication error reporting among hospital nurses.

    PubMed

    Rutledge, Dana N; Retrosi, Tina; Ostrowski, Gary

    2018-03-01

    The study purpose was to report medication error reporting barriers among hospital nurses, and to determine validity and reliability of an existing medication error reporting barriers questionnaire. Hospital medication errors typically occur between ordering of a medication to its receipt by the patient with subsequent staff monitoring. To decrease medication errors, factors surrounding medication errors must be understood; this requires reporting by employees. Under-reporting can compromise patient safety by disabling improvement efforts. This 2017 descriptive study was part of a larger workforce engagement study at a faith-based Magnet®-accredited community hospital in California (United States). Registered nurses (~1,000) were invited to participate in the online survey via email. Reported here are sample demographics (n = 357) and responses to the 20-item medication error reporting barriers questionnaire. Using factor analysis, four factors that accounted for 67.5% of the variance were extracted. These factors (subscales) were labelled Fear, Cultural Barriers, Lack of Knowledge/Feedback and Practical/Utility Barriers; each demonstrated excellent internal consistency. The medication error reporting barriers questionnaire, originally developed in long-term care, demonstrated good validity and excellent reliability among hospital nurses. Substantial proportions of American hospital nurses (11%-48%) considered specific factors as likely reporting barriers. Average scores on most barrier items were categorised "somewhat unlikely." The highest six included two barriers concerning the time-consuming nature of medication error reporting and four related to nurses' fear of repercussions. Hospitals need to determine the presence of perceived barriers among nurses using questionnaires such as the medication error reporting barriers and work to encourage better reporting. Barriers to medication error reporting make it less likely that nurses will report medication errors, especially errors where patient harm is not apparent or where an error might be hidden. Such under-reporting impedes collection of accurate medication error data and prevents hospitals from changing harmful practices. © 2018 John Wiley & Sons Ltd.

  20. Medication administration error reporting and associated factors among nurses working at the University of Gondar referral hospital, Northwest Ethiopia, 2015.

    PubMed

    Bifftu, Berhanu Boru; Dachew, Berihun Assefa; Tiruneh, Bewket Tadesse; Beshah, Debrework Tesgera

    2016-01-01

    Medication administration is the final step of the medication process, in which an error directly affects patient health. Due to the central role of nurses in medication administration, whether they are the source of an error, a contributor, or an observer, they have the professional, legal and ethical responsibility to recognize and report it. The aim of this study was to assess the prevalence of medication administration error reporting and associated factors among nurses working at the University of Gondar Referral Hospital, Northwest Ethiopia. An institution-based quantitative cross-sectional study was conducted among 282 nurses. Data were collected using the semi-structured, self-administered questionnaire of the Medication Administration Errors Reporting (MAERs). Binary logistic regression with 95% confidence intervals was used to identify factors associated with medication administration error reporting. The estimated medication administration error reporting rate was found to be 29.1%. The perceived rates of medication administration error reporting ranged from 16.8 to 28.6% for non-intravenous related medications and from 20.6 to 33.4% for intravenous-related medications. Educational status (AOR = 1.38, 95% CI: 4.009, 11.128), disagreement over the time-error definition (AOR = 0.44, 95% CI: 0.468, 0.990), administrative reasons (AOR = 0.35, 95% CI: 0.168, 0.710) and fear (AOR = 0.39, 95% CI: 0.257, 0.838) were statistically significantly associated with the refusal to report medication administration errors at p-value <0.05. In this study, less than one third of the study participants reported medication administration errors. Therefore, the results of this study suggest strategies that enhance the culture of error reporting, such as providing a clear definition of reportable errors, and strengthening the educational status of nurses by the health care organization.

  1. Characterization of errors in a coupled snow hydrology-microwave emission model

    USGS Publications Warehouse

    Andreadis, K.M.; Liang, D.; Tsang, L.; Lettenmaier, D.P.; Josberger, E.G.

    2008-01-01

    Traditional approaches to the direct estimation of snow properties from passive microwave remote sensing have been plagued by limitations such as the tendency of estimates to saturate for moderately deep snowpacks and the effects of mixed land cover within remotely sensed pixels. An alternative approach is to assimilate satellite microwave emission observations directly, which requires embedding an accurate microwave emissions model into a hydrologic prediction scheme, as well as quantitative information of model and observation errors. In this study a coupled snow hydrology [Variable Infiltration Capacity (VIC)] and microwave emission [Dense Media Radiative Transfer (DMRT)] model is evaluated using multiscale brightness temperature (TB) measurements from the Cold Land Processes Experiment (CLPX). The ability of VIC to reproduce snowpack properties is shown with the use of snow pit measurements, while TB model predictions are evaluated through comparison with Ground-Based Microwave Radiometer (GBMR), aircraft [Polarimetric Scanning Radiometer (PSR)], and satellite [Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E)] TB measurements. Limitations of the model at the point scale were not as evident when comparing areal estimates. The coupled model was able to reproduce the TB spatial patterns observed by PSR in two of three sites. However, this was mostly due to the presence of relatively dense forest cover. An interesting result occurs when examining the spatial scaling behavior of the higher-resolution errors; the satellite-scale error is well approximated by the mode of the (spatial) histogram of errors at the smaller scale. In addition, TB prediction errors were almost invariant when aggregated to the satellite scale, while forest-cover fractions greater than 30% had a significant effect on TB predictions. © 2008 American Meteorological Society.

  2. The Effects of Observation Errors on the Attack Vulnerability of Complex Networks

    DTIC Science & Technology

    2012-11-01

    [Indexed excerpt; only a fragment of this report is recoverable.] In more detail, to construct a true network we select a topology: Erdős-Rényi random (Erdős & Rényi, 1959), scale-free (Barabási & Albert, 1999), or small-world. The fragment also cites "Efficiency of Scale-Free Networks: Error and Attack Tolerance," Physica A, Volume 320, pp. 622-642, and Erdős, P. & Rényi, A., 1959, "On Random Graphs, I."
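    A runnable sketch of the kind of experiment this report describes, using networkx to compare targeted hub removal across the three named topologies (sizes and parameters below are arbitrary choices, not from the report):

```python
import networkx as nx

def giant_fraction_after_attack(g, frac=0.05):
    """Remove the top-degree hubs; return the remaining giant-component fraction."""
    g = g.copy()
    hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)
    g.remove_nodes_from(node for node, _ in hubs[:int(frac * len(hubs))])
    return len(max(nx.connected_components(g), key=len)) / g.number_of_nodes()

n = 2000
nets = {"erdos-renyi": nx.erdos_renyi_graph(n, 0.004, seed=1),
        "scale-free": nx.barabasi_albert_graph(n, 4, seed=1),
        "small-world": nx.watts_strogatz_graph(n, 8, 0.1, seed=1)}
for name, g in nets.items():
    print(name, round(giant_fraction_after_attack(g), 3))
# Scale-free networks typically degrade the most under targeted hub removal.
```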

  3. Examiner Errors on the Reynolds Intellectual Assessment Scales Committed by Graduate Student Examiners

    ERIC Educational Resources Information Center

    Loe, Scott A.

    2014-01-01

    Protocols from 108 administrations of the Reynolds Intellectual Assessment Scales were evaluated to determine the frequency of examiner errors and their impact on the accuracy of three test composite scores, the Composite Ability Index (CIX), Verbal Ability Index (VIX), and Nonverbal Ability Index (NIX). Students committed at least one…

  4. Simulation study of a geometric shape factor technique for estimating earth-emitted radiant flux densities from wide-field-of-view radiation measurements

    NASA Technical Reports Server (NTRS)

    Weaver, W. L.; Green, R. N.

    1980-01-01

    Geometric shape factors were computed and applied to satellite simulated irradiance measurements to estimate Earth-emitted flux densities for global and zonal scales and for areas smaller than the detector field of view (FOV). Wide field of view flat plate detectors were emphasized, but spherical detectors were also studied. The radiation field was modeled after data from the Nimbus 2 and 3 satellites. At a satellite altitude of 600 km, zonal estimates were in error by 1.0 to 1.2 percent and global estimates by less than 0.2 percent. Estimates with unrestricted field of view (UFOV) detectors were about the same for Lambertian and limb darkening radiation models. The opposite was found for restricted field of view detectors. The UFOV detectors are found to be poor estimators of flux density from the total FOV and are shown to be much better as estimators of flux density from a circle centered within the FOV with an area significantly smaller than that of the total FOV.

  5. Thermal Property Analysis of Axle Load Sensors for Weighing Vehicles in Weigh-in-Motion System

    PubMed Central

    Burnos, Piotr; Gajda, Janusz

    2016-01-01

    Systems which permit the weighing of vehicles in motion are called dynamic Weigh-in-Motion scales. In such systems, axle load sensors are embedded in the pavement. Among the influencing factors that negatively affect weighing accuracy is the pavement temperature. This paper presents a detailed analysis of this phenomenon and describes the properties of polymer, quartz and bending plate load sensors. The studies were conducted in two ways: at roadside Weigh-in-Motion sites and at a laboratory using a climate chamber. For accuracy assessment of roadside systems, the reference vehicle method was used. The pavement temperature influence on the weighing error was experimentally investigated as well as a non-uniform temperature distribution along and across the Weigh-in-Motion site. Tests carried out in the climatic chamber allowed the influence of temperature on the sensor intrinsic error to be determined. The results presented clearly show that all kinds of sensors are temperature sensitive. This is a new finding, as up to now the quartz and bending plate sensors were considered insensitive to this factor. PMID:27983704
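    A first-order sketch of the kind of temperature compensation such climate-chamber characterization enables; the sensitivity coefficient below is hypothetical, not a value from the study:

```python
def compensate(load_raw_t, temp_c, alpha=-0.004, t_ref=20.0):
    """First-order temperature correction of an axle-load reading (tonnes).

    alpha is a hypothetical relative sensitivity drift per deg C; in practice
    it would be identified in a climate chamber, as in the study.
    """
    return load_raw_t / (1.0 + alpha * (temp_c - t_ref))

print(compensate(9.0, 45.0))   # a 10 t axle reading 9.0 t on hot pavement -> 10.0 t
```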

  6. Quantum computation with realistic magic-state factories

    NASA Astrophysics Data System (ADS)

    O'Gorman, Joe; Campbell, Earl T.

    2017-03-01

    Leading approaches to fault-tolerant quantum computation dedicate a significant portion of the hardware to computational factories that churn out high-fidelity ancillas called magic states. Consequently, efficient and realistic factory design is of paramount importance. Here we present the most detailed resource assessment to date of magic-state factories within a surface code quantum computer, along the way introducing a number of techniques. We show that the block codes of Bravyi and Haah [Phys. Rev. A 86, 052329 (2012), 10.1103/PhysRevA.86.052329] have been systematically undervalued; we track correlated errors both numerically and analytically, providing fidelity estimates without appeal to the union bound. We also introduce a subsystem code realization of these protocols with constant time and low ancilla cost. Additionally, we confirm that magic-state factories have space-time costs that scale as a constant factor of surface code costs. We find that the magic-state factory required for postclassical factoring can be as small as 6.3 million data qubits, ignoring ancilla qubits, assuming 10⁻⁴ error gates and the availability of long-range interactions.
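    For scale, a sketch of why 10⁻⁴ error gates make small factories plausible, using the leading-order output error of the standard 15-to-1 distillation protocol (p_out ≈ 35p³) and ignoring exactly the correlated errors this paper tracks:

```python
def distill(p_in, p_target):
    """Rounds of 15-to-1 distillation (p_out ~ 35*p**3) to hit a target error."""
    p, rounds, raw_states = p_in, 0, 1
    while p > p_target:
        p = 35.0 * p**3
        rounds += 1
        raw_states *= 15          # raw magic states consumed per output state
    return rounds, raw_states, p

print(distill(1e-4, 1e-15))       # -> (2, 225, ~1.5e-30): two rounds suffice
```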

  7. Sleep quality, posttraumatic stress, depression, and human errors in train drivers: a population-based nationwide study in South Korea.

    PubMed

    Jeon, Hong Jin; Kim, Ji-Hae; Kim, Bin-Na; Park, Seung Jin; Fava, Maurizio; Mischoulon, David; Kang, Eun-Ho; Roh, Sungwon; Lee, Dongsoo

    2014-12-01

    Human error is defined as an unintended error that is attributable to humans rather than machines, and that is important to avoid to prevent accidents. We aimed to investigate the association between sleep quality and human errors among train drivers. The design was a cross-sectional, population-based study. A sample of 5,480 subjects who were actively working as train drivers were recruited in South Korea; the participants were 4,634 drivers who completed all questionnaires (response rate 84.6%). There were no interventions. Measures included the Pittsburgh Sleep Quality Index (PSQI), the Center for Epidemiologic Studies Depression Scale (CES-D), the Impact of Event Scale-Revised (IES-R), the State-Trait Anxiety Inventory (STAI), and the Korean Occupational Stress Scale (KOSS). Of 4,634 train drivers, 349 (7.5%) showed more than one human error per 5 years. Human errors were associated with poor sleep quality, higher PSQI total scores, short sleep duration at night, and longer sleep latency. Among train drivers with poor sleep quality, those who experienced severe posttraumatic stress showed a significantly higher number of human errors than those without. Multiple logistic regression analysis showed that human errors were significantly associated with poor sleep quality and posttraumatic stress, whereas there were no significant associations with depression, trait and state anxiety, and work stress after adjusting for age, sex, education years, marital status, and career duration. Poor sleep quality was found to be associated with more human errors in train drivers, especially in those who experienced severe posttraumatic stress. © 2014 Associated Professional Sleep Societies, LLC.

  8. Towards a better prediction of peak concentration, volume of distribution and half-life after oral drug administration in man, using allometry.

    PubMed

    Sinha, Vikash K; Vaarties, Karin; De Buck, Stefan S; Fenu, Luca A; Nijsen, Marjoleen; Gilissen, Ron A H J; Sanderson, Wendy; Van Uytsel, Kelly; Hoeben, Eva; Van Peer, Achiel; Mackie, Claire E; Smit, Johan W

    2011-05-01

    It is imperative that new drugs demonstrate adequate pharmacokinetic properties, allowing an optimal safety margin and convenient dosing regimens in clinical practice, which then lead to better patient compliance. Such pharmacokinetic properties include suitable peak (maximum) plasma drug concentration (C(max)), area under the plasma concentration-time curve (AUC) and a suitable half-life (t(½)). The C(max) and t(½) following oral drug administration are functions of the oral clearance (CL/F) and apparent volume of distribution during the terminal phase by the oral route (V(z)/F), each of which may be predicted and combined to estimate C(max) and t(½). Allometric scaling is a widely used methodology in the pharmaceutical industry to predict human pharmacokinetic parameters such as clearance and volume of distribution. In our previously published work, we evaluated the use of allometry for prediction of CL/F and AUC. In this paper we describe the evaluation of different allometric scaling approaches for the prediction of C(max), V(z)/F and t(½) after oral drug administration in man. Twenty-nine compounds developed at Janssen Research and Development (a division of Janssen Pharmaceutica NV), covering a wide range of physicochemical and pharmacokinetic properties, were selected. The C(max) following oral dosing of a compound was predicted using (i) simple allometry alone; (ii) simple allometry along with correction factors such as plasma protein binding (PPB), maximum life-span potential or brain weight (reverse rule of exponents, unbound C(max) approach); and (iii) an indirect approach using allometrically predicted CL/F and V(z)/F and the absorption rate constant (k(a)). The k(a) was estimated from (i) in vivo pharmacokinetic experiments in preclinical species; and (ii) predicted effective permeability in man (P(eff)), using a Caco-2 permeability assay. The V(z)/F was predicted using allometric scaling with or without PPB correction. The t(½) was estimated from the allometrically predicted parameters CL/F and V(z)/F. Predictions were deemed adequate when errors were within a 2-fold range. C(max) and t(½) could be predicted within a 2-fold error range for 59% and 66% of the tested compounds, respectively, using allometrically predicted CL/F and V(z)/F. The best predictions for C(max) were obtained when k(a) values were calculated from the Caco-2 permeability assay. The V(z)/F was predicted within a 2-fold error range for 72% of compounds when PPB correction was applied as the correction factor for scaling. We conclude that (i) C(max) and t(½) are best predicted by indirect scaling approaches (using allometrically predicted CL/F and V(z)/F and accounting for k(a) derived from the permeability assay); and (ii) PPB is an important correction factor for the prediction of V(z)/F by allometric scaling. Furthermore, additional work is warranted to understand the mechanisms governing the processes underlying determination of C(max) so that the empirical approaches can be fine-tuned further.
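
    As an illustration of the indirect approach, the sketch below fits a simple allometric power law P = a·BW^b to preclinical parameters, scales CL/F and V(z)/F to a 70 kg human, and derives C(max), t(max) and t(½) from a one-compartment oral model. This is a minimal sketch in Python, not the authors' workflow; the species data, the dose and the k(a) value are hypothetical, and the one-compartment assumption is a simplification.

        import numpy as np

        def fit_allometry(bw_kg, param):
            # fit log(P) = log(a) + b*log(BW) across preclinical species
            b, log_a = np.polyfit(np.log(bw_kg), np.log(param), 1)
            return np.exp(log_a), b

        def predict_oral_pk(dose_mg, cl_f, vz_f, ka):
            # one-compartment model with first-order absorption (ka != ke)
            ke = cl_f / vz_f                    # elimination rate constant, 1/h
            tmax = np.log(ka / ke) / (ka - ke)  # time of peak concentration, h
            cmax = (dose_mg * ka / (vz_f * (ka - ke))
                    * (np.exp(-ke * tmax) - np.exp(-ka * tmax)))
            t_half = np.log(2) / ke
            return cmax, tmax, t_half

        # hypothetical mouse/rat/dog body weights, CL/F and Vz/F values
        bw = np.array([0.02, 0.25, 10.0])   # kg
        cl = np.array([0.06, 0.45, 9.0])    # L/h
        vz = np.array([0.35, 3.0, 80.0])    # L

        a_cl, b_cl = fit_allometry(bw, cl)
        a_vz, b_vz = fit_allometry(bw, vz)
        cl_human = a_cl * 70.0 ** b_cl      # predicted human CL/F
        vz_human = a_vz * 70.0 ** b_vz      # predicted human Vz/F
        print(predict_oral_pk(100.0, cl_human, vz_human, ka=1.0))

    Note that the t(½) prediction depends only on the ratio of the two scaled parameters, t(½) = ln 2 · (V(z)/F)/(CL/F), which is why errors in CL/F and V(z)/F partially cancel in the indirect approach.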

  9. An application of the LC-LSTM framework to the self-esteem instability case.

    PubMed

    Alessandri, Guido; Vecchione, Michele; Donnellan, Brent M; Tisak, John

    2013-10-01

    The present research evaluates the stability of self-esteem as assessed by a daily version of the Rosenberg (Society and the adolescent self-image, Princeton University Press, Princeton, 1965) general self-esteem scale (RGSE). The scale was administered to 391 undergraduates for five consecutive days. The longitudinal data were analyzed using the integrated LC-LSTM framework that allowed us to evaluate: (1) the measurement invariance of the RGSE, (2) its stability and change across the 5-day assessment period, (3) the amount of variance attributable to stable and transitory latent factors, and (4) the criterion-related validity of these factors. Results provided evidence for measurement invariance, mean-level stability, and rank-order stability of daily self-esteem. Latent state-trait analyses revealed that variance in RGSE scores can be decomposed into six components: stable self-esteem (40%), ephemeral (or temporal-state) variance (36%), stable negative method variance (9%), stable positive method variance (4%), specific variance (1%), and random error variance (10%). Moreover, the latent factors underlying daily self-esteem were related to measures of depression, implicit self-esteem, and grade point average.

  10. A superconducting gravity gradiometer for measurements from a moving vehicle.

    PubMed

    Moody, M V

    2011-09-01

    A gravity gradiometer designed for operation on an aircraft or ship has been tested in the laboratory. A noise level of 0.53 E (E ≡ 10^-9 s^-2) rms over a 0.001 to 1 Hz bandwidth has been measured, and the primary error mechanisms have been analyzed and quantified. The design is a continuation in the development of superconducting accelerometer technology at the University of Maryland over more than three decades. A cryogenic instrument presents not only the benefit of reduced thermal noise but also the extraordinary stability of superconducting circuits and material properties at very low temperatures. This stability allows precise matching of scale factors and accurate rejection of dynamic errors. The design of the instrument incorporates a number of additional features that further enhance performance in a dynamically noisy environment. © 2011 American Institute of Physics

  11. [Epidemiology of refractive errors].

    PubMed

    Wolfram, C

    2017-07-01

    Refractive errors are very common and can lead to severe pathological changes in the eye. This article analyzes the epidemiology of refractive errors in the general population in Germany and worldwide, and describes common definitions for refractive errors and clinical characteristics of pathological changes. Refractive errors differ between age groups, due both to refractive changes during the lifetime and to generation-specific factors. Current research on the etiology of refractive errors has strengthened the evidence for the influence of environmental factors, which has led to new strategies for the prevention of refractive pathologies.

  12. Reliability and validity of a scale for health-promoting schools.

    PubMed

    Lee, Eun Young; Shin, Young-Jeon; Choi, Bo Youl; Cho, Ho Soon Michelle

    2014-12-01

    Despite a growing body of research regarding the health-promoting schools (HPS) concept from the World Health Organization (WHO), research on measurement of the HPS is limited. This study aims to develop a scale for assessing the status of the HPS based on the WHO guidelines and to evaluate the reliability and validity of the scale. After completing the translation and back-translation process, the content validity of the 50-item scale for HPS (SHPS) was assessed by an expert committee review and pretested with 17 teachers. A stratified, random sampling design was used. A total of 728 teachers from 94 schools completed a self-administered questionnaire. The total sample was randomly divided into three groups for exploratory factor analysis (EFA), confirmatory factor analysis (CFA) and cross-validation. The EFA suggested seven factors, including 37 items, and the CFA confirmed these factors. In a second-order factor analysis, the second-order seven-factor model had acceptable fit indices (root mean square error of approximation 0.07, comparative fit index 0.98), with stability across the validation sample and the whole sample. Thus, the first-order seven factors (school nutrition services [three-item, α = 0.87], healthy school policies [six-item, α = 0.87], school's physical environment [10-item, α = 0.91], school's social environment [four-item, α = 0.88], community links [six-item, α = 0.91], individual health skills and action competencies [three-item, α = 0.89], and health services [five-item, α = 0.86]) loaded significantly onto the second-order factor (HPS [37-item, α = 0.97]). In conclusion, the SHPS is a reliable and valid measurement tool for assessing the status of the HPS in the Korean school context. It will be useful for comprehensively assessing schools' needs and monitoring the progress of school health interventions. © The Author (2013). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin

    2017-02-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.
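
    To make the comparison concrete, the sketch below fits the single-pole Cole-Cole model ε*(ω) = ε∞ + Δε/(1 + (jωτ)^(1-α)) + σ/(jωε₀) to complex permittivity data by nonlinear least squares, on either a linear or a logarithmic frequency grid. This is a minimal sketch in Python, not the authors' code; the frequency range, parameter values and bounds are illustrative. The only difference between the two cases is the grid, which changes how the residuals are distributed across the band.

        import numpy as np
        from scipy.optimize import least_squares

        EPS0 = 8.854e-12  # vacuum permittivity, F/m

        def cole_cole(f, eps_inf, d_eps, tau, alpha, sigma):
            # single-pole Cole-Cole model with a static conductivity term
            w = 2 * np.pi * f
            return (eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha))
                    + sigma / (1j * w * EPS0))

        def residuals(p, f, meas):
            diff = cole_cole(f, *p) - meas
            # stack real and imaginary parts so the fit sees both
            return np.concatenate([diff.real, diff.imag])

        def fit(f, meas, p0):
            return least_squares(residuals, p0, args=(f, meas),
                                 bounds=([1, 0, 1e-14, 0, 0],
                                         [100, 1e5, 1e-6, 1, 10])).x

        # linear vs logarithmic grids over an illustrative 0.5-20 GHz band
        f_lin = np.linspace(0.5e9, 20e9, 101)
        f_log = np.logspace(np.log10(0.5e9), np.log10(20e9), 101)
        true = [5.0, 35.0, 8e-12, 0.1, 0.7]  # tissue-like toy parameters
        for f in (f_lin, f_log):
            print(fit(f, cole_cole(f, *true), p0=[4, 30, 5e-12, 0.05, 0.5]))

    On the logarithmic grid, the low-frequency end of the dispersion contributes as many residual points as the high end, which is the mechanism behind the improved broadband fit reported above.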

  14. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data

    PubMed Central

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M.; O’Halloran, Martin

    2016-01-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues. PMID:28191324

  15. Factors effective on medication errors: A nursing view.

    PubMed

    Shahrokhi, Akram; Ebrahimpour, Fatemeh; Ghodousi, Arash

    2013-01-01

    Medication errors are the most common medical errors, which may result in some complications for patients. This study was carried out to investigate the factors that influence medication errors by nurses, from the nurses' viewpoint. In this descriptive study, 150 nurses who were working in Qazvin Medical University teaching hospitals were selected by proportional random sampling, and data were collected by means of a researcher-made questionnaire including demographic attributes (age, gender, working experience,…) and contributing factors in medication errors (in three categories: nurse-related, management-related, and environment-related factors). The mean age of the participant nurses was 30.7 ± 6.5 years. Most of them (87.1%) were female, with a Bachelor of Sciences degree (86.7%) in nursing. Their mean overtime working was 64.8 ± 38 h/month. The results showed that the nurse-related factors were the most influential (55.44 ± 9.14), while the factors related to the management system (52.84 ± 11.24) and the ward environment (44.0 ± 10.89) were respectively less influential. The difference between these three groups was significant (P < 0.001). In each of the aforementioned categories, the most influential factors in medication errors (ranked from most to least influential) were as follows: the nurse's inadequate attention (98.7%), errors occurring in the transfer of medication orders from the patient's file to the kardex (96.6%), and the ward's heavy workload (86.7%). In this study nurse-related factors were the most influential factors in medication errors, but nurses are only one member of the health-care providing team, so their performance must be considered in the context of the health-care system, including workforce conditions, rules and regulations, and drug manufacturing, all of which might impact nurses' performance; it is therefore not possible to prevent medication errors without attending to the health-care system in a holistic approach.

  16. Factors effective on medication errors: A nursing view

    PubMed Central

    Shahrokhi, Akram; Ebrahimpour, Fatemeh; Ghodousi, Arash

    2013-01-01

    Objective: Medication errors are the most common medical errors, which may result in some complications for patients. This study was carried out to investigate the factors that influence medication errors by nurses, from the nurses' viewpoint. Methods: In this descriptive study, 150 nurses who were working in Qazvin Medical University teaching hospitals were selected by proportional random sampling, and data were collected by means of a researcher-made questionnaire including demographic attributes (age, gender, working experience,…) and contributing factors in medication errors (in three categories: nurse-related, management-related, and environment-related factors). Findings: The mean age of the participant nurses was 30.7 ± 6.5 years. Most of them (87.1%) were female, with a Bachelor of Sciences degree (86.7%) in nursing. Their mean overtime working was 64.8 ± 38 h/month. The results showed that the nurse-related factors were the most influential (55.44 ± 9.14), while the factors related to the management system (52.84 ± 11.24) and the ward environment (44.0 ± 10.89) were respectively less influential. The difference between these three groups was significant (P < 0.001). In each of the aforementioned categories, the most influential factors in medication errors (ranked from most to least influential) were as follows: the nurse's inadequate attention (98.7%), errors occurring in the transfer of medication orders from the patient's file to the kardex (96.6%), and the ward's heavy workload (86.7%). Conclusion: In this study nurse-related factors were the most influential factors in medication errors, but nurses are only one member of the health-care providing team, so their performance must be considered in the context of the health-care system, including workforce conditions, rules and regulations, and drug manufacturing, all of which might impact nurses' performance; it is therefore not possible to prevent medication errors without attending to the health-care system in a holistic approach. PMID:24991599

  17. Transferring Error Characteristics of Satellite Rainfall Data from Ground Validation (gauged) into Non-ground Validation (ungauged)

    NASA Astrophysics Data System (ADS)

    Tang, L.; Hossain, F.

    2009-12-01

    Understanding the error characteristics of satellite rainfall data at different spatial/temporal scales is critical, especially as the scheduled Global Precipitation Measurement (GPM) mission plans to provide High Resolution Precipitation Products (HRPPs) at global scales. Satellite rainfall data contain errors that require ground validation (GV) data to characterize, yet satellite rainfall data are most useful precisely in regions lacking GV. Therefore, a critical step is to develop a spatial interpolation scheme for transferring the error characteristics of satellite rainfall data from GV regions to non-GV regions. As a prelude to GPM, the TRMM Multi-satellite Precipitation Analysis (TMPA) products 3B41RT and 3B42RT (Huffman et al., 2007) over the US, spanning a record of 6 years, are used as a representative example of satellite rainfall data. Next Generation Radar (NEXRAD) Stage IV rainfall data are used as the GV reference. Initial work by the authors (Tang et al., 2009, GRL) has shown promise in transferring error from GV to non-GV regions, based on a six-year climatologic average of satellite rainfall data and assuming only 50% GV coverage. However, this transfer of error characteristics needs to be investigated for a range of GV data coverages. In addition, it is also important to investigate whether proxy-GV data from an accurate space-borne sensor, such as the TRMM PR (or the GPM DPR), can be leveraged for the transfer of error in sparsely gauged regions. The specific question we ask in this study is: what is the minimum coverage of GV data required for the error transfer scheme to be implemented with acceptable accuracy at hydrologically relevant scales? Three geostatistical interpolation methods are compared: ordinary kriging, indicator kriging and disjunctive kriging. Various error metrics are assessed for transfer, such as Probability of Detection for rain and no rain, False Alarm Ratio, Frequency Bias, Critical Success Index, and RMSE. Understanding the proper space-time scales at which these metrics can be reasonably transferred is also explored in this study. Keywords: satellite rainfall, error transfer, spatial interpolation, kriging methods.
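
    A minimal sketch of the transfer idea for a single metric, using ordinary kriging (the first of the three methods compared). It assumes the PyKrige package; the coordinates and Probability of Detection values below are hypothetical stand-ins for metrics computed against NEXRAD GV data.

        import numpy as np
        from pykrige.ok import OrdinaryKriging  # assumes PyKrige is installed

        # GV pixel centers (lon, lat) and an error metric computed there,
        # e.g. Probability of Detection against Stage IV -- values invented
        lon_gv = np.array([-95.0, -94.5, -93.8, -92.9, -91.7])
        lat_gv = np.array([34.2, 35.1, 33.9, 34.8, 35.5])
        pod_gv = np.array([0.62, 0.66, 0.58, 0.70, 0.64])

        ok = OrdinaryKriging(lon_gv, lat_gv, pod_gv,
                             variogram_model="spherical")

        # transfer the metric to ungauged (non-GV) pixel centers
        lon_new = np.array([-94.0, -93.0])
        lat_new = np.array([34.5, 35.0])
        pod_est, pod_var = ok.execute("points", lon_new, lat_new)
        print(pod_est, pod_var)  # estimates and kriging variances

    The kriging variance returned alongside each estimate gives a natural measure of how far the transfer can be trusted as GV coverage thins, which is the crux of the minimum-coverage question posed above.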

  18. Financial errors in dementia: Testing a neuroeconomic conceptual framework

    PubMed Central

    Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.

    2013-01-01

    Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer’s disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p < 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p < 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884

  19. Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals.

    PubMed

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris; Bjørn, Brian; Lilja, Beth; Mogensen, Torben

    2011-03-01

    Poor teamwork and communication between healthcare staff are correlated with patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety incidents. The objective of this study is to review RCA reports (RCARs) for characteristics of verbal communication errors between hospital staff from an organisational perspective. Two independent raters analysed 84 RCARs, conducted in six Danish hospitals between 2004 and 2006, for descriptions and characteristics of verbal communication errors such as handover errors and errors during teamwork. Raters found descriptions of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13 (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralized communication and information exchange via telephone, related to transfer between units and consults from other specialties, were particularly vulnerable processes. With the risk of bias in mind, it is concluded that more than half of the RCARs described erroneous verbal communication between staff members as root causes of, or contributing factors to, severe patient safety incidents. The RCARs' rich descriptions of the incidents revealed the organisational factors and needs related to these errors.

  20. Accelerating Convergence in Molecular Dynamics Simulations of Solutes in Lipid Membranes by Conducting a Random Walk along the Bilayer Normal.

    PubMed

    Neale, Chris; Madill, Chris; Rauscher, Sarah; Pomès, Régis

    2013-08-13

    All molecular dynamics simulations are susceptible to sampling errors, which degrade the accuracy and precision of observed values. The statistical convergence of simulations containing atomistic lipid bilayers is limited by the slow relaxation of the lipid phase, which can exceed hundreds of nanoseconds. These long conformational autocorrelation times are exacerbated in the presence of charged solutes, which can induce significant distortions of the bilayer structure. Such long relaxation times represent hidden barriers that induce systematic sampling errors in simulations of solute insertion. To identify optimal methods for enhancing sampling efficiency, we quantitatively evaluate convergence rates using generalized ensemble sampling algorithms in calculations of the potential of mean force for the insertion of the ionic side chain analog of arginine in a lipid bilayer. Umbrella sampling (US) is used to restrain solute insertion depth along the bilayer normal, the order parameter commonly used in simulations of molecular solutes in lipid bilayers. When US simulations are modified to conduct random walks along the bilayer normal using a Hamiltonian exchange algorithm, systematic sampling errors are eliminated more rapidly and the rate of statistical convergence of the standard free energy of binding of the solute to the lipid bilayer is increased 3-fold. We compute the ratio of the replica flux transmitted across a defined region of the order parameter to the replica flux that entered that region in Hamiltonian exchange simulations. We show that this quantity, the transmission factor, identifies sampling barriers in degrees of freedom orthogonal to the order parameter. The transmission factor is used to estimate the depth-dependent conformational autocorrelation times of the simulation system, some of which exceed the simulation time, and thereby identify solute insertion depths that are prone to systematic sampling errors and estimate the lower bound of the amount of sampling that is required to resolve these sampling errors. Finally, we extend our simulations and verify that the conformational autocorrelation times estimated by the transmission factor accurately predict correlation times that exceed the simulation time scale, something that, to our knowledge, has never before been achieved.
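
    A minimal sketch of the transmission factor bookkeeping described above: the ratio of the replica flux transmitted across a defined region of the order parameter to the flux that entered it. The region bounds and the trace below are illustrative, and the counting conventions are an assumption about how such a quantity can be tallied.

        def transmission_factor(z, a, b):
            # z: time series of a replica's order-parameter values
            # [a, b]: region of the order parameter being analyzed
            entered = transmitted = 0
            side = None   # which side of the region we were last on
            entry = None  # side from which the current visit entered
            for x in z:
                if x < a:
                    if entry == 'hi':      # entered high, exited low
                        transmitted += 1
                    side, entry = 'lo', None
                elif x > b:
                    if entry == 'lo':      # entered low, exited high
                        transmitted += 1
                    side, entry = 'hi', None
                elif entry is None and side is not None:
                    entered += 1           # a new visit to the region
                    entry = side
            return transmitted / entered if entered else float('nan')

        z = [-2.0, -0.5, 0.2, 0.8, 1.5, 0.6, -0.1, -1.2]
        print(transmission_factor(z, a=0.0, b=1.0))  # 1.0 for this toy trace

    A transmission factor well below one flags a region where replicas enter and bounce back, i.e. a hidden barrier in degrees of freedom orthogonal to the order parameter.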

  1. Validation of the Spanish version of the Amsterdam Preoperative Anxiety and Information Scale (APAIS).

    PubMed

    Vergara-Romero, Manuel; Morales-Asencio, José Miguel; Morales-Fernández, Angelines; Canca-Sanchez, Jose Carlos; Rivas-Ruiz, Francisco; Reinaldo-Lapuerta, Jose Antonio

    2017-06-07

    Preoperative anxiety is a frequent and challenging problem with deleterious effects on the development of surgical procedures and postoperative outcomes. To prevent and treat preoperative anxiety effectively, the level of anxiety of patients needs to be assessed through valid and reliable measuring instruments. One such measurement tool is the Amsterdam Preoperative Anxiety and Information Scale (APAIS), of which a Spanish version had not yet been validated. The aim was to perform a Spanish cultural adaptation and empirical validation of the APAIS for assessing preoperative anxiety in the Spanish population. A two-step forward/back translation of the APAIS scale was performed to ensure a reliable Spanish cultural adaptation. The final Spanish version of the APAIS questionnaire was administered to 529 patients between the ages of 18 and 70 undergoing elective surgery at hospitals of the Agencia Sanitaria Costa del Sol (Spain). Cronbach's alpha, the homogeneity index, the intraclass correlation coefficient, and confirmatory factor analysis were calculated to assess internal consistency and criterion and construct validity. Confirmatory factor analysis showed that a one-factor model fitted better than a two-factor model, with good fit indices (root mean square error of approximation: 0.05, normed fit index: 0.99, goodness-of-fit statistic: 0.99). The questionnaire showed high internal consistency (Cronbach's alpha: 0.84) and a good correlation with the Goldberg Anxiety Scale (ICC: 0.62; 95% CI: 0.55 to 0.68). The Spanish version of the APAIS is a valid and reliable preoperative anxiety measurement tool and shows psychometric properties similar to those obtained in previous studies.

  2. The 7-item generalized anxiety disorder scale as a tool for measuring generalized anxiety in multiple sclerosis.

    PubMed

    Terrill, Alexandra L; Hartoonian, Narineh; Beier, Meghan; Salem, Rana; Alschuler, Kevin

    2015-01-01

    Generalized anxiety disorder (GAD) is common in multiple sclerosis (MS) but understudied. Reliable and valid measures are needed to advance clinical care and expand research in this area. The objectives of this study were to examine the psychometric properties of the 7-item Generalized Anxiety Disorder Scale (GAD-7) in individuals with MS and to analyze correlates of GAD. Participants (N = 513) completed the anxiety module of the Patient Health Questionnaire (GAD-7). To evaluate psychometric properties of the GAD-7, the sample was randomly split to conduct exploratory and confirmatory factor analyses. Based on the exploratory factor analysis, a one-factor structure was specified for the confirmatory factor analysis, which showed excellent global fit to the data (χ²(12) = 15.17, P = .23, comparative fit index = 0.99, root mean square error of approximation = 0.03, standardized root mean square residual = 0.03). The Cronbach alpha (0.75) indicated acceptable internal consistency for the scale. Furthermore, the GAD-7 was highly correlated with the Hospital Anxiety and Depression Scale-Anxiety (r = 0.70). Age and duration of MS were both negatively associated with GAD. Higher GAD-7 scores were observed in women and individuals with secondary progressive MS. Individuals with higher GAD-7 scores also endorsed more depressive symptoms. These findings support the reliability and internal validity of the GAD-7 for use in MS. Correlational analyses revealed important relationships with demographics, disease course, and depressive symptoms, which suggest the need for further anxiety research.
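
    The Cronbach alpha of 0.75 reported above is the standard internal-consistency estimate; a minimal sketch of its computation follows, with hypothetical item scores standing in for the study data.

        import numpy as np

        def cronbach_alpha(items):
            # items: (n_respondents, k_items) array of item scores
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars.sum() / total_var)

        # seven GAD-7 items scored 0-3, five invented respondents
        scores = np.array([[0, 1, 1, 0, 2, 1, 0],
                           [2, 2, 3, 1, 2, 2, 1],
                           [1, 1, 0, 0, 1, 0, 0],
                           [3, 2, 2, 3, 3, 2, 3],
                           [1, 0, 1, 1, 0, 1, 1]])
        print(round(cronbach_alpha(scores), 2))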

  3. Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Fisher, Brad L.; Wolff, David B.

    2007-01-01

    This paper describes the cubic-spline-based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate the effects of time scales and rain event definitions on errors of the rain rate estimation. The comparison between rain rates measured by the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of the rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one-minute rain rates are averaged to 4-7 minute or longer time scales, the errors decrease dramatically. The rain event duration is very sensitive to the event definition, but the event rain total is rather insensitive, provided that events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at the 7-minute scale. The radar reflectivity-rain rate (Ze-R) distributions drawn from a large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
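
    The core of the spline-based procedure can be illustrated in a few lines: accumulate rainfall at each bucket tip, fit a cubic spline to the cumulative curve, and differentiate the spline to obtain rain rates on a regular time grid. This is a simplified sketch of the general approach, not the operational 2A-56 code; the tip size and output step are assumptions.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def rain_rates_from_tips(tip_times_min, tip_mm=0.254, step_min=1.0):
            # cumulative rainfall at each tip time (one bucket per tip)
            cum_mm = tip_mm * np.arange(1, len(tip_times_min) + 1)
            spline = CubicSpline(tip_times_min, cum_mm)
            t = np.arange(tip_times_min[0], tip_times_min[-1], step_min)
            rate = spline(t, 1) * 60.0          # derivative, mm/min -> mm/h
            return t, np.clip(rate, 0.0, None)  # clip spline overshoot

        tips = np.array([0.0, 3.2, 5.1, 6.4, 7.5, 9.9, 14.0])  # minutes
        t, rr = rain_rates_from_tips(tips)
        print(np.round(rr, 2))

    Because each tip represents a fixed rainfall depth, low rain rates mean long gaps between tips, which is exactly where the one-minute estimates suffer the sampling errors quantified above.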

  4. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    PubMed Central

    Severns, Paul M.

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches. PMID:26312190

  5. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns.

    PubMed

    Breed, Greg A; Severns, Paul M

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches.

  6. Fast Light Optical Gyroscopes

    NASA Technical Reports Server (NTRS)

    Smith, David D.

    2015-01-01

    Next-generation space missions are currently constrained by existing spacecraft navigation systems, which are not fully autonomous. These systems suffer from accumulated dead-reckoning errors and must therefore rely on periodic corrections provided by supplementary technologies that depend on line-of-sight signals from Earth, satellites, or other celestial bodies for absolute attitude and position determination, which can be spoofed, incorrectly identified, occluded, obscured, attenuated, or insufficiently available. These dead-reckoning errors originate in the ring laser gyros themselves, which constitute inertial measurement units. Increasing the time for standalone spacecraft navigation therefore requires fundamental improvements in gyroscope technologies. One promising solution to enhance gyro sensitivity is to place an anomalous dispersion or fast light material inside the gyro cavity. The fast light essentially provides a positive feedback to the gyro response, resulting in a larger measured beat frequency for a given rotation rate, as shown in figure 1. NASA's Game Changing Development program has been investing in this idea through the Fast Light Optical Gyros (FLOG) project, a collaborative effort which began in FY 2013 between NASA Marshall Space Flight Center (MSFC), the U.S. Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC), and Northwestern University. MSFC and AMRDEC are working on the development of a passive FLOG (PFLOG), while Northwestern is developing an active FLOG (AFLOG). The project has demonstrated new benchmarks in the state of the art for scale factor sensitivity enhancement. Recent results show cavity scale factor enhancements of approximately 100 for passive cavities.

  7. Estimating terrestrial aboveground biomass using lidar remote sensing: a meta-analysis

    NASA Astrophysics Data System (ADS)

    Zolkos, S. G.; Goetz, S. J.; Dubayah, R.

    2012-12-01

    Estimating biomass of terrestrial vegetation is a rapidly expanding research area, and also a subject of tremendous interest for reducing carbon emissions associated with deforestation and forest degradation (REDD). The accuracy of biomass estimates is important in the context of carbon markets emerging under REDD, since areas with more accurate estimates command higher prices, but also for characterizing uncertainty in estimates of carbon cycling and the global carbon budget. There is particular interest in mapping biomass so that carbon stocks and stock changes can be monitored consistently across a range of scales - from relatively small projects (tens of hectares) to national or continental scales - but also so that other benefits of forest conservation can be factored into decision making (e.g. biodiversity and habitat corridors). We conducted an analysis of reported biomass accuracy estimates from more than 60 refereed articles using different remote sensing platforms (aircraft and satellite) and sensor types (optical, radar, lidar), with a particular focus on lidar, since those papers reported the greatest efficacy (lowest errors) when used in a synergistic manner with other coincident multi-sensor measurements. We show systematic differences in accuracy between different types of lidar systems flown on different platforms but, perhaps more importantly, differences between forest types (biomes) and the plot sizes used for field calibration and assessment. We discuss these findings in relation to monitoring, reporting and verification under REDD, and also in the context of more systematic assessment of factors that influence accuracy and error estimation.

  8. Factorial invariance of child self-report across age subgroups: a confirmatory factor analysis of ages 5 to 16 years utilizing the PedsQL 4.0 Generic Core Scales.

    PubMed

    Limbers, Christine A; Newman, Daniel A; Varni, James W

    2008-01-01

    The utilization of health-related quality of life (HRQOL) measurement in an effort to improve pediatric health and well-being and determine the value of health care services has grown dramatically over the past decade. The paradigm shift toward patient-reported outcomes (PROs) in clinical trials has provided the opportunity to emphasize the value and essential need for pediatric patient self-report. In order for HRQOL/PRO comparisons to be meaningful for subgroup analyses, it is essential to demonstrate factorial invariance. This study examined age subgroup factorial invariance of child self-report for ages 5 to 16 years on more than 8,500 children utilizing the PedsQL 4.0 Generic Core Scales. Multigroup Confirmatory Factor Analysis (MGCFA) was performed specifying a five-factor model. Two multigroup structural equation models, one with constrained parameters and the other with unconstrained parameters, were proposed to compare the factor loadings across the age subgroups. Metric invariance (i.e., equal factor loadings) across the age subgroups was demonstrated based on stability of the Comparative Fit Index between the two models, and several additional indices of practical fit including the Root Mean Square Error of Approximation, the Non-Normed Fit Index, and the Parsimony Normed Fit Index. The findings support an equivalent five-factor structure across the age subgroups. Based on these data, it can be concluded that children across the age subgroups in this study interpreted items on the PedsQL 4.0 Generic Core Scales in a similar manner regardless of their age.

  9. Nurse perceptions of organizational culture and its association with the culture of error reporting: a case of public sector hospitals in Pakistan.

    PubMed

    Jafree, Sara Rizvi; Zakar, Rubeena; Zakar, Muhammad Zakria; Fischer, Florian

    2016-01-05

    There is an absence of formal error tracking systems in public sector hospitals of Pakistan and also a lack of literature concerning error reporting culture in the health care sector. Nurse practitioners have front-line knowledge and rich exposure to both the organizational culture and error sharing in hospital settings. The aim of this paper was to investigate the association between organizational culture and the culture of error reporting, as perceived by nurses. The authors used the "Practice Environment Scale-Nurse Work Index Revised" to measure the six dimensions of organizational culture. Seven questions were used from the "Survey to Solicit Information about the Culture of Reporting" to measure error reporting culture in the region. Overall, 309 nurses participated in the survey, including female nurses from all designations such as supervisors, instructors, ward-heads, staff nurses and student nurses. We used SPSS 17.0 to perform a factor analysis. Furthermore, descriptive statistics, mean scores and multivariable logistic regression were used for the analysis. Three areas were ranked unfavorably by nurse respondents, including: (i) the error reporting culture, (ii) staffing and resource adequacy, and (iii) nurse foundations for quality of care. Multivariable regression results revealed that all six categories of organizational culture, including: (1) nurse manager ability, leadership and support, (2) nurse participation in hospital affairs, (3) nurse participation in governance, (4) nurse foundations of quality care, (5) nurse-coworkers relations, and (6) nurse staffing and resource adequacy, were positively associated with higher odds of error reporting culture. In addition, it was found that married nurses and nurses on permanent contract were more likely to report errors at the workplace. Public healthcare services of Pakistan can be improved through the promotion of an error reporting culture, reducing staffing and resource shortages and the development of nursing care plans.

  10. Design of Instrument Dials for Maximum Legibility. Part 5. Origin Location, Scale Break, Number Location, and Contrast Direction

    DTIC Science & Technology

    1951-05-01

    procedures to be of high accuracy. Ambiguity of subject responses due to overlap of entries on the record sheets was negligible. Handwriting ...experimental variables on reading errors was carried out by analysis of variance methods. For this purpose it was convenient to consider different classes...on any scale - an error of one numbered division. For this reason, the results of the analysis of variance of the /10's errors by dial types may

  11. Functional Independent Scaling Relation for ORR/OER Catalysts

    DOE PAGES

    Christensen, Rune; Hansen, Heine A.; Dickens, Colin F.; ...

    2016-10-11

    A widely used adsorption energy scaling relation between OH* and OOH* intermediates in the oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) has previously been determined using density functional theory and shown to dictate a minimum thermodynamic overpotential for both reactions. Here, we show that the oxygen-oxygen bond in the OOH* intermediate is, however, not well described with the previously used class of exchange-correlation functionals. By quantifying and correcting the systematic error, an improved description of gaseous peroxide species versus experimental data and a reduction in calculational uncertainty is obtained. For adsorbates, we find that the systematic error largely cancels the vdW interaction missing in the original determination of the scaling relation. An improved scaling relation, which is fully independent of the applied exchange-correlation functional, is obtained and found to differ by 0.1 eV from the original. Lastly, this largely confirms that, although obtained with a method suffering from systematic errors, the previously obtained scaling relation is applicable for predictions of catalytic activity.
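
    For context, the original scaling relation and the minimum overpotential it implies are commonly written as follows (the 3.2 eV intercept is the value usually quoted for this class of functionals; the corrected relation reported above differs from it by about 0.1 eV):

        \Delta G_{\mathrm{OOH^*}} \approx \Delta G_{\mathrm{OH^*}} + 3.2\ \mathrm{eV}

    Since the two intermediate steps OH* -> O* -> OOH* would ideally cost 2 x 1.23 eV = 2.46 eV, the best possible splitting of the fixed 3.2 eV gap yields

        \eta_{\mathrm{min}} \approx \frac{3.2\ \mathrm{eV}}{2e} - 1.23\ \mathrm{V} \approx 0.37\ \mathrm{V}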

  12. A study of complex scaling transformation using the Wigner representation of wavefunctions.

    PubMed

    Kaprálová-Ždánská, Petra Ruth

    2011-05-28

    The complex scaling operator exp(-θx̂p̂/ℏ), being a foundation of the complex scaling method for resonances, is studied in the Wigner phase-space representation. It is shown that the complex scaling operator behaves similarly to the squeezing operator, rotating and amplifying Wigner quasi-probability distributions of the respective wavefunctions. It is disclosed that the distorting effect of the complex scaling transformation is correlated with increased numerical errors of computed resonance energies and widths. The behavior of the numerical error is demonstrated for a computation of CO(2+) vibronic resonances. © 2011 American Institute of Physics

  13. Does the Assessment of Recovery Capital scale reflect a single or multiple domains?

    PubMed

    Arndt, Stephan; Sahker, Ethan; Hedden, Suzy

    2017-01-01

    The goal of this study was to determine whether the 50-item Assessment of Recovery Capital scale represents a single general measure or whether multiple domains might be psychometrically useful for research or clinical applications. Data are from an existing cross-sectional, de-identified program evaluation data set of 1,138 clients entering substance use disorder treatment. Principal components and iterated factor analysis were used on the domain scores. Multiple group factor analysis provided a quasi-confirmatory factor analysis. The solution accounted for 75.24% of the total variance, suggesting that 10 factors provide a reasonably good fit. However, Tucker's congruence coefficients between the factor structure and defining weights (0.41-0.52) suggested a poor fit to the hypothesized 10-domain structure. Principal components of the 10 domain scores yielded one factor whose eigenvalue was greater than one (5.93), accounting for 75.8% of the common variance. A few domains had perceptible but small unique variance components, suggesting that a few of the domains may warrant enrichment. Our findings suggest that there is one general factor, with a caveat. Using the 10 measures inflates the chance for Type I errors. Using one general measure avoids this issue, is simple to interpret, and could reduce the number of items. However, those seeking to maximally predict later recovery success may need to use the full instrument and all 10 domains.

  14. Fine-scale landscape genetics of the American badger (Taxidea taxus): disentangling landscape effects and sampling artifacts in a poorly understood species

    PubMed Central

    Kierepka, E M; Latch, E K

    2016-01-01

    Landscape genetics is a powerful tool for conservation because it identifies landscape features that are important for maintaining genetic connectivity between populations within heterogeneous landscapes. However, using landscape genetics in poorly understood species presents a number of challenges, namely, limited life history information for the focal population and spatially biased sampling. Both obstacles can reduce power in statistics, particularly in individual-based studies. In this study, we genotyped 233 American badgers in Wisconsin at 12 microsatellite loci to identify alternative statistical approaches that can be applied to poorly understood species in an individual-based framework. Badgers are protected in Wisconsin owing to an overall lack of life history information, so our study utilized partial redundancy analysis (RDA) and spatially lagged regressions to quantify how three landscape factors (Wisconsin River, Ecoregions and land cover) impacted gene flow. We also performed simulations to quantify errors created by spatially biased sampling. Statistical analyses first found that geographic distance was an important influence on gene flow, mainly driven by fine-scale positive spatial autocorrelations. After controlling for geographic distance, both RDA and regressions found that Wisconsin River and Agriculture were correlated with genetic differentiation. However, only Agriculture had an acceptable type I error rate (3–5%) to be considered biologically relevant. Collectively, this study highlights the benefits of combining robust statistics and error assessment via simulations and provides a method for hypothesis testing in individual-based landscape genetics. PMID:26243136

  15. Constraining biosphere CO2 flux at regional scale with WRF-CO2 4DVar assimilation system

    NASA Astrophysics Data System (ADS)

    Zheng, T.

    2017-12-01

    The WRF-CO2 4DVar assimilation system is updated to include (1) operators for tower-based observations, (2) chemistry initial and boundary conditions in the state vector, and (3) a mechanism for aggregation from the simulation model grid to state vector space. The updated system is first tested with synthetic data to ensure its accuracy. The system is then used to test regional-scale CO2 inversion at MCI (Midcontinental Intensive) sites, where CO2 mole fraction data were collected at multiple tall towers during 2007-2008. The model domain is centered on Iowa, includes 8 towers within its boundary, and has a 12 x 12 km horizontal grid spacing. First, the relative impacts of the initial and boundary conditions are assessed with the system's adjoint model, using 24, 48, and 72 hour time spans. Second, we assessed the impacts of transport error, including the misrepresentation of the boundary layer and cumulus activities. Third, we evaluated different aggregation approaches from the native model grid to the control variables (including scaling factors for flux, initial and boundary conditions). Fourth, we assessed the inversion performance using CO2 observations with different time intervals and from different tower levels. We also examined the appropriate treatment of the background and observation error covariances in relation to these varying observation data sets.
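
    For reference, a 4DVar system of this kind minimizes a cost function of the generic form (the notation here is assumed, not taken from the abstract):

        J(\mathbf{c}) = \tfrac{1}{2}\,(\mathbf{c}-\mathbf{c}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{c}-\mathbf{c}_b)
                      + \tfrac{1}{2}\sum_{i}\bigl(H_i(M_i(\mathbf{c}))-\mathbf{y}_i\bigr)^{\mathsf{T}}\mathbf{R}_i^{-1}\bigl(H_i(M_i(\mathbf{c}))-\mathbf{y}_i\bigr)

    where c is the control vector (flux scaling factors plus CO2 initial and boundary conditions), c_b its background estimate, B and R_i the background and observation error covariances whose treatment the study assesses, M_i the WRF-CO2 transport to observation time i, and H_i the operator sampling the model at the tower locations and levels.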

  16. Standard Errors for National Trends in International Large-Scale Assessments in the Case of Cross-National Differential Item Functioning

    ERIC Educational Resources Information Center

    Sachse, Karoline A.; Haag, Nicole

    2017-01-01

    Standard errors computed according to the operational practices of international large-scale assessment studies such as the Programme for International Student Assessment's (PISA) or the Trends in International Mathematics and Science Study (TIMSS) may be biased when cross-national differential item functioning (DIF) and item parameter drift are…

  17. Conditional standard errors of measurement for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.

    PubMed

    Price, Larry R; Raju, Nambury; Lurie, Anna; Wilkins, Charles; Zhu, Jianjun

    2006-02-01

    A specific recommendation of the 1999 Standards for Educational and Psychological Testing by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education is that test publishers report estimates of the conditional standard error of measurement (SEM). Procedures for calculating the conditional (score-level) SEM based on raw scores are well documented; however, few procedures have been developed for estimating the conditional SEM of subtest or composite scale scores resulting from a nonlinear transformation. Item response theory provided the psychometric foundation to derive the conditional standard errors of measurement and confidence intervals for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.
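
    The IRT relation underlying score-level error estimates of this kind is SEM(θ) = 1/√I(θ), where I(θ) is the test information function. Below is a minimal sketch for a two-parameter logistic model; the item parameters are invented, and mapping the result onto the WPPSI-III composite-score metric would additionally require the publisher's scaling transformation, which is not shown.

        import numpy as np

        def p_2pl(theta, a, b):
            # 2PL item response function
            return 1.0 / (1.0 + np.exp(-a * (theta - b)))

        def conditional_sem(theta, a, b):
            # conditional SEM on the theta scale: 1 / sqrt(test information)
            p = p_2pl(theta[:, None], a, b)            # (n_theta, n_items)
            info = (a ** 2 * p * (1 - p)).sum(axis=1)  # test information
            return 1.0 / np.sqrt(info)

        theta = np.linspace(-3, 3, 7)
        a = np.array([1.2, 0.8, 1.5, 1.0])   # invented discriminations
        b = np.array([-1.0, 0.0, 0.5, 1.0])  # invented difficulties
        print(np.round(conditional_sem(theta, a, b), 2))

    The SEM is smallest where the test information peaks (near the items' difficulty range) and grows toward the extremes of the score scale, which is what confidence intervals built this way reflect.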

  18. Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.

    PubMed

    Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean

    2017-06-01

    Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% of the 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9% vs 8.2%, p < 0.05). Twenty-eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficits. The most common contributing factors were lack of supervision or lack of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.

  19. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.

  20. Psychometric properties of the communication skills attitude scale (CSAS) measure in a sample of Iranian medical students

    PubMed Central

    YAKHFOROSHHA, AFSANEH; SHIRAZI, MANDANA; YOUSEFZADEH, NASER; GHANBARNEJAD, AMIN; CHERAGHI, MOHAMMADALI; MOJTAHEDZADEH, RITA; MAHMOODI-BAKHTIARI, BEHROOZ; EMAMI, SEYED AMIR HOSSEIN

    2018-01-01

    Introduction: Communication skill (CS) has been regarded as one of the fundamental competencies for medical and other health care professionals. Students' attitudes toward learning CS are a key factor in designing educational interventions. The original CSAS, with positive and negative subscales, was developed in the UK; however, there is no scale to measure these attitudes in Iran. The aim of this study was to assess the psychometric characteristics of the Communication Skills Attitude Scale (CSAS) in an Iranian context and to determine whether it is a valid tool to assess attitudes toward learning communication skills among health care professionals. Methods: Psychometric characteristics of the CSAS were assessed using a cross-sectional design. In the current study, 410 medical students were selected using a stratified sampling framework. The face validity of the scale was estimated through students' and experts' opinions. Content validity of the CSAS was assessed qualitatively and quantitatively. Reliability was examined through two methods, Cronbach's alpha coefficient and the intraclass correlation coefficient (ICC). Construct validity of the CSAS was assessed using confirmatory factor analysis (CFA) and exploratory factor analysis (principal component analysis followed by varimax rotation). Convergent and discriminant validity of the scale were measured through Spearman correlation. Statistical analysis was performed using SPSS 19 and EQS 6.1. Results: The internal consistency and reproducibility of the total CSAS score were 0.84 (Cronbach's alpha) and 0.81, which demonstrates acceptable reliability of the questionnaire. The item-level content validity index (I-CVI) and the scale-level content validity index (S-CVI/Ave) demonstrated appropriate results: 0.97 and 0.94, respectively. An exploratory factor analysis (EFA) on the 25 items of the CSAS revealed a 4-factor structure that together explained 55% of the variance. Results of the confirmatory factor analysis indicated an acceptable goodness of fit between the model and the observed data [χ²/df = 2.36, Comparative Fit Index (CFI) = 0.95, GFI = 0.96, Root Mean Square Error of Approximation (RMSEA) = 0.05]. Conclusion: The Persian version of the CSAS is a multidimensional, valid and reliable tool for assessing attitudes toward communication skills among medical students. PMID:29344525

  1. A new method for testing the scale-factor performance of fiber optical gyroscope

    NASA Astrophysics Data System (ADS)

    Zhao, Zhengxin; Yu, Haicheng; Li, Jing; Li, Chao; Shi, Haiyang; Zhang, Bingxin

    2015-10-01

    The fiber optic gyroscope (FOG) is a solid-state optical gyroscope with good environmental adaptability, which has been widely used in national defense, aviation, aerospace and other civilian areas. In some applications, the FOG experiences environmental conditions such as vacuum, radiation and vibration, and the scale-factor performance is an important accuracy indicator. However, the scale-factor performance of a FOG under these environmental conditions is difficult to test using conventional methods, because a turntable cannot operate under such conditions. Based on the observation that the physical effect produced in a FOG by a sawtooth voltage signal under static conditions is consistent with the effect produced by a turntable in uniform rotation, a new method for testing the scale-factor performance of a FOG without a turntable is proposed in this paper. In this method, the test system consists of an external operational amplifier circuit and a FOG whose modulation signal and Y-waveguide are disconnected. The external operational amplifier circuit superimposes the externally generated sawtooth voltage signal onto the modulation signal of the FOG and applies the superimposed signal to the Y-waveguide. The test system can produce different equivalent angular velocities by changing the period of the sawtooth signal. The system model of a FOG with an externally superimposed sawtooth is analyzed, showing that the equivalent input angular velocity produced by the sawtooth voltage signal has the same effect as an input angular velocity produced by a turntable. The relationship between the equivalent angular velocity and parameters such as the sawtooth period is presented, and a correction method for the equivalent angular velocity is derived by analyzing the influence of each parameter error. A comparative experiment between the proposed method and turntable calibration was conducted, and the scale-factor performance test results of the same FOG using the two methods were consistent. With the proposed method, the input angular velocity is the equivalent effect produced by a sawtooth voltage signal, and no turntable is needed to produce mechanical rotation, so the method can be used to test FOG performance under ambient conditions in which a turntable cannot operate.
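
    The abstract does not reproduce the closed-form relation, but a standard serrodyne argument gives one plausible form: a 2π-amplitude sawtooth of period T imposes a nonreciprocal phase difference Δφ = 2πτ/T over the loop transit time τ = nL/c, and equating this to the Sagnac phase (2πLD/λc)Ω yields an equivalent rate Ω_eq = nλ/(DT), independent of fiber length. A sketch under that assumption, with illustrative parameter values rather than the paper's hardware:

    ```python
    import math

    # Illustrative FOG parameters (assumptions, not the paper's hardware).
    n_eff = 1.45           # effective refractive index of the fiber
    wavelength = 1.55e-6   # source wavelength (m)
    coil_diameter = 0.10   # fiber-coil diameter D (m)

    def equivalent_rate(T_sawtooth):
        """Equivalent input rate (rad/s) for a 2*pi-amplitude sawtooth of
        period T: Omega_eq = n_eff * wavelength / (D * T)."""
        return n_eff * wavelength / (coil_diameter * T_sawtooth)

    for T in (1e-3, 1e-4, 1e-5):   # sawtooth periods (s)
        w = equivalent_rate(T)
        print(f"T = {T:.0e} s  ->  {w:.3e} rad/s  ({math.degrees(w):.3f} deg/s)")
    ```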

  2. Interspecies scaling and prediction of human clearance: comparison of small- and macro-molecule drugs

    PubMed Central

    Huh, Yeamin; Smith, David E.; Feng, Meihau Rose

    2014-01-01

    Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small-molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small-molecules was reduced using scaling methods with a correction of maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
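
    Simple allometry, the baseline method compared above, fits the power law CL = a·BW^b across species on a log-log scale and extrapolates to human body weight. A minimal sketch with made-up preclinical values (the species data are illustrative assumptions, not the paper's dataset):

    ```python
    import numpy as np

    # Hypothetical preclinical data (body weight kg, clearance mL/min);
    # illustrative numbers, not the paper's dataset.
    bw = np.array([0.25, 2.5, 5.0, 10.0])    # rat, rabbit, monkey, dog
    cl = np.array([2.0, 14.0, 26.0, 45.0])

    # Simple allometry: CL = a * BW**b, i.e. a straight line in log-log space.
    b, log_a = np.polyfit(np.log(bw), np.log(cl), 1)
    a = np.exp(log_a)

    cl_human = a * 70.0**b                   # extrapolate to a 70 kg human
    print(f"CL = {a:.2f} * BW^{b:.2f}; predicted human CL = {cl_human:.0f} mL/min")
    ```

    The MLP and BRW corrections mentioned above multiply clearance by maximum life span or brain weight before the same log-log fit, which damps the over-prediction seen for low-Eh compounds.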

  3. Constructivism, Factoring, and Beliefs.

    ERIC Educational Resources Information Center

    Rauff, James V.

    1994-01-01

    Discusses errors made by remedial intermediate algebra students in factoring polynomials in light of student definitions of factoring. Found certain beliefs about factoring to logically imply many of the errors made. Suggests that belief-based teaching can be successful in teaching factoring. (16 references) (Author/MKR)

  4. Development and Validation of the Human Papillomavirus Attitudes and Beliefs Scale in a National Canadian Sample.

    PubMed

    Perez, Samara; Shapiro, Gilla K; Tatar, Ovidiu; Joyal-Desmarais, Keven; Rosberger, Zeev

    2016-10-01

    Parents' human papillomavirus (HPV) vaccination decision-making is strongly influenced by their attitudes and beliefs toward vaccination. To date, psychometrically evaluated HPV vaccination attitudes scales have been narrow in their range of measured beliefs and often limited to attitudes surrounding female HPV vaccination. The study aimed to develop a comprehensive, validated and reliable HPV vaccination attitudes and beliefs scale among parents of boys. Data were collected from Canadian parents of 9- to 16-year-old boys using an online questionnaire completed in 2 waves with a 7-month interval. Based on existing vaccination attitudes scales, a set of 61 attitude and belief items was developed. Exploratory and confirmatory factor analyses were conducted. Internal consistency was evaluated with Cronbach's α and stability over time with intraclass correlations. The HPV Attitudes and Beliefs Scale (HABS) was informed by 3117 responses at time 1 and 1427 at time 2. The HABS contains 46 items organized in 9 factors: Benefits (10 items), Threat (3 items), Influence (8 items), Harms (6 items), Risk (3 items), Affordability (3 items), Communication (5 items), Accessibility (4 items), and General Vaccination Attitudes (4 items). Model fit indices at time 2 were: χ²/df = 3.13, standardized root mean square residual = 0.056, root mean square error of approximation (confidence interval) = 0.039 (0.037-0.040), comparative fit index = 0.962 and Tucker-Lewis index = 0.957. Cronbach's αs were greater than 0.8 and intraclass correlations of factors were greater than 0.6. The HABS is the first psychometrically tested scale of HPV attitudes and beliefs among parents of boys available for use in English and French. Further testing among parents of girls and young adults and assessment of predictive validity are warranted.

  5. Parameter identification of JONSWAP spectrum acquired by airborne LIDAR

    NASA Astrophysics Data System (ADS)

    Yu, Yang; Pei, Hailong; Xu, Chengzhong

    2017-12-01

    In this study, we developed the first linear Joint North Sea Wave Project (JONSWAP) spectrum (JS), which involves a transformation from the JS solution to the natural logarithmic scale. This transformation is convenient for defining the least squares function in terms of the scale and shape parameters. We identified these two wind-dependent parameters to better understand the wind effect on surface waves. Due to its efficiency and high-resolution, we employed the airborne Light Detection and Ranging (LIDAR) system for our measurements. Due to the lack of actual data, we simulated ocean waves in the MATLAB environment, which can be easily translated into industrial programming language. We utilized the Longuet-Higgin (LH) random-phase method to generate the time series of wave records and used the fast Fourier transform (FFT) technique to compute the power spectra density. After validating these procedures, we identified the JS parameters by minimizing the mean-square error of the target spectrum to that of the estimated spectrum obtained by FFT. We determined that the estimation error is relative to the amount of available wave record data. Finally, we found the inverse computation of wind factors (wind speed and wind fetch length) to be robust and sufficiently precise for wave forecasting.
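
    A compact sketch of the fitting idea, written in Python rather than the authors' MATLAB, with the peak frequency held fixed for brevity: generate a noisy spectral estimate, then recover the scale (α) and shape (γ) parameters by least squares on the log spectrum. All parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    g = 9.81

    def jonswap(f, alpha, gamma, fp=0.1):
        """JONSWAP spectral density, standard parameterization."""
        sigma = np.where(f <= fp, 0.07, 0.09)
        r = np.exp(-((f - fp) ** 2) / (2.0 * sigma**2 * fp**2))
        return (alpha * g**2 * (2.0 * np.pi) ** -4 * f**-5.0
                * np.exp(-1.25 * (fp / f) ** 4) * gamma**r)

    rng = np.random.default_rng(0)
    f = np.linspace(0.05, 0.5, 200)
    s_obs = jonswap(f, 0.012, 3.3) * rng.lognormal(0.0, 0.1, f.size)  # noisy PSD

    def cost(p):
        alpha, gamma = np.exp(p)          # fit in log space to keep both positive
        return np.sum((np.log(jonswap(f, alpha, gamma)) - np.log(s_obs)) ** 2)

    fit = minimize(cost, x0=np.log([0.01, 2.0]), method="Nelder-Mead")
    print("recovered alpha, gamma:", np.exp(fit.x))   # close to 0.012, 3.3
    ```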

  6. Using voluntary reports from physicians to learn from diagnostic errors in emergency medicine.

    PubMed

    Okafor, Nnaemeka; Payne, Velma L; Chathampally, Yashwant; Miller, Sara; Doshi, Pratik; Singh, Hardeep

    2016-04-01

    Diagnostic errors are common in the emergency department (ED), but few studies have comprehensively evaluated their types and origins. We analysed incidents reported by ED physicians to determine disease conditions, contributory factors and patient harm associated with ED-related diagnostic errors. Between 1 March 2009 and 31 December 2013, ED physicians reported 509 incidents using a department-specific voluntary incident-reporting system that we implemented at two large academic hospital-affiliated EDs. For this study, we analysed 209 incidents related to diagnosis. A quality assurance team led by an ED physician champion reviewed each incident and interviewed physicians when necessary to confirm the presence/absence of diagnostic error and to determine the contributory factors. We generated descriptive statistics quantifying disease conditions involved, contributory factors and patient harm from errors. Among the 209 incidents, we identified 214 diagnostic errors associated with 65 unique diseases/conditions, including sepsis (9.6%), acute coronary syndrome (9.1%), fractures (8.6%) and vascular injuries (8.6%). Contributory factors included cognitive (n=317), system related (n=192) and non-remedial (n=106). Cognitive factors included faulty information verification (41.3%) and faulty information processing (30.6%) whereas system factors included high workload (34.4%) and inefficient ED processes (40.1%). Non-remediable factors included atypical presentation (31.3%) and the patients' inability to provide a history (31.3%). Most errors (75%) involved multiple factors. Major harm was associated with 34/209 (16.3%) of reported incidents. Most diagnostic errors in ED appeared to relate to common disease conditions. While sustaining diagnostic error reporting programmes might be challenging, our analysis reveals the potential value of such systems in identifying targets for improving patient safety in the ED. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  7. Description of a Quality Assurance Process for a Surface Wind Database in Eastern Canada

    NASA Astrophysics Data System (ADS)

    Lucio-Eceiza, E. E.; Gonzalez-Rouco, F. J.; Navarro, J.; Beltrami, H.; García-Bustamante, E.; Hidalgo; Jiménez, P. A.

    2011-12-01

    Meteorological data of good quality are important for understanding both global and regional climates. The data are subject to different types of measurement errors that can be roughly classified into three groups: random, systematic and rough errors. Random errors are unavoidable and inherent to the very nature of the measurements as instrumental responses to real physical phenomena, since any measurement is an approximate representation of reality. Systematic errors are produced by instrumental scale shifts and drifts or by more or less persistent factors that are not taken into account (changes in the sensor, recalibrations or location displacements). Rough errors are associated with sensor malfunction or mismanagement arising during data processing, transmission, reception or storage. It is essential to develop procedures that identify, and where possible correct, the errors in observed series in order to improve the quality of the data sets and reach solid conclusions. This work summarizes the evaluation made to date of the quality assurance process for wind speed and direction data acquired over a wide area in Eastern Canada (including the provinces of Quebec, Prince Edward Island, New Brunswick, Nova Scotia, and Newfoundland and Labrador), the adjacent maritime areas and part of the north-eastern U.S. (Maine, New Hampshire, Massachusetts, New York and Vermont). The data set consists of 527 stations spanning the period 1940-2009 and has been compiled from three different sources: a set of 344 land sites obtained from Environment Canada (1940-2009), a subset of 40 buoys distributed over the East Coast and the Canadian Great Lakes (1988-2008) provided by Fisheries and Oceans, and a subset of 143 land sites combining both eastern Canada and the north-eastern U.S. provided by the National Center for Atmospheric Research (1975-2007). The data have been compiled, and a set of quality assurance techniques has subsequently been applied for the detection and later treatment of errors in the measurements. These techniques involve, among others, detection of manipulation errors, limit checks to avoid unrealistic records, and temporal consistency checks to suppress abnormally low/high variations. There are other issues specifically related to the heterogeneous nature of this data set, such as unit conversion and changes in recording times or direction resolution over time. Ensuring the quality of wind observations is essential for the later analysis, which will focus on the wind field behaviour at the regional scale, with special interest in the area of Nova Scotia. The wind behaviour will be examined with attention to the specific features of the regional topography and to the influence of changes in the large-scale atmospheric circulation. Subsequent steps will involve a simulation of the wind field at high spatial resolution using a mesoscale model (such as WRF) and its validation against the observational data set presented herein.
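
    The abstract names limit checks and temporal consistency checks; a minimal sketch of what such rough-error screening can look like for an hourly wind-speed series (the thresholds are illustrative assumptions, not the study's values):

    ```python
    import numpy as np

    def qa_flags(speed, max_speed=75.0, max_step=20.0, max_flatline=24):
        """Flag rough errors in an hourly wind-speed series (m/s): limit check,
        step (temporal-consistency) check, and abnormally long constant runs.
        Thresholds are illustrative, not the study's values."""
        speed = np.asarray(speed, dtype=float)
        flags = (speed < 0.0) | (speed > max_speed)          # limit check
        step = np.abs(np.diff(speed, prepend=speed[0]))
        flags |= step > max_step                             # step check
        run = 0
        for i in range(1, speed.size):                       # flat-line check
            run = run + 1 if speed[i] == speed[i - 1] else 0
            if run >= max_flatline:
                flags[i - run : i + 1] = True
        return flags

    rng = np.random.default_rng(1)
    series = np.r_[rng.rayleigh(6.0, 48), [99.0], np.full(30, 3.2)]
    print(f"{qa_flags(series).sum()} of {series.size} records flagged")
    ```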

  8. Searching for the Final Answer: Factors Contributing to Medication Administration Errors.

    ERIC Educational Resources Information Center

    Pape, Tess M.

    2001-01-01

    Causal factors contributing to errors in medication administration should be thoroughly investigated, focusing on systems rather than individual nurses. Unless systemic causes are addressed, many errors will go unreported for fear of reprisal. (Contains 42 references.) (SK)

  9. Complexity perplexity: a systematic review to describe the measurement of medication regimen complexity.

    PubMed

    Paquin, Allison M; Zimmerman, Kristin M; Kostas, Tia R; Pelletier, Lindsey; Hwang, Angela; Simone, Mark; Skarf, Lara M; Rudolph, James L

    2013-11-01

    Complex medication regimens are error prone and challenging for patients, which may impact medication adherence and safety. No universal method to assess the complexity of medication regimens (CMRx) exists. The authors aim to review literature for CMRx measurements to establish consistencies and, secondarily, describe CMRx impact on healthcare outcomes. A search of EMBASE and PubMed for studies analyzing at least two medications and complexity components, among those self-managing medications, was conducted. Out of 1204 abstracts, 38 studies were included in the final sample. The majority (74%) of studies used one of five validated CMRx scales; their components and scoring were compared. Universal CMRx assessment is needed to identify and reduce complex regimens, and, thus, improve safety. The authors highlight commonalities among five scales to help build consensus. Common components (i.e., regimen factors) included dosing frequency, units per dose, and non-oral routes. Elements (e.g., twice daily) of these components (e.g., dosing frequency) and scoring varied. Patient-specific factors (e.g., dexterity, cognition) were not addressed, which is a shortcoming of current scales and a challenge for future scales. As CMRx has important outcomes, notably adherence and healthcare utilization, a standardized tool has potential for far-reaching clinical, research, and patient-safety impact.

  10. MMPI-2 Symptom Validity (FBS) Scale: psychometric characteristics and limitations in a Veterans Affairs neuropsychological setting.

    PubMed

    Gass, Carlton S; Odland, Anthony P

    2014-01-01

    The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) Symptom Validity (Fake Bad Scale [FBS]) Scale is widely used to assist in determining noncredible symptom reporting, despite a paucity of detailed research regarding its itemmetric characteristics. Originally designed for use in civil litigation, the FBS is often used in a variety of clinical settings. The present study explored its fundamental psychometric characteristics in a sample of 303 patients who were consecutively referred for a comprehensive examination in a Veterans Affairs (VA) neuropsychology clinic. FBS internal consistency (reliability) was .77. Its underlying factor structure consisted of three unitary dimensions (Tiredness/Distractibility, Stomach/Head Discomfort, and Claimed Virtue of Self/Others) accounting for 28.5% of the total variance. The FBS's internal structure showed factorial discordance, as Claimed Virtue was negatively related to most of the FBS and to its somatic complaint components. Scores on this 12-item FBS component reflected a denial of socially undesirable attitudes and behaviors (Antisocial Practices Scale) that is commonly expressed by the 1,138 males in the MMPI-2 normative sample. These 12 items significantly reduced FBS reliability, introducing systematic error variance. In this VA neuropsychological referral setting, scores on the FBS have ambiguous meaning because of this structural discordance.

  11. The design and scale-up of spray dried particle delivery systems.

    PubMed

    Al-Khattawi, Ali; Bayly, Andrew; Phillips, Andrew; Wilson, David

    2018-01-01

    The rising demand for pharmaceutical particles with tailored physicochemical properties has opened new markets for spray drying, especially for solubility enhancement, improvement of inhalation medicines and stabilization of biopharmaceuticals. Despite this, the spray drying literature is scattered and often does not address the principles underpinning robust development of pharmaceuticals. It is therefore necessary to present a clearer picture of the field and highlight the factors influencing particle design and scale-up. Areas covered: The review presents a systematic analysis of trends in the development of particle delivery systems using spray drying. This is followed by an exploration of the mechanisms governing particle formation in the process stages. Particle design factors, including equipment configuration and feed/process attributes, are highlighted. Finally, the review summarises current industrial approaches to the scale-up of pharmaceutical spray drying. Expert opinion: Spray drying provides the ability to design particles of the desired functionality. This greatly benefits the pharmaceutical sector, especially as product specifications become more encompassing and exacting. One of the biggest barriers to product translation remains scale-up/scale-down. A shift from trial-and-error approaches to model-based particle design helps to enhance control over product properties. To this end, process innovations and advanced manufacturing technologies are particularly welcome.

  12. Clinimetric properties of the Nepali version of the Pain Catastrophizing Scale in individuals with chronic pain

    PubMed Central

    Thibault, Pascal; Abbott, J Haxby; Jensen, Mark P

    2018-01-01

    Background Pain catastrophizing is an exaggerated negative cognitive response related to pain. It is commonly assessed using the Pain Catastrophizing Scale (PCS). Translation and validation of the scale in a new language would facilitate cross-cultural comparisons of the role that pain catastrophizing plays in patient function. Purpose The aim of this study was to translate and culturally adapt the PCS into Nepali (Nepali version of PCS [PCS-NP]) and evaluate its clinimetric properties. Methods We translated, cross-culturally adapted, and performed an exploratory factor analysis (EFA) of the PCS-NP in a sample of adults with chronic pain (N=143). We then confirmed the resulting factor model in a separate sample (N=272) and compared this model with 1-, 2-, and 3-factor models previously identified using confirmatory factor analyses (CFAs). We also computed internal consistencies, test–retest reliabilities, standard error of measurement (SEM), minimal detectable change (MDC), and limits of agreement with 95% confidence interval (LOA95%) of the PCS-NP scales. Concurrent validity with measures of depression, anxiety, and pain intensity was assessed by computing Pearson’s correlation coefficients. Results The PCS-NP was comprehensible and culturally acceptable. We extracted a two-factor solution using EFA and confirmed this model using CFAs in the second sample. Adequate fit was also found for a one-factor model and different two- and three-factor models based on prior studies. The PCS-NP scores evidenced excellent reliability and temporal stability, and demonstrated validity via moderate-to-strong associations with measures of depression, anxiety, and pain intensity. The SEM and MDC for the PCS-NP total score were 2.52 and 7.86, respectively (range of PCS scores 0–52). LOA95% was between −15.17 and +16.02 for the total PCS-NP scores. Conclusion The PCS-NP is a valid and reliable instrument to assess pain catastrophizing in Nepalese individuals with chronic pain. PMID:29430196

  13. Reliability and validity of the Turkish version of the situational self-efficacy scale for fruit and vegetable consumption in adolescents.

    PubMed

    Kadioglu, Hasibe; Erol, Saime; Ergun, Ayse

    2015-01-01

    The purpose of this research was to examine the psychometric properties of the Turkish version of the situational self-efficacy scale for vegetable and fruit consumption in adolescents. This was a methodological study conducted in four public secondary schools in Istanbul, Turkey. Subjects were 1586 adolescents. Content and construct validity were assessed to test the validity of the scale. Reliability was assessed in terms of internal consistency and test-retest reliability. For confirmatory factor analysis, χ² statistics plus other fit indices were used, including the goodness-of-fit index, the adjusted goodness-of-fit index, the nonnormed fit index, the comparative fit index, the standardized root mean residual, and the root mean square error of approximation. Pearson's correlation was used for test-retest reliability and item-total correlation. Internal consistency was assessed using Cronbach's α. Confirmatory factor analysis strongly supported the three-component structure representing positive social situations (α = .81), negative affect situations (α = .93), and difficult situations (α = .78). Psychometric analyses of the Turkish version of the situational self-efficacy scale indicate high reliability and good content and construct validity. Researchers and health professionals will find it useful to employ the Turkish situational self-efficacy scale in evaluating situational self-efficacy for fruit and vegetable consumption in Turkish adolescents.

  14. Factors correlated with traffic accidents as a basis for evaluating Advanced Driver Assistance Systems.

    PubMed

    Staubach, Maria

    2009-09-01

    This study aims to identify factors which influence and cause errors in traffic accidents and to use these as a basis for information to guide the application and design of driver assistance systems. A total of 474 accidents were examined in depth for this study by means of a psychological survey, data from accident reports, and technical reconstruction information. An error analysis was subsequently carried out, taking into account the driver, environment, and vehicle sub-systems. Results showed that all accidents were influenced by errors as a consequence of distraction and reduced activity. For crossroad accidents, there were further errors resulting from sight obstruction, masked stimuli, focus errors, and law infringements. Lane departure crashes were additionally caused by errors as a result of masked stimuli, law infringements, expectation errors as well as objective and action slips, while same direction accidents occurred additionally because of focus errors, expectation errors, and objective and action slips. Most accidents were influenced by multiple factors. There is a safety potential for Advanced Driver Assistance Systems (ADAS), which support the driver in information assimilation and help to avoid distraction and reduced activity. The design of the ADAS is dependent on the specific influencing factors of the accident type.

  15. A Gridded Daily Min/Max Temperature Dataset With 0.1° Resolution for the Yangtze River Valley and its Error Estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Qiufen; Hu, Jianglin

    2013-05-01

    The minimum/maximum (Min/Max) temperature in the Yangtze River valley is decomposed into a climatic mean and an anomaly component. A spatial interpolation is developed which combines a 3D thin-plate spline scheme for the climatological mean with a 2D Barnes scheme for the anomaly component to create a daily Min/Max temperature dataset. The climatic mean field is obtained with the 3D thin-plate spline scheme because the decrease of Min/Max temperature with elevation is robust and reliable on long time-scales. The anomaly field is only weakly related to elevation, so the anomaly component is adequately analyzed by the 2D Barnes procedure, which is computationally efficient and readily tunable. With this hybrid interpolation method, a daily Min/Max temperature dataset covering the domain from 99°E to 123°E and from 24°N to 36°N with 0.1° longitudinal and latitudinal resolution is obtained from daily Min/Max temperature data from three kinds of station observations (national reference climatological stations, basic meteorological observing stations and ordinary meteorological observing stations) in 15 provinces and municipalities in the Yangtze River valley from 1971 to 2005. The error of the gridded dataset is assessed by examining cross-validation statistics. The results show that the daily Min/Max temperature interpolation not only has a high correlation coefficient (0.99) and interpolation efficiency (0.98) but also a mean bias error of 0.00 °C. For the maximum temperature, the root mean square error is 1.1 °C and the mean absolute error is 0.85 °C. For the minimum temperature, the root mean square error is 0.89 °C and the mean absolute error is 0.67 °C. Thus, the new dataset provides the distribution of Min/Max temperature over the Yangtze River valley as realistic, continuous gridded data with 0.1° × 0.1° spatial resolution at daily temporal scale. The primary factors influencing the dataset precision are elevation and terrain complexity. In general, the gridded dataset has relatively high precision in plains and flatlands and relatively low precision in mountainous areas.
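
    A minimal two-pass Barnes analysis of the anomaly component might look as follows: a Gaussian-weighted first pass onto the grid, then a correction pass with a shrunken length scale driven by the station residuals. The length-scale and convergence parameters below are illustrative assumptions, not the paper's tuned configuration.

    ```python
    import numpy as np

    def barnes(xs, ys, vals, xg, yg, kappa, gamma=0.3):
        """Two-pass Barnes objective analysis of scattered anomalies onto a grid."""
        def weights(px, py, k):
            d2 = (px[..., None] - xs) ** 2 + (py[..., None] - ys) ** 2
            w = np.exp(-d2 / k)
            return w / w.sum(axis=-1, keepdims=True)

        first = weights(xg, yg, kappa) @ vals            # first-pass grid field
        at_stations = weights(xs, ys, kappa) @ vals      # first pass at the stations
        resid = vals - at_stations                       # observation residuals
        return first + weights(xg, yg, gamma * kappa) @ resid   # correction pass

    rng = np.random.default_rng(2)
    xs, ys = rng.uniform(0.0, 10.0, (2, 50))             # station coordinates
    vals = np.sin(xs) + 0.1 * rng.standard_normal(50)    # anomaly observations
    xg, yg = np.meshgrid(np.linspace(0, 10, 25), np.linspace(0, 10, 25))
    grid = barnes(xs, ys, vals, xg, yg, kappa=2.0)
    print(grid.shape)   # (25, 25)
    ```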

  16. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    PubMed

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Scale (mRS), analysis over a range of scores ("shift") has been proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by these uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified the errors of the "shift" approach compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for the "shift" and for dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1: 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show that when uncertainty in assessments is considered, the lowest error rates are obtained with dichotomization. While using the full range of the mRS is conceptually appealing, the gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered, since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
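
    The core computation can be sketched as follows: propagate a trial's mRS distribution through an inter-rater confusion (noise) matrix and read off the misclassification probability for the full range versus each dichotomization. Both distributions below are hypothetical placeholders, not the published ones, but they reproduce the qualitative finding that dichotomization is less error-prone.

    ```python
    import numpy as np

    mrs = np.array([0.15, 0.20, 0.15, 0.15, 0.20, 0.10, 0.05])  # hypothetical trial, mRS 0-6

    # Hypothetical inter-rater confusion matrix: row = true grade, col = rated
    # grade, with 80% agreement and spill-over to the adjacent grades.
    C = np.zeros((7, 7))
    for k in range(7):
        C[k, k] = 0.8
        if k > 0:
            C[k, k - 1] = 0.1
        if k < 6:
            C[k, k + 1] = 0.1
    C /= C.sum(axis=1, keepdims=True)       # renormalize the edge rows

    full_range_error = np.sum(mrs * (1.0 - np.diag(C)))
    print(f"full-range ('shift') error: {full_range_error:.1%}")

    for cut in range(1, 7):                 # dichotomize: mRS < cut vs mRS >= cut
        wrong = sum(mrs[k] * C[k, cut:].sum() for k in range(cut)) + \
                sum(mrs[k] * C[k, :cut].sum() for k in range(cut, 7))
        print(f"dichotomized at mRS >= {cut}: error {wrong:.1%}")
    ```

    Adjacent-grade confusions often stay on the same side of a cut-point, which is why the dichotomized error rates come out lower than the full-range rate.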

  17. Distributed database kriging for adaptive sampling (D²KAS)

    DOE PAGES

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; ...

    2015-03-18

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality-aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
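
    A minimal ordinary-kriging sketch of the prediction step described above, i.e. a weighted average of neighboring stored points together with a variance that can gate whether a fresh MD evaluation is needed. The Gaussian covariance model and all values are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def ordinary_krige(X, y, xq, length=1.0, var=1.0):
        """Ordinary kriging: best linear unbiased estimate of y at xq and its
        variance, under a Gaussian covariance model (an illustrative choice)."""
        def cov(a, b):
            d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
            return var * np.exp(-d2 / (2.0 * length**2))

        n = len(X)
        K = np.zeros((n + 1, n + 1))
        K[:n, :n] = cov(X, X)
        K[:n, n] = K[n, :n] = 1.0               # Lagrange terms for unbiasedness
        k = np.append(cov(X, xq[None, :])[:, 0], 1.0)
        w = np.linalg.solve(K, k)
        return w[:n] @ y, var - w @ k           # prediction, kriging variance

    rng = np.random.default_rng(3)
    X = rng.uniform(-2.0, 2.0, (30, 2))         # stored micro-scale inputs
    y = np.sin(X[:, 0]) * np.cos(X[:, 1])       # stored flux responses
    mean, variance = ordinary_krige(X, y, np.array([0.3, -0.4]), length=0.8)
    print(f"predicted flux: {mean:.3f} +/- {np.sqrt(max(variance, 0.0)):.3f}")
    ```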

  18. Photograph-based ergonomic evaluations using the Rapid Office Strain Assessment (ROSA).

    PubMed

    Liebregts, J; Sonne, M; Potvin, J R

    2016-01-01

    The Rapid Office Strain Assessment (ROSA) was developed to assess musculoskeletal disorder (MSD) risk factors for computer workstations. This study examined the validity and reliability of remotely conducted, photo-based assessments using ROSA. Twenty-three office workstations were assessed on-site by an ergonomist, and 5 photos were obtained. Photo-based assessments were conducted by three ergonomists. The sensitivity and specificity of the photo-based assessors' ability to correctly classify workstations was 79% and 55%, respectively. The moderate specificity associated with false positive errors committed by the assessors could lead to unnecessary costs to the employer. Error between on-site and photo-based final scores was a considerable ∼2 points on the 10-point ROSA scale (RMSE = 2.3), with a moderate relationship (ρ = 0.33). Interrater reliability ranged from fairly good to excellent (ICC = 0.667-0.856) and was comparable to previous results. Sources of error include the parallax effect, poor estimations of small joint (e.g. hand/wrist) angles, and boundary errors in postural binning. While this method demonstrated potential validity, further improvements should be made with respect to photo-collection and other protocols for remotely-based ROSA assessments. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  19. High-accuracy absolute rotation rate measurements with a large ring laser gyro: establishing the scale factor.

    PubMed

    Hurst, Robert B; Mayerbacher, Marinus; Gebauer, Andre; Schreiber, K Ulrich; Wells, Jon-Paul R

    2017-02-01

    Large ring lasers have exceeded the performance of navigational gyroscopes by several orders of magnitude and have become useful tools for geodesy. In order to apply them to tests in fundamental physics, remaining systematic errors have to be significantly reduced. We derive a modified expression for the Sagnac frequency of a square ring laser gyro under Earth rotation. The modifications include corrections for dispersion (of both the gain medium and the mirrors), for the Goos-Hänchen effect in the mirrors, and for the refractive index of the gas filling the cavity. The corrections were measured and calculated for the 16 m² Grossring laser located at the Geodetic Observatory Wettzell. The optical frequency and the free spectral range of this laser were measured, allowing unique determination of the longitudinal mode number and measurement of the dispersion. Ultimately we find that the absolute scale factor of the gyroscope can be estimated to an accuracy of approximately 1 part in 10⁸.
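
    For orientation, the leading-order geometric scale factor that these corrections refine is K = 4A/(λP), giving a Sagnac beat frequency f = K·Ω·sin(latitude) for a horizontal ring. A rough sketch with an assumed HeNe wavelength and an approximate site latitude (illustrative values, not the paper's calibrated numbers):

    ```python
    import math

    A = 16.0                   # enclosed area (m^2): 4 m x 4 m square ring
    P = 16.0                   # perimeter (m)
    lam = 632.8e-9             # HeNe wavelength (m), assumed
    K = 4.0 * A / (lam * P)    # geometric scale factor, Hz per rad/s

    omega_earth = 7.292115e-5             # Earth rotation rate (rad/s)
    lat = math.radians(49.145)            # approximate latitude of Wettzell
    f_sagnac = K * omega_earth * math.sin(lat)
    print(f"K = {K:.4e} Hz/(rad/s); predicted Sagnac frequency ~ {f_sagnac:.1f} Hz")
    # A scale-factor error of 1e-8 shifts this ~349 Hz beat note by only a few
    # microhertz, which is why the dispersion corrections matter.
    ```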

  1. Performance Evaluation of Wearable Sensor Systems: A Case Study in Moderate-Scale Deployment in Hospital Environment.

    PubMed

    Sun, Wen; Ge, Yu; Zhang, Zhiqiang; Wong, Wai-Choong

    2015-09-25

    A wearable sensor system enables continuous and remote health monitoring and is widely considered as the next generation of healthcare technology. The performance, the packet error rate (PER) in particular, of a wearable sensor system may deteriorate due to a number of factors, particularly the interference from the other wearable sensor systems in the vicinity. We systematically evaluate the performance of the wearable sensor system in terms of PER in the presence of such interference in this paper. The factors that affect the performance of the wearable sensor system, such as density, traffic load, and transmission power in a realistic moderate-scale deployment case in hospital are all considered. Simulation results show that with 20% duty cycle, only 68.5% of data transmission can achieve the targeted reliability requirement (PER is less than 0.05) even in the off-peak period in hospital. We then suggest some interference mitigation schemes based on the performance evaluation results in the case study.

  2. Attitudes Toward Seeking Professional Psychological Help: Factor Structure and Socio-Demographic Predictors

    PubMed Central

    Picco, Louisa; Abdin, Edimanysah; Chong, Siow Ann; Pang, Shirlene; Shafie, Saleha; Chua, Boon Yiang; Vaingankar, Janhavi A.; Ong, Lue Ping; Tay, Jenny; Subramaniam, Mythily

    2016-01-01

    Attitudes toward seeking professional psychological help (ATSPPH) are complex. Help-seeking preferences are influenced by various attitudinal and socio-demographic factors and can often result in unmet needs, treatment gaps, and delays in help-seeking. The aims of the current study were to explore the factor structure of the short form of the ATSPPH scale (ATSPPH-SF) and to determine whether any significant socio-demographic differences exist in help-seeking attitudes. Data were extracted from a population-based survey conducted among Singapore residents aged 18–65 years. Respondents provided socio-demographic information and were administered the ATSPPH-SF. Weighted means and standard errors of the mean were calculated for continuous variables, and frequencies and percentages for categorical variables. Confirmatory factor analysis and exploratory factor analysis were performed to establish the validity of the factor structure of the ATSPPH-SF scale. Multivariable linear regressions were conducted to examine predictors of each of the ATSPPH-SF factors. The factor analysis revealed that the ATSPPH-SF formed three distinct dimensions: "Openness to seeking professional help," "Value in seeking professional help," and "Preference to cope on one's own." Multiple linear regression analyses showed that age, ethnicity, marital status, education, and income were significantly associated with the ATSPPH-SF factors. Population subgroups that are less open to or see less value in seeking psychological help should be targeted via culturally appropriate education campaigns and tailored, supportive interventions. PMID:27199794

  3. Using First Differences to Reduce Inhomogeneity in Radiosonde Temperature Datasets.

    NASA Astrophysics Data System (ADS)

    Free, Melissa; Angell, James K.; Durre, Imke; Lanzante, John; Peterson, Thomas C.; Seidel, Dian J.

    2004-11-01

    The utility of a “first difference” method for producing temporally homogeneous large-scale mean time series is assessed. Starting with monthly averages, the method involves dropping data around the time of suspected discontinuities and then calculating differences in temperature from one year to the next, resulting in a time series of year-to-year differences for each month at each station. These first difference time series are then combined to form large-scale means, and mean temperature time series are constructed from the first difference series. When applied to radiosonde temperature data, the method introduces random errors that decrease with the number of station time series used to create the large-scale time series and increase with the number of temporal gaps in the station time series. Root-mean-square errors for annual means of datasets produced with this method using over 500 stations are estimated at no more than 0.03 K, with errors in trends less than 0.02 K decade⁻¹ for 1960-97 at 500 mb. For a 50-station dataset, errors in trends in annual global means introduced by the first differencing procedure may be as large as 0.06 K decade⁻¹ (for six breaks per series), which is greater than the standard error of the trend. Although the first difference method offers significant resource and labor advantages over methods that attempt to adjust the data, it introduces an error in large-scale mean time series that may be unacceptable in some cases.
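
    A compact sketch of the first-difference reconstruction on synthetic data, where random gaps stand in for the data dropped around suspected discontinuities (station count, gap rate and trend are made-up values):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    years, n_stations = 40, 30
    truth = 0.02 * np.arange(years)                        # common trend, 0.02 K/yr
    data = truth + rng.normal(0.0, 0.5, (n_stations, years))
    data[rng.random((n_stations, years)) < 0.1] = np.nan   # temporal gaps

    fd = np.diff(data, axis=1)                  # year-to-year first differences
    mean_fd = np.nanmean(fd, axis=0)            # large-scale mean difference series
    series = np.concatenate([[0.0], np.cumsum(mean_fd)])   # reconstructed anomalies

    trend = np.polyfit(np.arange(years), series, 1)[0] * 10.0
    print(f"recovered trend: {trend:.3f} K/decade (truth: 0.200)")
    ```

    The cumulative sum at the end is what lets the gap-induced random errors accumulate, which is why the trend error grows as stations are removed or breaks are added.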


  4. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As part of safety within space exploration ground processing operations, the underlying contributors to and causes of human error must be identified and classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, assess their impact on human error events, and predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of launch-vehicle-related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.

  5. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As part of quality within space exploration ground processing operations, the underlying contributors to and causes of human error must be identified and classified in order to manage human error. This presentation provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, assess their impact on human error events, and predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of launch-vehicle-related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.

  6. Human errors and violations in computer and information security: the viewpoint of network administrators and security specialists.

    PubMed

    Kraemer, Sara; Carayon, Pascale

    2007-03-01

    This paper describes human errors and violations of end users and network administrators in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e., to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audio-taped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, and errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.

  7. The Properties of Extragalactic Radio Jets

    NASA Astrophysics Data System (ADS)

    Finke, Justin

    2018-01-01

    I show that by assuming a standard Blandford-Königl jet, it is possible to determine the speed (bulk Lorentz factor) and orientation (angle to the line of sight) of self-similar parsec-scale blazar jets using four measured quantities: the core radio flux, the extended radio flux, the magnitude of the core shift between two frequencies, and the apparent jet opening angle. Once the bulk Lorentz factor and angle to the line of sight of a jet are known, it is possible to compute the Doppler factor, magnetic field, and intrinsic jet opening angle. I use data taken from the literature and marginalize over nuisance parameters associated with the electron distribution and equipartition to compute these quantities, albeit with large errors. The results have implications for the resolution of the TeV BL Lac Doppler factor crisis and the production of jets from magnetically arrested disks.
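
    Once Γ and θ are in hand, the derived quantities follow from standard beaming relations, e.g. the Doppler factor δ = 1/[Γ(1 − β cos θ)] and the apparent speed β_app = β sin θ/(1 − β cos θ). A short sketch with illustrative values:

    ```python
    import math

    def doppler_and_apparent_speed(gamma, theta_deg):
        """Doppler factor and apparent (superluminal) speed for bulk Lorentz
        factor gamma at angle theta to the line of sight (standard relations)."""
        beta = math.sqrt(1.0 - 1.0 / gamma**2)
        c_th = math.cos(math.radians(theta_deg))
        s_th = math.sin(math.radians(theta_deg))
        delta = 1.0 / (gamma * (1.0 - beta * c_th))
        beta_app = beta * s_th / (1.0 - beta * c_th)
        return delta, beta_app

    for g, th in [(10.0, 2.0), (10.0, 5.7), (30.0, 1.0)]:
        d, ba = doppler_and_apparent_speed(g, th)
        print(f"Gamma={g:4.0f}, theta={th:4.1f} deg -> delta={d:5.1f}, beta_app={ba:5.1f}")
    ```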

  8. Strain gage installation and survivability on geosynthetics used in flexible pavements

    NASA Astrophysics Data System (ADS)

    Brooks, Jeremy A.

    The use of foil-type strain gages on geosynthetics is poorly documented. In addition, very few individuals are versed in proper installation techniques or calibration methods. Because of the limited number of knowledgeable technicians, there is no information regarding the susceptibility of these gages to errors in installation by inexperienced installers. Also lacking in the documentation related to the use of foil-type strain gages on geosynthetics is the survivability of the gages in field conditions. This research documented the procedures for installation, calibration, and survivability used by the project team to instrument a full-scale field installation in Marked Tree, AR. This research also addressed sensitivity to installation errors on both geotextile and geogrid. To document the process of gage installation, an experienced technician, Mr. Joe Ables, formerly of the USACE Waterways Experiment Station, was consulted. His techniques were combined with those found in the related literature and those developed by the research team to develop processes adaptable to multiple gage geometries and parent geosynthetics. These processes were described and documented step by step with accompanying photographs, which should allow virtually anyone with basic electronics knowledge to install these gages properly. Calibration of the various geosynthetic/strain gage combinations was completed using wide-width tensile testing on multiple samples of each material. The tensile testing process was documented and analyzed using digital photography to measure strain on the strain gage itself. Calibration factors were developed for each geosynthetic used in the full-scale field testing. In addition, the process was thoroughly documented to allow future researchers to calibrate additional strain gage and geosynthetic combinations. The sensitivity of the strain gages to installation errors was analyzed using wide-width tensile testing and digital photography to determine the variability of the data collected from gages with noticeable installation errors as compared to properly installed gages. Induced errors varied based on the parent geosynthetic material, and included excessive and minimal waterproofing, gage rotation, gage shift, excessive and minimal adhesive, and excessive and minimal adhesive impregnation loads. The results of this work indicated that minor errors in geotextile gage installation that are noticeable and preventable by an experienced installer have no statistically significant effect on the data recorded during the life span of geotextile gages; however, the lifespan of the gage may be noticeably shortened by such errors. Geogrid gage installation errors were found to cause statistically significant changes in the data recorded from improper installations. The issue of gage survivability was analyzed using small-scale test sections instrumented and loaded similarly to the field conditions anticipated during traditional roadway construction. Five methods of protection were tested for both geotextile and geogrid, including a sand blanket, inversion, semi-hemispherical PVC sections, neoprene mats, and geosynthetic wick drain. Based on this testing, neoprene mats were selected to protect gages installed on geotextile, and wick drains were selected to protect gages installed on geogrid. These methods resulted in survivability rates of 73% and 100%, respectively, in the full-scale installation. This research and documentation may be used to train technicians to install and calibrate geosynthetic-mounted foil-type strain gages. In addition, technicians should be able to install gages in the field with a high probability of gage survivability using the protection methods recommended.

  9. Wechsler Adult Intelligence Scale-Revised Block Design broken configuration errors in nonpenetrating traumatic brain injury.

    PubMed

    Wilde, M C; Boake, C; Sherer, M

    2000-01-01

    Final broken configuration errors on the Wechsler Adult Intelligence Scale-Revised (WAIS-R; Wechsler, 1981) Block Design subtest were examined in 50 moderate and severe nonpenetrating traumatically brain injured adults. Patients were divided into left (n = 15) and right hemisphere (n = 19) groups based on a history of unilateral craniotomy for treatment of an intracranial lesion and were compared to a group with diffuse or negative brain CT scan findings and no history of neurosurgery (n = 16). The percentage of final broken configuration errors was related to injury severity, Benton Visual Form Discrimination Test (VFD; Benton, Hamsher, Varney, & Spreen, 1983) total score and the number of VFD rotation and peripheral errors. The percentage of final broken configuration errors was higher in the patients with right craniotomies than in the left or no craniotomy groups, which did not differ. Broken configuration errors did not occur more frequently on designs without an embedded grid pattern. Right craniotomy patients did not show a greater percentage of broken configuration errors on nongrid designs as compared to grid designs.

  10. TOWARD ERROR ANALYSIS OF LARGE-SCALE FOREST CARBON BUDGETS

    EPA Science Inventory

    Quantification of forest carbon sources and sinks is an important part of national inventories of net greenhouse gas emissions. Several such forest carbon budgets have been constructed, but little effort has been made to analyse the sources of error and how these errors propagate...

  11. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

    The neural-network system error correction method is more precise than the least-squares and spherical-harmonic-function correction methods. Its accuracy is mainly determined by the structure of the neural network. Analysis and simulation show that both the BP and RBF neural-network correction methods achieve high correction accuracy; for small training sample sets, the RBF method is preferable to the BP method when training speed and network scale are taken into account.
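
    A minimal sketch of the RBF idea applied to pointing-error correction: fit Gaussian basis functions on fixed centers to the measured errors by linear least squares, then subtract the fitted surface. The harmonic error model, center grid and widths are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    az = rng.uniform(0.0, 2.0 * np.pi, 200)       # commanded azimuth (rad)
    el = rng.uniform(0.1, 1.4, 200)               # commanded elevation (rad)
    # Synthetic systematic pointing error (an arbitrary harmonic model) plus noise.
    err = (0.02 * np.sin(az) + 0.01 * np.cos(2.0 * az) * np.sin(el)
           + 0.001 * rng.standard_normal(200))

    # Gaussian RBF network: fixed centers on a coarse grid, weights by least squares.
    ca, ce = np.meshgrid(np.linspace(0.0, 2.0 * np.pi, 8), np.linspace(0.1, 1.4, 5))
    centers = np.column_stack([ca.ravel(), ce.ravel()])
    width = 0.8

    def design(a, e):
        d2 = (a[:, None] - centers[:, 0]) ** 2 + (e[:, None] - centers[:, 1]) ** 2
        return np.exp(-d2 / (2.0 * width**2))

    w, *_ = np.linalg.lstsq(design(az, el), err, rcond=None)
    resid = err - design(az, el) @ w
    print(f"raw RMS error: {err.std():.4f} rad; corrected RMS: {resid.std():.4f} rad")
    ```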

  12. Effects of Shame and Guilt on Error Reporting Among Obstetric Clinicians.

    PubMed

    Zabari, Mara Lynne; Southern, Nancy L

    2018-04-17

    To understand how the experiences of shame and guilt, coupled with organizational factors, affect error reporting by obstetric clinicians. Descriptive cross-sectional. A sample of 84 obstetric clinicians from three maternity units in Washington State. In this quantitative inquiry, a variant of the Test of Self-Conscious Affect was used to measure proneness to guilt and shame. In addition, we developed questions to assess attitudes regarding concerns about damaging one's reputation if an error was reported and the choice to keep an error to oneself. Both assessments were analyzed separately and then correlated to identify relationships between constructs. Interviews were used to identify organizational factors that affect error reporting. As a group, mean scores indicated that obstetric clinicians would not choose to keep errors to themselves. However, bivariate correlations showed that proneness to shame was positively correlated to concerns about one's reputation if an error was reported, and proneness to guilt was negatively correlated with keeping errors to oneself. Interview data analysis showed that Past Experience with Responses to Errors, Management and Leadership Styles, Professional Hierarchy, and Relationships With Colleagues were influential factors in error reporting. Although obstetric clinicians want to report errors, their decisions to report are influenced by their proneness to guilt and shame and perceptions of the degree to which organizational factors facilitate or create barriers to restore their self-images. Findings underscore the influence of the organizational context on clinicians' decisions to report errors. Copyright © 2018 AWHONN, the Association of Women’s Health, Obstetric and Neonatal Nurses. Published by Elsevier Inc. All rights reserved.

  14. Estimating effects of limiting factors with regression quantiles

    USGS Publications Warehouse

    Cade, B.S.; Terrell, J.W.; Schroeder, R.L.

    1999-01-01

    In a recent Concepts paper in Ecology, Thomson et al. emphasized that assumptions of conventional correlation and regression analyses fundamentally conflict with the ecological concept of limiting factors, and they called for new statistical procedures to address this problem. The analytical issue is that unmeasured factors may be the active limiting constraint and may induce a pattern of unequal variation in the biological response variable through an interaction with the measured factors. Consequently, changes near the maxima, rather than at the center of response distributions, are better estimates of the effects expected when the observed factor is the active limiting constraint. Regression quantiles provide estimates for linear models fit to any part of a response distribution, including near the upper bounds, and require minimal assumptions about the form of the error distribution. Regression quantiles extend the concept of one-sample quantiles to the linear model by solving an optimization problem of minimizing an asymmetric function of absolute errors. Rank-score tests for regression quantiles provide tests of hypotheses and confidence intervals for parameters in linear models with heteroscedastic errors, conditions likely to occur in models of limiting ecological relations. We used selected regression quantiles (e.g., 5th, 10th, ..., 95th) and confidence intervals to test hypotheses that parameters equal zero for estimated changes in average annual acorn biomass due to forest canopy cover of oak (Quercus spp.) and oak species diversity. Regression quantiles also were used to estimate changes in glacier lily (Erythronium grandiflorum) seedling numbers as a function of lily flower numbers, rockiness, and pocket gopher (Thomomys talpoides fossor) activity, data that motivated the query by Thomson et al. for new statistical procedures. Both example applications showed that effects of limiting factors estimated by changes in some upper regression quantile (e.g., 90-95th) were greater than if effects were estimated by changes in the means from standard linear model procedures. Estimating a range of regression quantiles (e.g., 5-95th) provides a comprehensive description of biological response patterns for exploratory and inferential analyses in observational studies of limiting factors, especially when sampling large spatial and temporal scales.
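    The optimization described above (minimizing an asymmetric function of absolute errors) is implemented in standard statistics libraries. Below is a minimal sketch using statsmodels on synthetic heteroscedastic data standing in for the acorn-biomass example; the variable names and the limiting bound of 0.5 are inventions of the sketch.

```python
# Minimal sketch of regression quantiles on synthetic heteroscedastic data:
# the response is bounded above by 0.5 * canopy, mimicking a limiting factor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
canopy = rng.uniform(0, 100, 200)
biomass = 0.5 * canopy * rng.uniform(0, 1, 200)  # unmeasured factors widen scatter

X = sm.add_constant(canopy)
for q in (0.50, 0.90, 0.95):
    fit = sm.QuantReg(biomass, X).fit(q=q)
    print(f"slope at quantile {q:.2f}: {fit.params[1]:.3f}")
# Upper quantiles approach the limiting slope (0.5); the median (~0.25)
# understates the effect of the measured factor, as the abstract argues.
```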

  15. Simulation of wave propagation in three-dimensional random media

    NASA Technical Reports Server (NTRS)

    Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.

    1993-01-01

    Quantitative error analyses for the simulation of wave propagation in three-dimensional random media, assuming narrow-angle scattering, are presented for plane-wave and spherical-wave geometries. These include the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
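    Such simulations typically represent the medium as a stack of two-dimensional phase screens with a power-law spectrum. The sketch below generates a single screen by filtering complex white noise in Fourier space; the grid size, spacing, and Kolmogorov-like exponent are illustrative assumptions, not values from the paper.

```python
# Sketch: one random phase screen with a power-law spectrum, built by
# shaping white noise in Fourier space (a standard split-step ingredient).
import numpy as np

n, dx = 256, 0.01            # grid points per side, spacing (arbitrary units)
beta = 11.0 / 3.0            # Kolmogorov-like spectral exponent (assumption)
rng = np.random.default_rng(2)

fx = np.fft.fftfreq(n, d=dx)
kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
k = np.hypot(kx, ky)
k[0, 0] = 2 * np.pi / (n * dx)          # avoid the k = 0 singularity

amplitude = k ** (-beta / 2.0)          # square root of the power spectrum
noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
screen = np.fft.ifft2(amplitude * noise).real

# The finite grid bounds the largest and smallest scales represented,
# which is exactly the class of errors quantified in the paper above.
print("screen rms:", screen.std())
```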

  16. Good people who try their best can have problems: recognition of human factors and how to minimise error.

    PubMed

    Brennan, Peter A; Mitchell, David A; Holmes, Simon; Plint, Simon; Parry, David

    2016-01-01

    Human error is as old as humanity itself and is an appreciable cause of mistakes by both organisations and people. Much of the work related to human factors in causing error has originated from aviation, where mistakes can be catastrophic not only for those who contribute to the error, but for passengers as well. The role of human error in medical and surgical incidents, which are often multifactorial, is becoming better understood, and includes both organisational issues (by the employer) and potential human factors (at a personal level). Mistakes that result from individual human factors and from surgical teams should be better recognised and emphasised. Attitudes towards, and acceptance of, preoperative briefing have improved since the introduction of the World Health Organization (WHO) surgical checklist. However, this does not address limitations or other safety concerns that are related to performance, such as stress and fatigue, emotional state, hunger, awareness of what is going on (situational awareness), and other factors that could potentially lead to error. Here we attempt to raise awareness of these human factors, highlight how they can lead to error, and show how they can be minimised in our day-to-day practice. Can hospitals move from being "high risk industries" to "high reliability organisations"? Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  17. Validation of Multiple Tools for Flat Plate Photovoltaic Modeling Against Measured Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, J.; Whitmore, J.; Blair, N.

    2014-08-01

    This report expands upon previous work by the same authors, published in the proceedings of the 40th IEEE Photovoltaic Specialists Conference. In this validation study, comprehensive analysis is performed on nine photovoltaic systems for which NREL could obtain detailed performance data and specifications, including three utility-scale systems and six commercial-scale systems. Multiple photovoltaic performance modeling tools were used to model these nine systems, and the error of each tool was analyzed against quality-controlled measured performance data. This study shows that, excluding identified outliers, all tools achieve annual errors within +/-8% and hourly root mean squared errors less than 7% for all systems. It is further shown using SAM that module model and irradiance input choices can change the annual error with respect to measured data by as much as 6.6% for these nine systems, although all combinations examined still fall within an annual error range of +/-8.5%. Additionally, a seasonal variation in monthly error is shown for all tools. Finally, the effects of irradiance data uncertainty and the use of default loss assumptions on annual error are explored, and two approaches to reduce the error inherent in photovoltaic modeling are proposed.
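    The two headline metrics here (annual energy error and hourly RMSE) are straightforward to compute. The sketch below shows one plausible formulation; the normalization by measured energy is our assumption rather than the report's exact definition, and the function names and toy series are ours.

```python
# Hedged sketch of the validation metrics named above (names are ours).
import numpy as np

def annual_error_pct(modeled_kwh, measured_kwh):
    """Signed annual energy error as a percent of measured annual energy."""
    return 100.0 * (modeled_kwh.sum() - measured_kwh.sum()) / measured_kwh.sum()

def hourly_nrmse_pct(modeled_kwh, measured_kwh):
    """Hourly RMSE as a percent of the mean measured hourly energy."""
    rmse = np.sqrt(np.mean((modeled_kwh - measured_kwh) ** 2))
    return 100.0 * rmse / measured_kwh.mean()

measured = np.array([0.0, 1.2, 3.4, 2.1])     # toy hourly series (kWh)
modeled = np.array([0.1, 1.0, 3.0, 2.5])
print(f"annual error: {annual_error_pct(modeled, measured):+.1f}%")
print(f"hourly NRMSE: {hourly_nrmse_pct(modeled, measured):.1f}%")
```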

  18. Measuring teacher self-report on classroom practices: Construct validity and reliability of the Classroom Strategies Scale-Teacher Form.

    PubMed

    Reddy, Linda A; Dudek, Christopher M; Fabiano, Gregory A; Peters, Stephanie

    2015-12-01

    This article presents information about the construct validity and reliability of a new teacher self-report measure of classroom instructional and behavioral practices (the Classroom Strategies Scales-Teacher Form; CSS-T). The theoretical underpinnings and empirical basis for the instructional and behavioral management scales are presented. Information is provided about the construct validity, internal consistency, test-retest reliability, and freedom from item-bias of the scales. Given previous investigations with the CSS Observer Form, it was hypothesized that internal consistency would be adequate and that confirmatory factor analyses (CFA) of CSS-T data from 293 classrooms would offer empirical support for the CSS-T's Total, Composite and subscales, and yield a similar factor structure to that of the CSS Observer Form. Goodness-of-fit indices of χ2/df, Root Mean Square Error of Approximation, Goodness of Fit Index, and Adjusted Goodness of Fit Index suggested satisfactory fit of proposed CFA models whereas the Comparative Fit Index did not. Internal consistency estimates of .93 and .94 were obtained for the Instructional Strategies and Behavioral Strategies Total scales, respectively. Adequate test-retest reliability was found for instructional and behavioral total scales (r = .79, r = .84, percent agreement 93% and 93%). The CSS-T evidences freedom from item bias on important teacher demographics (age, educational degree, and years of teaching experience). Implications of results are discussed. (c) 2015 APA, all rights reserved.

  19. Interpretation of Errors Made by Mandarin-Speaking Children on the Preschool Language Scales--5th Edition Screening Test

    ERIC Educational Resources Information Center

    Ren, Yonggang; Rattanasone, Nan Xu; Wyver, Shirley; Hinton, Amber; Demuth, Katherine

    2016-01-01

    We investigated typical errors made by Mandarin-speaking children when measured by the Preschool Language Scales-fifth edition, Screening Test (PLS-5 Screening Test). The intention was to provide preliminary data for the development of a guideline for early childhood educators and psychologists who use the test with Mandarin-speaking children.…

  20. Assessing dangerous driving behavior during driving inattention: Psychometric adaptation and validation of the Attention-Related Driving Errors Scale in China.

    PubMed

    Qu, Weina; Ge, Yan; Zhang, Qian; Zhao, Wenguo; Zhang, Kan

    2015-07-01

    Driver inattention is a significant cause of motor vehicle collisions and incidents. The purpose of this study was to translate the Attention-Related Driving Error Scale (ARDES) into Chinese and to verify its reliability and validity. A total of 317 drivers completed the Chinese version of the ARDES, the Dula Dangerous Driving Index (DDDI), the Attention-Related Cognitive Errors Scale (ARCES) and the Mindful Attention Awareness Scale (MAAS) questionnaires. Specific sociodemographic variables and traffic violations were also measured. Psychometric results confirm that the ARDES-China has adequate psychometric properties (Cronbach's alpha=0.88) to be a useful tool for evaluating proneness to attentional errors in the Chinese driving population. First, ARDES-China scores were positively correlated with both DDDI scores and number of accidents in the prior year; in addition, ARDES-China scores were a significant predictor of dangerous driving behavior as measured by DDDI. Second, we found that ARDES-China scores were strongly correlated with ARCES scores and negatively correlated with MAAS scores. Finally, different demographic groups exhibited significant differences in ARDES scores; in particular, ARDES scores varied with years of driving experience. Copyright © 2015 Elsevier Ltd. All rights reserved.
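    Internal-consistency figures like the Cronbach's alpha of 0.88 reported here come from a simple variance ratio. The sketch below computes it for a synthetic respondents-by-items matrix; the sample size matches the study's 317 drivers, but the data and item count are invented.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total).
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(317, 1))                    # shared trait
responses = latent + rng.normal(0.0, 1.0, (317, 20))  # 20 hypothetical items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```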

  1. Quantum error correction in crossbar architectures

    NASA Astrophysics Data System (ADS)

    Helsen, Jonas; Steudtner, Mark; Veldhorst, Menno; Wehner, Stephanie

    2018-07-01

    A central challenge for the scaling of quantum computing systems is the need to control all qubits in the system without a large overhead. A solution for this problem in classical computing comes in the form of so-called crossbar architectures. Recently we made a proposal for a large-scale quantum processor (Li et al., arXiv:1711.03807 (2017)) to be implemented in silicon quantum dots. This system features a crossbar control architecture which limits parallel single-qubit control, but allows the scheme to overcome control scaling issues that form a major hurdle to large-scale quantum computing systems. In this work, we develop a language that makes it possible to easily map quantum circuits to crossbar systems, taking into account their architecture and control limitations. Using this language we show how to map well-known quantum error correction codes such as the planar surface and color codes in this limited control setting with only a small overhead in time. We analyze the logical error behavior of this surface code mapping for estimated experimental parameters of the crossbar system and conclude that logical error suppression to a level useful for real quantum computation is feasible.

  2. Confirmation of the Factor Structure and Measurement Invariance of the Children's Scale of Hostility and Aggression: Reactive/Proactive in Clinic-Referred Children With and Without Autism Spectrum Disorder.

    PubMed

    Farmer, Cristan A; Kaat, Aaron J; Mazurek, Micah O; Lainhart, Janet E; DeWitt, Mary Beth; Cook, Edwin H; Butter, Eric M; Aman, Michael G

    2016-02-01

    The measurement of aggression in its different forms (e.g., physical and verbal) and functions (e.g., impulsive and instrumental) is given little attention in subjects with developmental disabilities (DD). In this study, we confirm the factor structure of the Children's Scale for Hostility and Aggression: Reactive/Proactive (C-SHARP) and demonstrate measurement invariance (consistent performance across clinical groups) between clinic-referred groups with and without autism spectrum disorder (ASD). We also provide evidence of the construct validity of the C-SHARP. Caregivers provided C-SHARP, Child Behavior Checklist (CBCL), and Proactive/Reactive Rating Scale (PRRS) ratings for 644 children, adolescents, and young adults 2-21 years of age. Five types of measurement invariance were evaluated within a confirmatory factor analytic framework. Associations among the C-SHARP, CBCL, and PRRS were explored. The factor structure of the C-SHARP had a good fit to the data from both groups, and strict measurement invariance between ASD and non-ASD groups was demonstrated (i.e., equivalent structure, factor loadings, item intercepts and residuals, and latent variance/covariance between groups). The C-SHARP Problem Scale was more strongly associated with CBCL Externalizing than with CBCL Internalizing, supporting its construct validity. Subjects classified with the PRRS as both Reactive and Proactive had significantly higher C-SHARP Proactive Scores than those classified as Reactive only, who were rated significantly higher than those classified by the PRRS as Neither Reactive nor Proactive. A similar pattern was observed for the C-SHARP Reactive Score. This study provided evidence of the validity of the C-SHARP through confirmation of its factor structure and its relationship with more established scales. The demonstration of measurement invariance indicates that differences in C-SHARP factor scores were the result of differences in the construct rather than of error or unmeasured/nuisance variables. These data suggest that the C-SHARP is useful for quantifying subtypes of aggressive behavior in children, adolescents, and young adults with DD.

  3. Uncertainty in sap flow-based transpiration due to xylem properties

    NASA Astrophysics Data System (ADS)

    Looker, N. T.; Hu, J.; Martin, J. T.; Jencso, K. G.

    2014-12-01

    Transpiration, the evaporative loss of water from plants through their stomata, is a key component of the terrestrial water balance, influencing streamflow as well as regional convective systems. From a plant physiological perspective, transpiration is both a means of avoiding destructive leaf temperatures through evaporative cooling and a consequence of water loss through stomatal uptake of carbon dioxide. Despite its hydrologic and ecological significance, transpiration remains a notoriously challenging process to measure in heterogeneous landscapes. Sap flow methods, which estimate transpiration by tracking the velocity of a heat pulse emitted into the tree sap stream, have proven effective for relating transpiration dynamics to climatic variables. To scale sap flow-based transpiration from the measured domain (often <5 cm of tree cross-sectional area) to the whole-tree level, researchers generally assume constancy of scale factors (e.g., wood thermal diffusivity (k), radial and azimuthal distributions of sap velocity, and conducting sapwood area (As)) through time, across space, and within species. For the widely used heat-ratio sap flow method (HRM), we assessed the sensitivity of transpiration estimates to uncertainty in k (a function of wood moisture content and density) and As. A sensitivity analysis informed by distributions of wood moisture content, wood density and As sampled across a gradient of water availability indicates that uncertainty in these variables can impart substantial error when scaling sap flow measurements to the whole tree. For species with variable wood properties, the application of the HRM assuming a spatially constant k or As may systematically over- or underestimate whole-tree transpiration rates, resulting in compounded error in ecosystem-scale estimates of transpiration.
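    Because the heat-ratio method is linear in the thermal diffusivity k, uncertainty in k propagates directly into the velocity estimate. The sketch below runs a small Monte Carlo on the method's standard relation, v = (k/x)·ln(ratio); the spacing, warming ratio, and the spread assumed for k are invented for illustration.

```python
# Monte Carlo propagation of thermal-diffusivity uncertainty through the
# heat-ratio method (HRM) relation v = (k / x) * ln(r).
import numpy as np

rng = np.random.default_rng(4)
x = 0.6e-2                                 # heater-to-probe spacing (m)
r = 1.15                                   # downstream/upstream warming ratio
k = rng.normal(2.5e-7, 0.3e-7, 10000)      # diffusivity (m^2/s), ~12% spread

v = (k / x) * np.log(r) * 3600.0 * 100.0   # heat-pulse velocity (cm/h)
print(f"v = {v.mean():.2f} +/- {v.std():.2f} cm/h "
      f"({100.0 * v.std() / v.mean():.1f}% relative error)")
# Since v is linear in k, the ~12% spread in k maps one-to-one onto v,
# before any additional error from scaling by conducting sapwood area.
```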

  4. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
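    The Monte Carlo idea in this study is easy to reproduce: repeatedly subsample n trees from the measured population and watch the error of the stand mean shrink with n. The sketch below does this for a synthetic 58-tree stand; the lognormal spread is an assumption of the example, not the study's data.

```python
# Subsampling experiment: error of the mean sap flux estimate vs. sample size.
import numpy as np

rng = np.random.default_rng(5)
fd_all = rng.lognormal(mean=0.0, sigma=0.35, size=58)   # per-tree sap flux
js_true = fd_all.mean()                                 # "stand truth"

for n in (3, 5, 10, 15, 20, 30):
    means = np.array([rng.choice(fd_all, size=n, replace=False).mean()
                      for _ in range(5000)])
    rel_err = np.abs(means - js_true) / js_true
    print(f"n = {n:2d}: 95th-percentile relative error "
          f"{100.0 * np.percentile(rel_err, 95):.1f}%")
# The error curve flattens beyond some n: the study's 'optimal sample size'.
```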

  5. The Delicate Analysis of Short-Term Load Forecasting

    NASA Astrophysics Data System (ADS)

    Song, Changwei; Zheng, Yuan

    2017-05-01

    This paper proposes a new method for short-term load forecasting based on the similar-day method, correlation coefficients and the Fast Fourier Transform (FFT), giving a refined analysis of load variation from three aspects (typical day, correlation coefficient, spectral analysis) and three dimensions (time, industry, and the main factors influencing load characteristics, such as national policy, regional economy, holidays, and electricity price). First, the one-class SVM algorithm is adopted to select the typical day. Second, the correlation coefficient method is used to obtain the direction and strength of the linear relationship between two random variables, which can reflect the influence of macro policy and production scale on the electricity price. Third, a Fourier-transform residual error correction model is proposed to extract the systematic component of the forecast residuals. Finally, simulation results indicate the validity and engineering practicability of the proposed method.
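    A residual correction of this kind can be sketched in a few lines: forecast with a base model, Fourier-transform the residual series, keep the dominant components, and add the reconstruction back as a correction. The decomposition below is our reading of the approach, applied to synthetic hourly residuals.

```python
# Fourier residual correction: keep only the strongest spectral components
# of the residual series and subtract their reconstruction.
import numpy as np

def fft_residual_correction(residuals, n_keep=3):
    """Reconstruct residuals from their n_keep largest-magnitude components."""
    spectrum = np.fft.rfft(residuals)
    keep = np.argsort(np.abs(spectrum))[-n_keep:]
    filtered = np.zeros_like(spectrum)
    filtered[keep] = spectrum[keep]
    return np.fft.irfft(filtered, n=len(residuals))

rng = np.random.default_rng(6)
t = np.arange(168)                                    # one week, hourly
residuals = 5.0 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, 168)
correction = fft_residual_correction(residuals)
print("residual std before/after:",
      residuals.std().round(2), (residuals - correction).std().round(2))
```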

  6. Aerodynamic coefficient identification package dynamic data accuracy determinations: Lessons learned

    NASA Technical Reports Server (NTRS)

    Heck, M. L.; Findlay, J. T.; Compton, H. R.

    1983-01-01

    The errors in the dynamic data output from the Aerodynamic Coefficient Identification Packages (ACIP) flown on Shuttle flights 1, 3, 4, and 5 were determined using the output from the Inertial Measurement Units (IMU). A weighted least-squares batch algorithm was employed. Using an averaging technique, signal detection was enhanced; this allowed improved calibration solutions. Global errors as large as 0.04 deg/sec for the ACIP gyros, 30 mg for the linear accelerometers, and 0.5 deg/sec squared in the angular accelerometer channels were detected and removed with a combination of bias, scale factor, misalignment, and g-sensitive calibration constants. No attempt was made to minimize local ACIP dynamic data deviations representing sensed high-frequency vibration or instrument noise. The resulting 1-sigma calibrated ACIP global accuracies were within 0.003 deg/sec, 1.0 mg, and 0.05 deg/sec squared for the gyros, linear accelerometers, and angular accelerometers, respectively.
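    The core of such a batch calibration can be illustrated with a single channel: regress the sensed rate against the IMU-derived reference to recover bias and scale-factor error by weighted least squares. The sketch below is a toy version with invented numbers, not the flight algorithm.

```python
# Weighted least-squares recovery of a gyro channel's scale factor and bias
# against a reference rate: omega_meas = scale * omega_ref + bias + noise.
import numpy as np

rng = np.random.default_rng(7)
omega_ref = rng.uniform(-5.0, 5.0, 500)            # reference rate (deg/s)
bias_true, scale_true, sigma = 0.02, 1.001, 0.003  # injected errors
omega_meas = scale_true * omega_ref + bias_true + rng.normal(0, sigma, 500)

A = np.column_stack([omega_ref, np.ones_like(omega_ref)])
w = np.sqrt(np.full(500, 1.0 / sigma**2))          # per-sample weights
(scale_est, bias_est), *_ = np.linalg.lstsq(A * w[:, None], omega_meas * w,
                                            rcond=None)
print(f"scale factor = {scale_est:.5f}, bias = {bias_est:.4f} deg/s")
```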

  7. Importance of Geosat orbit and tidal errors in the estimation of large-scale Indian Ocean variations

    NASA Technical Reports Server (NTRS)

    Perigaud, Claire; Zlotnicki, Victor

    1992-01-01

    To improve the accuracy of estimates of large-scale meridional sea-level variations, Geosat ERM data over the Indian Ocean for a 26-month period were processed using two different techniques of orbit error reduction. The first technique removes an along-track polynomial of degree 1 over arcs of about 5000 km; the second removes an along-track once-per-revolution sine wave with a wavelength of about 40,000 km. Results show that the polynomial technique produces stronger attenuation of both the tidal error and the large-scale oceanic signal. After filtering, the residual difference between the two methods represents 44 percent of the total variance and 23 percent of the annual variance. The sine-wave method yields a larger estimate of annual and interannual meridional variations.
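    The once-per-revolution correction amounts to fitting a sine of known wavelength to along-track residuals by linear least squares on sine/cosine terms. The sketch below applies it to a synthetic pass; amplitude, phase, and noise level are invented.

```python
# Fit and remove a once-per-revolution (40,000 km wavelength) sine wave
# from along-track sea-level residuals.
import numpy as np

rng = np.random.default_rng(8)
s = np.linspace(0.0, 40000.0, 2000)                 # along-track distance (km)
orbit_err = 0.12 * np.sin(2 * np.pi * s / 40000.0 + 0.7)
ssh = orbit_err + 0.05 * rng.normal(size=s.size)    # residual sea level (m)

A = np.column_stack([np.sin(2 * np.pi * s / 40000.0),
                     np.cos(2 * np.pi * s / 40000.0),
                     np.ones_like(s)])
coef, *_ = np.linalg.lstsq(A, ssh, rcond=None)
corrected = ssh - A @ coef
print("std before/after removal:",
      ssh.std().round(3), corrected.std().round(3))
```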

  8. Using Scaling to Understand, Model and Predict Global Scale Anthropogenic and Natural Climate Change

    NASA Astrophysics Data System (ADS)

    Lovejoy, S.; del Rio Amador, L.

    2014-12-01

    The atmosphere is variable over twenty orders of magnitude in time (≈10^-3 to 10^17 s) and almost all of the variance is in the spectral "background", which we show can be divided into five scaling regimes: weather, macroweather, climate, macroclimate and megaclimate. We illustrate this with instrumental and paleo data. Based on the signs of the fluctuation exponent H, we argue that while the weather is "what you get" (H>0: fluctuations increasing with scale), it is macroweather (H<0: fluctuations decreasing with scale) - not climate - "that you expect". The conventional framework that treats the background as close to white noise and focuses on quasi-periodic variability assumes a spectrum that is in error by a factor of a quadrillion (≈10^15). Using this scaling framework, we can quantify the natural variability, distinguish it from anthropogenic variability, test various statistical hypotheses and make stochastic climate forecasts. For example, we estimate the probability that the warming is simply a giant century-long natural fluctuation to be less than 1%, most likely less than 0.1%, and estimate return periods for natural warming events of different strengths and durations, including the slowdown ("pause") in the warming since 1998. The return period for the pause was found to be 20-50 years, i.e. not very unusual; however, it immediately follows a 6-year "pre-pause" warming event of almost the same magnitude with a similar return period (30-40 years). To improve on these unconditional estimates, we can use scaling models to exploit the long-range memory of the climate process to make accurate stochastic forecasts of the climate, including the pause. We illustrate stochastic forecasts on monthly- and annual-scale series of global and northern hemisphere surface temperatures. We obtain forecast skill nearly as high as the theoretical (scaling) predictability limits allow: for example, using hindcasts we find that at 10-year forecast horizons we can still explain ≈15% of the anomaly variance. These scaling hindcasts have comparable - or smaller - RMS errors than existing GCMs. We discuss how these can be further improved by going beyond time series forecasts to space-time forecasts.
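    The fluctuation exponent H that organizes these regimes can be estimated with Haar fluctuations: average the absolute difference between the means of the two halves of each window, and regress against window length in log-log space. The sketch below verifies the two reference behaviours (H ≈ -0.5 for white noise, H ≈ +0.5 for a random walk); it is a generic estimator, not the authors' code.

```python
# Haar-fluctuation estimate of the exponent H: S(dt) ~ dt^H, where S is the
# mean absolute difference of half-window means.
import numpy as np

def haar_H(series, scales):
    fluct = []
    for s in scales:
        n = (len(series) // s) * s
        w = series[:n].reshape(-1, s)
        half = s // 2
        d = np.abs(w[:, half:].mean(axis=1) - w[:, :half].mean(axis=1))
        fluct.append(d.mean())
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(9)
white = rng.normal(size=4096)
scales = [4, 8, 16, 32, 64, 128]
print("H, white noise :", round(haar_H(white, scales), 2))             # ~ -0.5
print("H, random walk :", round(haar_H(np.cumsum(white), scales), 2))  # ~ +0.5
# H < 0 means fluctuations decrease with scale (macroweather-like);
# H > 0 means they grow with scale (weather-like).
```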

  9. Psychometric Properties of the Procrastination Assessment Scale-Student (PASS) in a Student Sample of Sabzevar University of Medical Sciences

    PubMed Central

    Mortazavi, Forough; Mortazavi, Saideh S.; Khosrorad, Razieh

    2015-01-01

    Background: Procrastination is a common behavior which affects different aspects of life. The procrastination assessment scale-student (PASS) evaluates academic procrastination apropos its frequency and reasons. Objectives: The aims of the present study were to translate, culturally adapt, and validate the Farsi version of the PASS in a sample of Iranian medical students. Patients and Methods: In this cross-sectional study, the PASS was translated into Farsi through the forward-backward method, and its content validity was thereafter assessed by a panel of 10 experts. The Farsi version of the PASS was subsequently distributed among 423 medical students. The internal reliability of the PASS was assessed using Cronbach’s alpha. An exploratory factor analysis (EFA) was conducted on 18 items and then 28 items of the scale to find new models. The construct validity of the scale was assessed using both EFA and confirmatory factor analysis. The predictive validity of the scale was evaluated by calculating the correlation between the academic procrastination scores and the students’ average scores in the previous semester. Results: The reliability of the first and second parts of the scale was 0.781 and 0.861, respectively. An EFA on 18 items of the scale found 4 factors which jointly explained 53.2% of the variance: the model was marginally acceptable (root mean square error of approximation [RMSEA] = 0.098, standardized root mean square residual [SRMR] = 0.076, χ2/df = 4.8, comparative fit index [CFI] = 0.83). An EFA on 28 items of the scale found 4 factors which altogether explained 42.62% of the variance: the model was acceptable (RMSEA = 0.07, SRMR = 0.07, χ2/df = 2.8, incremental fit index = 0.90, CFI = 0.90). There was a negative correlation between the procrastination scores and the students’ average scores (r = -0.131, P = 0.02). Conclusions: The Farsi version of the PASS is a valid and reliable tool to measure academic procrastination in Iranian undergraduate medical students. PMID:26473078

  10. Determination of Membrane-Insertion Free Energies by Molecular Dynamics Simulations

    PubMed Central

    Gumbart, James; Roux, Benoît

    2012-01-01

    The accurate prediction of membrane-insertion probability for arbitrary protein sequences is a critical challenge to identifying membrane proteins and determining their folded structures. Although algorithms based on sequence statistics have had moderate success, a complete understanding of the energetic factors that drive the insertion of membrane proteins is essential to thoroughly meeting this challenge. In the last few years, numerous attempts to define a free-energy scale for amino-acid insertion have been made, yet disagreement between most experimental and theoretical scales persists. However, for a recently resolved water-to-bilayer scale, it is found that molecular dynamics simulations that carefully mimic the conditions of the experiment can reproduce experimental free energies, even when using the same force field as previous computational studies that were cited as evidence of this disagreement. Therefore, it is suggested that experimental and simulation-based scales can both be accurate and that discrepancies stem from disparities in the microscopic processes being considered rather than methodological errors. Furthermore, these disparities make the development of a single universally applicable membrane-insertion free energy scale difficult. PMID:22385850

  11. Evaluation of drug administration errors in a teaching hospital

    PubMed Central

    2012-01-01

    Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were number, type and clinical importance of errors and associated risk factors. Drug administration error rate was calculated with and without wrong time errors. Relationship between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time error of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risks of error in a drug administration were for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patient under the nurse's care. Conclusion Medication administration errors are frequent. The identification of its determinants helps to undertake designed interventions. PMID:22409837

  12. Evaluation of drug administration errors in a teaching hospital.

    PubMed

    Berdot, Sarah; Sabatier, Brigitte; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre

    2012-03-12

    Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Prospective study based on disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were number, type and clinical importance of errors and associated risk factors. Drug administration error rate was calculated with and without wrong time errors. Relationship between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time error of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risks of error in a drug administration were for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patient under the nurse's care. Medication administration errors are frequent. The identification of its determinants helps to undertake designed interventions.

  13. Factor structure and construct validity of the Generalized Anxiety Disorder 7-item (GAD-7) among Portuguese college students.

    PubMed

    Bártolo, Ana; Monteiro, Sara; Pereira, Anabela

    2017-09-28

    The Generalized Anxiety Disorder 7-item (GAD-7) scale has been presented as a reliable and valid measure to assess generalized anxiety symptoms in several clinical settings and among the general population. However, some studies did not support the original one-dimensional structure of the GAD-7 tool. Our main aim was to examine the factor structure of the GAD-7, comparing the fit of the one-factor model with a two-factor model (3 symptoms of a somatic nature and 4 of a cognitive-emotional nature) in a sample of college students. This validation study, with data collected cross-sectionally, included 1,031 Portuguese college students attending courses in the six schools of the Polytechnic Institute of Coimbra, Coimbra, Portugal. Measures included the GAD-7, the Hospital Anxiety and Depression Scale (HADS) and the University Student Risk Behaviors Questionnaire. Confirmatory factor analysis (CFA) procedures showed that neither factor structure fit the data well. Thus, a modified single-factor model allowing the error terms of the items associated with relaxing difficulties and irritability to covary was an appropriate solution. Additionally, this factor structure showed configural and metric invariance across gender. Good convergent validity was found by correlating the scale with global anxiety and depression. However, the measure showed a weak association with consumption behaviors. Our results are relevant to clinical practice, since this comprehensive approach to the GAD-7 contributes to understanding the trajectory of generalized anxiety symptoms and their correlates within the university setting.

  14. Exploring human error in military aviation flight safety events using post-incident classification systems.

    PubMed

    Hooper, Brionny J; O'Hare, David P A

    2013-08-01

    Human error classification systems theoretically allow researchers to analyze postaccident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation. The Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error. The CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice the proportion of rotary-wing incidents (44%) as of fixed-wing incidents (23%). Both classificatory systems showed significant relationships between precursor factors such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary-wing incidents and are related to higher-level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.

  15. Optimizing dynamic downscaling in one-way nesting using a regional ocean model

    NASA Astrophysics Data System (ADS)

    Pham, Van Sy; Hwang, Jin Hwan; Ku, Hyeyun

    2016-10-01

    Dynamical downscaling with nested regional oceanographic models has been demonstrated to be an effective approach both for operational forecasting of sea conditions on regional scales and for projections of future climate change and its impact on the ocean. However, when nesting procedures are carried out in dynamic downscaling from a larger-scale model or set of observations to a smaller scale, errors are unavoidable due to the differences in grid sizes and updating intervals. The present work assesses the impact of errors produced by nesting procedures on the downscaled results from Ocean Regional Circulation Models (ORCMs). Errors are identified and evaluated based on their sources and characteristics by employing the Big-Brother Experiment (BBE). The BBE uses the same model to produce both the nesting and nested simulations, so it addresses those error sources separately (i.e., without combining the contributions of errors from different sources). Here, we focus on errors resulting from differences in spatial grids, updating times and domain sizes. After the BBE was run separately for diverse cases, a Taylor diagram was used to analyze the results and recommend an optimal combination of grid size, updating period and domain size. Finally, the suggested setups for the downscaling were evaluated by examining the spatial correlations of variables and the relative magnitudes of variances between the nested model and the original data.

  16. Personal hygiene among military personnel: developing and testing a self-administered scale.

    PubMed

    Saffari, Mohsen; Koenig, Harold G; Pakpour, Amir H; Sanaeinasab, Hormoz; Jahan, Hojat Rshidi; Sehlo, Mohammad Gamal

    2014-03-01

    Good personal hygiene (PH) behavior is recommended to prevent contagious diseases, and members of military forces may be at high risk for contracting contagious diseases. The aim of this study was to develop and test a new questionnaire on PH for soldiers. Participants were all male and from different military settings throughout Iran. Using a five-stage guideline, a panel of experts in the Persian language (Farsi) developed a 21-item self-administered questionnaire. Face and content validity of the first-draft items were assessed. The questionnaire was then translated and subsequently back-translated into English, and both the Farsi and English versions were tested in pilot studies. The consistency and stability of the questionnaire were tested using Cronbach's alpha and the test-retest strategy. The final scale was administered to a sample of 502 military personnel. Exploratory and confirmatory factor analyses evaluated the structure of the scale. Both the convergent and discriminative validity of the scale were also determined. Cronbach's alpha coefficients were >0.85. Principal component analysis demonstrated a uni-dimensional structure that explained 59% of the variance in PH behaviors. Confirmatory factor analysis indicated a good fit (goodness-of-fit index = 0.902; comparative fit index = 0.923; root mean square error of approximation = 0.0085). The results show that this new PH scale has solid psychometric properties for testing PH behaviors among an Iranian sample of military personnel. We conclude that this scale can be a useful tool for assessing PH behaviors in military personnel. Further research is needed to determine the scale's value in other countries and cultures.

  17. Reducing Errors in Satellite Simulated Views of Clouds with an Improved Parameterization of Unresolved Scales

    NASA Astrophysics Data System (ADS)

    Hillman, B. R.; Marchand, R.; Ackerman, T. P.

    2016-12-01

    Satellite instrument simulators have emerged as a means to reduce errors in model evaluation by producing simulated or pseudo-retrievals from model fields, which account for limitations in the satellite retrieval process. Because of the mismatch in resolved scales between satellite retrievals and large-scale models, model cloud fields must first be downscaled to scales consistent with satellite retrievals. This downscaling is analogous to that required for model radiative transfer calculations. The assumption is often made in both model radiative transfer codes and satellite simulators that the unresolved clouds follow maximum-random overlap with horizontally homogeneous cloud condensate amounts. We examine errors in simulated MISR and CloudSat retrievals that arise due to these assumptions by applying the MISR and CloudSat simulators to cloud resolving model (CRM) output generated by the Super-parameterized Community Atmosphere Model (SP-CAM). Errors are quantified by comparing simulated retrievals performed directly on the CRM fields with those simulated by first averaging the CRM fields to approximately 2-degree resolution, applying a "subcolumn generator" to regenerate pseudo-resolved cloud and precipitation condensate fields, and then applying the MISR and CloudSat simulators on the regenerated condensate fields. We show that errors due to both assumptions of maximum-random overlap and homogeneous condensate are significant (relative to uncertainties in the observations and other simulator limitations). The treatment of precipitation is particularly problematic for CloudSat-simulated radar reflectivity. We introduce an improved subcolumn generator for use with the simulators, and show that these errors can be greatly reduced by replacing the maximum-random overlap assumption with the more realistic generalized overlap and incorporating a simple parameterization of subgrid-scale cloud and precipitation condensate heterogeneity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. SAND2016-7485 A
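    A subcolumn generator of the kind discussed here can be written compactly. The sketch below implements the textbook maximum-random rule (vertically contiguous cloud maximally overlapped, cloud blocks separated by clear air randomly overlapped) for a toy cloud-fraction profile; it is a generic illustration, not the improved generator the abstract introduces.

```python
# Maximum-random overlap subcolumn generator: layer k is cloudy in a
# subcolumn when its rank x_k <= cf_k; ranks are reused through contiguous
# cloud (maximum overlap) and re-randomized below clear layers.
import numpy as np

def max_random_subcolumns(cf, n_sub, rng):
    """Boolean cloud masks of shape (n_sub, n_layers); cf is top-to-bottom."""
    n_lay = len(cf)
    u = rng.random((n_sub, n_lay))
    x = np.empty((n_sub, n_lay))
    x[:, 0] = u[:, 0]
    for k in range(1, n_lay):
        cloudy_above = x[:, k - 1] <= cf[k - 1]
        x[:, k] = np.where(cloudy_above,
                           x[:, k - 1],                            # max overlap
                           cf[k - 1] + u[:, k] * (1 - cf[k - 1]))  # re-draw
    return x <= cf

cf = np.array([0.3, 0.3, 0.0, 0.2])     # toy profile with a clear layer
mask = max_random_subcolumns(cf, 200000, np.random.default_rng(10))
print("per-layer cover:", mask.mean(axis=0).round(3))          # reproduces cf
print("total cloud cover:", mask.any(axis=1).mean().round(3))  # ~0.44
```

    Replacing the rank-reuse rule with a decorrelation-length weighting gives a generalized-overlap generator of the kind the abstract recommends; drawing per-subcolumn condensate amounts would address the homogeneity assumption.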

  18. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  19. On the assimilation set-up of ASCAT soil moisture data for improving streamflow catchment simulation

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Tarpanelli, Angelica; Brocca, Luca; Casalí, Javier

    2018-01-01

    Assimilation of remotely sensed surface soil moisture (SSM) data into hydrological catchment models has been identified as a means to improve streamflow simulations, but reported results vary markedly depending on the particular model, catchment and assimilation procedure used. In this study, the influence of key aspects, such as the type of model, the re-scaling technique and the SSM observation error considered, was evaluated. For this aim, Advanced SCATterometer (ASCAT) SSM observations were assimilated through the ensemble Kalman filter into two hydrological models of different complexity (namely MISDc and TOPLATS) run on two Mediterranean catchments of similar size (750 km2). Three different re-scaling techniques were evaluated (linear re-scaling, variance matching and cumulative distribution function matching), and SSM observation error values ranging from 0.01% to 20% were considered. Four different efficiency measures were used for evaluating the results. Increases in Nash-Sutcliffe efficiency (0.03-0.15) and efficiency indices (10-45%) were obtained, especially when linear re-scaling and observation errors within 4-6% were considered. This study found that there is potential to improve streamflow prediction through data assimilation of remotely sensed SSM in catchments of different characteristics and with hydrological models of different conceptualization schemes, but a careful evaluation of the observation error and of the re-scaling technique set-up is required.
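    The three re-scaling options compared in the study are all one-liners on an empirical series. The sketch below implements them for a synthetic biased retrieval; series lengths, biases and noise levels are invented.

```python
# Linear re-scaling, variance matching, and CDF matching of a satellite
# soil-moisture series onto a model climatology.
import numpy as np

def linear_rescale(sat, model):
    slope, intercept = np.polyfit(sat, model, 1)   # regression of model on sat
    return slope * sat + intercept

def variance_match(sat, model):
    return (sat - sat.mean()) / sat.std() * model.std() + model.mean()

def cdf_match(sat, model):
    ranks = np.searchsorted(np.sort(sat), sat, side='right') / len(sat)
    return np.quantile(model, ranks)               # empirical quantile mapping

rng = np.random.default_rng(11)
model_ssm = np.clip(rng.normal(0.25, 0.05, 1000), 0.0, 0.5)
sat_ssm = 0.6 * model_ssm + 0.10 + rng.normal(0, 0.02, 1000)  # biased retrieval

for name, f in [("linear", linear_rescale), ("variance", variance_match),
                ("cdf", cdf_match)]:
    bias = f(sat_ssm, model_ssm).mean() - model_ssm.mean()
    print(f"{name:8s} re-scaling residual bias: {bias:+.4f}")
```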

  20. Characterizing the utility of the TMPA real-time product for hydrologic predictions over global river basins across scales

    NASA Astrophysics Data System (ADS)

    Gao, H.; Zhang, S.; Nijssen, B.; Zhou, T.; Voisin, N.; Sheffield, J.; Lee, K.; Shukla, S.; Lettenmaier, D. P.

    2017-12-01

    Despite its errors and uncertainties, the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis real-time product (TMPA-RT) has been widely used for hydrological monitoring and forecasting due to its timely availability for real-time applications. To evaluate the utility of TMPA-RT in hydrologic predictions, many studies have compared modeled streamflows driven by TMPA-RT against gauge data. However, because of the limited availability of streamflow observations in data sparse regions, there is still a lack of comprehensive comparisons for TMPA-RT based hydrologic predictions at the global scale. Furthermore, it is expected that its skill is less optimal at the subbasin scale than the basin scale. In this study, we evaluate and characterize the utility of the TMPA-RT product over selected global river basins during the period of 1998 to 2015 using the TMPA research product (TMPA-RP) as a reference. The Variable Infiltration Capacity (VIC) model, which was calibrated and validated previously, is adopted to simulate streamflows driven by TMPA-RT and TMPA-RP, respectively. The objective of this study is to analyze the spatial and temporal characteristics of the hydrologic predictions by answering the following questions: (1) How do the precipitation errors associated with the TMPA-RT product transform into streamflow errors with respect to geographical and climatological characteristics? (2) How do streamflow errors vary across scales within a basin?

  1. Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures

    DTIC Science & Technology

    2016-06-01

    Keywords: inventory management improvement plan, mean of absolute scaled error, lead time adjusted squared error, forecast accuracy, benchmarking, naïve method...

    Abbreviations (from the report front matter): ...Manager; JASA - Journal of the American Statistical Association; LASE - Lead-time Adjusted Squared Error; LCI - Life Cycle Indicator; MA - Moving Average; MAE...Mean Squared Error; NAVSUP - Naval Supply Systems Command; NDAA - National Defense Authorization Act; NIIN - National Individual Identification Number
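    The mean absolute scaled error named in the keywords (Hyndman and Koehler's MASE) scales the forecast MAE by the in-sample MAE of the one-step naive method, so values below 1 beat the naive benchmark. A minimal sketch with toy demand data:

```python
# MASE = mean(|actual - forecast|) / mean in-sample one-step naive error.
import numpy as np

def mase(train, actual, forecast):
    naive_mae = np.mean(np.abs(np.diff(train)))   # in-sample naive errors
    return np.mean(np.abs(actual - forecast)) / naive_mae

train = np.array([100.0, 102.0, 101.0, 105.0, 107.0, 106.0])
actual = np.array([108.0, 110.0])
forecast = np.array([106.0, 106.0])               # naive: repeat last value
print(f"MASE = {mase(train, actual, forecast):.2f}")   # 1.50 here
```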

  2. Malingering in Toxic Exposure. Classification Accuracy of Reliable Digit Span and WAIS-III Digit Span Scaled Scores

    ERIC Educational Resources Information Center

    Greve, Kevin W.; Springer, Steven; Bianchini, Kevin J.; Black, F. William; Heinly, Matthew T.; Love, Jeffrey M.; Swift, Douglas A.; Ciota, Megan A.

    2007-01-01

    This study examined the sensitivity and false-positive error rate of reliable digit span (RDS) and the WAIS-III Digit Span (DS) scaled score in persons alleging toxic exposure and determined whether error rates differed from published rates in traumatic brain injury (TBI) and chronic pain (CP). Data were obtained from the files of 123 persons…

  3. Scaled CMOS Technology Reliability Users Guide

    NASA Technical Reports Server (NTRS)

    White, Mark

    2010-01-01

    The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect for the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly-reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta)=1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is presented, revealing a power relationship. General models describing the soft error rates across scaled product generations are presented. The analysis methodology may be applied to other scaled microelectronic products and their key parameters.

  4. Minimizing driver errors: examining factors leading to failed target tracking and detection.

    DOT National Transportation Integrated Search

    2013-06-01

    Driving a motor vehicle is a common practice for many individuals. Although driving becomes repetitive and a very habitual task, errors can occur that lead to accidents. One factor that can be a cause for such errors is a lapse in attention or a ...

  5. Development of an FAA-EUROCONTROL technique for the analysis of human error in ATM : final report.

    DOT National Transportation Integrated Search

    2002-07-01

    Human error has been identified as a dominant risk factor in safety-oriented industries such as air traffic control (ATC). However, little is known about the factors leading to human errors in current air traffic management (ATM) systems. The first s...

  6. Calibration and error analysis of metal-oxide-semiconductor field-effect transistor dosimeters for computed tomography radiation dosimetry.

    PubMed

    Trattner, Sigal; Prinsen, Peter; Wiegert, Jens; Gerland, Elazar-Lars; Shefer, Efrat; Morton, Tom; Thompson, Carla M; Yagil, Yoad; Cheng, Bin; Jambawalikar, Sachin; Al-Senan, Rani; Amurao, Maxwell; Halliburton, Sandra S; Einstein, Andrew J

    2017-12-01

    Metal-oxide-semiconductor field-effect transistors (MOSFETs) serve as a helpful tool for organ radiation dosimetry and their use has grown in computed tomography (CT). While different approaches have been used for MOSFET calibration, those using the commonly available 100 mm pencil ionization chamber have not incorporated measurements performed throughout its length, and moreover, no previous work has rigorously evaluated the multiple sources of error involved in MOSFET calibration. In this paper, we propose a new MOSFET calibration approach to translate MOSFET voltage measurements into absorbed dose from CT, based on serial measurements performed throughout the length of a 100-mm ionization chamber, and perform an analysis of the errors of MOSFET voltage measurements and four sources of error in calibration. MOSFET calibration was performed at two sites, to determine single calibration factors for tube potentials of 80, 100, and 120 kVp, using a 100-mm-long pencil ion chamber and a cylindrical computed tomography dose index (CTDI) phantom of 32 cm diameter. The dose profile along the 100-mm ion chamber axis was sampled in 5 mm intervals by nine MOSFETs in the nine holes of the CTDI phantom. Variance of the absorbed dose was modeled as a sum of the MOSFET voltage measurement variance and the calibration factor variance, the latter comprising three main subcomponents: ionization chamber reading variance, MOSFET-to-MOSFET variation and a contribution related to the fact that the average calibration factor of a few MOSFETs was used as an estimate for the average value of all MOSFETs. MOSFET voltage measurement error was estimated based on sets of repeated measurements. The overall error in the calibration factor was calculated from the above analysis. Calibration factors determined were close to those reported in the literature and by the manufacturer (~3 mV/mGy), ranging from 2.87 to 3.13 mV/mGy. The error σ_V of a MOSFET voltage measurement was shown to be proportional to the square root of the voltage V: σ_V = c·√V, where c = 0.11 mV^1/2. A main contributor to the error in the calibration factor was the ionization chamber reading error, at 5%. The usage of a single calibration factor for all MOSFETs introduced an additional error of about 5-7%, depending on the number of MOSFETs that were used to determine the single calibration factor. The expected overall error in a high-dose region (~30 mGy) was estimated to be about 8%, compared to 6% when an individual MOSFET calibration was performed. For a low-dose region (~3 mGy), these values were 13% and 12%. A MOSFET calibration method was developed using a 100-mm pencil ion chamber and a CTDI phantom, accompanied by an absorbed dose error analysis reflecting multiple sources of measurement error. When using a single calibration factor, per tube potential, for different MOSFETs, only a small error was introduced into absorbed dose determinations, thus supporting the use of a single calibration factor for experiments involving many MOSFETs, such as those required to accurately estimate radiation effective dose. © 2017 American Association of Physicists in Medicine.
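    The error budget described here combines independent relative errors in quadrature. The sketch below reproduces the high-dose case from the numbers quoted in the abstract; treating the budget as exactly these three terms is our simplification.

```python
# Combine independent relative-error components in quadrature.
import numpy as np

def combine_in_quadrature(*relative_errors):
    return float(np.sqrt(np.sum(np.square(relative_errors))))

# High-dose example: ~30 mGy at ~3 mV/mGy gives V ~ 90 mV, so the voltage
# term sigma_V / V = c / sqrt(V) with c = 0.11 mV^0.5 is ~1.2%.
voltage_term = 0.11 / np.sqrt(90.0)
total = combine_in_quadrature(voltage_term, 0.05, 0.06)  # chamber, single factor
print(f"overall relative error ~ {100 * total:.0f}%")    # ~8%, as reported
```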

  7. High-resolution inversion of OMI formaldehyde columns to quantify isoprene emission on ecosystem-relevant scales: application to the southeast US

    NASA Astrophysics Data System (ADS)

    Kaiser, Jennifer; Jacob, Daniel J.; Zhu, Lei; Travis, Katherine R.; Fisher, Jenny A.; González Abad, Gonzalo; Zhang, Lin; Zhang, Xuesong; Fried, Alan; Crounse, John D.; St. Clair, Jason M.; Wisthaler, Armin

    2018-04-01

    Isoprene emissions from vegetation have a large effect on atmospheric chemistry and air quality. Bottom-up isoprene emission inventories used in atmospheric models are based on limited vegetation information and uncertain land cover data, leading to potentially large errors. Satellite observations of atmospheric formaldehyde (HCHO), a high-yield isoprene oxidation product, provide top-down information to evaluate isoprene emission inventories through inverse analyses. Past inverse analyses have however been hampered by uncertainty in the HCHO satellite data, uncertainty in the time- and NOx-dependent yield of HCHO from isoprene oxidation, and coarse resolution of the atmospheric models used for the inversion. Here we demonstrate the ability to use HCHO satellite data from OMI in a high-resolution inversion to constrain isoprene emissions on ecosystem-relevant scales. The inversion uses the adjoint of the GEOS-Chem chemical transport model at 0.25° × 0.3125° horizontal resolution to interpret observations over the southeast US in August-September 2013. It takes advantage of concurrent NASA SEAC4RS aircraft observations of isoprene and its oxidation products including HCHO to validate the OMI HCHO data over the region, test the GEOS-Chem isoprene oxidation mechanism and NOx environment, and independently evaluate the inversion. This evaluation shows in particular that local model errors in NOx concentrations propagate to biases in inferring isoprene emissions from HCHO data. It is thus essential to correct model NOx biases, which was done here using SEAC4RS observations but can be done more generally using satellite NO2 data concurrently with HCHO. We find in our inversion that isoprene emissions from the widely used MEGAN v2.1 inventory are biased high over the southeast US by 40 % on average, although the broad-scale distributions are correct including maximum emissions in Arkansas/Louisiana and high base emission factors in the oak-covered Ozarks of southeast Missouri. A particularly large discrepancy is in the Edwards Plateau of central Texas where MEGAN v2.1 is too high by a factor of 3, possibly reflecting errors in land cover. The lower isoprene emissions inferred from our inversion, when implemented into GEOS-Chem, decrease surface ozone over the southeast US by 1-3 ppb and decrease the isoprene contribution to organic aerosol from 40 to 20 %.

  8. Predicting groundwater recharge for varying land cover and climate conditions - a global meta-study

    NASA Astrophysics Data System (ADS)

    Mohan, Chinchu; Western, Andrew W.; Wei, Yongping; Saft, Margarita

    2018-05-01

    Groundwater recharge is one of the important factors determining the groundwater development potential of an area. Even though recharge plays a key role in controlling groundwater system dynamics, much uncertainty remains regarding the relationships between groundwater recharge and its governing factors at a large scale. Therefore, this study aims to identify the most influential factors of groundwater recharge, and to develop an empirical model to estimate diffuse rainfall recharge at a global scale. Recharge estimates reported in the literature from various parts of the world (715 sites) were compiled and used in model building and testing exercises. Unlike conventional recharge estimates from water balance, this study used a multimodel inference approach and information theory to explain the relationship between groundwater recharge and influential factors, and to predict groundwater recharge at 0.5° resolution. The results show that meteorological factors (precipitation and potential evapotranspiration) and vegetation factors (land use and land cover) had the most predictive power for recharge. According to the model, long-term global average annual recharge (1981-2014) was 134 mm yr-1 with a prediction error ranging from -8 to 10 mm yr-1 for 97.2 % of cases. The recharge estimates presented in this study are unique and more reliable than the existing global groundwater recharge estimates because of the extensive validation carried out using both independent local estimates collated from the literature and national statistics from the Food and Agriculture Organization (FAO). In a water-scarce future driven by increased anthropogenic development, the results from this study will aid in making informed decisions about groundwater potential at a large scale.
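
    The multimodel inference step can be sketched generically. Below is a minimal, self-contained example of Akaike-weight model averaging over a few hypothetical recharge predictors; the study's actual candidate models, data, and information-theoretic details differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
precip = rng.uniform(200, 2000, n)              # precipitation, mm/yr (synthetic)
pet = rng.uniform(500, 1800, n)                 # potential evapotranspiration, mm/yr
recharge = 0.15 * precip - 0.05 * pet + rng.normal(0, 30, n)

def fit_ols(X, y):
    """Ordinary least squares; return predictions and AIC."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    k = X1.shape[1] + 1                         # parameters incl. error variance
    aic = len(y) * np.log(np.mean((y - X1 @ beta) ** 2)) + 2 * k
    return X1 @ beta, aic

models = {"precip": precip[:, None], "pet": pet[:, None],
          "precip+pet": np.column_stack([precip, pet])}
preds, aics = zip(*(fit_ols(X, recharge) for X in models.values()))
aics = np.array(aics)
weights = np.exp(-0.5 * (aics - aics.min()))
weights /= weights.sum()                        # Akaike weights
averaged = sum(w * p for w, p in zip(weights, preds))
for name, w in zip(models, weights):
    print(f"{name:10s} weight = {w:.3f}")
```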

  9. Applications of integrated human error identification techniques on the chemical cylinder change task.

    PubMed

    Cheng, Ching-Min; Hwang, Sheue-Ling

    2015-03-01

    This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  10. The Depression Anxiety Stress Scales (DASS): normative data and latent structure in a large non-clinical sample.

    PubMed

    Crawford, John R; Henry, Julie D

    2003-06-01

    To provide UK normative data for the Depression Anxiety Stress Scales (DASS) and test its convergent, discriminant and construct validity. Cross-sectional, correlational and confirmatory factor analysis (CFA). The DASS was administered to a non-clinical sample, broadly representative of the general adult UK population (N = 1,771) in terms of demographic variables. Competing models of the latent structure of the DASS were derived from theoretical and empirical sources and evaluated using confirmatory factor analysis. Correlational analysis was used to determine the influence of demographic variables on DASS scores. The convergent and discriminant validity of the measure was examined through correlating the measure with two other measures of depression and anxiety (the HADS and the sAD), and a measure of positive and negative affectivity (the PANAS). The best fitting model (CFI = .93) of the latent structure of the DASS consisted of three correlated factors corresponding to the depression, anxiety and stress scales with correlated error permitted between items comprising the DASS subscales. Demographic variables had only very modest influences on DASS scores. The reliability of the DASS was excellent, and the measure possessed adequate convergent and discriminant validity. Conclusions: The DASS is a reliable and valid measure of the constructs it was intended to assess. The utility of this measure for UK clinicians is enhanced by the provision of large sample normative data.
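
    For readers unfamiliar with the fit statistics cited here, CFI and RMSEA can be computed directly from the model and baseline chi-square values; the sketch below uses made-up chi-squares (only the sample size N = 1,771 comes from the abstract).

```python
import numpy as np

def cfi(chi2_m, df_m, chi2_0, df_0):
    """Comparative Fit Index from model and baseline (null-model) chi-squares."""
    return 1.0 - max(chi2_m - df_m, 0.0) / max(chi2_0 - df_0, 0.0)

def rmsea(chi2_m, df_m, n):
    """Root Mean Square Error of Approximation for sample size n."""
    return np.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))

# Hypothetical chi-square values, for illustration only
print(f"CFI   = {cfi(2200.0, 690.0, 23000.0, 741.0):.3f}")
print(f"RMSEA = {rmsea(2200.0, 690.0, 1771):.3f}")
```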

  11. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.

    PubMed

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-11-11

    Although the GW approximation is recognized as one of the most accurate theories for predicting the excited-state properties of materials, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials.

  12. Imaging phased telescope array study

    NASA Technical Reports Server (NTRS)

    Harvey, James E.

    1989-01-01

    The problems encountered in obtaining a wide field-of-view with large, space-based direct imaging phased telescope arrays were considered. After defining some of the critical systems issues, previous relevant work in the literature was reviewed and summarized. An extensive list was made of potential error sources and the error sources were categorized in the form of an error budget tree including optical design errors, optical fabrication errors, assembly and alignment errors, and environmental errors. After choosing a top-level image quality requirement as a goal, a preliminary top-down error budget allocation was performed; then, based upon engineering experience, detailed analysis, or data from the literature, a bottom-up error budget reallocation was performed in an attempt to achieve an equitable distribution of difficulty in satisfying the various allocations. This exercise provided a realistic allocation for residual off-axis optical design errors in the presence of state-of-the-art optical fabrication and alignment errors. Three different computational techniques were developed for computing the image degradation of phased telescope arrays due to aberrations of the individual telescopes. Parametric studies and sensitivity analyses were then performed for a variety of subaperture configurations and telescope design parameters in an attempt to determine how the off-axis performance of a phased telescope array varies as the telescopes are scaled up in size. The Air Force Weapons Laboratory (AFWL) multipurpose telescope testbed (MMTT) configuration was analyzed in detail with regard to image degradation due to field curvature and distortion of the individual telescopes as they are scaled up in size.
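
    One common way to perform such a budget allocation is to treat the branch errors as independent contributions combined in root-sum-square against the top-level goal; the quadrature assumption and all numbers in this minimal sketch are mine, not the study's.

```python
import numpy as np

# Hypothetical error-budget branches (rms wavefront error, waves);
# the study's actual allocations are not reproduced here.
budget = {
    "optical design (off-axis residuals)": 0.050,
    "optical fabrication": 0.045,
    "assembly and alignment": 0.040,
    "environmental": 0.035,
}

goal = 0.080                                           # top-level goal, waves rms
total = np.sqrt(sum(v ** 2 for v in budget.values()))  # root-sum-square
print(f"RSS total = {total:.3f} waves rms (goal {goal:.3f})")

if total > goal:
    # Reallocate by scaling all branches uniformly to meet the goal; in
    # practice the scaling would reflect the relative difficulty of each branch.
    s = goal / total
    budget = {k: round(v * s, 3) for k, v in budget.items()}
    print("reallocated:", budget)
```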

  13. Comparison of in-situ delay monitors for use in Adaptive Voltage Scaling

    NASA Astrophysics Data System (ADS)

    Pour Aryan, N.; Heiß, L.; Schmitt-Landsiedel, D.; Georgakos, G.; Wirnshofer, M.

    2012-09-01

    In Adaptive Voltage Scaling (AVS) the supply voltage of digital circuits is tuned according to the circuit's actual operating condition, which enables dynamic compensation for PVTA variations. By exploiting the excessive safety margins added in state-of-the-art worst-case designs, considerable power savings are achieved. In our approach, the operating condition of the circuit is monitored by in-situ delay monitors. This paper presents different designs implementing in-situ delay monitors capable of detecting late but still non-erroneous transitions, called Pre-Errors. The developed Pre-Error monitors are integrated into a 16-bit multiplier test circuit, and the resulting Pre-Error AVS system is modeled by a Markov chain in order to determine the power saving potential of each Pre-Error detection approach.
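
    The Markov-chain modeling of a Pre-Error AVS system can be sketched as follows: states are supply-voltage levels, Pre-Error detections push the voltage up, quiet cycles let it step down, and the stationary distribution then gives an expected power saving. All transition probabilities and voltages below are hypothetical, not the paper's.

```python
import numpy as np

# Hypothetical 3-state AVS chain (rows/cols = low, mid, high Vdd).
P = np.array([[0.90, 0.10, 0.00],
              [0.20, 0.70, 0.10],
              [0.00, 0.30, 0.70]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

vdd = np.array([1.00, 1.10, 1.20])   # volts, illustrative
p_rel = (vdd / vdd.max()) ** 2       # dynamic power scales with Vdd^2
print("stationary occupancy:", np.round(pi, 3))
print(f"expected power saving vs worst-case Vdd: {100 * (1 - pi @ p_rel):.1f}%")
```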

  14. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    NASA Astrophysics Data System (ADS)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error, and that the different errors can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, the posterior flux estimates can differ more from the target fluxes than the prior estimates do, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the inversion model can help in understanding the posterior estimates and percentage errors. Stable and realistic sub-regional and monthly flux estimates can be obtained for the western region of AB/SK, but not for the eastern region of ON. This indicates that a real observation-based inversion for the annual provincial emissions is likely to work for the western region, whereas improvements to the current inversion setup are needed before a real inversion is performed for the eastern region.
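
    For a linear transport operator, the CFM step described here has a closed-form solution; the sketch below builds a fully synthetic example (not CarbonTracker data), and varying R and B illustrates the sensitivity to the assumed model-observation mismatch and prior flux error variances noted in the record.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_reg = 120, 6                       # observations, sub-regions
H = rng.uniform(0, 1, (n_obs, n_reg))       # transport/footprint operator
s_true = rng.uniform(0.5, 1.5, n_reg)       # "target" scaling factors
y = H @ s_true + rng.normal(0, 0.05, n_obs)

s_prior = np.ones(n_reg)
B = 0.25 * np.eye(n_reg)                    # prior flux error covariance
R = 0.05 ** 2 * np.eye(n_obs)               # model-observation mismatch covariance

# Minimize J(s) = (y - Hs)' R^-1 (y - Hs) + (s - s0)' B^-1 (s - s0)
Ri, Bi = np.linalg.inv(R), np.linalg.inv(B)
s_post = np.linalg.solve(H.T @ Ri @ H + Bi, H.T @ Ri @ y + Bi @ s_prior)
print("estimation error (%):", np.round(100 * (s_post - s_true) / s_true, 1))
```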

  15. Identifying Human Factors Issues in Aircraft Maintenance Operations

    NASA Technical Reports Server (NTRS)

    Veinott, Elizabeth S.; Kanki, Barbara G.; Shafto, Michael G. (Technical Monitor)

    1995-01-01

    Maintenance operations incidents submitted to the Aviation Safety Reporting System (ASRS) between 1986-1992 were systematically analyzed in order to identify issues relevant to human factors and crew coordination. This exploratory analysis involved 95 ASRS reports which represented a wide range of maintenance incidents. The reports were coded and analyzed according to the type of error (e.g., wrong part, procedural error, non-procedural error), contributing factors (e.g., individual, within-team, cross-team, procedure, tools), result of the error (e.g., aircraft damage or not) as well as the operational impact (e.g., aircraft flown to destination, air return, delay at gate). The main findings indicate that procedural errors were most common (48.4%) and that individual and team actions contributed to the errors in more than 50% of the cases. As for operational results, most errors were either corrected after landing at the destination (51.6%) or required the flight crew to stop enroute (29.5%). Interactions among these variables are also discussed. This analysis is a first step toward developing a taxonomy of crew coordination problems in maintenance. By understanding what variables are important and how they are interrelated, we may develop intervention strategies that are better tailored to the human factor issues involved.

  16. Final report for CCS cross-layer reliability visioning study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather M; Dehon, Andre; Carter, Nick

    The geometric rate of improvement of transistor size and integrated circuit performance known as Moore's Law has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. Looking forward, increasing unpredictability threatens our ability to continue scaling integrated circuits at Moore's Law rates. As the transistors and wires that make up integrated circuits become smaller, they display both greater differences in behavior among devices designed to be identical and greater vulnerability to transient and permanent faults. Conventional design techniques expend energy to tolerate this unpredictability by adding safety margins to a circuit's operating voltage, clock frequency or charge stored per bit. However, the rising energy costs needed to compensate for increasing unpredictability are rapidly becoming unacceptable in today's environment where power consumption is often the limiting factor on integrated circuit performance and energy efficiency is a national concern. Reliability and energy consumption are both reaching key inflection points that, together, threaten to reduce or end the benefits of feature size reduction. To continue beneficial scaling, we must use a cross-layer, full-system-design approach to reliability. Unlike current systems, which charge every device a substantial energy tax in order to guarantee correct operation in spite of rare events, such as one high-threshold transistor in a billion or one erroneous gate evaluation in an hour of computation, cross-layer reliability schemes make reliability management a cooperative effort across the system stack, sharing information across layers so that they only expend energy on reliability when an error actually occurs. Figure 1 illustrates an example of such a system that uses a combination of information from the application and cheap architecture-level techniques to detect errors. When an error occurs, mechanisms at higher levels in the stack correct the error, efficiently delivering correct operation to the user in spite of errors at the device or circuit levels. In the realms of memory and communication, engineers have a long history of success in tolerating unpredictable effects such as fabrication variability, transient upsets, and lifetime wear using information sharing, limited redundancy, and cross-layer approaches that anticipate, accommodate, and suppress errors. Networks use a combination of hardware and software to guarantee end-to-end correctness. Error-detection and correction codes use additional information to correct the most common errors, single-bit transmission errors. When errors occur that cannot be corrected by these codes, the network protocol requests re-transmission of one or more packets until the correct data is received. Similarly, computer memory systems exploit a cross-layer division of labor to achieve high performance with modest hardware.
Rather than demanding that hardware alone provide the virtual memory abstraction, software page-fault and TLB-miss handlers allow a modest piece of hardware, the TLB, to handle the common-case operations on a cycle-by-cycle basis while infrequent misses are handled in system software. Unfortunately, mitigating logic errors is not as simple or as well researched as memory or communication systems. This lack of understanding has led to very expensive solutions. For example, triple-modular redundancy masks errors by triplicating computations in either time or area. This mitigation method imposes a 200% increase in energy consumption for every operation, not just the uncommon failure cases. At a time when computation is rapidly becoming part of our critical civilian and military infrastructure and decreasing costs for computation are fueling our economy and our well being, we cannot afford increasingly unreliable electronics or a stagnation in capabilities per dollar, watt, or cubic meter. If researchers are able to develop techniques that tolerate the growing unpredictability of silicon devices, Moore's Law scaling should continue until at least 2022. During this 12-year time period, transistors, which are the building blocks of electronic devices, will scale their dimensions (feature sizes) from 45nm to 4.5nm.
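
    The triple-modular-redundancy overhead mentioned in this report comes from always running three copies of a computation and voting on the result; a minimal voter looks like this (a generic sketch, not tied to any particular system in the report):

```python
from collections import Counter

def tmr_vote(*replica_outputs):
    """Majority vote over replica outputs; masks a single faulty copy."""
    winner, votes = Counter(replica_outputs).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: more than one replica failed")
    return winner

# Three copies of the same computation, one hit by a transient fault:
print(tmr_vote(42, 42, 41))   # -> 42, at roughly 3x the energy of one copy
```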

  17. Five-Year Wilkinson Microwave Anisotropy Probe Observations: Beam Maps and Window Functions

    NASA Astrophysics Data System (ADS)

    Hill, R. S.; Weiland, J. L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C. L.; Halpern, M.; Page, L.; Dunkley, J.; Gold, B.; Jarosik, N.; Kogut, A.; Limon, M.; Nolta, M. R.; Spergel, D. N.; Tucker, G. S.; Wright, E. L.

    2009-02-01

    Cosmology and other scientific results from the Wilkinson Microwave Anisotropy Probe (WMAP) mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of ~2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of ~1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of ~2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are ~0.5%. WMAP is the result of a partnership between Princeton University and NASA's Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.

  18. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

    Addressing the influence of round grating dividing error, rolling-wheel eccentricity, and rolling-wheel surface shape errors, this paper provides a correction method that combines all of these influence factors into a single composite error model and then corrects the non-circular angle measurement error of the rolling wheel. Software simulation and experimental verification indicate that the composite error correction method can improve the accuracy of rolling-wheel-based diameter measurement. The method has wide application prospects for measurements requiring accuracy better than 5 μm/m.

  19. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse; Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  20. Sources of Response Bias in Older Ethnic Minorities: A Case of Korean American Elderly

    PubMed Central

    Kim, Miyong T.; Ko, Jisook; Yoon, Hyunwoo; Kim, Kim B.; Jang, Yuri

    2015-01-01

    The present study was undertaken to investigate potential sources of response bias in empirical research involving older ethnic minorities and to identify prudent strategies to reduce those biases, using Korean American elderly (KAE) as an example. Data were obtained from three independent studies of KAE (N=1,297; age ≥60) in three states (Florida, New York, and Maryland) from 2000 to 2008. Two common measures, Pearlin’s Mastery Scale and the CES-D scale, were selected for a series of psychometric tests based on classical measurement theory. Survey items were analyzed in depth, using psychometric properties generated from both exploratory factor analysis and confirmatory factor analysis as well as correlational analysis. Two types of potential sources of bias were identified as the most significant contributors to increases in error variances for these psychological instruments. Error variances were most prominent when (1) items were not presented in a manner that was culturally or contextually congruent with respect to the target population and/or (2) the response anchors for items were mixed (e.g., positive vs. negative). The systemic patterns and magnitudes of the biases were also cross-validated for the three studies. The results demonstrate sources and impacts of measurement biases in studies of older ethnic minorities. The identified response biases highlight the need for re-evaluation of current measurement practices, which are based on traditional recommendations that response anchors should be mixed or that the original wording of instruments should be rigidly followed. Specifically, systematic guidelines for accommodating cultural and contextual backgrounds into instrument design are warranted. PMID:26049971

  1. Sources of Response Bias in Older Ethnic Minorities: A Case of Korean American Elderly.

    PubMed

    Kim, Miyong T; Lee, Ju-Young; Ko, Jisook; Yoon, Hyunwoo; Kim, Kim B; Jang, Yuri

    2015-09-01

    The present study was undertaken to investigate potential sources of response bias in empirical research involving older ethnic minorities and to identify prudent strategies to reduce those biases, using Korean American elderly (KAE) as an example. Data were obtained from three independent studies of KAE (N = 1,297; age ≥60) in three states (Florida, New York, and Maryland) from 2000 to 2008. Two common measures, Pearlin's Mastery Scale and the CES-D scale, were selected for a series of psychometric tests based on classical measurement theory. Survey items were analyzed in depth, using psychometric properties generated from both exploratory factor analysis and confirmatory factor analysis as well as correlational analysis. Two types of potential sources of bias were identified as the most significant contributors to increases in error variances for these psychological instruments. Error variances were most prominent when (1) items were not presented in a manner that was culturally or contextually congruent with respect to the target population and/or (2) the response anchors for items were mixed (e.g., positive vs. negative). The systemic patterns and magnitudes of the biases were also cross-validated for the three studies. The results demonstrate sources and impacts of measurement biases in studies of older ethnic minorities. The identified response biases highlight the need for re-evaluation of current measurement practices, which are based on traditional recommendations that response anchors should be mixed or that the original wording of instruments should be rigidly followed. Specifically, systematic guidelines for accommodating cultural and contextual backgrounds into instrument design are warranted.

  2. A Novel INS and Doppler Sensors Calibration Method for Long Range Underwater Vehicle Navigation

    PubMed Central

    Tang, Kanghua; Wang, Jinling; Li, Wanli; Wu, Wenqi

    2013-01-01

    Since the drifts of Inertial Navigation System (INS) solutions are inevitable and also grow over time, a Doppler Velocity Log (DVL) is used to aid the INS to restrain its error growth. Therefore, INS/DVL integration is a common approach for Autonomous Underwater Vehicle (AUV) navigation. The parameters including the scale factor of DVL and misalignments between INS and DVL are key factors which limit the accuracy of the INS/DVL integration. In this paper, a novel parameter calibration method is proposed. An iterative implementation of the method is designed to reduce the error caused by INS initial alignment. Furthermore, a simplified INS/DVL integration scheme is employed. The proposed method is evaluated with both river trial and sea trial data sets. Using 0.03°/h(1σ) ring laser gyroscopes, 5 × 10−5 g(1σ) quartz accelerometers and DVL with accuracy 0.5% V ± 0.5 cm/s, INS/DVL integrated navigation can reach an accuracy of about 1‰ of distance travelled (CEP) in a river trial and 2‰ of distance travelled (CEP) in a sea trial. PMID:24169542
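
    For intuition, the scale factor and (small-angle) misalignment estimation can be posed as a linear least-squares problem when an INS reference velocity is available. The 2-D sketch below with synthetic data is my simplification of the paper's iterative 3-D calibration, not the method itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
v_ins = np.column_stack([rng.uniform(1, 3, n),        # INS velocity in body frame
                         rng.uniform(-0.5, 0.5, n)])  # (synthetic trajectory)

k_true, psi_true = 1.02, np.deg2rad(1.5)              # DVL scale factor, misalignment
c, s = np.cos(psi_true), np.sin(psi_true)
R = np.array([[c, -s], [s, c]])
v_dvl = k_true * v_ins @ R.T + rng.normal(0, 0.02, (n, 2))

# Small-angle model: v_dvl ~ k*v + (k*psi)*J@v, with J the 90-degree generator
J = np.array([[0.0, -1.0], [1.0, 0.0]])
A = np.column_stack([v_ins.reshape(-1), (v_ins @ J.T).reshape(-1)])
x, *_ = np.linalg.lstsq(A, v_dvl.reshape(-1), rcond=None)
k_est, psi_est = x[0], x[1] / x[0]
print(f"k = {k_est:.4f}, psi = {np.rad2deg(psi_est):.2f} deg")
```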

  3. Error Correcting Optical Mapping Data.

    PubMed

    Mukherjee, Kingshuk; Washimkar, Darshan; Muggli, Martin D; Salmela, Leena; Boucher, Christina

    2018-05-26

    Optical mapping is a unique system that is capable of producing high-resolution, high-throughput genomic map data that gives information about the structure of a genome [21]. Recently it has been used for scaffolding contigs and assembly validation for large-scale sequencing projects, including the maize [32], goat [6], and amborella [4] genomes. However, a major impediment in the use of this data is the variety and quantity of errors in the raw optical mapping data, which are called Rmaps. The challenges associated with using Rmap data are analogous to dealing with insertions and deletions in the alignment of long reads. Moreover, they are arguably harder to tackle since the data is numerical and susceptible to inaccuracy. We develop cOMET to error correct Rmap data, which to the best of our knowledge is the only optical mapping error correction method. Our experimental results demonstrate that cOMET has high precision and corrects 82.49% of insertion errors and 77.38% of deletion errors in Rmap data generated from the E. coli K-12 reference genome. Out of the deletion errors corrected, 98.26% are true errors. Similarly, out of the insertion errors corrected, 82.19% are true errors. It also successfully scales to large genomes, improving the quality of 78% and 99% of the Rmaps in the plum and goat genomes, respectively. Lastly, we show the utility of error correction by demonstrating how it improves the assembly of Rmap data. Error corrected Rmap data results in an assembly that is more contiguous, and covers a larger fraction of the genome.

  4. Proposed Interventions to Decrease the Frequency of Missed Test Results

    ERIC Educational Resources Information Center

    Wahls, Terry L.; Cram, Peter

    2009-01-01

    Numerous studies have identified that delays in diagnosis related to the mishandling of abnormal test results are an important contributor to diagnostic errors. Factors contributing to missed results included organizational factors, provider factors and patient-related factors. At the diagnostic error continuing medical education conference…

  5. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    EPA Science Inventory

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...
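
    The two metrics named in this record are commonly defined as below. This is my rendering of the standard positive-data formulations (e.g., Yu et al. 2006); the EPA work described here generalizes them to handle negative values, which this sketch does not attempt.

```python
import numpy as np

def nmbf(model, obs):
    """Normalized mean bias factor (standard positive-data form)."""
    m_bar, o_bar = np.mean(model), np.mean(obs)
    return m_bar / o_bar - 1.0 if m_bar >= o_bar else 1.0 - o_bar / m_bar

def nmaef(model, obs):
    """Normalized mean absolute error factor (standard positive-data form)."""
    ae = np.sum(np.abs(np.asarray(model) - np.asarray(obs)))
    denom = np.sum(obs) if np.mean(model) >= np.mean(obs) else np.sum(model)
    return ae / denom

obs = np.array([2.0, 3.0, 5.0, 4.0])
model = np.array([2.5, 2.8, 6.0, 4.5])
print(f"NMBF  = {nmbf(model, obs):+.3f}")   # positive: net overestimation
print(f"NMAEF = {nmaef(model, obs):.3f}")
```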

  6. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)

    2002-01-01

    One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine the repositories of human factors data to identify the possible relationship between different error classes and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.

  7. Investigation on synchronization of the offset printing process for fine patterning and precision overlay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Dongwoo; Lee, Eonseok; Kim, Hyunchang

    2014-06-21

    Offset printing processes are promising candidates for producing printed electronics due to their capacity for fine patterning and suitability for mass production. To print high-resolution patterns with good overlay using offset printing, the velocities of the two contact surfaces between which ink is transferred should be synchronized perfectly. However, an exact velocity of the contact surfaces is unknown due to several imperfections, including tolerances, blanket swelling, and velocity ripple, which prevents the system from being operated in the synchronized condition. In this paper, a novel method of measurement based on the sticking model of friction force was proposed to determine the best synchronized condition, i.e., the condition in which the rate of synchronization error is minimized. It was verified by experiment that the friction force can accurately represent the rate of synchronization error. Based on the measurement results of the synchronization error, the allowable margin of synchronization error when printing high-resolution patterns was investigated experimentally using reverse offset printing. There is a region where the patterning performance is unchanged even though the synchronization error is varied, and this may be viewed as indirect evidence that printability performance is secured when there is no slip at the contact interface. To understand what happens at the contact surfaces during ink transfer, a deformation model of the blanket's surface was developed. The model estimates how much deformation on the blanket's surface can be borne by the synchronization error when there is no slip at the contact interface. In addition, the model shows that the synchronization error results in scale variation in the machine direction (MD), which means that the printing registration in the MD can be adjusted actively by controlling the synchronization if there is a sufficient margin of synchronization error to guarantee printability. The effect of synchronization on the printing registration was verified experimentally using gravure offset printing. The variations in synchronization result in the differences in the MD scale, and the measured MD scale matches exactly with the modeled MD scale.

  8. Development and Validation of the Body Concealment Scale for Scleroderma.

    PubMed

    Jewett, Lisa R; Malcarne, Vanessa L; Kwakkenbos, Linda; Harcourt, Diana; Rumsey, Nichola; Körner, Annett; Steele, Russell J; Hudson, Marie; Baron, Murray; Haythornthwaite, Jennifer A; Heinberg, Leslie; Wigley, Fredrick M; Thombs, Brett D

    2016-08-01

    Body concealment is a component of social avoidance among people with visible differences from disfiguring conditions, including systemic sclerosis (SSc). The study objective was to develop a measure of body concealment related to avoidance behaviors in SSc. Initial items for the Body Concealment Scale for Scleroderma (BCSS) were selected using item analysis in a development sample of 93 American SSc patients. The factor structure of the BCSS was evaluated in 742 Canadian patients with single-factor, 2-factor, and bifactor confirmatory factor analysis models. Convergent and divergent validity were assessed by comparing the BCSS total score with the Brief-Satisfaction with Appearance Scale (Brief-SWAP) and measures of depressive symptoms and pain. A 2-factor model (Comparative Fit Index [CFI] 0.99, Tucker-Lewis Index [TLI] 0.98, Root Mean Square Error of Approximation [RMSEA] 0.08) fit substantially better than a 1-factor model (CFI 0.95, TLI 0.94, RMSEA 0.15) for the 9-item BCSS, but the Concealment with Clothing and Concealment of Hands factors were highly correlated (α = 0.79). The bifactor model (CFI 0.99, TLI 0.99, RMSEA 0.08) also fit well. In the bifactor model, the omega coefficient was high for the general factor (ω = 0.80), but low for the Concealment with Clothing (ω = 0.01) and Concealment of Hands (ω = 0.33) factors. The BCSS total score correlated more strongly with the Brief-SWAP Social Discomfort (r = 0.59) and Dissatisfaction with Appearance (r = 0.53) subscales than with measures of depressive symptoms and pain. The BCSS sum score is a valid indicator of body concealment in SSc that extends the concepts of body concealment and avoidance beyond the realms of body shape and weight to concerns of individuals with visible differences from SSc. © 2016, American College of Rheumatology.

  9. MUSIC: MUlti-Scale Initial Conditions

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Abel, Tom

    2013-11-01

    MUSIC generates multi-scale initial conditions with multiple levels of refinements for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.

  10. Towards national-scale greenhouse gas emissions evaluation with robust uncertainty estimates

    NASA Astrophysics Data System (ADS)

    Rigby, Matthew; Swallow, Ben; Lunt, Mark; Manning, Alistair; Ganesan, Anita; Stavert, Ann; Stanley, Kieran; O'Doherty, Simon

    2016-04-01

    Through the Deriving Emissions related to Climate Change (DECC) network and the Greenhouse gAs Uk and Global Emissions (GAUGE) programme, the UK's greenhouse gases are now monitored by instruments mounted on telecommunications towers and churches, on a ferry that performs regular transects of the North Sea, on-board a research aircraft and from space. When combined with information from high-resolution chemical transport models such as the Met Office Numerical Atmospheric dispersion Modelling Environment (NAME), these measurements are allowing us to evaluate emissions more accurately than has previously been possible. However, it has long been appreciated that current methods for quantifying fluxes using atmospheric data suffer from uncertainties, primarily relating to the chemical transport model, that have been largely ignored to date. Here, we use novel model reduction techniques for quantifying the influence of a set of potential systematic model errors on the outcome of a national-scale inversion. This new technique has been incorporated into a hierarchical Bayesian framework, which can be shown to reduce the influence of subjective choices on the outcome of inverse modelling studies. Using estimates of the UK's methane emissions derived from DECC and GAUGE tall-tower measurements as a case study, we will show that such model systematic errors have the potential to significantly increase the uncertainty on national-scale emissions estimates. Therefore, we conclude that these factors must be incorporated in national emissions evaluation efforts, if they are to be credible.

  11. Electronic couplings for molecular charge transfer: Benchmarking CDFT, FODFT, and FODFTB against high-level ab initio calculations

    NASA Astrophysics Data System (ADS)

    Kubas, Adam; Hoffmann, Felix; Heck, Alexander; Oberhofer, Harald; Elstner, Marcus; Blumberger, Jochen

    2014-03-01

    We introduce a database (HAB11) of electronic coupling matrix elements (Hab) for electron transfer in 11 π-conjugated organic homo-dimer cations. High-level ab initio calculations at the multireference configuration interaction MRCI+Q level of theory, n-electron valence state perturbation theory NEVPT2, and (spin-component scaled) approximate coupled cluster model (SCS)-CC2 are reported for this database to assess the performance of three DFT methods of decreasing computational cost, including constrained density functional theory (CDFT), fragment-orbital DFT (FODFT), and self-consistent charge density functional tight-binding (FODFTB). We find that the CDFT approach in combination with a modified PBE functional containing 50% Hartree-Fock exchange gives best results for absolute Hab values (mean relative unsigned error = 5.3%) and exponential distance decay constants β (4.3%). CDFT in combination with pure PBE overestimates couplings by 38.7% due to a too diffuse excess charge distribution, whereas the economic FODFT and highly cost-effective FODFTB methods underestimate couplings by 37.6% and 42.4%, respectively, due to neglect of interaction between donor and acceptor. The errors are systematic, however, and can be significantly reduced by applying a uniform scaling factor for each method. Applications to dimers outside the database, specifically rotated thiophene dimers and larger acenes up to pentacene, suggests that the same scaling procedure significantly improves the FODFT and FODFTB results for larger π-conjugated systems relevant to organic semiconductors and DNA.

  12. Some comments on mapping from disease-specific to generic health-related quality-of-life scales.

    PubMed

    Palta, Mari

    2013-01-01

    An article by Lu et al. in this issue of Value in Health addresses the mapping of treatment or group differences in disease-specific measures (DSMs) of health-related quality of life onto differences in generic health-related quality-of-life scores, with special emphasis on how the mapping is affected by the reliability of the DSM. In the proposed mapping, a factor analytic model defines a conversion factor between the scores as the ratio of factor loadings. Hence, the mapping applies to convert true underlying scales and has desirable properties facilitating the alignment of instruments and understanding their relationship in a coherent manner. It is important to note, however, that when DSM means or differences in mean DSMs are estimated, their mapping is still of a measurement error-prone predictor, and the correct conversion coefficient is the true mapping multiplied by the reliability of the DSM in the relevant sample. In addition, the proposed strategy for estimating the factor analytic mapping in practice requires assumptions that may not hold. We discuss these assumptions and how they may be the reason we obtain disparate estimates of the mapping factor in an application of the proposed methods to groups of patients. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
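
    The attenuation point made here, that a mapping estimated from an error-prone DSM equals the true mapping times the DSM's reliability, is easy to verify by simulation (all numbers below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
true_dsm = rng.normal(50, 10, n)                 # error-free DSM score
lam = 0.6                                        # true conversion factor
generic = lam * true_dsm + rng.normal(0, 4, n)   # generic HRQoL score

sigma_e = 7.0                                    # DSM measurement error SD
observed = true_dsm + rng.normal(0, sigma_e, n)
reliability = 10.0 ** 2 / (10.0 ** 2 + sigma_e ** 2)  # var(true)/var(observed)

slope = np.cov(observed, generic)[0, 1] / np.var(observed, ddof=1)
print(f"reliability          = {reliability:.3f}")
print(f"regression slope     = {slope:.3f}")
print(f"lambda x reliability = {lam * reliability:.3f}")
```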

  13. Detecting and overcoming systematic errors in genome-scale phylogenies.

    PubMed

    Rodríguez-Ezpeleta, Naiara; Brinkmann, Henner; Roure, Béatrice; Lartillot, Nicolas; Lang, B Franz; Philippe, Hervé

    2007-06-01

    Genome-scale data sets result in an enhanced resolution of the phylogenetic inference by reducing stochastic errors. However, there is also an increase of systematic errors due to model violations, which can lead to erroneous phylogenies. Here, we explore the impact of systematic errors on the resolution of the eukaryotic phylogeny using a data set of 143 nuclear-encoded proteins from 37 species. The initial observation was that, despite the impressive amount of data, some branches had no significant statistical support. To demonstrate that this lack of resolution is due to a mutual annihilation of phylogenetic and nonphylogenetic signals, we created a series of data sets with slightly different taxon sampling. As expected, these data sets yielded strongly supported but mutually exclusive trees, thus confirming the presence of conflicting phylogenetic and nonphylogenetic signals in the original data set. To decide on the correct tree, we applied several methods expected to reduce the impact of some kinds of systematic error. Briefly, we show that (i) removing fast-evolving positions, (ii) recoding amino acids into functional categories, and (iii) using a site-heterogeneous mixture model (CAT) are three effective means of increasing the ratio of phylogenetic to nonphylogenetic signal. Finally, our results allow us to formulate guidelines for detecting and overcoming phylogenetic artefacts in genome-scale phylogenetic analyses.

  14. Factor structure of the Childhood Autism Rating Scale as per DSM-5.

    PubMed

    Park, Eun-Young; Kim, Joungmin

    2016-02-01

    The DSM-5 recently proposed new diagnostic criteria for autism spectrum disorder (ASD). Although many new or updated tools have been developed since the DSM-IV was published in 1994, the Childhood Autism Rating Scale (CARS) has been used consistently in ASD diagnosis and research due to its technical adequacy, cost-effectiveness, and practicality. Additionally, items in the CARS did not alter following the release of the revised DSM-IV because the CARS factor structure was found to be consistent with the revised criteria after factor analysis. For that reason, in this study confirmatory factor analysis was used to identify the factor structure of the CARS. Participants (n = 150) consisted of children with an ASD diagnosis or who met the criteria for broader autism or emotional/behavior disorder with comorbid disorders such as attention-deficit hyperactivity disorder, bipolar disorder, intellectual or developmental disabilities. Previous studies used one-, two-, and four-factor models, all of which we examined to confirm the best-fit model on confirmatory factor analysis. Appropriate comparative fit indices and root mean square errors were obtained for all four models. The two-factor model, based on DSM-5 criteria, was the most valid and reliable. The inter-item consistency of the CARS was 0.926 and demonstrated adequate reliability, thereby supporting the validity and reliability of the two-factor model of CARS. Although CARS was developed prior to the introduction of DSM-5, its psychometric properties, conceptual relevance, and flexible administration procedures support its continued role as a screening device in the diagnostic decision-making process. © 2015 Japan Pediatric Society.

  15. The Development of an Instrument for Measuring Healing

    PubMed Central

    Meza, James Peter; Fahoome, Gail F.

    2008-01-01

    PURPOSE Our lack of ability to measure healing attributes impairs our ability to research the topic. The specific aim of this project is to describe the psychological and social construct of healing and to create a valid and reliable measurement scale for attributes of healing. METHODS A content expert conducted a domain analysis examining the existing literature of midrange theories of healing. Theme saturation of content sampling was ensured by brainstorming more than 220 potential items. Selection of items was sequential: pile sorting and data reduction, with factor analysis of a mailed 54-item questionnaire. Criterion validity (convergent and divergent) and temporal reliability were established using a second mailing of the development version of the instrument. Construct validity was judged with structural equation modeling for goodness of fit. RESULTS Cronbach’s α of the original questionnaire was .869 and the final scale was .862. The test-retest reliability was .849. Eigenvalues for the 2 factors were 8 and 4, respectively. Divergent and convergent validity using the Spann-Fischer Codependency Scale and SF-36 mental health and emotional subscales were consistent with predictions. The root mean square error of approximation was 0.066 and Bentler’s Comparative Fit Index was 0.871. Root mean square residual was 0.102. CONCLUSIONS We developed a valid and reliable measurement scale for attributes of healing, which we named the Self-Integration Scale v 2.1. By creating a new variable, new areas of research in humanistic health care are possible. PMID:18626036

  16. Cross-cultural application of the Korean version of the European Organization for Research and Treatment of Cancer quality of life questionnaire cervical cancer module.

    PubMed

    Shin, Dong Wook; Ahn, Eunmi; Kim, Yong-Man; Kang, Sokbom; Kim, Byoung-Gie; Seong, Seok Ju; Cha, Soon Do; Park, Chan-Yong; Yun, Young Ho

    2009-01-01

    This study was conducted to evaluate the psychometric properties of the Korean version of the Quality of Life questionnaire cervical cancer module (QLQ-CX24), developed by the European Organization for Research and Treatment of Cancer (EORTC). The EORTC QLQ-CX24 and the core questionnaire (the EORTC QLQ-C30) were administered to 860 Korean disease-free survivors of cervical cancer and 494 female control subjects from the general Korean population. The construct reliability and validity of the EORTC QLQ-CX24 questionnaire were assessed via factor analysis, multitrait scaling analyses and known group comparisons. Factor structure of the Korean version of the EORTC QLQ-CX24 questionnaire agreed with the originally hypothesized scale structure. Scale reliability was confirmed by Cronbach's alpha coefficients for internal consistency, which ranged from 0.78 to 0.87. Convergent and discriminant validity was confirmed by multitrait scaling analysis, which revealed scaling errors of 0.9. The clinical validity of the Korean version of the EORTC QLQ-CX24 was demonstrated by the ability to discriminate among controls and patient subgroups of different stages, treatments and overall health status. The Korean version of the EORTC QLQ-CX24 was found to be a reliable and a valid measure of quality of life among survivors of cervical cancer when administered in a large survey setting. Copyright 2009 S. Karger AG, Basel.

  17. Perceived parental rearing style in childhood: internal structure and concurrent validity on the Egna Minnen Beträffande Uppfostran--Child Version in clinical settings.

    PubMed

    Penelo, Eva; Viladrich, Carme; Domènech, Josep M

    2010-01-01

    We provide the first validation data of the Spanish version of the Egna Minnen Beträffande Uppfostran--Child Version (EMBU-C) in a clinical context. The EMBU-C is a 41-item self-report questionnaire that assesses perceived parental rearing style in children, comprising 4 subscales (rejection, emotional warmth, control attempts/overprotection, and favoring subjects). The test was administered to a clinical sample of 174 Spanish psychiatric outpatients aged 8 to 12. Confirmatory factor analyses were performed, analyzing the children's reports about their parents' rearing style. The results were almost equivalent for father's and mother's ratings. Confirmatory factor analysis yielded an acceptable fit to data of the 3-factor model when removing the items of the favoring subjects scale (root mean squared error of approximation <0.07). Satisfactory internal consistency reliability was obtained for 2 of the 3 scales, rejection and emotional warmth (Cronbach alpha >.73), whereas control attempts scale showed lower values, as in previous studies. The influence of sex (of children and parents) on scale scores was inappreciable and children tended to perceive their parents as progressively less warm as they grew older. As predicted, the scores for rejection and emotional warmth were related to bad relationships with parents, absence of family support, harsh discipline, and lack of parental supervision. The Spanish version of EMBU-C can be used with psychometric guarantees to identify rearing style in psychiatric outpatients because evidences of quality in this setting match those obtained in community samples. Copyright 2010 Elsevier Inc. All rights reserved.

  18. Risk factors for refractive errors in primary school children (6-12 years old) in Nakhon Pathom Province.

    PubMed

    Yingyong, Penpimol

    2010-11-01

    Refractive error is one of the leading causes of visual impairment in children. An analysis of risk factors for refractive error is required to reduce and prevent this common eye disease. To identify the risk factors associated with refractive errors in primary school children (6-12 years old) in Nakhon Pathom province. A population-based cross-sectional analytic study was conducted between October 2008 and September 2009 in Nakhon Pathom. Refractive error, parental refractive status, and hours per week of near activities (studying, reading books, watching television, playing with video games, or working on the computer) were assessed in 377 children who participated in this study. The most common type of refractive error in primary school children was myopia. Myopic children were more likely to have parents with myopia. Children with myopia spend more time on near activities. The multivariate odds ratio (95% confidence interval) for two myopic parents was 6.37 (2.26-17.78) and for each diopter-hour per week of near work was 1.019 (1.005-1.033). Multivariate logistic regression models show no confounding effects between parental myopia and near work, suggesting that each factor has an independent association with myopia. Statistical analysis by logistic regression revealed that family history of refractive error and hours of near work were significantly associated with refractive error in primary school children.

  19. Scaling up and error analysis of transpiration for Populus euphratica in a desert riparian forest

    NASA Astrophysics Data System (ADS)

    Si, J.; Li, W.; Feng, Q.

    2013-12-01

    Water consumption information for the forest stand is the most important factor for regional water resources management. However, water consumption is usually measured on a limited number of individual sample trees, so an important issue is how to scale up data from a series of sample trees to the entire stand. Estimation of sap flow flux density (Fd) and stand sapwood area (AS-stand) are among the most critical factors for determining forest stand transpiration using sap flow measurements. To estimate Fd, the various links in the sap flow technique have a great impact on the measurement of sap flow; to estimate AS-stand, an appropriate indirect technique for measuring the sapwood area of each tree (AS-tree) is required, because it is impossible to measure the AS-tree of all trees in a forest stand. In this study, Fd was measured in 2 mature P. euphratica trees at two radial depths, 0-10 mm and 10-30 mm, using sap flow sensors with the heat ratio method; a model relating AS-tree to stem diameter (DBH) and a growth model of AS-tree were established using survey data on DBH, tree age, and AS-tree. The results revealed that transpiration can be scaled up from sample trees to the entire forest stand using AS-tree and Fd; however, stand transpiration (E) will be overestimated by 12.6% if only the 0-10 mm Fd is used and underestimated by 25.3% if only the 10-30 mm Fd is used, implying that major uncertainties in mean stand Fd estimates are caused by radial variations in Fd. E will also be clearly overestimated when AS-stand is held constant, implying that simulating AS-stand changes at the daily scale is key to improving prediction accuracy. The results also showed that the potential errors in transpiration with a sample size of approximately ≥30 were almost stable for P. euphratica, suggesting that building an allometric equation might require sampling at least 30 trees.
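
    The scaling-up step amounts to an allometric fit of AS-tree against DBH, summed over the stand inventory and multiplied by a mean Fd; the synthetic sketch below illustrates the idea (coefficients, sample sizes, and the Fd value are assumptions, not the P. euphratica data).

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic calibration sample (>= 30 trees, per the record's recommendation)
dbh = rng.uniform(10, 50, 35)                                 # cm
as_tree = 0.8 * dbh ** 1.7 * rng.lognormal(0, 0.1, dbh.size)  # cm^2

# Fit the power law AS = a * DBH^b in log-log space
b, log_a = np.polyfit(np.log(dbh), np.log(as_tree), 1)
a = np.exp(log_a)
print(f"AS = {a:.2f} * DBH^{b:.2f}")

# Scale up: stand sapwood area from a full DBH inventory, times mean Fd
stand_dbh = rng.uniform(10, 50, 400)             # whole-stand inventory
as_stand = np.sum(a * stand_dbh ** b) / 1e4      # cm^2 -> m^2
fd_mean = 900.0                                  # sap flux density, g m^-2 h^-1 (assumed)
E = fd_mean * as_stand / 1000.0                  # stand transpiration, kg h^-1
print(f"AS_stand = {as_stand:.2f} m^2, E = {E:.1f} kg h^-1")
```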

  20. E-prescribing errors in community pharmacies: exploring consequences and contributing factors.

    PubMed

    Odukoya, Olufunmilola K; Stone, Jamie A; Chui, Michelle A

    2014-06-01

    To explore types of e-prescribing errors in community pharmacies and their potential consequences, as well as the factors that contribute to e-prescribing errors. Data collection involved performing 45 total hours of direct observations in five pharmacies. Follow-up interviews were conducted with 20 study participants. Transcripts from observations and interviews were subjected to content analysis using NVivo 10. Pharmacy staff detected 75 e-prescription errors during the 45 h observation in pharmacies. The most common e-prescribing errors were wrong drug quantity, wrong dosing directions, wrong duration of therapy, and wrong dosage formulation. Participants estimated that 5 in 100 e-prescriptions have errors. Drug classes that were implicated in e-prescribing errors were antiinfectives, inhalers, ophthalmic, and topical agents. The potential consequences of e-prescribing errors included increased likelihood of the patient receiving incorrect drug therapy, poor disease management for patients, additional work for pharmacy personnel, increased cost for pharmacies and patients, and frustrations for patients and pharmacy staff. Factors that contribute to errors included: technology incompatibility between pharmacy and clinic systems, technology design issues such as use of auto-populate features and dropdown menus, and inadvertently entering incorrect information. Study findings suggest that a wide range of e-prescribing errors is encountered in community pharmacies. Pharmacists and technicians perceive that causes of e-prescribing errors are multidisciplinary and multifactorial, that is to say e-prescribing errors can originate from technology used in prescriber offices and pharmacies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  1. E-Prescribing Errors in Community Pharmacies: Exploring Consequences and Contributing Factors

    PubMed Central

    Stone, Jamie A.; Chui, Michelle A.

    2014-01-01

    Objective To explore types of e-prescribing errors in community pharmacies and their potential consequences, as well as the factors that contribute to e-prescribing errors. Methods Data collection involved performing 45 total hours of direct observations in five pharmacies. Follow-up interviews were conducted with 20 study participants. Transcripts from observations and interviews were subjected to content analysis using NVivo 10. Results Pharmacy staff detected 75 e-prescription errors during the 45 hour observation in pharmacies. The most common e-prescribing errors were wrong drug quantity, wrong dosing directions, wrong duration of therapy, and wrong dosage formulation. Participants estimated that 5 in 100 e-prescriptions have errors. Drug classes that were implicated in e-prescribing errors were antiinfectives, inhalers, ophthalmic, and topical agents. The potential consequences of e-prescribing errors included increased likelihood of the patient receiving incorrect drug therapy, poor disease management for patients, additional work for pharmacy personnel, increased cost for pharmacies and patients, and frustrations for patients and pharmacy staff. Factors that contribute to errors included: technology incompatibility between pharmacy and clinic systems, technology design issues such as use of auto-populate features and dropdown menus, and inadvertently entering incorrect information. Conclusion Study findings suggest that a wide range of e-prescribing errors are encountered in community pharmacies. Pharmacists and technicians perceive that causes of e-prescribing errors are multidisciplinary and multifactorial, that is to say e-prescribing errors can originate from technology used in prescriber offices and pharmacies. PMID:24657055

  2. Outdoor surface temperature measurement: ground truth or lie?

    NASA Astrophysics Data System (ADS)

    Skauli, Torbjorn

    2004-08-01

    Contact surface temperature measurement in the field is essential in trials of thermal imaging systems and camouflage, as well as for scene modeling studies. The accuracy of such measurements is challenged by environmental factors such as sun and wind, which induce temperature gradients around a surface sensor and lead to incorrect temperature readings. In this work, a simple method is used to test temperature sensors under conditions representative of a surface whose temperature is determined by heat exchange with the environment. The tested sensors are different types of thermocouples and platinum thermistors typically used in field trials, as well as digital temperature sensors. The results illustrate that actual measurement errors can be much larger than the specified accuracy of the sensors. The measurement error typically scales with the difference between surface temperature and ambient air temperature. Unless proper care is taken, systematic errors can easily reach 10% of this temperature difference, which is often unacceptable. Reasonably accurate readings are obtained using a miniature platinum thermistor. Thermocouples can perform well on bare metal surfaces if the connection to the surface is highly conductive. It is pointed out that digital temperature sensors have many advantages for use in field trials.
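
    The scaling described above lends itself to a one-line error model. The sketch below is ours, not the paper's: the 10% figure is the worst-case systematic fraction quoted in the abstract, and the function name and numbers are illustrative assumptions.

        # Hypothetical rule of thumb: systematic reading error as a fraction of the
        # surface-to-air temperature difference (0.10 = the ~10% worst case above).
        def reading_error(t_surface_c, t_air_c, error_fraction=0.10):
            """Approximate worst-case systematic error (deg C) for a poorly mounted sensor."""
            return error_fraction * (t_surface_c - t_air_c)

        # A sunlit plate at 45 C in 20 C air could read off by about 2.5 C:
        print(reading_error(45.0, 20.0))  # -> 2.5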

  3. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
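
    The first three tests in this battery map directly onto standard scipy routines. The sketch below uses synthetic data (our assumption, not the paper's two-phase flow model) to show the mechanics of tests (1)-(3); binning the output by sorted input values mimics the class-based Kruskal-Wallis trend test.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        x = rng.uniform(size=200)                          # sampled input factor
        y = 2.0 * x + rng.normal(scale=0.5, size=200)      # model output

        r, p_r = stats.pearsonr(x, y)                      # (1) linear relationship
        rho, p_rho = stats.spearmanr(x, y)                 # (2) monotonic relationship

        # (3) trend in central tendency: Kruskal-Wallis across five bins of x
        bins = np.array_split(np.argsort(x), 5)
        h, p_kw = stats.kruskal(*[y[b] for b in bins])

        # (4) trend in variability: compare bin variances directly
        spread = [y[b].var(ddof=1) for b in bins]
        print(r, rho, h, p_kw, spread)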

  4. Development of a checklist of short-term and long-term psychological symptoms associated with ketamine use.

    PubMed

    Fan, Ni; Xu, Ke; Ning, Yuping; Wang, Daping; Ke, Xiaoyin; Ding, Yi; Sun, Bin; Zhou, Chao; Deng, Xuefeng; Rosenheck, Robert; He, Hongbo

    2015-06-25

    Ketamine is an increasingly popular drug of abuse in China, but there is currently no method for classifying the psychological effects of ketamine in individuals with ketamine dependence. We aimed to develop a scale that characterizes the acute and long-term psychological effects of ketamine use among persons with ketamine dependence. We developed a preliminary symptom checklist with 35 dichotomous ('yes' or 'no') items about subjective feelings immediately after ketamine use and about perceived long-term effects of ketamine use that was administered to 187 inpatients with ketamine dependence recruited from two large hospitals in Guangzhou, China. Exploratory factor analysis (EFA) was conducted on a randomly selected half of the sample to reduce the items and to identify underlying constructs. Confirmatory factor analysis (CFA) was conducted on the second half of the sample to assess the robustness of the identified factor structure. Among the 35 symptoms, the most-reported acute effects were 'floating or circling' (94%), 'euphoric when listening to rousing music' (86%), and 'feeling excited, talkative, and full of energy' (67%). The most-reported long-term symptoms were 'memory impairment' (93%), 'personality changes' (86%), and 'slowed reactions' (81%). EFA resulted in a final 22-item scale best modelled by a four-factor model: two factors representing chronic symptoms (social withdrawal and sleep disturbances), one about acute psychotic-like symptoms, and one that combined acute drug-related euphoria and longer-term decreased libido. CFA showed that these 4 factors accounted for 50% of the total variance of the final 22-item scale and that the model fit was fair (Goodness of Fit Index, GFI=83.3%; Root Mean Square Error of Approximation, RMSEA=0.072). A four-factor model including social withdrawal, sleep disturbance, psychotic-like symptoms, and euphoria at the time of drug use provides a fair description of the short-term and long-term psychological symptoms associated with ketamine use. Future work on the 22-item version of the scale with larger samples is needed to confirm the validity of this 4-factor structure, to assess the scale's test-retest reliability, and to determine whether or not it can be useful in the differential diagnosis and monitoring of treatment of individuals with ketamine dependence.
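
    The split-half EFA/CFA workflow can be sketched with generic tools. This is a rough stand-in, not the authors' analysis: sklearn's Gaussian FactorAnalysis ignores the dichotomous nature of the items (proper EFA on binary items uses tetrachoric correlations), and the data here are random placeholders.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.normal(size=(187, 22))                 # placeholder for 22 checklist items
        X_efa, X_cfa = train_test_split(X, test_size=0.5, random_state=1)

        fa = FactorAnalysis(n_components=4).fit(X_efa) # exploratory 4-factor solution
        loadings = fa.components_.T                    # item-by-factor loading matrix
        held_out_fit = fa.score(X_cfa)                 # mean log-likelihood on the second half
        print(loadings.shape, held_out_fit)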

  5. Modeling micro-droplet formation in near-field electrohydrodynamic jet printing

    NASA Astrophysics Data System (ADS)

    Popell, George Colin

    Near-field electrohydrodynamic jet (E-jet) printing has recently gained significant interest within the manufacturing research community because of its ability to produce micro/sub-micron-scale droplets using a wide variety of inks and substrates. However, the process currently operates in open loop and as a result suffers from unpredictable printing quality. The use of physics-based, control-oriented process models is expected to enable closed-loop control of this printing technique. The objective of this research is to perform a fundamental study of the substrate-side droplet shape evolution in near-field E-jet printing and to develop a physics-based model of the same that links input parameters such as voltage magnitude and ink properties to the height and diameter of the printed droplet. In order to achieve this objective, a synchronized high-speed imaging and substrate-side current-detection system was implemented to enable correlation between the droplet shape parameters and the measured current signal. The experimental data reveal characteristic process signatures and droplet spreading regimes. The results of these studies are then used as the basis for a model that predicts the droplet diameter and height using the measured current signal as the input. A unique scaling factor based on the measured current signal is used in this model instead of relying on empirical scaling laws found in the literature. For each of the three inks tested in this study, the average absolute error in the model predictions is under 4.6% for diameter predictions and under 10.6% for height predictions of the steady-state droplet. For printing under unfavorable ambient conditions of low humidity and high temperature, applying an environmental correction factor in the model results in average absolute errors of 10.35% and 12.5% for diameter and height predictions, respectively.
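
    The figures of merit quoted above are average absolute percentage errors. A minimal implementation of that metric, with made-up droplet diameters rather than the study's measurements:

        import numpy as np

        def mean_abs_pct_error(predicted, measured):
            """Average absolute error as a percentage of the measured values."""
            return 100.0 * np.mean(np.abs((predicted - measured) / measured))

        d_meas = np.array([10.2, 11.5, 9.8])   # hypothetical measured diameters (um)
        d_pred = np.array([10.0, 12.0, 9.5])   # hypothetical model predictions (um)
        print(mean_abs_pct_error(d_pred, d_meas))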

  6. Psychometric Properties of the Persian Version of Death Depression Scale-Revised in Iranian Patients with Acute Myocardial Infarction.

    PubMed

    Sharif Nia, Hamid; Pahlevan Sharif, Saeed; Lehto, Rebecca H; Allen, Kelly A; Goudarzian, Amir Hossein; Yaghoobzadeh, Ameneh; Soleimani, Mohammad Ali

    2017-07-01

    Objective: Limited research has examined the psychometric properties of death depression scales in Persian populations with cardiac disease despite the need for valid assessment tools for evaluating depressive symptoms in patients with life-limiting chronic conditions. The present study aimed to evaluate the reliability and validity of the Persian version of the Death Depression Scale-Revised (DDS-R) in Iranian patients who had recent acute myocardial infarction (AMI). Method: This psychometric study was conducted with a convenience sample of 407 patients with an AMI diagnosis who completed the Persian version of the DDS-R. The face, content, and construct validity of the scale were ascertained. Internal consistency, test-retest reliability, and construct reliability (CR) were used to assess the reliability of the Persian version of the DDS-R. Results: Based on maximum likelihood exploratory factor analysis and consideration of conceptual meaning, a 4-factor solution was identified, explaining 75.89% of the total variance. The Goodness-of-Fit Index (GFI), Comparative Fit Index (CFI), Normed Fit Index (NFI), Incremental Fit Index (IFI), and Root Mean Square Error of Approximation (RMSEA) for the final DDS-R structure demonstrated the adequacy of the 4-domain structure. The internal consistency, construct reliability, and Intra-class Correlation Coefficients (ICC) were greater than .70. Conclusion: The DDS-R was found to be a valid and reliable assessment tool for evaluating death depression symptoms in Iranian patients with AMI.
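
    Internal consistency above .70 is the conventional benchmark used here. For reference, Cronbach's alpha can be computed from an item-score matrix as follows; this is a generic sketch of the standard formula, not the authors' code, and the data shape is assumed.

        import numpy as np

        def cronbach_alpha(items):
            """items: (n_respondents, n_items) score matrix."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()    # sum of item variances
            total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
            return k / (k - 1) * (1.0 - item_vars / total_var)

        scores = np.random.default_rng(2).integers(1, 6, size=(407, 21)).astype(float)
        print(cronbach_alpha(scores))   # near 0 for random data; > .70 signals consistency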

  7. [Psychometric properties of the third version of family adaptability and cohesion evaluation scales (FACES-III): a study of Peruvian adolescents].

    PubMed

    Bazo-Alvarez, Juan Carlos; Bazo-Alvarez, Oscar Alfredo; Aguila, Jeins; Peralta, Frank; Mormontoy, Wilfredo; Bennett, Ian M

    2016-01-01

    Our aim was to evaluate the psychometric properties of the FACES-III among Peruvian high school students. This is a psychometric cross-sectional study. Probabilistic sampling was applied, defined by three stages: stratum one (school), stratum two (grade), and cluster (section). The participants were 910 adolescent students of both sexes, between 11 and 18 years of age. The instrument was also the object of study: Olson's FACES-III. The analysis included a review of the structure/construct validity of the measure by factor analysis and assessment of internal consistency (reliability). The real-cohesion scale had moderately high reliability (Ω=.85), while the real-flexibility scale had moderate reliability (Ω=.74). The reliability found for the ideal-cohesion scale was moderately high (Ω=.89), as was that of the ideal-flexibility scale (Ω=.86). Construct validity was confirmed by the goodness of fit of a two-factor model (cohesion and flexibility) with 10 items each [Adjusted goodness of fit index (AGFI) = 0.96; Expected Cross Validation Index (ECVI) = 0.87; Normed fit index (NFI) = 0.93; Goodness of fit index (GFI) = 0.97; Root mean square error of approximation (RMSEA) = 0.06]. FACES-III has sufficient reliability and validity to be used in Peruvian adolescents for the purpose of group or individual assessment.
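
    For reference, the RMSEA reported in entries like this one is derived from the model chi-square, its degrees of freedom, and the sample size; one standard form is

        \[
        \mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2 - df,\, 0)}{df\,(N-1)}}
        \]

    Values at or below roughly .08 are commonly read as acceptable fit and values near .05 as close fit, consistent with the 0.06 reported above.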

  8. Evaluating the Utility of Adjoint-based Inverse Modeling with Aircraft and Surface Measurements during ARCTAS-CARB to Constrain Wildfire Emissions of Black Carbon

    NASA Astrophysics Data System (ADS)

    Henze, D. K.; Guerrette, J.; Bousserez, N.

    2016-12-01

    Wildfires contribute significantly to regional haze events globally, and they are potentially becoming more commonplace with increasing droughts due to climate change. Aerosol emissions from wildfires are highly uncertain, with global annual totals varying by a factor of 2 to 3 and regional rates varying by up to a factor of 10. At the high resolution required to predict PM2.5 exposure events, this variance is attributable to differences in methodology, differing land cover datasets, spatial variation in fire locations, and limited understanding of fast transient fire behavior. Here we apply an adjoint-based online chemical inverse modeling tool, WRFDA-Chem, to constrain black carbon aerosol (BC) emissions from fires during the 2008 ARCTAS-CARB field campaign. We identify several weaknesses in the prior diurnal distribution of emissions, including a missing early morning emission peak associated with local, persistent, large-scale forest fires. On 22 June 2008, aircraft observations are able to reduce the spread between FINNv1.0 and QFEDv2.4r8 from ×3.5 to ×2.1. On 23 and 24 June, the spread is reduced from ×3.4 to ×1.4. Using posterior error estimates, we found that emission variance improvements are limited to a small footprint surrounding the measurements. Relative biomass burning (BB) emission variances are reduced by up to 35% near aircraft flight paths and up to 60% near IMPROVE surface sites. Due to the spatial variation of observations on multiple days, and the heterogeneous biomass burning errors on daily scales, cross-validation was not successful. Future high-resolution measurements need to be carefully planned to characterize biomass burning emission errors and control for day-to-day variation. In general, the 4D-Var inversion framework would benefit from reduced wall-time. For the problem presented, incremental 4D-Var requires 20 hours on 96 cores to reach practical optimization convergence and generate the posterior covariance matrix for a 24-hour assimilation window. We will present initial computational comparisons with a recently developed method to parallelize those calculations, which will reduce wall-time by a factor of 5 or more for all WRFDA 4D-Var applications.
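
    For context, 4D-Var inversions of this kind minimize a cost function of the general form below; this is the textbook formulation, shown for reference, since the abstract does not spell out the WRFDA-Chem control variables.

        \[
        J(\mathbf{x}_0) = \tfrac{1}{2}\,(\mathbf{x}_0 - \mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}_0 - \mathbf{x}_b)
        \;+\; \tfrac{1}{2}\sum_{i}\bigl(H_i(\mathbf{x}_i) - \mathbf{y}_i\bigr)^{\mathsf T}\mathbf{R}_i^{-1}\bigl(H_i(\mathbf{x}_i) - \mathbf{y}_i\bigr),
        \qquad \mathbf{x}_i = M_{0 \to i}(\mathbf{x}_0),
        \]

    where x_b is the prior (background) emission state, B and R_i are the background and observation error covariances, H_i maps the model state to the aircraft or surface observations y_i within the assimilation window, and M is the forecast model. The adjoint model supplies the gradient of J efficiently, and the posterior error variances quoted above follow from the curvature (Hessian) of J at the minimum.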

  9. Cognitive error as the most frequent contributory factor in cases of medical injury: a study on verdict's judgment among closed claims in Japan.

    PubMed

    Tokuda, Yasuharu; Kishida, Naoki; Konishi, Ryota; Koizumi, Shunzo

    2011-03-01

    Cognitive errors in the course of clinical decision-making are prevalent in many cases of medical injury. We used information on verdicts' judgments from closed claims files to determine the important cognitive factors associated with cases of medical injury. Data were collected from claims closed between 2001 and 2005 at district courts in Tokyo and Osaka, Japan. In each case, we recorded all the contributory cognitive, systemic, and patient-related factors judged in the verdicts to be causally related to the medical injury. We also analyzed the association between cognitive factors and cases involving paid compensation using a multivariable logistic regression model. Among 274 cases (mean age 49 years; 45% women), there were 122 (45%) deaths and 67 (24%) major injuries (incomplete recovery within a year). In 103 cases (38%), the verdicts ordered hospitals to pay compensation (median: 8,000,000 Japanese yen). An error in judgment (199/274, 73%) and failure of vigilance (177/274, 65%) were the most prevalent causative cognitive factors, and error in judgment was also significantly associated with paid compensation (odds ratio, 1.9; 95% confidence interval [CI], 1.0-3.4). Systemic causative factors including poor teamwork (11/274, 4%) and technology failure (5/274, 2%) were less common. The closed claims analysis based on verdicts' judgments showed that cognitive errors were common in cases of medical injury, with an error in judgment being most prevalent and closely associated with compensation payment. Reduction of this type of error is required to produce safer healthcare. 2010 Society of Hospital Medicine.
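
    A multivariable logistic model of this kind is straightforward to fit with statsmodels. The sketch below uses simulated binary predictors, with the effect size seeded to echo the reported odds ratio of 1.9; it illustrates the method only and is not the authors' analysis.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 274
        error_in_judgment = rng.integers(0, 2, n)
        vigilance_failure = rng.integers(0, 2, n)
        X = sm.add_constant(np.column_stack([error_in_judgment, vigilance_failure]))

        # Simulate compensation outcome with a true OR of ~1.9 for error in judgment
        logit_p = -1.0 + np.log(1.9) * error_in_judgment
        y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

        res = sm.Logit(y, X).fit(disp=0)
        print(np.exp(res.params))            # exponentiated coefficients = odds ratios
        print(np.exp(res.conf_int()))        # 95% confidence intervals on the ORs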

  10. Reduction of medication errors related to sliding scale insulin by the introduction of a standardized order sheet.

    PubMed

    Harada, Saki; Suzuki, Akio; Nishida, Shohei; Kobayashi, Ryo; Tamai, Sayuri; Kumada, Keisuke; Murakami, Nobuo; Itoh, Yoshinori

    2017-06-01

    Insulin is frequently used for glycemic control. Medication errors related to insulin are a common problem for medical institutions. Here, we prepared a standardized sliding scale insulin (SSI) order sheet and assessed the effect of its introduction. Observations before and after the introduction of the standardized SSI template were conducted at Gifu University Hospital. The incidence of medication errors, hyperglycemia, and hypoglycemia related to SSI were obtained from the electronic medical records. The introduction of the standardized SSI order sheet significantly reduced the incidence of medication errors related to SSI compared with that prior to its introduction (12/165 [7.3%] vs 4/159 [2.1%], P = .048). However, the incidence of hyperglycemia (≥250 mg/dL) and hypoglycemia (≤50 mg/dL) in patients who received SSI was not significantly different between the 2 groups. The introduction of the standardized SSI order sheet reduced the incidence of medication errors related to SSI. © 2016 John Wiley & Sons, Ltd.
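
    A quick plausibility check on a 2x2 before/after comparison like the one above can be done with Fisher's exact test; note the paper's P = .048 may come from a different test (e.g., chi-square), so the value below need not match exactly.

        from scipy.stats import fisher_exact

        table = [[12, 165 - 12],   # before: errors vs. no errors
                 [4, 159 - 4]]     # after:  errors vs. no errors
        odds_ratio, p = fisher_exact(table)
        print(odds_ratio, p)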

  11. Measurement-free implementations of small-scale surface codes for quantum-dot qubits

    NASA Astrophysics Data System (ADS)

    Ercan, H. Ekmel; Ghosh, Joydip; Crow, Daniel; Premakumar, Vickram N.; Joynt, Robert; Friesen, Mark; Coppersmith, S. N.

    2018-01-01

    The performance of quantum-error-correction schemes depends sensitively on the physical realizations of the qubits and the implementations of various operations. For example, in quantum-dot spin qubits, readout is typically much slower than gate operations, and conventional surface-code implementations that rely heavily on syndrome measurements could therefore be challenging. However, fast and accurate reset of quantum-dot qubits, without readout, can be achieved via tunneling to a reservoir. Here we propose small-scale surface-code implementations for which syndrome measurements are replaced by a combination of Toffoli gates and qubit reset. For quantum-dot qubits, this enables much faster error correction than measurement-based schemes, but requires additional ancilla qubits and non-nearest-neighbor interactions. We have performed numerical simulations of two different coding schemes, obtaining error thresholds on the order of 10^-2 for a one-dimensional architecture that corrects only bit-flip errors and 10^-4 for a two-dimensional architecture that corrects bit- and phase-flip errors.
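
    The bit-flip-only idea can be illustrated classically with a distance-3 repetition code and majority-vote correction. This toy Monte Carlo is ours, not the paper's simulation: real measurement-free schemes implement the vote coherently with Toffoli gates and ancilla reset, and the paper's codes are surface codes, not bare repetition codes.

        import numpy as np

        def logical_error_rate(p, trials=100_000, rng=np.random.default_rng(4)):
            """Probability that majority vote over 3 qubits fails at physical error rate p."""
            flips = rng.random((trials, 3)) < p        # independent bit-flips on 3 qubits
            return np.mean(flips.sum(axis=1) >= 2)     # vote fails if 2 or more qubits flip

        for p in (1e-1, 1e-2, 1e-3):
            print(p, logical_error_rate(p))            # suppressed roughly as 3 * p**2

    Below threshold, the logical rate falls faster than the physical rate, which is the qualitative behavior behind the 10^-2 and 10^-4 thresholds quoted above.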

  12. A novel method of calibrating a MEMS inertial reference unit on a turntable under limited working conditions

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Liang, Shufang; Yang, Yanqiang

    2017-10-01

    Micro-electro-mechanical systems (MEMS) inertial measurement devices are widely used in inertial navigation systems and have quickly emerged on the market due to their low cost, high reliability, and small size. Calibration is the most effective way to remove the deterministic error of an inertial reference unit (IRU), which in this paper consists of three orthogonally mounted MEMS gyros. However, common laboratory testing methods cannot predict the corresponding errors precisely when the turntable’s working condition is restricted. In this paper, the turntable can only provide a relatively small rotation angle. Moreover, the errors must be compensated exactly because of the great effect caused by the high angular velocity of the craft. To address this problem, a new method is proposed to evaluate the MEMS IRU’s performance. In the calibration procedure, a one-axis table that can rotate a limited angle in the form of a sine function is utilized to provide the MEMS IRU’s angular velocity. A new algorithm based on Fourier series is designed to calculate the misalignment and scale factor errors. The proposed method is tested in a set of experiments, and the calibration results are compared with those of a traditional calibration method performed under normal working conditions to verify their correctness. In addition, a verification test at the given rotation speed is implemented for further demonstration.
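
    The core estimation step can be sketched with a toy single-axis model: excite the gyro with a sinusoidal rate and recover scale factor and bias by least squares. This is an illustration of the principle only; the paper's actual algorithm uses Fourier series and also estimates misalignment across three axes.

        import numpy as np

        rng = np.random.default_rng(5)
        t = np.linspace(0.0, 10.0, 2000)
        omega_true = 5.0 * np.sin(2 * np.pi * 0.5 * t)     # deg/s, limited-angle sine motion
        k_true, bias_true = 1.002, 0.05                    # assumed scale factor and bias
        meas = k_true * omega_true + bias_true + rng.normal(scale=0.01, size=t.size)

        # Linear model: meas = k * omega_true + b, solved by least squares
        A = np.column_stack([omega_true, np.ones_like(t)])
        (k_est, b_est), *_ = np.linalg.lstsq(A, meas, rcond=None)
        print(k_est - 1.0, b_est)                          # estimated scale-factor error, bias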

  13. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (RUMM) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, for 25-item dichotomous scales with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
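
    Simulating Rasch-fitting dichotomous data of the kind used in this study is a few lines of numpy; the ability distribution and item-difficulty range below are our assumptions, chosen to give items "well targeted to the sample".

        import numpy as np

        def simulate_rasch(n_persons, n_items=25, rng=np.random.default_rng(6)):
            theta = rng.normal(size=(n_persons, 1))       # person abilities
            b = np.linspace(-2.0, 2.0, n_items)           # item difficulties
            p = 1.0 / (1.0 + np.exp(-(theta - b)))        # Rasch response probabilities
            return (rng.random((n_persons, n_items)) < p).astype(int)

        X = simulate_rasch(500)                           # one replicate at N = 500
        print(X.shape, X.mean())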

  14. The Robustness of Acoustic Analogies

    NASA Technical Reports Server (NTRS)

    Freund, J. B.; Lele, S. K.; Wei, M.

    2004-01-01

    Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations $N(\vec{q}) = 0$ into a nominal sound source $S(\vec{q})$ and a sound propagation operator $L$ such that $L(\vec{q}) = S(\vec{q})$. In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since $S$ cannot be perfectly modeled, analogies that are less sensitive to errors in $S$ are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing $L$ give different sources $S$ and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean-flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.
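
    For reference, the uniform-base-flow ("Lighthill-like") member of this family is Lighthill's equation in its standard form, with all nonlinear, viscous, and non-isentropic effects lumped into the source; this is textbook background, not specific to the paper.

        \[
        \frac{\partial^2 \rho'}{\partial t^2} - c_0^2 \nabla^2 \rho'
        = \frac{\partial^2 T_{ij}}{\partial x_i\, \partial x_j},
        \qquad
        T_{ij} = \rho u_i u_j + \bigl(p' - c_0^2 \rho'\bigr)\delta_{ij} - \tau_{ij},
        \]

    where $\rho'$ and $p'$ are the density and pressure fluctuations, $c_0$ is the ambient sound speed, and $\tau_{ij}$ is the viscous stress tensor. All modeling error enters through $T_{ij}$, which is why robustness to source errors is the deciding factor between formulations.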

  15. Optimal error analysis of the intraseasonal convection due to uncertainties of the sea surface temperature in a coupled model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojing; Tang, Youmin; Yao, Zhixiong

    2017-04-01

    The predictability of the convection related to the Madden-Julian Oscillation (MJO) is studied using the coupled model CESM (Community Earth System Model) and the climatically relevant singular vector (CSV) approach. The CSV approach is an ensemble-based strategy for calculating the optimal initial error on climate scales. In this study, we focus on the optimal initial error of the sea surface temperature in the Indian Ocean (IO), where the MJO onset occurs. Six MJO events are chosen from 10 years of model simulation output. The results show that the large values of the SVs are mainly located in the Bay of Bengal and the south central IO (around 25°S, 90°E), forming a meridional dipole-like pattern. The fast error growth of the CSVs has important impacts on the prediction of the convection related to the MJO. Initial perturbations with the SV pattern cause the deep convection to damp more quickly in the east Pacific Ocean. Moreover, sensitivity studies show that different initial fields do not noticeably affect the CSVs, whereas the CSVs are more sensitive to the choice of perturbation domain. The rapid growth of the CSVs is found to be related to the western Bay of Bengal, where the wind stress is first perturbed by the CSV initial error. These results contribute to the establishment of an ensemble prediction system, as well as an optimal observation network. In addition, the analysis of the error growth can provide insight into the relationship between SST and the intraseasonal convection related to the MJO.
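
    The object being estimated here is the leading singular vector of the (linearized) model propagator: the initial perturbation with the largest growth over the optimization interval. The CSV approach estimates it from ensembles, but the underlying linear-algebra picture is a plain SVD, illustrated below with a random stand-in matrix.

        import numpy as np

        rng = np.random.default_rng(7)
        M = rng.normal(size=(50, 50))            # stand-in for a linearized propagator
        U, s, Vt = np.linalg.svd(M)
        optimal_error = Vt[0]                    # leading right singular vector
        growth = np.linalg.norm(M @ optimal_error)
        print(np.isclose(growth, s[0]))          # growth equals the leading singular value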

  16. Willingness of nurses to report medication administration errors in southern Taiwan: a cross-sectional survey.

    PubMed

    Lin, Yu-Hua; Ma, Su-mei

    2009-01-01

    Underreporting of medication administration errors (MAEs) is a threat to the quality of nursing care. The reasons for MAEs are complex and vary by health professional and institution. The purpose of this study was to explore the prevalence of MAEs and the willingness of nurses to report them. A cross-sectional study was conducted involving a survey of 14 medical-surgical hospitals in southern Taiwan. Nurses voluntarily participated in this study. A structured questionnaire was completed by 605 participants. Data were collected from February 1, 2005 to March 15, 2005 using the following instruments: the MAEs Unwillingness to Report Scale, the Medication Errors Etiology Questionnaire, and the Personal Features Questionnaire. One additional question was used to identify the willingness of nurses to report medication errors: "When medication errors occur, should they be reported to the department?" This question helped to identify the willingness, or lack thereof, to report incident errors. The results indicated that 66.9% of the nurses reported experiencing MAEs and that 87.7% were willing to report MAEs if there were no consequences for reporting. The nurses' willingness to report MAEs differed by job position, nursing grade, type of hospital, and hospital funding. The final logistic regression model demonstrated hospital funding to be the only statistically significant factor. The odds of a willingness to report MAEs increased 2.66-fold in private hospitals (p = 0.032, CI = 1.09 to 6.49) and 3.28-fold in nonprofit hospitals (p = 0.00, CI = 1.73 to 6.21) when compared to public hospitals. This study demonstrates that reporting of MAEs should be anonymous and without negative consequences in order to monitor and guide improvements in hospital medication systems.

  17. Decadal-scale sensitivity of Northeast Greenland ice flow to errors in surface mass balance using ISSM

    NASA Astrophysics Data System (ADS)

    Schlegel, N.-J.; Larour, E.; Seroussi, H.; Morlighem, M.; Box, J. E.

    2013-06-01

    The behavior of the Greenland Ice Sheet, which is considered a major contributor to sea level changes, is best understood on century and longer time scales. However, on decadal time scales, its response is less predictable due to the difficulty of modeling surface climate, as well as incomplete understanding of the dynamic processes responsible for ice flow. Therefore, it is imperative to understand how modeling advancements, such as increased spatial resolution or more comprehensive ice flow equations, might improve projections of ice sheet response to climatic trends. Here we examine how a finely resolved climate forcing influences a high-resolution ice stream model that considers longitudinal stresses. We simulate ice flow using a two-dimensional Shelfy-Stream Approximation implemented within the Ice Sheet System Model (ISSM) and use uncertainty quantification tools embedded within the model to calculate the sensitivity of ice flow within the Northeast Greenland Ice Stream to errors in surface mass balance (SMB) forcing. Our results suggest that the model tends to smooth ice velocities even when forced with extreme errors in SMB. Indeed, errors propagate linearly through the model, resulting in discharge uncertainty of 16% or 1.9 Gt/yr. We find that mass flux is most sensitive to local errors but is also affected by errors hundreds of kilometers away; thus, an accurate SMB map of the entire basin is critical for realistic simulation. Furthermore, sensitivity analyses indicate that SMB forcing needs to be provided at a resolution of at least 40 km.
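
    The linear error propagation described above can be mimicked with a toy Monte Carlo: perturb an SMB field with random errors and observe the spread of a linear flux diagnostic. This is entirely illustrative, with made-up fields and weights; ISSM's embedded uncertainty quantification tools do this with the actual ice-flow model.

        import numpy as np

        rng = np.random.default_rng(8)
        n_cells, n_samples = 1000, 500
        smb = rng.normal(loc=0.3, scale=0.1, size=n_cells)       # fake SMB field (m/yr i.e.)
        weights = rng.uniform(size=n_cells)                      # fake flux-response weights

        perturbed = smb + rng.normal(scale=0.05, size=(n_samples, n_cells))
        discharge = perturbed @ weights                          # linear model: errors pass linearly
        print(discharge.std() / abs(discharge.mean()))           # relative discharge uncertainty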

  18. Bootstrap Standard Error Estimates in Dynamic Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Browne, Michael W.

    2010-01-01

    Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…
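
    The bootstrap sidesteps those algebraic difficulties by resampling and refitting. The sketch below shows the generic mechanics for an i.i.d. statistic; note that the paper's setting is time series, where block or model-based resampling is required rather than this plain version.

        import numpy as np

        def bootstrap_se(data, estimator, n_boot=2000, rng=np.random.default_rng(9)):
            """Nonparametric bootstrap standard error of estimator(data)."""
            n = len(data)
            stats = [estimator(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
            return np.std(stats, ddof=1)

        x = np.random.default_rng(10).normal(size=100)
        print(bootstrap_se(x, np.mean))   # close to x.std(ddof=1) / sqrt(100)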

  19. Scaling depth-induced wave-breaking in two-dimensional spectral wave models

    NASA Astrophysics Data System (ADS)

    Salmon, J. E.; Holthuijsen, L. H.; Zijlema, M.; van Vledder, G. Ph.; Pietrzak, J. D.

    2015-03-01

    Wave breaking in shallow water is still poorly understood and needs to be better parameterized in 2D spectral wave models. Significant wave heights over horizontal bathymetries are typically under-predicted in locally generated wave conditions and over-predicted in non-locally generated conditions. A joint scaling dependent on both local bottom slope and normalized wave number is presented and is shown to resolve these issues. Compared to the 12 wave breaking parameterizations considered in this study, this joint scaling demonstrates significant improvements, up to ∼50% error reduction, over 1D horizontal bathymetries for both locally and non-locally generated waves. In order to account for the inherent differences between uni-directional (1D) and directionally spread (2D) wave conditions, an extension of the wave breaking dissipation models is presented. By including the effects of wave directionality, rms-errors for the significant wave height are reduced for the best performing parameterizations in conditions with strong directional spreading. With this extension, our joint scaling improves modeling skill for significant wave heights over a verification data set of 11 different 1D laboratory bathymetries, 3 shallow lakes and 4 coastal sites. The corresponding averaged normalized rms-error for significant wave height in the 2D cases varied between 8% and 27%. In comparison, using the default setting with a constant scaling, as used in most presently operating 2D spectral wave models, gave equivalent errors between 15% and 38%.
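
    For context, depth-induced breaking parameterizations of the Battjes-Janssen type cap wave heights at a fraction of the local depth d through a breaker index γ. Most operational defaults take γ constant (commonly γ ≈ 0.73), whereas the joint scaling above lets it vary; a schematic form, with β the local bottom slope and k̃ the normalized wave number, is

        \[
        H_{\max} = \gamma\, d, \qquad
        \gamma = \gamma(\beta, \tilde{k}) \;\;\text{(joint scaling)}
        \quad \text{vs.} \quad
        \gamma \approx 0.73 \;\;\text{(constant default)}
        \]

    The specific functional form of the joint scaling is given in the paper itself, not in this abstract.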

  20. Associations between task, training and social environmental factors and error types involved in rail incidents and accidents.

    PubMed

    Read, Gemma J M; Lenné, Michael G; Moss, Simon A

    2012-09-01

    Rail accidents can be understood in terms of the systemic and individual contributions to their causation. The current study was undertaken to determine whether errors and violations are more often associated with different local and organisational factors that contribute to rail accidents. The Contributing Factors Framework (CFF), a tool developed for the collection and codification of data regarding rail accidents and incidents, was applied to a sample of investigation reports, and a more detailed categorisation of errors was undertaken. Ninety-six investigation reports into Australian accidents and incidents occurring between 1999 and 2008 were analysed. Each report was coded independently by two experienced coders. Task demand factors were significantly more often associated with skill-based errors, knowledge and training deficiencies were significantly more often associated with mistakes, and violations were significantly linked to social environmental factors. Copyright © 2012 Elsevier Ltd. All rights reserved.
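
    Associations between coded contributing factors and error types are contingency-table questions; a hypothetical example of the kind of test involved (made-up counts, not the study's data):

        from scipy.stats import chi2_contingency

        #                  skill-based  mistake  violation
        table = [[30, 10, 5],    # task demand factors
                 [8, 25, 6],     # knowledge/training deficiencies
                 [5, 7, 20]]     # social environmental factors
        chi2, p, dof, expected = chi2_contingency(table)
        print(chi2, p, dof)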
