Sample records for coefficient criterion applied

  1. The Mohr-Coulomb criterion for intact rock strength and friction - a re-evaluation and consideration of failure under polyaxial stresses

    NASA Astrophysics Data System (ADS)

    Hackston, A.; Rutter, E.

    2015-12-01

    Darley Dale and Pennant sandstones were tested under conditions of both axisymmetric shortening and extension normal to bedding. These are the two extremes of loading under polyaxial stress conditions. Failure under generalized stress conditions can be predicted from the Mohr-Coulomb failure criterion under axisymmetric compression conditions provided the best form of polyaxial failure criterion is known. The sandstone data are best reconciled using the Mogi (1967) empirical criterion. Fault plane orientations produced vary greatly with respect to the maximum compression direction in the two loading configurations. The normals to the Mohr-Coulomb failure envelopes do not predict the orientations of the fault planes eventually produced. Frictional sliding on variously inclined sawcuts and failure surfaces produced in intact rock samples was also investigated. Friction coefficient is not affected by fault plane orientation in a given loading configuration, but friction coefficients in extension were systematically lower than in compression for both rock types and could be reconciled by a variant on the Mogi (1967) failure criterion. Friction data for these and other porous sandstones accord well with the Byerlee (1978) generalization about rock friction being largely independent of rock type. For engineering and geodynamic modelling purposes, the stress-state-dependent friction coefficient should be used for sandstones, but it is not known to what extent this might apply to other rock types.
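
    The Mohr-Coulomb relation at the heart of this re-evaluation can be sketched in a few lines. The function names and numeric values below are illustrative, not the authors' code: the criterion gives the shear stress at failure as tau = c + mu * sigma_n, and predicts a fault plane at 45 - phi/2 degrees to the maximum compression direction, with phi = arctan(mu).

```python
import math

def mohr_coulomb_shear_strength(sigma_n, cohesion, mu):
    """Shear stress at failure for normal stress sigma_n: tau = c + mu * sigma_n."""
    return cohesion + mu * sigma_n

def predicted_fault_angle_deg(mu):
    """Angle between the fault plane and the maximum compression direction
    predicted by Mohr-Coulomb: theta = 45 - phi/2, with phi = atan(mu)."""
    phi = math.degrees(math.atan(mu))
    return 45.0 - phi / 2.0
```

    As the abstract notes, the fault orientations actually observed deviate from this prediction, which is part of the paper's motivation.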

  2. Convergence behavior of delayed discrete cellular neural network without periodic coefficients.

    PubMed

    Wang, Jinling; Jiang, Haijun; Hu, Cheng; Ma, Tianlong

    2014-05-01

    In this paper, we study the convergence behavior of delayed discrete cellular neural networks without periodic coefficients. By applying mathematical analysis techniques and the properties of inequalities, some sufficient conditions are derived to ensure that all solutions of such networks converge to a periodic function. Finally, some examples showing the effectiveness of the provided criterion are given. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. The Mohr-Coulomb criterion for intact rock strength and friction - a re-evaluation and consideration of failure under polyaxial stresses

    NASA Astrophysics Data System (ADS)

    Hackston, Abigail; Rutter, Ernest

    2016-04-01

    Darley Dale and Pennant sandstones were tested under conditions of both axisymmetric shortening and extension normal to bedding. These are the two extremes of loading under polyaxial stress conditions. Failure under generalized stress conditions can be predicted from the Mohr-Coulomb failure criterion under axisymmetric shortening conditions, provided the best form of polyaxial failure criterion is known. The sandstone data are best reconciled using the Mogi (1967) empirical criterion. Fault plane orientations produced vary greatly with respect to the maximum compressive stress direction in the two loading configurations. The normals to the Mohr-Coulomb failure envelopes do not predict the orientations of the fault planes eventually produced. Frictional sliding on variously inclined saw cuts and failure surfaces produced in intact rock samples was also investigated. Friction coefficient is not affected by fault plane orientation in a given loading configuration, but friction coefficients in extension were systematically lower than in compression for both rock types. Friction data for these and other porous sandstones accord well with the Byerlee (1978) generalization about rock friction being largely independent of rock type. For engineering and geodynamic modelling purposes, the stress-state-dependent friction coefficient should be used for sandstones, but it is not known to what extent this might apply to other rock types.

  4. [Validation of the Polish version of The Authentic Leadership Questionnaire for the purpose of evaluation of nursing management staff in national hospital wards].

    PubMed

    Sierpińska, Lidia

    2013-09-01

    The Authentic Leadership Questionnaire (ALQ) is a standardized research instrument for the evaluation of the individual elements of a leader's conduct that contribute to authentic leadership. Applying this questionnaire under Polish conditions required carrying out a validation process. The aim of the study was to evaluate the validity and reliability of the Polish version of the American research instrument for the evaluation of the authenticity of leadership of nursing management in Polish hospitals. The study covered 286 nurses (143 head nurses and 143 of their subordinates) employed in 45 hospitals in Poland. Theoretical validity of the instrument was evaluated using Fisher's transformation (r-Pearson correlation coefficient), while the criterion validity of the ALQ was evaluated using the rho-Spearman correlation coefficient and the BOHIPSZO questionnaire. The reliability of the ALQ was assessed by means of the Cronbach's alpha coefficient. The ALQ applied for the evaluation of the authenticity of leadership of nursing management in Polish hospital wards shows acceptable theoretical and criterion validity and reliability (Cronbach's alpha coefficient 0.80). The Polish version of the ALQ is valid and reliable, and may be applied in studies evaluating the authenticity of leadership of nursing management in Polish hospital wards.

  5. Procrustes Matching by Congruence Coefficients

    ERIC Educational Resources Information Center

    Korth, Bruce; Tucker, L. R.

    1976-01-01

    Matching by Procrustes methods involves the transformation of one matrix to match with another. A special least squares criterion, the congruence coefficient, has advantages as a criterion for some factor analytic interpretations. A Procrustes method maximizing the congruence coefficient is given. (Author/JKS)
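
    Tucker's congruence coefficient used as the matching criterion here is the uncentred cosine between two loading vectors: phi = sum(x*y) / sqrt(sum(x^2) * sum(y^2)). A minimal sketch (hypothetical helper name):

```python
import math

def congruence_coefficient(x, y):
    """Tucker's congruence coefficient between two factor loading vectors:
    an uncentred cosine similarity, unlike the Pearson correlation it does
    not subtract the means."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den
```

    Proportional vectors give a coefficient of exactly 1, which is why the measure is attractive for comparing factor patterns that differ only in scale.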

  6. Application of a planetary wave breaking parameterization to stratospheric circulation statistics

    NASA Technical Reports Server (NTRS)

    Randel, William J.; Garcia, Rolando R.

    1994-01-01

    The planetary wave parameterization scheme developed recently by Garcia is applied to stratospheric circulation statistics derived from 12 years of National Meteorological Center operational stratospheric analyses. From the data a planetary wave breaking criterion (based on the ratio of the eddy to zonal mean meridional potential vorticity (PV) gradients), a wave damping rate, and a meridional diffusion coefficient are calculated. The equatorward flank of the polar night jet during winter is identified as a wave breaking region from the observed PV gradients; the region moves poleward with season, covering all high latitudes in spring. Derived damping rates maximize in the subtropical upper stratosphere (the 'surf zone'), with damping time scales of 3-4 days. Maximum diffusion coefficients follow the spatial patterns of the wave breaking criterion, with magnitudes comparable to prior published estimates. Overall, the observed results agree well with the parameterized calculations of Garcia.

  7. Development of a Microsoft Excel tool for applying a factor retention criterion of a dimension coefficient to a survey on patient safety culture.

    PubMed

    Chien, Tsair-Wei; Shao, Yang; Jen, Dong-Hui

    2017-10-27

    Many quality-of-life studies have been conducted in healthcare settings, but few have used Microsoft Excel to incorporate Cronbach's α with a dimension coefficient (DC) for describing a scale's characteristics. To present a computer module that can report a scale's validity, we manipulated datasets to verify a DC that can be used as a factor retention criterion, demonstrating its usefulness in a patient safety culture survey (PSC). Microsoft Excel Visual Basic for Applications was used to design a computer module for simulating 2000 datasets fitting the Rasch rating scale model. The datasets consisted of (i) five dual correlation coefficients (correl. = 0.3, 0.5, 0.7, 0.9, and 1.0) on two latent traits (i.e., true scores) following a normal distribution and responses to their respective 1/3 and 2/3 items in length; (ii) 20 scenarios of item lengths from 5 to 100; and (iii) 20 sample sizes from 50 to 1000. Each item containing 5-point polytomous responses was uniformly distributed in difficulty across a ± 2 logit range. Three methods (i.e., dimension interrelation ≥0.7, Horn's parallel analysis (PA) 95% confidence interval, and individual random eigenvalues) were used for determining whether one factor should be retained. DC refers to the binary classification (1 as one factor and 0 as many factors) used for examining accuracy with the indicators sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The scale's reliability and DC were simultaneously calculated for each simulated dataset. Real PSC data were used to demonstrate DC-based reports of unit-based construct validity using the author-made MS Excel module. The DC method presented accurate sensitivity (=0.96), specificity (=0.92) with a DC criterion (≥0.70), and AUC (=0.98), higher than those of the two PA methods. PA combined with DC yielded good sensitivity (=0.96), specificity (=1.0) with a DC criterion (≥0.70), and AUC (=0.99). Advances in computer technology may enable healthcare users familiar with MS Excel to apply DC as a factor retention criterion for determining a scale's unidimensionality and evaluating a scale's quality.

  8. Blind equalization with criterion with memory nonlinearity

    NASA Astrophysics Data System (ADS)

    Chen, Yuanjie; Nikias, Chrysostomos L.; Proakis, John G.

    1992-06-01

    Blind equalization methods usually combat the linear distortion caused by a nonideal channel via a transversal filter, without resorting to the a priori known training sequences. We introduce a new criterion with memory nonlinearity (CRIMNO) for the blind equalization problem. The basic idea of this criterion is to augment the Godard [or constant modulus algorithm (CMA)] cost function with additional terms that penalize the autocorrelations of the equalizer outputs. Several variations of the CRIMNO algorithms are derived, with the variations dependent on (1) whether the empirical averages or the single point estimates are used to approximate the expectations, (2) whether the recent or the delayed equalizer coefficients are used, and (3) whether the weights applied to the autocorrelation terms are fixed or are allowed to adapt. Simulation experiments show that the CRIMNO algorithm, and especially its adaptive weight version, exhibits faster convergence speed than the Godard (or CMA) algorithm. Extensions of the CRIMNO criterion to accommodate the case of correlated inputs to the channel are also presented.
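
    A simplified, real-valued sketch of a CRIMNO-style update may help fix the idea: the Godard/CMA cost (y^2 - R2)^2 is augmented with weighted penalties on single-point estimates of the output autocorrelations y_n * y_{n-k}. This is an illustrative reconstruction under stated assumptions, not the authors' algorithm (which handles complex signals and several estimator variants); all names and step sizes are hypothetical.

```python
def crimno_update(w, x_window, y_hist, mu, weights, R2=1.0):
    """One stochastic-gradient step of a simplified real-valued CRIMNO-style
    cost: the CMA term (y^2 - R2)^2 plus weighted penalties on single-point
    estimates of the output autocorrelations y_n * y_{n-k}, k = 1..len(weights).

    w        : current equalizer taps
    x_window : current input samples aligned with the taps
    y_hist   : past equalizer outputs, y_hist[-k] = y_{n-k}
    """
    y = sum(wi * xi for wi, xi in zip(w, x_window))   # equalizer output
    # CMA gradient term: d/dw (y^2 - R2)^2 = 4*(y^2 - R2)*y*x
    g = [4.0 * (y * y - R2) * y * xi for xi in x_window]
    # autocorrelation penalties: d/dw (y * y_{n-k})^2 = 2*(y*y_{n-k})*y_{n-k}*x
    for k, wk in enumerate(weights, start=1):
        ylag = y_hist[-k]
        g = [gi + wk * 2.0 * (y * ylag) * ylag * xi for gi, xi in zip(g, x_window)]
    w_new = [wi - mu * gi for wi, gi in zip(w, g)]
    return w_new, y
```

    Setting all penalty weights to zero recovers a plain CMA step; the adaptive-weight variant described in the abstract would additionally adjust `weights` online.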

  9. An Application of Practical Strategies in Assessing the Criterion-Related Validity of Credentialing Examinations.

    ERIC Educational Resources Information Center

    Fidler, James R.

    1993-01-01

    Criterion-related validities of 2 laboratory practitioner certification examinations for medical technologists (MTs) and medical laboratory technicians (MLTs) were assessed for 81 MT and 70 MLT examinees. Validity coefficients are presented for both measures. Overall, summative ratings yielded stronger validity coefficients than ratings based on…

  10. The Reliability of Criterion-Referenced Measures.

    ERIC Educational Resources Information Center

    Livingston, Samuel A.

    The assumptions of the classical test-theory model are used to develop a theory of reliability for criterion-referenced measures which parallels that for norm-referenced measures. It is shown that the Spearman-Brown formula holds for criterion-referenced measures and that the criterion-referenced reliability coefficient can be used to correct…
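
    The Spearman-Brown formula mentioned above predicts the reliability of a test lengthened by a factor n from the reliability rho of the original test; a one-line sketch (hypothetical function name):

```python
def spearman_brown(rho, n):
    """Spearman-Brown prophecy: reliability of a test lengthened by factor n,
    given reliability rho of the original test: n*rho / (1 + (n-1)*rho)."""
    return n * rho / (1.0 + (n - 1.0) * rho)
```

    For example, doubling a test with reliability 0.5 yields a predicted reliability of 2/3.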

  11. PIC-MCC analysis of electron multiplication in a cold-cathode Penning ion generator and its application to identify ignition voltage

    NASA Astrophysics Data System (ADS)

    Noori, H.; Ranjbar, A. H.; Mahjour-Shafiei, M.

    2017-11-01

    A cold-cathode Penning ion generator (PIG) has been developed in our laboratory to study the interaction of charged particles with matter. The ignition voltage was measured in the presence of an axial magnetic field in the range of 460-580 G. The measurements were performed with stainless-steel cathodes in argon gas at a pressure of 4 × 10^-2 mbar. A PIC-MCC (particle-in-cell, Monte Carlo collision) technique has been used to calculate the electron multiplication coefficient M for various strengths of the axial magnetic field and applied voltage. An approach based on the coefficient M and the experimental values of the secondary electron emission coefficient γ was proposed to determine the ignition voltages theoretically. Applying the values of the secondary coefficient γ leads to an average value of γM(V, B) = 1.05 ± 0.03 at the ignition of the PIG, which satisfies the proposed ignition criterion. Thus, the ion-induced secondary electrons emitted from the cathode make the dominant contribution to the self-sustaining of the discharge process in a PIG.

  12. Rejection of the maternal electrocardiogram in the electrohysterogram signal.

    PubMed

    Leman, H; Marque, C

    2000-08-01

    The electrohysterogram (EHG) signal is mainly corrupted by the mother's electrocardiogram (ECG), which remains present despite analog filtering during acquisition. Wavelets are a powerful denoising tool and have already proved their efficiency on the EHG. In this paper, we propose a new method that employs the redundant wavelet packet transform. We first study wavelet packet coefficient histograms and propose an algorithm to automatically detect the histogram mode number. Using a new criterion, we compute a best basis adapted to the denoising. After EHG wavelet packet coefficient thresholding in the selected basis, the inverse transform is applied. The ECG seems to be very efficiently removed.

  13. Generalization of von Neumann analysis for a model of two discrete half-spaces: The acoustic case

    USGS Publications Warehouse

    Haney, M.M.

    2007-01-01

    Evaluating the performance of finite-difference algorithms typically uses a technique known as von Neumann analysis. For a given algorithm, application of the technique yields both a dispersion relation valid for the discrete time-space grid and a mathematical condition for stability. In practice, a major shortcoming of conventional von Neumann analysis is that it can be applied only to an idealized numerical model - that of an infinite, homogeneous whole space. Experience has shown that numerical instabilities often arise in finite-difference simulations of wave propagation at interfaces with strong material contrasts. These interface instabilities occur even though the conventional von Neumann stability criterion may be satisfied at each point of the numerical model. To address this issue, I generalize von Neumann analysis for a model of two half-spaces. I perform the analysis for the case of acoustic wave propagation using a standard staggered-grid finite-difference numerical scheme. By deriving expressions for the discrete reflection and transmission coefficients, I study under what conditions the discrete reflection and transmission coefficients become unbounded. I find that instabilities encountered in numerical modeling near interfaces with strong material contrasts are linked to these cases and develop a modified stability criterion that takes into account the resulting instabilities. I test and verify the stability criterion by executing a finite-difference algorithm under conditions predicted to be stable and unstable. © 2007 Society of Exploration Geophysicists.
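
    For orientation, the conventional pointwise von Neumann (CFL-type) check that the paper generalizes can be sketched as follows. The 1/sqrt(ndim) bound is the textbook condition for a standard second-order staggered-grid acoustic scheme (an assumption here, not taken from the abstract), and the paper's point is precisely that satisfying this pointwise criterion does not rule out instabilities at strong-contrast interfaces.

```python
import math

def cfl_stable(c_max, dt, dx, ndim=2):
    """Conventional von Neumann (CFL) stability check for a standard
    second-order staggered-grid acoustic scheme: c*dt/dx <= 1/sqrt(ndim),
    applied pointwise with the local maximum wave speed c_max."""
    return c_max * dt / dx <= 1.0 / math.sqrt(ndim)
```

    A model would pass this check at every grid point and could still exhibit the interface instabilities that the modified criterion of the paper is designed to catch.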

  14. Analysis of Photothermal Characterization of Layered Materials: Design of Optimal Experiments

    NASA Technical Reports Server (NTRS)

    Cole, Kevin D.

    2003-01-01

    In this paper numerical calculations are presented for the steady-periodic temperature in layered materials and functionally-graded materials to simulate photothermal methods for the measurement of thermal properties. No laboratory experiments were performed. The temperature is found from a new Green s function formulation which is particularly well-suited to machine calculation. The simulation method is verified by comparison with literature data for a layered material. The method is applied to a class of two-component functionally-graded materials and results for temperature and sensitivity coefficients are presented. An optimality criterion, based on the sensitivity coefficients, is used for choosing what experimental conditions will be needed for photothermal measurements to determine the spatial distribution of thermal properties. This method for optimal experiment design is completely general and may be applied to any photothermal technique and to any functionally-graded material.

  15. Assessing Local Model Adequacy in Bayesian Hierarchical Models Using the Partitioned Deviance Information Criterion

    PubMed Central

    Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.

    2010-01-01

    Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121

  16. Treatment of boundary conditions in through-diffusion: A case study of (85)Sr(2+) diffusion in compacted illite.

    PubMed

    Glaus, M A; Aertsens, M; Maes, N; Van Laer, L; Van Loon, L R

    2015-01-01

    Valuable techniques to measure effective diffusion coefficients in porous media are an indispensable prerequisite for a proper understanding of the migration of chemical-toxic and radioactive micropollutants in the subsurface and geosphere. The present article discusses possible pitfalls and difficulties in the classical through-diffusion technique applied to situations where large diffusive fluxes of cations in compacted clay minerals or clay rocks occur. The results obtained from a benchmark study, in which the diffusion of (85)Sr(2+) tracer in compacted illite has been studied using different experimental techniques, are presented. It is shown that these techniques may yield valuable results provided that an appropriate model is used for numerical simulations. It is further shown that effective diffusion coefficients may be systematically underestimated when the concentration at the downstream boundary is not taken adequately into account in modelling, even for very low concentrations. A criterion is derived for quasi steady-state situations, by which it can be decided whether the simplifying assumption of a zero-concentration at the downstream boundary in through-diffusion is justified or not. The application of the criterion requires, however, knowledge of the effective diffusion coefficient of the clay sample. Such knowledge is often absent or only approximately available during the planning phase of a diffusion experiment. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Criterion-related validity of perceived exertion scales in healthy children: a systematic review and meta-analysis.

    PubMed

    Rodríguez, Iván; Zambrano, Lysien; Manterola, Carlos

    2016-04-01

    Physiological parameters used to measure exercise intensity are oxygen uptake and heart rate. However, perceived exertion (PE) is a scale that has also been frequently applied. The objective of this study is to establish the criterion-related validity of PE scales in children during an incremental exercise test. Seven electronic databases were used. Studies aimed at assessing the criterion-related validity of PE scales in healthy children during an incremental exercise test were included. Correlation coefficients were transformed into z-values and assessed in a meta-analysis by means of a fixed effects model if I² was below 50%, or a random effects model if it was above 50%. Twenty-five articles that studied 1418 children (boys: 49.2%) met the inclusion criteria. Children's average age was 10.5 years old. Exercise modalities included bike, running and stepping exercises. The weighted correlation coefficient was 0.835 (95% confidence interval: 0.762-0.887) and 0.874 (95% confidence interval: 0.794-0.924) for heart rate and oxygen uptake as reference criteria, respectively. The production paradigm and scales that had not been adapted to children showed the lowest measurement performance (p < 0.05). Measuring PE could be valid in healthy children during an incremental exercise test. Child-specific rating scales showed a better performance than those that had not been adapted to this population. Further studies with better methodological quality should be conducted in order to confirm these results. Sociedad Argentina de Pediatría.
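
    The Fisher r-to-z pooling used in such meta-analyses can be sketched directly for the fixed-effect case (function names are illustrative): each correlation is transformed with z = atanh(r), averaged with inverse-variance weights n - 3, and the average is back-transformed with tanh.

```python
import math

def fisher_z(r):
    """Fisher's r-to-z transformation, z = atanh(r)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform z to a correlation, r = tanh(z)."""
    return math.tanh(z)

def pooled_correlation(rs, ns):
    """Fixed-effect pooled correlation: average the z-transformed coefficients
    weighted by n_i - 3 (the inverse variance of z), then back-transform."""
    zs = [fisher_z(r) for r in rs]
    weights = [n - 3 for n in ns]
    zbar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return inverse_fisher_z(zbar)
```

    A random-effects model, used in the review when I² exceeded 50%, would additionally widen the weights by a between-study variance estimate.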

  18. Finite Element Vibration Modeling and Experimental Validation for an Aircraft Engine Casing

    NASA Astrophysics Data System (ADS)

    Rabbitt, Christopher

    This thesis presents a procedure for the development and validation of a theoretical vibration model, applies this procedure to a pair of aircraft engine casings, and compares select parameters from experimental testing of those casings to those from a theoretical model using the Modal Assurance Criterion (MAC) and linear regression coefficients. A novel method of determining the optimal MAC between axisymmetric results is developed and employed. It is concluded that the dynamic finite element models developed as part of this research are fully capable of modelling the modal parameters within the frequency range of interest. Confidence intervals calculated in this research for correlation coefficients provide important information regarding the reliability of predictions, and it is recommended that these intervals be calculated for all comparable coefficients. The procedure outlined for aligning mode shapes around an axis of symmetry proved useful, and the results are promising for the development of further optimization techniques.

  19. On computing Gröbner bases in rings of differential operators

    NASA Astrophysics Data System (ADS)

    Ma, Xiaodong; Sun, Yao; Wang, Dingkang

    2011-05-01

    Insa and Pauer presented a basic theory of Gröbner bases for differential operators with coefficients in a commutative ring in 1998, and proposed a criterion to determine whether a set of differential operators is a Gröbner basis. In this paper, we give a new criterion such that Insa and Pauer's criterion can be recovered as a special case, and one can compute Gröbner bases more efficiently using this new criterion.

  20. Development of a job stressor scale for nurses caring for patients with intractable neurological diseases.

    PubMed

    Ando, Yukako; Kataoka, Tsuyoshi; Okamura, Hitoshi; Tanaka, Katsutoshi; Kobayashi, Toshio

    2013-12-01

    The purpose of this research is to verify the reliability and validity of a job stressor scale for nurses caring for patients with intractable neurological diseases. A mail survey was conducted using a self-report questionnaire. The subjects were 263 nurses and assistant nurses working in wards specializing in intractable neurological diseases. The response rate was 71.9% (valid response rate, 66.2%). With regard to reliability, internal consistency and stability were assessed. Internal consistency was examined via Cronbach's alpha. For stability, the test-retest method was performed and stability was examined via intraclass correlation coefficients. With regard to validity, factor validity, criterion-related validity, and content validity were assessed. Exploratory factor analysis was used for factor validity. For criterion-related validity, an existing scale was used as an external criterion; concurrent validity was examined via Spearman's rank correlation coefficients. As a result of analysis, there were 26 items in the scale created with an eight factor structure. Cronbach's a for the 26 items was 0.90; with the exception of two factors, alpha for all of the individual sub-factors was high at 0.7 or higher. The intraclass correlation coefficient for the 26 items was 0.89 (p < 0.001). With regard to criterion-related validity, concurrent validity was confirmed and the correlation coefficient with an external criterion was 0.73 (p < 0.001). For content validity, subjects who responded that "The questionnaire represents a stressor well or to a degree" accounted for 81% of the total responses. Reliability and validity were confirmed, so the scale created in the current research is a usable scale.
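
    Cronbach's alpha, the internal-consistency measure reported throughout this validation study, can be computed from item score columns as follows (a hypothetical helper using population variances, not the authors' software):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item score columns (one list per item):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```

    Perfectly covarying items give alpha = 1, while items whose covariances vanish give alpha = 0, which is the sense in which the reported 0.90 indicates high internal consistency.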

  1. Failure prediction of thin beryllium sheets used in spacecraft structures

    NASA Technical Reports Server (NTRS)

    Roschke, Paul N.; Mascorro, Edward; Papados, Photios; Serna, Oscar R.

    1991-01-01

    The primary objective of this study is to develop a method for prediction of failure of thin beryllium sheets that undergo complex states of stress. Major components of the research include experimental evaluation of strength parameters for cross-rolled beryllium sheet, application of the Tsai-Wu failure criterion to plate bending problems, development of a high order failure criterion, application of the new criterion to a variety of structures, and incorporation of both failure criteria into a finite element code. A Tsai-Wu failure model for SR-200 sheet material is developed from available tensile data, experiments carried out by NASA on two circular plates, and compression and off-axis experiments performed in this study. The failure surface obtained from the resulting criterion forms an ellipsoid. By supplementing the experimental data used in the two-dimensional criterion and modifying previously suggested failure criteria, a multi-dimensional failure surface is proposed for thin beryllium structures. The new criterion for orthotropic material is represented by a failure surface in six-dimensional stress space. In order to determine coefficients of the governing equation, a number of uniaxial, biaxial, and triaxial experiments are required. Details of these experiments and a complementary ultrasonic investigation are described in detail. Finally, validity of the criterion and newly determined mechanical properties is established through experiments on structures composed of SR-200 sheet material. These experiments include a plate-plug arrangement under a complex state of stress and a series of plates with an out-of-plane central point load. Both criteria have been incorporated into a general purpose finite element analysis code. The numerical simulation incrementally applies loads to a structural component that is being designed and checks each nodal point in the model for exceedance of a failure criterion. If stresses at all locations do not exceed the failure criterion, the load is increased and the process is repeated. Failure results for the plate-plug and clamped plate tests are accurate to within 2 percent.

  2. Arithmetical functions and irrationality of Lambert series

    NASA Astrophysics Data System (ADS)

    Duverney, Daniel

    2011-09-01

    We use a method of Erdős in order to prove the linear independence over Q of the numbers 1, ∑_{n=1}^{+∞} 1/(q^{n²}-1), and ∑_{n=1}^{+∞} n/(q^{n²}-1) for every q ∈ Z with |q| ≥ 2. The main idea consists in considering the two above series as Lambert series. This allows us to expand them as power series in 1/q. The Taylor coefficients of these expansions are arithmetical functions, whose properties allow us to apply an elementary irrationality criterion, which yields the result.

  3. Comparison of Linear and Non-linear Regression Analysis to Determine Pulmonary Pressure in Hyperthyroidism.

    PubMed

    Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan

    2017-01-01

    This study aimed at assessing the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and at finding a simple model showing the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly using an echocardiographic method and compared with 35 euthyroid (E-group) and 25 healthy people (C-group). In order to identify the factors causing pulmonary hypertension, the statistical method of comparing arithmetic means was used. The functional relation between the two random variables (PAPs and each of the factors determining it within our research study) can be expressed by a linear or non-linear function. By applying the linear regression method described by a first-degree equation, the line of regression (linear model) was determined; by applying the non-linear regression method described by a second-degree equation, a parabola-type curve of regression (non-linear or polynomial model) was determined. We compared and validated these two models by calculating the determination coefficient (criterion 1), comparing residuals (criterion 2), applying the AIC criterion (criterion 3) and using the F-test (criterion 4). From the H-group, 47% had pulmonary hypertension that was completely reversible on obtaining euthyroidism. The factors causing pulmonary hypertension were identified: previously known factors, namely the level of free thyroxine, pulmonary vascular resistance, and cardiac output; and new factors identified in this study, namely pretreatment period, age, and systolic blood pressure. According to the four criteria and to clinical judgment, we consider the polynomial model (graphically a parabola) better than the linear one. The better model showing the functional relation between pulmonary hypertension in hyperthyroidism and the factors identified in this study is thus given by a polynomial equation of second degree whose graphical representation is a parabola.

  4. [Establishing and applying of autoregressive integrated moving average model to predict the incidence rate of dysentery in Shanghai].

    PubMed

    Li, Jian; Wu, Huan-Yu; Li, Yan-Ting; Jin, Hui-Ming; Gu, Bao-Ke; Yuan, Zheng-An

    2010-01-01

    To explore the feasibility of establishing and applying an autoregressive integrated moving average (ARIMA) model to predict the incidence rate of dysentery in Shanghai, so as to provide a theoretical basis for the prevention and control of dysentery. An ARIMA model was established based on the monthly incidence rate of dysentery in Shanghai from 1990 to 2007. The parameters of the model were estimated through the unconditional least squares method, the structure was determined according to the criteria of residual non-correlation and parsimony, and the model goodness-of-fit was assessed through the Akaike information criterion (AIC) and the Schwarz Bayesian criterion (SBC). The constructed optimal model was applied to predict the incidence rate of dysentery in Shanghai in 2008, and the validity of the model was evaluated by comparing the predicted incidence rate with the actual one. The incidence rate of dysentery in 2010 was then predicted by the ARIMA model based on the incidence rate from January 1990 to June 2009. The model ARIMA (1, 1, 1) (0, 1, 2)_12 fitted the incidence rate well, with the autoregressive coefficient (AR1 = 0.443), moving average coefficient (MA1 = 0.806), and seasonal moving average coefficients (SMA1 = 0.543, SMA2 = 0.321) all being statistically significant (P < 0.01). AIC and SBC were 2.878 and 16.131 respectively, and the prediction error was white noise. The mathematical form of the model was (1 - 0.443B)(1 - B)(1 - B^12)Z_t = (1 - 0.806B)(1 - 0.543B^12)(1 - 0.321B^24)μ_t. The predicted incidence rate in 2008 was consistent with the actual one, with a relative error of 6.78%. The predicted incidence rate of dysentery in 2010, based on the incidence rate from January 1990 to June 2009, would be 9.390 per 100 thousand. The ARIMA model can be used to fit the changes in the incidence rate of dysentery and to forecast the future incidence rate in Shanghai. It is a prediction model of high precision for short-term forecasting.
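
    The backshift polynomials of the fitted ARIMA (1, 1, 1) (0, 1, 2)_12 model can be expanded by simple polynomial convolution to obtain the coefficients of the autoregressive/differencing side of the equation. The sketch below uses the AR1 = 0.443 estimate quoted in the abstract; the code itself is illustrative, not the authors' software.

```python
def poly_mul(a, b):
    """Multiply two backshift polynomials given as coefficient lists
    (index = power of B), by discrete convolution."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (1 - 0.443B) * (1 - B) * (1 - B^12): the AR/differencing side of the model
ar_side = poly_mul(poly_mul([1, -0.443], [1, -1]), [1] + [0.0] * 11 + [-1])
```

    The expanded coefficients give the recursion actually used to compute one-step forecasts: Z_t is expressed in terms of lags 1, 2, 12, 13, and 14 plus the moving average terms.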

  5. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    PubMed

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

Recently, image fusion has taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely applied imaging modalities for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of vessel dispersion generated by the injected contrast material. The proposed fusion scheme contains different fusion methods for high- and low-frequency contents, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy. The content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In the fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules for the high-frequency coefficients. For the low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. The proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common and basic fusion algorithms.

  6. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  7. Use of scan overlap redundancy to enhance multispectral aircraft scanner data

    NASA Technical Reports Server (NTRS)

    Lindenlaub, J. C.; Keat, J.

    1973-01-01

    Two criteria were suggested for optimizing the resolution error versus signal-to-noise-ratio tradeoff. The first criterion uses equal weighting coefficients and chooses n, the number of lines averaged, so as to make the average resolution error equal to the noise error. The second criterion adjusts both the number and relative sizes of the weighting coefficients so as to minimize the total error (resolution error plus noise error). The optimum set of coefficients depends upon the geometry of the resolution element, the number of redundant scan lines, the scan line increment, and the original signal-to-noise ratio of the channel. Programs were developed to find the optimum number and relative weights of the averaging coefficients. A working definition of signal-to-noise ratio was given and used to try line averaging on a typical set of data. Line averaging was evaluated only with respect to its effect on classification accuracy.

  8. Multi-resolution analysis for ear recognition using wavelet features

    NASA Astrophysics Data System (ADS)

    Shoaib, M.; Basit, A.; Faye, I.

    2016-11-01

Security is very important, and in order to avoid any physical contact, identification of humans while they are moving is necessary. Ear biometrics is one of the methods by which a person can be identified using surveillance cameras. Various techniques have been proposed to improve ear-based recognition systems. In this work, a feature extraction method for human ear recognition based on wavelet transforms is proposed. The proposed features are the approximation coefficients and specific level-two details obtained after applying various types of wavelet transforms. Different wavelet transforms are applied to find the most suitable wavelet. Minimum Euclidean distance is used as the matching criterion. Results achieved by the proposed method are promising, and it can be used in a real-time ear recognition system.
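The minimum-Euclidean-distance matching criterion mentioned above reduces to a nearest-neighbor search over enrolled feature vectors. A minimal sketch with hypothetical three-dimensional features (real wavelet approximation-coefficient vectors would be much longer):

```python
import math

def euclidean(a, b):
    # Euclidean distance between two equal-length feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(probe, gallery):
    # gallery: {subject_id: feature_vector}; returns the closest enrolled subject
    return min(gallery, key=lambda sid: euclidean(probe, gallery[sid]))

# hypothetical enrolled feature vectors
gallery = {"subj1": [0.9, 0.1, 0.4], "subj2": [0.2, 0.8, 0.5]}
probe = [0.85, 0.15, 0.35]
```

In a real system the gallery would hold one (or several) wavelet feature vectors per enrolled ear, and a distance threshold would reject unknown probes.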

  9. Methodology of functionality selection for water management software and examples of its application.

    PubMed

    Vasilyev, K N

    2013-01-01

When developing new software products and adapting existing software, project leaders have to decide which functionalities to keep, adapt or develop. They have to consider that the cost of making errors during the specification phase is extremely high. In this paper a formalised approach is proposed that considers the main criteria for selecting new software functions. The application of this approach minimises the chances of making errors when selecting the functions to apply. Based on work on software development and support projects in the area of water resources and flood damage evaluation in economic terms at CH2M HILL (the developers of the flood modelling package ISIS), the author has defined seven criteria for selecting functions to be included in a software product. The approach is based on evaluating the relative significance of the functions to be included in the software product. Evaluation is achieved by considering each criterion and its weighting coefficient in turn and applying the method of normalisation. This paper includes a description of this new approach and examples of its application in the development of new software products in the area of water resources management.
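The weighted-criteria selection described here can be sketched as a normalized weighted-sum ranking. The function names, scores and weights below are hypothetical, purely to illustrate the normalisation step:

```python
def rank_functions(scores, weights):
    # scores: {function_name: [score per criterion]}; weights: one weight per criterion.
    # Weights are normalised to sum to 1, then each function gets a weighted score.
    total_w = sum(weights)
    norm_w = [w / total_w for w in weights]
    weighted = {f: sum(s * w for s, w in zip(vals, norm_w)) for f, vals in scores.items()}
    return sorted(weighted, key=weighted.get, reverse=True)

# hypothetical candidate functions scored against three criteria (0..1)
scores = {
    "flood_report": [0.9, 0.4, 0.7],
    "batch_export": [0.5, 0.8, 0.6],
    "3d_view":      [0.3, 0.2, 0.9],
}
weights = [3, 2, 1]  # relative importance of the three criteria

ranked = rank_functions(scores, weights)
```

The highest-ranked functions would be selected for inclusion, subject to the project's effort budget.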

  10. Evaluation of Weighted Scale Reliability and Criterion Validity: A Latent Variable Modeling Approach

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2007-01-01

    A method is outlined for evaluating the reliability and criterion validity of weighted scales based on sets of unidimensional measures. The approach is developed within the framework of latent variable modeling methodology and is useful for point and interval estimation of these measurement quality coefficients in counseling and education…

  11. A determinant-based criterion for working correlation structure selection in generalized estimating equations.

    PubMed

    Jaman, Ajmery; Latif, Mahbub A H M; Bari, Wasimul; Wahed, Abdus S

    2016-05-20

In generalized estimating equations (GEE), the correlation between the repeated observations on a subject is specified with a working correlation matrix. Correct specification of the working correlation structure ensures efficient estimators of the regression coefficients. Among the criteria used in practice for selecting a working correlation structure, the Rotnitzky-Jewell criterion, the Quasi Information Criterion (QIC) and the Correlation Information Criterion (CIC) are based on the fact that, if the assumed working correlation structure is correct, the model-based (naive) and the sandwich (robust) covariance estimators of the regression coefficient estimators should be close to each other. The sandwich covariance estimator, used in defining the Rotnitzky-Jewell, QIC and CIC criteria, is biased downward and has larger variability than the corresponding model-based covariance estimator. Motivated by this fact, a new criterion based on the bias-corrected sandwich covariance estimator is proposed in this paper for selecting an appropriate working correlation structure in GEE. A comparison of the proposed and competing criteria is shown using simulation studies with correlated binary responses. The results revealed that the proposed criterion generally performs better than the competing criteria. An example of selecting the appropriate working correlation structure is also shown using data from the Madras Schizophrenia Study. Copyright © 2015 John Wiley & Sons, Ltd.
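The idea behind CIC-style criteria — comparing the model-based and sandwich covariance estimators — can be illustrated with a trace statistic on 2×2 matrices. The covariance values below are hypothetical, and this is a sketch of the comparison logic only, not the paper's bias-corrected criterion:

```python
def inv2(m):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cic(model_cov, robust_cov):
    # CIC-style statistic: trace(model_cov^{-1} @ robust_cov).
    # Values near the parameter dimension (here 2) indicate agreement
    # between the two covariance estimators.
    prod = matmul2(inv2(model_cov), robust_cov)
    return prod[0][0] + prod[1][1]

# hypothetical covariance estimates under a candidate working structure
model_exch = [[0.040, 0.002], [0.002, 0.050]]
robust     = [[0.042, 0.003], [0.003, 0.051]]
```

In practice one would compute this statistic under each candidate working correlation structure and pick the structure whose value is closest to the number of regression parameters.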

  12. The Validation of a Case-Based, Cumulative Assessment and Progressions Examination

    PubMed Central

    Coker, Adeola O.; Copeland, Jeffrey T.; Gottlieb, Helmut B.; Horlen, Cheryl; Smith, Helen E.; Urteaga, Elizabeth M.; Ramsinghani, Sushma; Zertuche, Alejandra; Maize, David

    2016-01-01

Objective. To assess the content and criterion validity, as well as the reliability, of an internally developed, case-based, cumulative, high-stakes third-year Annual Student Assessment and Progression Examination (P3 ASAP Exam). Methods. Content validity was assessed through the writing-reviewing process. Criterion validity was assessed by comparing student scores on the P3 ASAP Exam with the nationally validated Pharmacy Curriculum Outcomes Assessment (PCOA). Reliability was assessed with psychometric analysis comparing student performance over four years. Results. The P3 ASAP Exam showed content validity through representation of didactic courses and professional outcomes. A Pearson correlation between similar scores on the P3 ASAP Exam and the PCOA established criterion validity. Consistent student performance since 2012, measured with the Kuder-Richardson coefficient (KR-20), reflected the reliability of the examination. Conclusion. Pharmacy schools can implement internally developed, high-stakes, cumulative progression examinations that are valid and reliable by using a robust writing-reviewing process and psychometric analyses. PMID:26941435
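The KR-20 reliability coefficient reported above has a closed form. A minimal sketch on hypothetical 0/1 item responses (the population variance of total scores is assumed, as in the usual KR-20 formula):

```python
def kr20(item_responses):
    # item_responses: list of examinee rows, each a list of 0/1 item scores
    n = len(item_responses)
    k = len(item_responses[0])
    totals = [sum(row) for row in item_responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_responses) / n  # item difficulty
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

# hypothetical responses: 6 examinees x 4 dichotomous items
responses = [
    [1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 1, 1],
    [0, 0, 0, 0], [1, 1, 1, 1], [0, 1, 0, 0],
]
```

KR-20 is the dichotomous-item special case of Cronbach's alpha; values near 1 indicate high internal consistency.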

  13. Enhancing Security of Double Random Phase Encoding Based on Random S-Box

    NASA Astrophysics Data System (ADS)

    Girija, R.; Singh, Hukum

    2018-06-01

In this paper, we propose a novel asymmetric cryptosystem for double random phase encoding (DRPE) using a random S-Box. Because an S-Box used on its own is not reliable and DRPE does not provide non-linearity, our system unites the effectiveness of the S-Box with an asymmetric DRPE system (through the Fourier transform). The uniqueness of the proposed cryptosystem lies in employing a highly sensitive dynamic S-Box in the DRPE system. The randomness and scalability achieved by the applied technique are an additional feature of the proposed solution. The robustness of the random S-Box is investigated in terms of performance parameters such as non-linearity, the strict avalanche criterion, the bit independence criterion, and linear and differential approximation probabilities. S-Boxes convey non-linearity to cryptosystems, which is a significant parameter and essential for DRPE. The strength of the proposed cryptosystem has been analysed using various parameters such as MSE, PSNR, correlation coefficient analysis, noise analysis and SVD analysis. Experimental results are presented in detail to show that the proposed cryptosystem is highly secure.

  14. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the models' short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Besides the computational challenges, interpreting the results requires a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables.
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.

  15. An experimental study on the noise correlation properties of CBCT projection data

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ouyang, Luo; Ma, Jianhua; Huang, Jing; Chen, Wufan; Wang, Jing

    2014-03-01

In this study, we systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam on-board CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data sets at six different dose levels, from 0.1 mAs to 1.6 mAs per projection, at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. The analyses of the repeated measurements show that noise correlation coefficients are non-zero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second-order neighbors are 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation results in a lower noise level, as compared to the PWLS criterion without considering the noise correlation, at matched resolution.
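The bin-pair noise correlation coefficients described above are ordinary Pearson correlations computed across the repeated exposures. A sketch with hypothetical repeated readings for two neighboring detector bins (the real study used 500 repeats per dose level):

```python
import math

def pearson(x, y):
    # sample Pearson correlation coefficient between two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical repeated readings (one value per exposure) for two neighboring bins
bin_a = [100.1, 99.7, 100.4, 99.9, 100.2, 99.6]
bin_b = [100.0, 99.8, 100.3, 100.0, 100.1, 99.7]
```

Repeating this over all neighboring bin pairs and averaging gives the first- and second-order neighbor coefficients reported in the abstract.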

  16. A robust holographic autofocusing criterion based on edge sparsity: comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront

    NASA Astrophysics Data System (ADS)

    Tamamitsu, Miu; Zhang, Yibo; Wang, Hongda; Wu, Yichen; Ozcan, Aydogan

    2018-02-01

The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, in which the gradient modulus of the refocused complex hologram is calculated and a sparsity metric is applied to it. Here, we compare two different choices of sparsity metric used in SoG, the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistant to background noise. These predictions are confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that the TC- and GI-based versions of SoG (ToG and GoG) offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.
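Both sparsity metrics compared here have simple closed forms: the Tamura coefficient is √(σ/μ) of the (non-negative) values, and the Gini index (in the Hurley-Rickard form) is computed from the sorted absolute values. A sketch on synthetic "sparse" and "uniform" data (hypothetical inputs, not actual holographic gradient moduli):

```python
import math

def tamura(values):
    # Tamura coefficient: sqrt(std / mean) of non-negative values
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return math.sqrt(std / mean)

def gini(values):
    # Gini index (Hurley & Rickard form): 0 = perfectly uniform, -> 1 = maximally sparse
    a = sorted(abs(v) for v in values)
    n = len(a)
    l1 = sum(a)
    return 1 - 2 * sum((a[k] / l1) * ((n - (k + 1) + 0.5) / n) for k in range(n))

sparse = [0.0] * 95 + [5.0] * 5  # few strong entries over a dark background
uniform = [1.0] * 100            # perfectly spread-out signal
```

Applied to the gradient modulus of each candidate refocusing distance, the distance maximizing the chosen metric is taken as the focus plane.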

  17. Development of a fuzzy-stochastic programming with Green Z-score criterion method for planning water resources systems with a trading mechanism.

    PubMed

    Zeng, X T; Huang, G H; Li, Y P; Zhang, J L; Cai, Y P; Liu, Z P; Liu, L R

    2016-12-01

This study developed a fuzzy-stochastic programming with Green Z-score criterion (FSGZ) method for water resources allocation and water quality management with a trading mechanism (WAQT) under uncertainty. FSGZ can handle uncertainties expressed as probability distributions, and it can also quantify objective/subjective fuzziness in the decision-making process. Risk-averse attitudes and a robustness coefficient are incorporated to express the relationship between the expected target and the outcome under various risk preferences of decision makers and degrees of systemic robustness. The developed method is applied to a real-world case of WAQT in the Kaidu-Kongque River Basin in northwest China, where an effective mechanism (e.g., market trading) is required to simultaneously confront severely diminished water availability and degraded water quality. Results for water transaction amounts, water allocation patterns, pollution mitigation schemes, and system benefits under various scenarios are analyzed; they indicate that a trading mechanism is a more sustainable way to manage the water-environment crisis in the study region. Additionally, consideration of anthropogenic factors (e.g., a risk-averse attitude) and systemic factors (e.g., the robustness coefficient) can support the generation of a robust plan with risk control for WAQT when uncertainty is present. These findings can assist local policy and decision makers in gaining insights into water-environment capacity planning that balances the basin's social and economic growth with protection of the region's ecosystems.

  18. Evaluation of Regression Models of Balance Calibration Data Using an Empirical Criterion

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; Volden, Thomas R.

    2012-01-01

    An empirical criterion for assessing the significance of individual terms of regression models of wind tunnel strain gage balance outputs is evaluated. The criterion is based on the percent contribution of a regression model term. It considers a term to be significant if its percent contribution exceeds the empirical threshold of 0.05%. The criterion has the advantage that it can easily be computed using the regression coefficients of the gage outputs and the load capacities of the balance. First, a definition of the empirical criterion is provided. Then, it is compared with an alternate statistical criterion that is widely used in regression analysis. Finally, calibration data sets from a variety of balances are used to illustrate the connection between the empirical and the statistical criterion. A review of these results indicated that the empirical criterion seems to be suitable for a crude assessment of the significance of a regression model term as the boundary between a significant and an insignificant term cannot be defined very well. Therefore, regression model term reduction should only be performed by using the more universally applicable statistical criterion.

  19. [Acoustic conditions in open plan offices - Pilot test results].

    PubMed

    Mikulski, Witold

The main source of noise in open plan offices is conversation. Office work standards in such premises are attained by applying specific acoustic adaptation. This article presents the results of pilot tests and the acoustic evaluation of open space rooms. Acoustic properties of 6 open plan office rooms were the subject of the tests. Evaluation parameters, measurement methods and criterion values were adopted according to the following standards: PN-EN ISO 3382-3:2012, PN-EN ISO 3382-2:2010, PN-B-02151-4:2015-06 and PN-B-02151-3:2015-10. The reverberation time was 0.33-0.55 s (maximum permissible value in offices - 0.6 s; the criterion was met), the sound absorption coefficient in relation to 1 m2 of the room's plan was 0.77-1.58 m2 (minimum permissible value - 1.1 m2; 2 out of 6 rooms met the criterion), the distraction distance was 8.5-14 m (maximum permissible value - 5 m; none of the rooms met the criterion), the A-weighted sound pressure level of speech at a distance of 4 m was 43.8-54.7 dB (maximum permissible value - 48 dB; 2 out of 6 rooms met the criterion), and the spatial decay rate of speech was 1.8-6.3 dB (minimum permissible value - 7 dB; none of the rooms met the criterion). Standard acoustic treatment, comprising a sound-absorbing suspended ceiling, sound-absorbing materials on the walls, carpet flooring and sound-absorbing workplace barriers, is not sufficient. These rooms require specific advanced acoustic solutions. Med Pr 2016;67(5):653-662. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  20. Molecular radiotherapy: the NUKFIT software for calculating the time-integrated activity coefficient.

    PubMed

    Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G

    2013-10-01

    Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for the use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the used data. To estimate the values of the adjustable parameters an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. 
Function fit parameters and their standard error estimated by using SAAM numerical and NUKFIT showed differences of <1%. The differences for the time-integrated activity coefficients were also <1% (standard error between 0.4% and 3%). In general, the application of the software is user-friendly and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool with its underlying methodology can be employed to objectively and reproducibly estimate the time integrated activity coefficient and its standard error for most time activity data in molecular radiotherapy.
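For fitted sums of exponentials, the analytic integration step described above is straightforward: each term A·exp(−λt) integrates to A/λ over [0, ∞). A sketch with hypothetical fit parameters (effective half-lives of 8 h and 67 h are assumed purely for illustration; this is not NUKFIT's full fitting pipeline):

```python
import math

def time_integrated_activity(params):
    # params: list of (A_i, lam_i) for a fitted sum of exponentials
    # sum_i A_i * exp(-lam_i * t); the analytic integral over [0, inf) is sum_i A_i / lam_i
    return sum(A / lam for A, lam in params)

# hypothetical fit: fractional amplitudes with 8 h and 67 h effective half-lives
fit = [(0.6, math.log(2) / 8.0), (0.4, math.log(2) / 67.0)]
coeff = time_integrated_activity(fit)  # time-integrated activity coefficient, in hours
```

The standard error of the coefficient would then follow by Gaussian error propagation through the fitted amplitudes and decay constants, as the abstract describes.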

  1. Influences on Academic Achievement Across High and Low Income Countries: A Re-Analysis of IEA Data.

    ERIC Educational Resources Information Center

    Heyneman, S.; Loxley, W.

    Previous international studies of science achievement put the data through a process of winnowing to decide which variables to keep in the final regressions. Variables were allowed to enter the final regressions if they met a minimum beta coefficient criterion of 0.05 averaged across rich and poor countries alike. The criterion was an average…

  2. Criterion for evaluating the predictive ability of nonlinear regression models without cross-validation.

    PubMed

    Kaneko, Hiromasa; Funatsu, Kimito

    2013-09-23

We propose predictive performance criteria for nonlinear regression models that do not require cross-validation. The proposed criteria are the coefficient of determination and the root-mean-square error computed for the midpoints between k-nearest-neighbor data points. These criteria can be used to evaluate predictive ability after the regression models are updated, whereas cross-validation cannot be performed in such a situation. The proposed method is effective and helpful in handling big data when cross-validation cannot be applied. By analyzing data from numerical simulations and quantitative structure-activity relationships, we confirm that the proposed criteria enable the predictive ability of nonlinear regression models to be appropriately quantified.
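The midpoint-based criterion can be sketched as follows: pair each training sample with its k nearest neighbors, form the feature-space midpoints, and score the model against the averaged responses at those midpoints. The toy one-dimensional data below are hypothetical:

```python
import math

def knn_midpoints(X, y, k=1):
    # For each sample, pair it with its k nearest neighbors and emit the midpoint
    # of the two feature vectors together with the mean of the two responses.
    mids = []
    for i, xi in enumerate(X):
        dists = sorted((math.dist(xi, xj), j) for j, xj in enumerate(X) if j != i)
        for _, j in dists[:k]:
            mid_x = [(a + b) / 2 for a, b in zip(xi, X[j])]
            mids.append((mid_x, (y[i] + y[j]) / 2))
    return mids

def rmse(pairs, model):
    # root-mean-square error of the model's predictions at the midpoints
    errs = [(model(x) - t) ** 2 for x, t in pairs]
    return math.sqrt(sum(errs) / len(errs))

X = [[0.0], [1.0], [2.0], [3.0]]
y = [0.0, 1.0, 4.0, 9.0]  # y = x**2

pairs = knn_midpoints(X, y, k=1)
score = rmse(pairs, lambda x: x[0] ** 2)
```

Because the midpoint targets here are linear interpolations of a convex function, even the true model y = x² incurs a small, nonzero RMSE; the criterion compares candidate models on this common yardstick rather than measuring absolute truth.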

  3. Decomposing Time Series Data by a Non-negative Matrix Factorization Algorithm with Temporally Constrained Coefficients

    PubMed Central

    Cheung, Vincent C. K.; Devarajan, Karthik; Severini, Giacomo; Turolla, Andrea; Bonato, Paolo

    2017-01-01

    The non-negative matrix factorization algorithm (NMF) decomposes a data matrix into a set of non-negative basis vectors, each scaled by a coefficient. In its original formulation, the NMF assumes the data samples and dimensions to be independently distributed, making it a less-than-ideal algorithm for the analysis of time series data with temporal correlations. Here, we seek to derive an NMF that accounts for temporal dependencies in the data by explicitly incorporating a very simple temporal constraint for the coefficients into the NMF update rules. We applied the modified algorithm to 2 multi-dimensional electromyographic data sets collected from the human upper-limb to identify muscle synergies. We found that because it reduced the number of free parameters in the model, our modified NMF made it possible to use the Akaike Information Criterion to objectively identify a model order (i.e., the number of muscle synergies composing the data) that is more functionally interpretable, and closer to the numbers previously determined using ad hoc measures. PMID:26737046
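For reference, the baseline the authors modify is the standard Lee-Seung multiplicative-update NMF. The sketch below implements that baseline on a tiny non-negative matrix; the paper's temporal constraint on the coefficients is deliberately omitted, and the data are hypothetical:

```python
import random

def nmf(V, r, iters=300, seed=0):
    # Lee-Seung multiplicative updates for V ~= W @ H under the Frobenius loss.
    # Non-negativity of W and H is preserved because updates are multiplicative.
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]
    eps = 1e-12
    for _ in range(iters):
        WH = [[sum(W[i][k] * H[k][j] for k in range(r)) for j in range(n)] for i in range(m)]
        # H <- H * (W^T V) / (W^T W H)
        for k in range(r):
            for j in range(n):
                num = sum(W[i][k] * V[i][j] for i in range(m))
                den = sum(W[i][k] * WH[i][j] for i in range(m)) + eps
                H[k][j] *= num / den
        WH = [[sum(W[i][k] * H[k][j] for k in range(r)) for j in range(n)] for i in range(m)]
        # W <- W * (V H^T) / (W H H^T)
        for i in range(m):
            for k in range(r):
                num = sum(V[i][j] * H[k][j] for j in range(n))
                den = sum(WH[i][j] * H[k][j] for j in range(n)) + eps
                W[i][k] *= num / den
    return W, H

V = [[1.0, 0.0, 2.0], [2.0, 0.0, 4.0], [0.0, 3.0, 0.0]]  # rank-2 non-negative matrix
W, H = nmf(V, r=2)
```

The paper's modification adds a temporal-smoothness term to the H update, which reduces the number of free parameters and makes AIC-based model-order selection better behaved.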

  4. Detection and recognition of targets by using signal polarization properties

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Peralta-Fabi, Ricardo; Popov, Anatoly V.; Babakov, Mikhail F.

    1999-08-01

The quality of radar target recognition can be enhanced by exploiting a target's polarization signatures. A specialized X-band polarimetric radar was used for target recognition in experimental investigations. The following polarization characteristics, connected to the object's geometrical properties, were investigated: the amplitudes of the polarization matrix elements; an anisotropy coefficient; a depolarization coefficient; an asymmetry coefficient; the energy of the backscattered signal; and an object shape factor. A large quantity of polarimetric radar data was measured and processed to form a database of different objects and different weather conditions. The histograms of polarization signatures were approximated by a Nakagami distribution and then used for real-time target recognition. The Neyman-Pearson criterion was used for target detection, and the criterion of maximum a posteriori probability was used for the recognition problem. Some results of experimental verification of pattern recognition and detection of objects with different electrophysical and geometrical characteristics in urban clutter are presented in this paper.

  5. Strength-based criterion shifts in recognition memory.

    PubMed

    Singer, Murray

    2009-10-01

    In manipulations of stimulus strength between lists, a more lenient signal detection criterion is more frequently applied to a weak than to a strong stimulus class. However, with randomly intermixed weak and strong test probes, such a criterion shift often does not result. A procedure that has yielded delay-based within-list criterion shifts was applied to strength manipulations in recognition memory for categorized word lists. When participants made semantic ratings about each stimulus word, strength-based criterion shifts emerged regardless of whether words from pairs of categories were studied in separate blocks (Experiment 1) or in intermixed blocks (Experiment 2). In Experiment 3, the criterion shift persisted under the semantic-rating study task, but not under rote memorization. These findings suggest that continually adjusting the recognition decision criterion is cognitively feasible. They provide a technique for manipulating the criterion shift, and they identify competing theoretical accounts of these effects.
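In signal detection terms, the criterion placement discussed above is usually summarized by c = −(z(H) + z(FA))/2, with z the inverse normal CDF. A sketch using Python's NormalDist with hypothetical hit and false-alarm rates for a strong and a weak list (a more positive c means a more conservative criterion):

```python
from statistics import NormalDist

def criterion_c(hit_rate, fa_rate):
    # signal detection criterion: c = -(z(H) + z(FA)) / 2
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2

def d_prime(hit_rate, fa_rate):
    # sensitivity: d' = z(H) - z(FA)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# hypothetical data: a strong list (conservative) vs a weak list (lenient)
c_strong = criterion_c(0.80, 0.10)
c_weak = criterion_c(0.60, 0.30)
```

A strength-based criterion shift of the kind reported in the abstract would show up as c_strong exceeding c_weak across conditions.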

  6. [Electrophoretic patterns of cell wall protein as a criterion for the identification and classification of Corynebacteria].

    PubMed

    Mykhal's'kyĭ, L O; Furtat, I M; Dem'ianenko, F P; Kostiuchyk, A A

    2001-01-01

Electrophoretic patterns of cell wall proteins of three industrial strains used for the production of lysine, and of eight collection strains from the genus Corynebacterium, were studied to analyze their similarity, as well as to estimate the possibility of using this parameter as an additional criterion for the identification and classification of corynebacteria. Similarity coefficients of the overall and main cell wall protein electrophoretic patterns were determined by a specially created computer program. Electrophoretic analysis showed that every species had an individual protein profile. Biopolymers common to the species, the genus, and individual strains were identified among the overall major and minor proteins. The obtained results showed that the patterns of the main proteins were more conservative and informative than those of the overall proteins. The similarity coefficients based on the main protein patterns correlated with the protein profile characteristics of every analyzed strain and made it possible to distribute the strains into separate groups. The similarity coefficient of preparations based on the main protein patterns makes it possible to distinguish one species or strain from another, which suggests that this parameter could be used as an additional criterion for differentiating corynebacteria and assigning them to a certain taxonomic group.

  7. An Empirical Model Building Criterion Based on Prediction with Applications in Parametric Cost Estimation.

    DTIC Science & Technology

    1980-08-01

…variable is denoted by Ȳ, the total sum of squares of deviations from that mean is defined by SSTO = Σᵢ(Yᵢ − Ȳ)² (2.6), and the regression sum of squares by SSR = SSTO − SSE (2.7). … A selection criterion is a rule according to which a certain model out of the 2^p possible models is labeled "best" … discussed next. 1. The R² Criterion. The coefficient of determination is defined by R² = 1 − SSE/SSTO (2.8). It is clear that R² is the proportion of…
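The R² criterion in this excerpt is easy to compute directly from its definition. A sketch with hypothetical observed and fitted values:

```python
def r_squared(y, y_hat):
    # R^2 = 1 - SSE/SSTO, with SSTO the total sum of squares about the mean
    mean_y = sum(y) / len(y)
    ssto = sum((yi - mean_y) ** 2 for yi in y)
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    return 1 - sse / ssto

y = [2.0, 4.0, 6.0, 8.0]
y_hat = [2.2, 3.9, 6.1, 7.8]  # hypothetical fitted values
```

As the excerpt notes, R² is the proportion of the total variation about the mean explained by the regression, so it always increases (weakly) as terms are added; that is exactly why model-selection variants penalize model size.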

  8. Development and validation of a tool to evaluate the quality of medical education websites in pathology.

    PubMed

    Alyusuf, Raja H; Prasad, Kameshwar; Abdel Satir, Ali M; Abalkhail, Ali A; Arora, Roopa K

    2013-01-01

    The exponential use of the internet as a learning resource, coupled with the varied quality of many websites, led to a need to identify suitable websites for teaching purposes. The aim of this study is to develop and validate a tool that evaluates the quality of undergraduate medical educational websites, and to apply it to the field of pathology. A tool was devised through several steps of item generation, reduction, weightage, pilot testing, post-pilot modification of the tool and validating the tool. Tool validation included measurement of inter-observer reliability; and generation of criterion related, construct related and content related validity. The validated tool was subsequently tested by applying it to a population of pathology websites. Reliability testing showed a high internal consistency reliability (Cronbach's alpha = 0.92), high inter-observer reliability (Pearson's correlation r = 0.88), intraclass correlation coefficient = 0.85 and κ = 0.75. It showed high criterion related, construct related and content related validity. The tool showed moderately high concordance with the gold standard (κ = 0.61); 92.2% sensitivity, 67.8% specificity, 75.6% positive predictive value and 88.9% negative predictive value. The validated tool was applied to 278 websites; 29.9% were rated as recommended, 41.0% as recommended with caution and 29.1% as not recommended. A systematic tool was devised to evaluate the quality of websites for medical educational purposes. The tool was shown to yield reliable and valid inferences through its application to pathology websites.

  9. A hybrid fault diagnosis method based on second generation wavelet de-noising and local mean decomposition for rotating machinery.

    PubMed

    Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun

    2016-03-01

    In order to extract fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and local mean decomposition (LMD) is proposed in this paper. In this method, a de-noising algorithm based on the second generation wavelet transform (SGWT) using neighboring coefficients is employed as a pretreatment to remove noise from rotating machinery vibration signals, owing to its effectiveness in enhancing the signal-to-noise ratio (SNR). Then, the LMD method is used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the faulty feature signal is selected according to the correlation coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to analyze vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method achieves a higher SNR and faster convergence than the standard LMD method. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
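
The PF-selection step by the correlation coefficient criterion can be sketched as follows; the LMD decomposition itself is omitted, and the signal and candidate components below are synthetic stand-ins, not data from the paper:

```python
import numpy as np

def select_pf(signal, pfs):
    """Return the index of the product function (PF) whose absolute Pearson
    correlation with the raw signal is largest — the correlation
    coefficient criterion for picking the fault-related component."""
    signal = np.asarray(signal, float)
    corrs = [abs(np.corrcoef(signal, np.asarray(pf, float))[0, 1]) for pf in pfs]
    return int(np.argmax(corrs))

# Synthetic demo: a 30 Hz "fault" tone and two candidate components.
t = np.linspace(0.0, 1.0, 200)
raw = np.sin(2 * np.pi * 30 * t)
pfs = [
    np.cos(2 * np.pi * 5 * t),                                    # unrelated component
    raw + 0.1 * np.random.default_rng(0).standard_normal(200),    # noisy copy of the tone
]
print(select_pf(raw, pfs))  # → 1
```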

  10. A Correction Equation for Jump Height Measured Using the Just Jump System.

    PubMed

    McMahon, John J; Jones, Paul A; Comfort, Paul

    2016-05-01

    To determine the concurrent validity and reliability of the popular Just Jump system (JJS) for determining jump height and, if necessary, provide a correction equation for future reference. Eighteen male college athletes performed 3 bilateral countermovement jumps (CMJs) on 2 JJSs (alternative method) that were placed on top of a force platform (criterion method). Two JJSs were used to establish consistency between systems. Jump height was calculated from flight time obtained from the JJS and force platform. Intraclass correlation coefficients (ICCs) demonstrated excellent within-session reliability of the CMJ height measurement derived from both the JJS (ICC = .96, P < .001) and the force platform (ICC = .96, P < .001). Dependent t tests revealed that the JJS yielded a significantly greater CMJ jump height (0.46 ± 0.09 m vs 0.33 ± 0.08 m) than the force platform (P < .001, Cohen d = 1.39, power = 1.00). There was, however, an excellent relationship between CMJ heights derived from the JJS and force platform (r = .998, P < .001, power = 1.00), with a coefficient of determination (R2) of .995. Therefore, the following correction equation was produced: Criterion jump height = (0.8747 × alternative jump height) - 0.0666. The JJS provides a reliable but overestimated measure of jump height. It is suggested, therefore, that practitioners who use the JJS as part of future work apply the correction equation presented in this study to resultant jump-height values.
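
The study's correction equation is trivial to apply in code. The flight-time-to-height conversion shown alongside it (h = g·t²/8 for a symmetric jump) is the standard formula such timing systems rely on, not something stated in the abstract:

```python
G = 9.81  # gravitational acceleration, m/s^2

def height_from_flight_time(t_flight):
    """Standard flight-time estimate of jump height: h = g * t^2 / 8."""
    return G * t_flight ** 2 / 8.0

def corrected_jump_height(jjs_height_m):
    """Apply the study's correction: criterion = 0.8747 * alternative - 0.0666."""
    return 0.8747 * jjs_height_m - 0.0666

# A JJS reading near the study's mean (0.46 m) maps to roughly the
# force-platform value (~0.34 m).
print(round(corrected_jump_height(0.46), 3))   # → 0.336
print(round(height_from_flight_time(0.52), 3)) # → 0.332 (0.52 s flight time)
```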

  11. Nuclear norm-based 2-DPCA for extracting features from images.

    PubMed

    Zhang, Fanlong; Yang, Jian; Qian, Jianjun; Xu, Yong

    2015-10-01

    The 2-D principal component analysis (2-DPCA) is a widely used method for image feature extraction. However, it can be equivalently implemented via image-row-based principal component analysis. This paper presents a structured 2-D method called nuclear norm-based 2-DPCA (N-2-DPCA), which uses a nuclear norm-based reconstruction error criterion. The nuclear norm is a matrix norm, which can provide a structured 2-D characterization for the reconstruction error image. The reconstruction error criterion is minimized by converting the nuclear norm-based optimization problem into a series of F-norm-based optimization problems. In addition, N-2-DPCA is extended to a bilateral projection-based N-2-DPCA (N-B2-DPCA). The virtue of N-B2-DPCA over N-2-DPCA is that an image can be represented with fewer coefficients. N-2-DPCA and N-B2-DPCA are applied to face recognition and reconstruction and evaluated using the Extended Yale B, CMU PIE, FRGC, and AR databases. Experimental results demonstrate the effectiveness of the proposed methods.
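
The nuclear norm the method minimizes is the sum of a matrix's singular values; a minimal numpy sketch (the error matrix is illustrative, not from the paper):

```python
import numpy as np

def nuclear_norm(err):
    """Nuclear norm of a reconstruction-error image: sum of singular values."""
    return float(np.linalg.svd(np.asarray(err, float), compute_uv=False).sum())

E = np.array([[3.0, 0.0],
              [0.0, 4.0]])
print(nuclear_norm(E))  # singular values 4 and 3, so the norm is 7.0
```

Unlike the entrywise F-norm, the nuclear norm depends on the matrix's singular-value structure, which is why it gives a structured 2-D characterization of the error image.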

  12. Criterion and content validity of a novel structured haggling contingent valuation question format versus the bidding game and binary with follow-up format.

    PubMed

    Onwujekwe, Obinna

    2004-02-01

    Contingent valuation question formats used to elicit willingness to pay for goods and services need to be relevant to the area in which they will be used in order for responses to be valid. A novel contingent valuation question format called the "structured haggling technique" (SH), which resembles the bargaining system in Nigerian markets, was designed, and its criterion and content validity were compared with those of the bidding game (BG) and the binary-with-follow-up (BWFU) technique. This was achieved by determining the willingness to pay (WTP) for insecticide-treated nets (ITNs) in Southeast Nigeria. Content validity was determined through observation of actual trading of untreated nets together with interviews with sellers and consumers. Criterion validity was determined by comparing stated and actual WTP. Stated WTP was determined using a questionnaire administered to 810 household heads, and actual WTP was determined by offering the nets for sale to all respondents one month later. The phi (correlation) coefficient was used to compare criterion validity across question formats. The phi coefficients were SH (0.60: 95% C.I. 0.50-0.71), BG (0.42: 95% C.I. 0.29-0.54) and BWFU (0.32: 95% C.I. 0.20-0.44), implying that the BG and SH had similar levels of criterion validity while the BWFU was the least criterion-valid. However, the SH was the most content-valid. It is necessary to validate the findings in other areas where haggling is common. Future studies should establish the content validity of question formats in the contexts in which they will be used before administering questionnaires.
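
The phi coefficient used here is the correlation for a 2×2 table of stated versus actual WTP; a short sketch with hypothetical counts (not the study's data):

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi for a 2x2 table:
    a = stated yes / actual yes,  b = stated yes / actual no,
    c = stated no  / actual yes,  d = stated no  / actual no."""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Hypothetical agreement counts for one question format:
print(round(phi_coefficient(40, 10, 10, 40), 2))  # → 0.6
```

A higher phi means stated WTP more closely predicts actual purchasing behavior, which is how the formats' criterion validity is ranked.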

  13. Statistical Analysis of the Uncertainty in Pre-Flight Aerodynamic Database of a Hypersonic Vehicle

    NASA Astrophysics Data System (ADS)

    Huh, Lynn

    The objective of the present research was to develop a new method to derive the aerodynamic coefficients and the associated uncertainties for flight vehicles via post-flight inertial navigation analysis using data from the inertial measurement unit. Statistical estimates of vehicle state and aerodynamic coefficients are derived using Monte Carlo simulation. Trajectory reconstruction using the inertial navigation system (INS) is a simple and well-established method; however, deriving realistic uncertainties in the reconstructed state and any associated parameters is not so straightforward. Extended Kalman filters, batch minimum variance estimation and other approaches have been used, but these methods generally depend on assumed physical models, assumed statistical distributions (usually Gaussian), or have convergence issues for non-linear problems. The approach here assumes no physical models, is applicable to any statistical distribution, and has no convergence issues. The new approach obtains the statistics directly from a sufficient number of Monte Carlo samples using only the generally well-known gyro and accelerometer specifications, and it can be applied to systems of non-linear form and non-Gaussian distribution. When redundant data are available, the set of Monte Carlo simulations is constrained to satisfy the redundant data within the uncertainties specified for the additional data. The proposed method was applied to validate the uncertainty in the pre-flight aerodynamic database of the X-43A Hyper-X research vehicle. In addition to gyro and acceleration data, the actual flight data include redundant measurements of position and velocity from the global positioning system (GPS). Criteria derived from the blended GPS and INS accuracy were used to select valid trajectories for statistical analysis. 
The aerodynamic coefficients were derived from the selected trajectories either by direct extraction based on the equations of dynamics or by querying the pre-flight aerodynamic database. After applying the proposed method to the X-43A Hyper-X research vehicle, it was found that 1) there were consistent differences between the aerodynamic coefficients from the pre-flight aerodynamic database and the post-flight analysis, 2) the pre-flight estimate of the pitching moment coefficients was significantly different from the post-flight analysis, 3) the type of distribution of the states from the Monte Carlo simulation was affected by that of the perturbation parameters, 4) the uncertainties in the pre-flight model were overestimated, 5) the range where the aerodynamic coefficients from the pre-flight aerodynamic database and the post-flight analysis agree most closely is between Mach *.* and *.*, and more data points may be needed between Mach * and ** in the pre-flight aerodynamic database, 6) the selection criterion for valid trajectories from the Monte Carlo simulations was mostly driven by the horizontal velocity error, 7) the selection criterion must be based on a reasonable model to ensure the validity of the statistics from the proposed method, and 8) the results from the proposed method applied to two different flights with identical geometry and similar flight profiles were consistent.
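
The core Monte Carlo idea — perturb the IMU measurements within the sensor's noise specification, re-integrate the trajectory, and read statistics off the ensemble — can be sketched as below. The constant-acceleration profile, sample rate, and noise sigma are illustrative, not the X-43A's:

```python
import random
import statistics

def integrate_velocity(accels, dt):
    """Simple dead-reckoning: velocity at the end of the record."""
    v = 0.0
    for a in accels:
        v += a * dt
    return v

def monte_carlo_velocity(true_accels, dt, sigma, n_samples, seed=0):
    """Perturb each accelerometer sample with Gaussian noise of the
    specified sigma, re-integrate, and return ensemble mean and SD."""
    rng = random.Random(seed)
    vs = [
        integrate_velocity([a + rng.gauss(0.0, sigma) for a in true_accels], dt)
        for _ in range(n_samples)
    ]
    return statistics.mean(vs), statistics.stdev(vs)

# 1 s of constant 9.8 m/s^2 acceleration sampled at 100 Hz (hypothetical).
true_accels = [9.8] * 100
mean_v, sd_v = monte_carlo_velocity(true_accels, 0.01, 0.05, 500)
print(round(mean_v, 2))  # ensemble mean recovers the true ~9.8 m/s
```

In the actual method the ensemble would also be filtered against redundant GPS data, keeping only trajectories consistent with it.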

  14. Prediction of Hot Tearing Using a Dimensionless Niyama Criterion

    NASA Astrophysics Data System (ADS)

    Monroe, Charles; Beckermann, Christoph

    2014-08-01

    The dimensionless form of the well-known Niyama criterion is extended to include the effect of applied strain. Under applied tensile strain, the pressure drop in the mushy zone is enhanced and pores grow beyond typical shrinkage porosity without deformation. This porosity growth can be expected to align perpendicular to the applied strain and to contribute to hot tearing. A model to capture this coupled effect of solidification shrinkage and applied strain on the mushy zone is derived. The dimensionless Niyama criterion can be used to determine the critical liquid fraction value below which porosity forms. This critical value is a function of alloy properties, solidification conditions, and strain rate. Once a dimensionless Niyama criterion value is obtained from thermal and mechanical simulation results, the corresponding shrinkage and deformation pore volume fractions can be calculated. The novelty of the proposed method lies in using the critical liquid fraction at the critical pressure drop within the mushy zone to determine the onset of hot tearing. The magnitude of pore growth due to shrinkage and deformation is plotted as a function of the dimensionless Niyama criterion for an Al-Cu alloy as an example. Furthermore, a typical hot tear "lambda"-shaped curve showing deformation pore volume as a function of alloy content is produced for two Niyama criterion values.

  15. Bayesian dynamic modeling of time series of dengue disease case counts

    PubMed Central

    López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-01-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage error for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. 
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health. PMID:28671941
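
The mean absolute percentage error used for the short-term assessment is straightforward to compute; the weekly counts and predictions below are invented for illustration:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error between observed and predicted counts."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * float(np.mean(np.abs((actual - forecast) / actual)))

cases = [100, 120, 90, 110]   # hypothetical observed weekly dengue counts
pred = [110, 114, 99, 110]    # hypothetical one-week-ahead predictions
print(round(mape(cases, pred), 2))  # → 6.25
```

Note that MAPE is undefined for zero counts, so low-incidence weeks need special handling in practice.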

  16. Assessing sensitivity to change: choosing the appropriate change coefficient.

    PubMed

    Stratford, Paul W; Riddle, Daniel L

    2005-04-05

    The past 20 years have seen the development and evaluation of many health status measures. Unlike the high standards demanded of those who conduct and report clinical intervention trials, the methodological rigor demanded of studies examining the sensitivity to change of health status measures is considerably lower. It is likely that the absence of a criterion standard for change in health status contributes to this shortcoming. To increase confidence in the results of these types of studies, investigators have often calculated multiple change coefficients for the same patient sample. The purpose of this report is to identify the conflict that arises when multiple change coefficients are applied to the same patient sample. Three families of change coefficients based on different assumptions concerning the sample composition are identified: (1) the sample is homogeneous with respect to change; (2) subgroups of patients who truly change by different amounts exist; (3) individual patients, many of whom truly change by different amounts, exist. We present several analyses which illustrate a major conceptual conflict: the signal (a measure's true ability to detect change) for some of these coefficients appears in the noise term (measurement error) of the others. We speculate that this dilemma occurs as a result of insufficient preparatory work, such as pilot studies to establish the likely change characteristic of the patient population of interest. Uncertainty in the choice of change coefficient could be overcome by conducting pilot studies to ascertain the likely change characteristic of the population of interest. Once the population's change characteristic is identified, the choice of change coefficient should be clear.

  17. Intervention criterion and control research for active front steering with consideration of road adhesion

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojian; Zhou, Bing; Wen, Guilin; Long, Lefei; Cui, Qingjia

    2018-04-01

    A multi-objective active front steering (AFS) control system considering the road adhesion constraint on vehicle stability is developed using the sliding mode control (SMC) method. First, an identification function combined with the relationship between the yaw rate and the steering angle is developed to determine whether the tyre state is linear or nonlinear. On this basis, an intervention criterion for the AFS system is proposed to improve vehicle handling and stability in emergent conditions. A sideslip angle stability domain enveloped by the upper, lower, left, and right boundaries, as well as the constraint of the road adhesion coefficient, is constructed based on the sideslip-angle phase-plane method. A dynamic weighting coefficient to coordinate the control of yaw rate and sideslip angle, and a control strategy that considers changing control objectives based on the desired yaw rate, the desired sideslip angle, and their proportional weights, are proposed for the SMC controller. Because road adhesion has a significant effect on vehicle stability, and to meet the control algorithm's requirement of real-time access to vehicle states, an unscented Kalman filter-based state observer is proposed to estimate the adhesion coefficient and the required states. Finally, simulations are performed under high and low road adhesion conditions in a Matlab/Simulink environment, and the results show that the proposed AFS control system promptly intervenes according to the intervention criterion, effectively improving vehicle handling and stability.

  18. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, at a 50% compression rate: optimal wavelet, mean ± SD, 5.46 ± 1.01%; worst wavelet, 12.76 ± 2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
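
As a toy illustration of transform-domain compression and the distortion criterion, the sketch below uses a one-level Haar transform as a stand-in for the optimized mother wavelet (which the paper selects per signal), drops the detail band, and measures the resulting distortion on a synthetic signal:

```python
import numpy as np

def haar_1level(x):
    """One-level orthonormal Haar transform: approximation and detail bands."""
    x = np.asarray(x, float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def inv_haar_1level(a, d):
    """Inverse of haar_1level (perfect reconstruction)."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def distortion_percent(x, x_rec):
    """Percentage RMS distortion between original and reconstructed signal."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 100.0 * float(np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2)))

t = np.linspace(0.0, 1.0, 256)
sig = np.sin(2 * np.pi * 4 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)
a, d = haar_1level(sig)
rec = inv_haar_1level(a, np.zeros_like(d))  # zero the detail band: 50% compression
print(distortion_percent(sig, rec) < 15.0)
```

Wavelet optimization then amounts to searching the parameterized wavelet family for the transform that minimizes this distortion at the target compression rate.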

  19. Elastic models: a comparative study applied to retinal images.

    PubMed

    Karali, E; Lambropoulou, S; Koutsouris, D

    2011-01-01

    In this work various methods of parametric elastic models are compared, namely the classical snake, the gradient vector field snake (GVF snake) and the topology-adaptive snake (t-snake), as well as the method of self-affine mapping system as an alternative to elastic models. We also give a brief overview of the methods used. The self-affine mapping system is implemented using an adapting scheme and minimum distance as optimization criterion, which is more suitable for weak edges detection. All methods are applied to glaucomatic retinal images with the purpose of segmenting the optical disk. The methods are compared in terms of segmentation accuracy and speed, as these are derived from cross-correlation coefficients between real and algorithm extracted contours and segmentation time, respectively. As a result, the method of self-affine mapping system presents adequate segmentation time and segmentation accuracy, and significant independence from initialization.

  20. The Impact of Age on Quality Measure Adherence in Colon Cancer

    PubMed Central

    Steele, Scott R.; Chen, Steven L.; Stojadinovic, Alexander; Nissan, Aviram; Zhu, Kangmin; Peoples, George E.; Bilchik, Anton

    2012-01-01

    BACKGROUND Recently, lymph node yield (LNY) has been endorsed as a quality measure of colon cancer (CC) resection adequacy. It is unclear whether this measure is relevant to all ages. We hypothesized that total LNY is negatively correlated with increasing age and overall survival (OS). STUDY DESIGN The Surveillance, Epidemiology and End Results (SEER) database was queried for all non-metastatic CC patients diagnosed from 1992–2004 (n=101,767), grouped by age (<40, 41–45, 46–50, and in 5-year increments until 86+ years). Proportions of patients meeting the 12 LNY minimum criterion were determined in each age group and analyzed with multivariate linear regression adjusting for demographics and AJCC 6th Edition stage. OS comparisons in each age category were based on the guideline of 12 LNY. RESULTS Mean LNY decreased with increasing age (18.7 vs. 11.4 nodes/patient, youngest vs. oldest group, P<0.001). The proportion of patients meeting the 12 LNY criterion also declined with each incremental age group (61.9% vs. 35.2% compliance, youngest vs. oldest, P<0.001). Multivariate regression demonstrated a negative effect of each additional year of age on log(LNY), with a coefficient of −0.003 (95% CI −0.003 to −0.002). When stratified by age and nodal yield using the 12 LNY criterion, OS was lower for all age groups in Stage II CC with <12 LNY, and for each age group over 60 years with <12 LNY in Stage III CC (P<0.05). CONCLUSIONS Every attempt to adhere to proper oncological principles should be made at the time of CC resection, regardless of age. The prognostic significance of the 12 LN minimum criterion should be applied even to elderly CC patients. PMID:21601492

  2. Recent Innovations in the Changing Criterion Design: Implications for Research and Practice in Special Education

    ERIC Educational Resources Information Center

    McDougall, Dennis; Hawkins, Jacqueline; Brady, Michael; Jenkins, Amelia

    2006-01-01

    This article illustrates (a) 2 recent innovations in the changing criterion research design, (b) how these innovations apply to research and practice in special education, and (c) how clinical needs influence design features of the changing criterion design. The first innovation, the range-bound changing criterion, is a very simple variation of…

  3. Fast and automatic algorithm for optic disc extraction in retinal images using principle-component-analysis-based preprocessing and curvelet transform.

    PubMed

    Shahbeig, Saleh; Pourghassem, Hossein

    2013-01-01

    Optic disc or optic nerve (ON) head extraction in retinal images has widespread applications in retinal disease diagnosis and human identification in biometric systems. This paper introduces a fast and automatic algorithm for detecting and extracting the ON region accurately from the retinal images without the use of the blood-vessel information. In this algorithm, to compensate for the destructive changes of the illumination and also enhance the contrast of the retinal images, we estimate the illumination of background and apply an adaptive correction function on the curvelet transform coefficients of retinal images. In other words, we eliminate the fault factors and pave the way to extract the ON region exactly. Then, we detect the ON region from retinal images using the morphology operators based on geodesic conversions, by applying a proper adaptive correction function on the reconstructed image's curvelet transform coefficients and a novel powerful criterion. Finally, using a local thresholding on the detected area of the retinal images, we extract the ON region. The proposed algorithm is evaluated on available images of DRIVE and STARE databases. The experimental results indicate that the proposed algorithm obtains an accuracy rate of 100% and 97.53% for the ON extractions on DRIVE and STARE databases, respectively.

  4. An analysis of the symmetry issue in the ℓ-distribution method of gas radiation in non-uniform gaseous media

    NASA Astrophysics Data System (ADS)

    André, Frédéric

    2017-03-01

    The recently proposed ℓ-distribution/ICE (Iterative Copula Evaluation) method of gas radiation suffers from symmetry issues when applied in highly non-isothermal and non-homogeneous gaseous media. This problem is studied in a detailed theoretical way. The objectives of the present paper are (1) to provide a mathematical analysis of this symmetry problem and (2) to suggest a decisive factor, defined in terms of the ratio between the narrow band Planck and Rosseland mean absorption coefficients, to handle this issue. Comparisons of model predictions with reference LBL calculations show that the proposed criterion improves the accuracy of the intuitive ICE method for applications in highly non-uniform gases at high temperatures.

  5. Cox Regression Models with Functional Covariates for Survival Data.

    PubMed

    Gellar, Jonathan E; Colantuoni, Elizabeth; Needham, Dale M; Crainiceanu, Ciprian M

    2015-06-01

    We extend the Cox proportional hazards model to cases when the exposure is a densely sampled functional process, measured at baseline. The fundamental idea is to combine penalized signal regression with methods developed for mixed effects proportional hazards models. The model is fit by maximizing the penalized partial likelihood, with smoothing parameters estimated by a likelihood-based criterion such as AIC or EPIC. The model may be extended to allow for multiple functional predictors, time varying coefficients, and missing or unequally-spaced data. Methods were inspired by and applied to a study of the association between time to death after hospital discharge and daily measures of disease severity collected in the intensive care unit, among survivors of acute respiratory distress syndrome.

  6. Core-log integration for rock mechanics using borehole breakouts and rock strength experiments: Recent results from plate subduction margins

    NASA Astrophysics Data System (ADS)

    Saito, S.; Lin, W.

    2014-12-01

    Core-log integration has been applied for rock mechanics studies in scientific ocean drilling since 2007 in plate subduction margins such as the Nankai Trough, the Costa Rica margin, and the Japan Trench. The state of stress in a subduction wedge is essential for controlling the dynamics of the plate boundary fault. One of the common methods to estimate the stress state is analysis of borehole breakouts (drilling-induced borehole wall compressive failures) recorded in borehole image logs to determine the maximum horizontal principal stress orientation. Borehole breakouts can also yield a possible range of stress magnitude based on a rock compressive strength criterion. In this study, we constrained the stress magnitudes based on two different rock failure criteria: the Mohr-Coulomb (MC) criterion and the modified Wiebols-Cook (mWC) criterion. As the MC criterion is the same as that under the unconfined compression state, only one rock parameter, the unconfined compressive strength (UCS), is needed to constrain stress magnitudes. The mWC criterion needs the UCS, Poisson's ratio, and the internal friction coefficient determined by triaxial compression experiments to take the effect of the intermediate principal stress on rock strength into consideration. We conducted various strength experiments on samples taken during IODP Expeditions 334/344 (Costa Rica Seismogenesis Project) to evaluate a reliable method for estimating stress magnitudes. Our results show that the effect of the intermediate principal stress on rock compressive failure occurring at the borehole wall is not negligible.

  7. Entropic criterion for model selection

    NASA Astrophysics Data System (ADS)

    Tseng, Chih-Yuan

    2006-10-01

    Model or variable selection is usually achieved by ranking models in increasing order of preference. One such method applies the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet that raises two questions: why use this criterion, and are there any other criteria? Besides, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion, which requires no prior information and can be applied to different fields. We examine this criterion by considering a physical problem, simple fluids, and the results are promising.
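
A minimal sketch of the relative-entropy ranking the abstract discusses: candidate models (here, discrete distributions q) are ranked by their Kullback-Leibler distance from a target distribution p, with smaller being preferred. The distributions are illustrative:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler distance D(p||q) = sum_i p_i * log(p_i / q_i),
    for discrete distributions with q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]                  # target distribution
candidates = {
    "model_a": [0.9, 0.1],
    "model_b": [0.6, 0.4],
}
ranking = sorted(candidates, key=lambda m: kl_divergence(p, candidates[m]))
print(ranking[0])  # → model_b, the candidate closer to p
```

D(p||q) is zero exactly when q equals p, which is what makes it usable as a preference ordering over models.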

  8. Tribological Behavior and the Mild–Severe Wear Transition of Mg97Zn1Y2 Alloy with a LPSO Structure Phase

    PubMed Central

    Sun, Wei; Xuan, Xihua; Li, Liang; An, Jian

    2018-01-01

    Dry friction and wear tests were performed on as-cast Mg97Zn1Y2 alloy using a pin-on-disc configuration. Coefficients of friction and wear rates were measured as a function of applied load at sliding speeds of 0.2, 0.8 and 3.0 m/s. The wear mechanisms were identified in the mild and severe wear regimes by means of morphological observation and composition analysis of worn surfaces using scanning electron microscopy (SEM) and energy-dispersive X-ray spectrometry (EDS). Analyses of microstructure and hardness changes in the subsurface verified the transformation from a deformed to a dynamically recrystallized microstructure, and the change in properties from strain hardening to dynamic recrystallization (DRX) softening across the mild–severe wear transition. The mild–severe wear transition can be determined by a proposed contact surface DRX temperature criterion, from which the critical DRX temperatures at different sliding speeds are calculated using DRX kinetics; transition loads can then be calculated using a transition load model. The calculated transition loads are in good agreement with the measured ones, demonstrating the validity and applicability of the contact surface DRX temperature criterion. PMID:29584692

  9. The test-retest reliability and criterion validity of a high-intensity, netball-specific circuit test: The Net-Test.

    PubMed

    Mungovan, Sean F; Peralta, Paula J; Gass, Gregory C; Scanlan, Aaron T

    2018-04-12

    To examine the test-retest reliability and criterion validity of a high-intensity, netball-specific fitness test. Repeated measures, within-subject design. Eighteen female netball players competing in an international competition completed a trial of the Net-Test, which consists of 14 timed netball-specific movements. Players also completed a series of netball-relevant criterion fitness tests. Ten players completed an additional Net-Test trial one week later to assess test-retest reliability using the intraclass correlation coefficient (ICC), typical error of measurement (TEM), and coefficient of variation (CV). The typical error of estimate expressed as CV and Pearson correlations were calculated between each criterion test and Net-Test performance to assess criterion validity. Five movements during the Net-Test displayed moderate ICC (0.84-0.90) and two movements displayed high ICC (0.91-0.93). Seven movements and heart rate taken during the Net-Test showed low CV (<5%), with values ranging from 1.7 to 9.5% across measures. Total time (41.63±2.05 s) during the Net-Test possessed low CV and significant (p<0.05) correlations with 10-m sprint time (1.98±0.12 s; CV=4.4%, r=0.72), 20-m sprint time (3.38±0.19 s; CV=3.9%, r=0.79), 505 Change-of-Direction time (2.47±0.08 s; CV=2.0%, r=0.80), and maximum oxygen uptake (46.59±2.58 mL·kg⁻¹·min⁻¹; CV=4.5%, r=-0.66). The Net-Test possesses acceptable reliability for the assessment of netball fitness. Further, the high criterion validity of the Net-Test suggests a range of important netball-specific fitness elements are assessed in combination. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  10. Interspecies quantitative structure-activity relationships (QSARs) for eco-toxicity screening of chemicals: the role of physicochemical properties.

    PubMed

    Furuhama, A; Hasunuma, K; Aoki, Y

    2015-01-01

    In addition to molecular structure profiles, descriptors based on physicochemical properties are useful for explaining the eco-toxicities of chemicals. In a previous study we reported that a criterion based on the difference between the partition coefficient (log POW) and distribution coefficient (log D) values of chemicals enabled us to identify aromatic amines and phenols for which interspecies relationships with strong correlations could be developed for fish-daphnid and algal-daphnid toxicities. The chemicals that met the log D-based criterion were expected to have similar toxicity mechanisms (related to membrane penetration). Here, we investigated the applicability of log D-based criteria to the eco-toxicity of other kinds of chemicals, including aliphatic compounds. At pH 10, use of a log POW - log D > 0 criterion and omission of outliers resulted in the selection of more than 100 chemicals whose acute fish toxicities or algal growth inhibition toxicities were almost equal to their acute daphnid toxicities. The advantage of log D-based criteria is that they allow for simple, rapid screening and prioritizing of chemicals. However, inorganic molecules and chemicals containing certain structural elements cannot be evaluated, because calculated log D values are unavailable.
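    The screening rule itself is a one-line filter, sketched below with hypothetical chemicals and invented log POW / log D values; the real workflow would draw both values from property-calculation software at a fixed pH.

```python
def passes_logd_criterion(log_pow, log_d, threshold=0.0):
    """Screening rule from the abstract: keep a chemical when
    log POW - log D > threshold (log D evaluated at a fixed pH)."""
    return (log_pow - log_d) > threshold

# Hypothetical chemicals: (name, log POW, log D at pH 10).
chemicals = [("amine-like", 2.1, 0.8),
             ("neutral", 3.0, 3.0),
             ("phenol-like", 1.5, 0.9)]
selected = [name for name, p, d in chemicals if passes_logd_criterion(p, d)]
```

    Neutral chemicals, for which log D equals log POW at every pH, are excluded by the rule, which matches its intent of flagging ionizable compounds.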

  11. The Concept of Performance Levels in Criterion-Referenced Assessment.

    ERIC Educational Resources Information Center

    Hewitson, Mal

    The concept of performance levels in criterion-referenced assessment is explored by applying the idea to different types of tests commonly used in schools, mastery tests (including diagnostic tests) and achievement tests. In mastery tests, a threshold performance standard must be established for each criterion. Attainment of this threshold…

  12. H-, He-like recombination spectra - II. l-changing collisions for He Rydberg states

    NASA Astrophysics Data System (ADS)

    Guzmán, F.; Badnell, N. R.; Williams, R. J. R.; van Hoof, P. A. M.; Chatzikos, M.; Ferland, G. J.

    2017-01-01

    Cosmological models can be constrained by determining primordial abundances. Accurate predictions of the He I spectrum are needed to determine the primordial helium abundance to a precision of <1 per cent in order to constrain big bang nucleosynthesis models, so theoretical line emissivities at least this accurate are required. In the first paper of this series, which focused on H I, we showed that differences in l-changing collisional rate coefficients predicted by three different theories can translate into 10 per cent changes in predictions for H I spectra. Here, we consider the more complicated case of He atoms, where low-l subshells are not energy degenerate. A criterion for deciding when the energy separation between l subshells is small enough to apply energy-degenerate collisional theories is given. Moreover, for certain conditions, the Bethe approximation originally proposed by Pengelly & Seaton is not sufficiently accurate. We introduce a simple modification of this theory which leads to rate coefficients that agree well with those obtained from pure quantal calculations using the approach of Vrinceanu et al. We show that the l-changing rate coefficients from the different theoretical approaches lead to differences of ˜10 per cent in He I emissivities in simulations of H II regions using the spectral code CLOUDY.

  13. Linear growth of the Kelvin-Helmholtz instability with an adiabatic cosmic-ray gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Akihiro; Takahashi, Hiroyuki R.; Kudoh, Takahiro

    2014-06-01

    We investigate the effects of cosmic rays on the linear growth of the Kelvin-Helmholtz instability. Cosmic rays are treated as an adiabatic gas and allowed to diffuse along magnetic field lines. We calculated the dispersion relation of the instability for various sets of two free parameters: the ratio of the cosmic-ray pressure to the thermal gas pressure, and the diffusion coefficient. When cosmic-ray effects are included, a shear layer is more destabilized and the growth rates can be enhanced in comparison with the ideal magnetohydrodynamical case. Whether the growth rate is effectively enhanced or not depends on the diffusion coefficient of cosmic rays. We obtain the criterion for effective enhancement by comparing the growing timescale of the instability with the diffusion timescale of cosmic rays. These results can be applied to various astrophysical phenomena where a velocity shear is present, such as outflows from star-forming galaxies, active galactic nucleus jets, channel flows resulting from the nonlinear development of the magnetorotational instability, and galactic disks.
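    A schematic, order-of-magnitude version of the timescale comparison described above (not the dispersion-relation criterion derived in the paper): enhancement is expected when cosmic rays diffuse out of a growing perturbation faster than the instability grows. All scalings and numbers below are illustrative assumptions.

```python
def kh_growth_time(delta_v, wavelength):
    """Schematic Kelvin-Helmholtz growth timescale ~ wavelength / shear."""
    return wavelength / delta_v

def cr_diffusion_time(length, kappa):
    """Diffusion timescale over a length scale L: t_diff ~ L^2 / kappa."""
    return length ** 2 / kappa

def enhancement_expected(delta_v, wavelength, kappa):
    """Schematic criterion: cosmic-ray pressure support is effectively
    removed when diffusion acts faster than the perturbation grows."""
    return cr_diffusion_time(wavelength, kappa) < kh_growth_time(delta_v, wavelength)

fast_diffusion = enhancement_expected(delta_v=1.0, wavelength=1.0, kappa=10.0)
slow_diffusion = enhancement_expected(delta_v=1.0, wavelength=1.0, kappa=0.01)
```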

  14. Aerodynamic characteristics of the 10-percent-thick NASA supercritical airfoil 33 designed for a normal-force coefficient of 0.7

    NASA Technical Reports Server (NTRS)

    Harris, C. D.

    1975-01-01

    A 10-percent-thick supercritical airfoil based on an off-design sonic-pressure plateau criterion was developed and experimental aerodynamic characteristics measured. The airfoil had a design normal-force coefficient of 0.7 and was identified as supercritical airfoil 33. Results show the airfoil to have good drag rise characteristics over a wide range of normal-force coefficients with no measurable shock losses up to the Mach numbers at which drag divergence occurred for normal-force coefficients up to 0.7. Comparisons of experimental and theoretical characteristics were made and composite drag rise characteristics were derived for normal-force coefficients of 0.5 and 0.7 and a Reynolds number of 40 million.

  15. Synchronization in oscillator networks with delayed coupling: a stability criterion.

    PubMed

    Earl, Matthew G; Strogatz, Steven H

    2003-03-01

    We derive a stability criterion for the synchronous state in networks of identical phase oscillators with delayed coupling. The criterion applies to any network (whether regular or random, low dimensional or high dimensional, directed or undirected) in which each oscillator receives delayed signals from k others, where k is uniform for all oscillators.

  16. A Review of the CTOA/CTOD Fracture Criterion: Why it Works

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.; James, M. A.

    2001-01-01

    The CTOA/CTOD fracture criterion is one of the oldest fracture criteria applied to fracture of metallic materials with cracks. During the past two decades, the use of elastic-plastic finite-element analyses to simulate fracture of laboratory specimens and structural components using the CTOA criterion has expanded rapidly. But the early applications were restricted to two-dimensional analyses, assuming either plane-stress or plane-strain behavior, which led to generally non-constant values of CTOA, especially during the early stages of crack extension. Later, the non-constant CTOA values were traced to inappropriate state-of-stress (or constraint) assumptions in the crack-front region and severe crack tunneling in thin-sheet materials. More recently, the CTOA fracture criterion has been used with three-dimensional analyses to study constraint effects, crack tunneling, and the fracture process. The constant CTOA criterion (from crack initiation to failure) has been successfully applied to numerous structural applications, such as aircraft fuselages and pipelines. But why does the "constant CTOA" fracture criterion work so well? This paper reviews the results from several studies, discusses the issues of why CTOA works, and discusses its limitations.

  17. Estimating activity energy expenditure: how valid are physical activity questionnaires?

    PubMed

    Neilson, Heather K; Robson, Paula J; Friedenreich, Christine M; Csizmadi, Ilona

    2008-02-01

    Activity energy expenditure (AEE) is the modifiable component of total energy expenditure (TEE) derived from all activities, both volitional and nonvolitional. Because AEE may affect health, there is interest in its estimation in free-living people. Physical activity questionnaires (PAQs) could be a feasible approach to AEE estimation in large populations, but it is unclear whether or not any PAQ is valid for this purpose. Our aim was to explore the validity of existing PAQs for estimating usual AEE in adults, using doubly labeled water (DLW) as a criterion measure. We reviewed 20 publications that described PAQ-to-DLW comparisons, summarized study design factors, and appraised criterion validity using mean differences (AEE(PAQ) - AEE(DLW), or TEE(PAQ) - TEE(DLW)), 95% limits of agreement, and correlation coefficients (AEE(PAQ) versus AEE(DLW) or TEE(PAQ) versus TEE(DLW)). Only 2 of 23 PAQs assessed most types of activity over the past year and indicated acceptable criterion validity, with mean differences (TEE(PAQ) - TEE(DLW)) of 10% and 2% and correlation coefficients of 0.62 and 0.63, respectively. At the group level, neither overreporting nor underreporting was more prevalent across studies. We speculate that, aside from reporting error, discrepancies between PAQ and DLW estimates may be partly attributable to 1) PAQs not including key activities related to AEE, 2) PAQs and DLW ascertaining different time periods, or 3) inaccurate assignment of metabolic equivalents to self-reported activities. Small sample sizes, use of correlation coefficients, and limited information on individual validity were problematic. Future research should address these issues to clarify the true validity of PAQs for estimating AEE.
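    The agreement statistics used above (mean difference, 95% limits of agreement, and correlation against the DLW criterion) can be sketched as follows. The per-subject energy expenditure values are invented, not data from the reviewed studies.

```python
import statistics as stats

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = stats.mean(x), stats.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def agreement(paq, dlw):
    """Bias (mean difference), 95% limits of agreement, and Pearson r
    between questionnaire (PAQ) and criterion (DLW) estimates."""
    diffs = [a - b for a, b in zip(paq, dlw)]
    bias = stats.mean(diffs)
    sd = stats.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd), pearson(paq, dlw)

# Hypothetical TEE estimates (kcal/day) for five subjects.
paq = [2500.0, 2700.0, 2300.0, 2900.0, 2600.0]
dlw = [2400.0, 2600.0, 2350.0, 2800.0, 2500.0]
bias, loa, r = agreement(paq, dlw)
```

    A high r with a non-zero bias is exactly the pattern the review warns about: correlation alone does not establish agreement at the individual level.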

  18. Model selection and Bayesian inference for high-resolution seabed reflection inversion.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2009-02-01

    This paper applies Bayesian inference, including model selection and posterior parameter inference, to inversion of seabed reflection data to resolve sediment structure at a spatial scale below the pulse length of the acoustic source. A practical approach to model selection is used, employing the Bayesian information criterion to decide on the number of sediment layers needed to sufficiently fit the data while satisfying parsimony to avoid overparametrization. Posterior parameter inference is carried out using an efficient Metropolis-Hastings algorithm for high-dimensional models, and results are presented as marginal-probability depth distributions for sound velocity, density, and attenuation. The approach is applied to plane-wave reflection-coefficient inversion of single-bounce data collected on the Malta Plateau, Mediterranean Sea, which indicate complex fine structure close to the water-sediment interface. This fine structure is resolved in the geoacoustic inversion results in terms of four layers within the upper meter of sediments. The inversion results are in good agreement with parameter estimates from a gravity core taken at the experiment site.
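    The parsimony trade-off described above (add layers only while the improved fit justifies the extra parameters) can be sketched with a toy change-point problem rather than the full reflection-coefficient inversion. The Gaussian-residual form BIC = n·ln(RSS/n) + k·ln(n) is assumed, with k counting segment means only, and the profile values are invented.

```python
import math

def bic(rss, n, k):
    """Gaussian-residual BIC up to constants: n*ln(RSS/n) + k*ln(n);
    lower values are preferred."""
    return n * math.log(rss / n) + k * math.log(n)

def rss_segments(y, breaks):
    """Residual sum of squares when y is modeled as piecewise constant
    on the segments delimited by `breaks` (list of split indices)."""
    edges = [0] + breaks + [len(y)]
    rss = 0.0
    for a, b in zip(edges, edges[1:]):
        seg = y[a:b]
        m = sum(seg) / len(seg)
        rss += sum((v - m) ** 2 for v in seg)
    return rss

# Invented profile with one clear jump at index 5.
y = [1.0, 1.1, 0.9, 1.0, 1.05, 3.0, 2.9, 3.1, 3.0, 2.95]
n = len(y)
candidates = {1: bic(rss_segments(y, []), n, 1),
              2: bic(rss_segments(y, [5]), n, 2)}
best = min(candidates, key=candidates.get)
```

    The two-segment model wins because the drop in RSS dwarfs the k·ln(n) penalty; a third segment would lower RSS only marginally while paying the same penalty again.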

  19. Assessing the Quality of Mobile Exercise Apps Based on the American College of Sports Medicine Guidelines: A Reliable and Valid Scoring Instrument.

    PubMed

    Guo, Yi; Bian, Jiang; Leavitt, Trevor; Vincent, Heather K; Vander Zalm, Lindsey; Teurlings, Tyler L; Smith, Megan D; Modave, François

    2017-03-07

    Regular physical activity can not only help with weight management, but also lower cardiovascular risks, cancer rates, and chronic disease burden. Yet, only approximately 20% of Americans currently meet the physical activity guidelines recommended by the US Department of Health and Human Services. With the rapid development of mobile technologies, mobile apps have the potential to improve participation rates in exercise programs, particularly if they are evidence-based and of sufficient content quality. The goal of this study was to develop and test an instrument designed to score the content quality of exercise program apps with respect to the exercise guidelines set forth by the American College of Sports Medicine (ACSM). We conducted two focus groups (N=14) to elicit input for developing a preliminary 27-item scoring instrument based on the ACSM exercise prescription guidelines. Three reviewers who were not sports medicine experts independently scored 28 exercise program apps using the instrument. Inter- and intra-rater reliability was assessed among the 3 reviewers. An expert reviewer, a Fellow of the ACSM, also scored the 28 apps to create criterion scores. Criterion validity was assessed by comparing the nonexpert reviewers' scores to the criterion scores. Overall, inter- and intra-rater reliability was high, with most coefficients being greater than .7. Inter-rater reliability coefficients ranged from .59 to .99, and intra-rater reliability coefficients ranged from .47 to 1.00. All reliability coefficients were statistically significant. Criterion validity was found to be excellent, with weighted kappa statistics ranging from .67 to .99, indicating substantial agreement between the scores of expert and nonexpert reviewers. Finally, all apps scored poorly against the ACSM exercise prescription guidelines: none received a score greater than 35 out of a possible maximum score of 70.
    We have developed and presented a valid and reliable scoring instrument for exercise program apps. Our instrument may be useful for consumers and health care providers who are looking for apps that provide safe, progressive general exercise programs for health and fitness. ©Yi Guo, Jiang Bian, Trevor Leavitt, Heather K Vincent, Lindsey Vander Zalm, Tyler L Teurlings, Megan D Smith, François Modave. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 07.03.2017.
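    The weighted kappa used above for criterion validity can be sketched as follows. Linear weights are assumed here (the abstract does not state the weighting scheme), and the rating vectors are invented, not the study's app scores.

```python
def weighted_kappa(rater1, rater2, categories):
    """Cohen's weighted kappa for two raters on an ordinal scale,
    with linear weights w_ij = |i - j| / (C - 1)."""
    c = len(categories)
    idx = {cat: i for i, cat in enumerate(categories)}
    n = len(rater1)
    obs = [[0.0] * c for _ in range(c)]       # observed proportions
    for a, b in zip(rater1, rater2):
        obs[idx[a]][idx[b]] += 1.0 / n
    row = [sum(obs[i]) for i in range(c)]
    col = [sum(obs[i][j] for i in range(c)) for j in range(c)]
    w = lambda i, j: abs(i - j) / (c - 1)
    disagree_obs = sum(w(i, j) * obs[i][j] for i in range(c) for j in range(c))
    disagree_exp = sum(w(i, j) * row[i] * col[j] for i in range(c) for j in range(c))
    return 1.0 - disagree_obs / disagree_exp

perfect = weighted_kappa([0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 2], [0, 1, 2])
partial = weighted_kappa([0, 1, 2, 2], [0, 1, 1, 2], [0, 1, 2])
```

    Weighting by |i - j| means a near-miss between adjacent categories costs less than a miss across the whole scale, which suits ordinal app scores.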

  20. TH-A-18C-03: Noise Correlation in CBCT Projection Data and Its Application for Noise Reduction in Low-Dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ZHANG, H; Huang, J; Ma, J

    2014-06-15

    Purpose: To study the noise correlation properties of cone-beam CT (CBCT) projection data and to incorporate the noise correlation information into a statistics-based projection restoration algorithm for noise reduction in low-dose CBCT. Methods: In this study, we systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam on-board CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data sets at six dose levels from 0.1 mAs to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. Results: The analyses of the repeated measurements show that noise correlation coefficients are non-zero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second-order neighbors are about 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation (PWLS-Cor) results in a lower noise level than the PWLS criterion without it (PWLS-Dia) at the matched resolution. Conclusion: Noise is correlated among nearest neighboring detector bins of CBCT projection data. An accurate noise model of CBCT projection data can improve the performance of the statistics-based projection restoration algorithm for low-dose CBCT.
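    The repeated-measurement estimation of neighbor correlations can be sketched with a toy simulation rather than TrueBeam data: each simulated reading shares one noise component with its neighbor, so the theoretical first-order correlation is 0.5 (deliberately stronger than the ~0.20 measured on the detector) and zero beyond.

```python
import random
import statistics as stats

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = stats.mean(x), stats.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Toy repeated "projections": bin i couples to bin i+1 via a shared term.
random.seed(0)
n_repeats, n_bins = 500, 8
proj = []
for _ in range(n_repeats):
    s = [random.gauss(0.0, 1.0) for _ in range(n_bins + 1)]
    proj.append([s[i] + s[i + 1] for i in range(n_bins)])

def neighbor_corr(order):
    """Average correlation between bins separated by `order` positions,
    estimated across the repeated acquisitions."""
    return stats.mean(pearson([p[i] for p in proj],
                              [p[i + order] for p in proj])
                      for i in range(n_bins - order))

first_order, second_order = neighbor_corr(1), neighbor_corr(2)
```

    These empirical coefficients are exactly what would populate the off-diagonal entries of the PWLS covariance matrix.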

  1. Multidimensional indexing structure for use with linear optimization queries

    NASA Technical Reports Server (NTRS)

    Bergman, Lawrence David (Inventor); Castelli, Vittorio (Inventor); Chang, Yuan-Chi (Inventor); Li, Chung-Sheng (Inventor); Smith, John Richard (Inventor)

    2002-01-01

    Linear optimization queries, which usually arise in various decision support and resource planning applications, are queries that retrieve top N data records (where N is an integer greater than zero) which satisfy a specific optimization criterion. The optimization criterion is to either maximize or minimize a linear equation. The coefficients of the linear equation are given at query time. Methods and apparatus are disclosed for constructing, maintaining and utilizing a multidimensional indexing structure of database records to improve the execution speed of linear optimization queries. Database records with numerical attributes are organized into a number of layers and each layer represents a geometric structure called convex hull. Such linear optimization queries are processed by searching from the outer-most layer of this multi-layer indexing structure inwards. At least one record per layer will satisfy the query criterion and the number of layers needed to be searched depends on the spatial distribution of records, the query-issued linear coefficients, and N, the number of records to be returned. When N is small compared to the total size of the database, answering the query typically requires searching only a small fraction of all relevant records, resulting in a tremendous speedup as compared to linearly scanning the entire dataset.
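    A sketch of the layered ("onion") structure described above: each layer is the convex hull of the points remaining after outer layers are peeled away, and because a linear objective is maximized at a hull vertex, a top-1 query needs only the outermost layer. 2-D points and Andrew's monotone chain are used for illustration; the invention covers the general multidimensional case.

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices of a 2-D point set."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # pop while the turn h[-2] -> h[-1] -> p is not a left turn
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])

def onion_layers(points):
    """Peel the point set into nested convex-hull layers, outermost first."""
    layers, rest = [], list(points)
    while rest:
        hull = convex_hull(rest)
        layers.append(hull)
        rest = [p for p in rest if p not in hull]
    return layers

def top1_linear(layers, c):
    """For N = 1, the maximizer of c . x always lies on the outer layer."""
    return max(layers[0], key=lambda p: c[0] * p[0] + c[1] * p[1])

pts = [(0, 0), (4, 0), (4, 3), (0, 3), (1, 1), (2, 2), (3, 1), (2, 1)]
layers = onion_layers(pts)
best = top1_linear(layers, (1.0, 2.0))
```

    For N > 1 the search proceeds inward layer by layer, which is why the speedup is largest when N is small relative to the dataset.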

  2. [Is a specific disorder of arithmetic skills as common as reading/spelling disorder?].

    PubMed

    Wyschkon, Anne; Kohn, Juliane; Ballaschk, Katja; Esser, Günter

    2009-11-01

    According to prevalence rates of learning disorders reported in the research literature, mathematics disorder and reading/spelling disorder are often described as equally common. However, the correlation between intelligence level and reading/spelling skills is much weaker than that between intelligence and arithmetic skills. If the same definition criterion is applied to both disorders, a lower prevalence rate for mathematics disorder should therefore be expected. Are there differences in prevalence estimates for learning disorders depending on the definition criterion? A large representative sample of German students (N=1970) was used to test this hypothesis. Depending on the definition criterion, the prevalence of mathematics disorder in the same sample ranged between 0.1% and 8.1%. Using the same definition criterion for both learning disorders, there are two to three times as many students with reading/spelling disorder as with mathematics disorder. Whenever children with reading/spelling disorder are compared to children with mathematics disorder, the same definition criterion has to be applied.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albeverio, Sergio; Chen Kai; Fei Shaoming

    A necessary separability criterion that relates the structures of the total density matrix and its reductions is given. The method used is based on the realignment method [K. Chen and L. A. Wu, Quant. Inf. Comput. 3, 193 (2003)]. The separability criterion naturally generalizes the reduction separability criterion introduced independently in the previous work [M. Horodecki and P. Horodecki, Phys. Rev. A 59, 4206 (1999) and N. J. Cerf, C. Adami, and R. M. Gingrich, Phys. Rev. A 60, 898 (1999)]. In special cases, it recovers the previous reduction criterion and the recent generalized partial transposition criterion [K. Chen and L. A. Wu, Phys. Lett. A 306, 14 (2002)]. The criterion involves only simple matrix manipulations and can therefore be easily applied.
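    The underlying realignment (CCNR) test referenced above really is just a matrix manipulation; here is a minimal two-qubit sketch, assuming NumPy is available. If the trace norm (sum of singular values) of the realigned matrix exceeds 1, the state must be entangled; the converse does not hold.

```python
import numpy as np

def realigned(rho, d=2):
    """Realignment map: R(rho)[(i,j),(k,l)] = rho[(i,k),(j,l)] for a
    bipartite density matrix on C^d tensor C^d."""
    return rho.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

def ccnr_flags_entangled(rho, d=2):
    """CCNR criterion: trace norm of R(rho) > 1 implies entanglement."""
    trace_norm = np.linalg.svd(realigned(rho, d), compute_uv=False).sum()
    return trace_norm > 1.0 + 1e-12

bell = np.zeros((4, 4))            # |Phi+><Phi+| with |Phi+> = (|00>+|11>)/sqrt(2)
bell[np.ix_([0, 3], [0, 3])] = 0.5
product = np.zeros((4, 4))         # |00><00|, a separable product state
product[0, 0] = 1.0
```

    For the Bell state the realigned matrix is 0.5 times the identity, so its trace norm is 2 and entanglement is detected; for the product state the trace norm is exactly 1.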

  4. Predictability of Seasonal Rainfall over the Greater Horn of Africa

    NASA Astrophysics Data System (ADS)

    Ngaina, J. N.

    2016-12-01

    The El Niño-Southern Oscillation (ENSO) is a primary mode of climate variability in the Greater Horn of Africa (GHA). The expected impacts of climate variability and change on water, agriculture, and food resources in the GHA underscore the importance of reliable and accurate seasonal climate predictions. The study evaluated different model selection criteria, including the coefficient of determination (R2), Akaike's information criterion (AIC), the Bayesian information criterion (BIC), and the Fisher information approximation (FIA). A forecast scheme based on the optimal model was developed to predict the October-November-December (OND) and March-April-May (MAM) rainfall. The predictability of GHA rainfall based on ENSO was quantified using composite analysis, correlations, and contingency tables. A test for field significance accounting for the finiteness and interdependence of the spatial grid was applied to avoid correlations arising by chance. The study identified FIA as the optimal model selection criterion, and the complex selection criteria (FIA followed by BIC) performed better than the simple approaches (R2 and AIC). Notably, operational seasonal rainfall prediction over the GHA makes use of simple model selection procedures, e.g., R2. Rainfall is modestly predictable based on ENSO during the OND and MAM seasons: El Niño typically leads to wetter conditions during OND and drier conditions during MAM. The correlations of ENSO indices with rainfall are statistically significant for the OND and MAM seasons. Analysis based on contingency tables shows higher predictability of OND rainfall when ENSO indices derived from Pacific and Indian Ocean sea surfaces are used. The predictability based on ENSO for OND rainfall is robust on a decadal scale compared with MAM. An ENSO-based scheme built on an optimal model selection criterion can thus provide skillful rainfall predictions over the GHA.
    This study concludes that the negative phase of ENSO (La Niña) leads to dry conditions, while the positive phase (El Niño) is associated with enhanced wet conditions.
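    Contingency-table verification of a binary seasonal forecast can be sketched as follows. The hit/miss record is hypothetical, and the Heidke skill score shown is the standard form, not necessarily the exact statistic used in the study.

```python
def contingency_skill(forecast, observed):
    """2x2 contingency counts for binary forecasts, plus the proportion
    correct and the Heidke skill score (skill relative to chance)."""
    a = sum(1 for f, o in zip(forecast, observed) if f and o)          # hits
    b = sum(1 for f, o in zip(forecast, observed) if f and not o)      # false alarms
    c = sum(1 for f, o in zip(forecast, observed) if not f and o)      # misses
    d = sum(1 for f, o in zip(forecast, observed) if not f and not o)  # correct negatives
    n = a + b + c + d
    pc = (a + d) / n
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n  # chance-correct count
    hss = (a + d - expected) / (n - expected)
    return pc, hss

# Hypothetical 8-season record: 1 = wet season forecast / observed.
pc, hss = contingency_skill([1, 1, 0, 0, 1, 0, 1, 0],
                            [1, 1, 0, 1, 1, 0, 0, 0])
```

    HSS = 0 means no skill beyond chance and HSS = 1 a perfect forecast, which makes it a convenient single number for comparing ENSO-based schemes across seasons.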

  5. Lawson criterion in cyclotron heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demutskii, V.P.; Polovin, R.V.

    1975-07-01

    Stochastic heating of plasma particles is of great interest for controlled thermonuclear reactions. The ion velocity distribution function is described for the case of cyclotron heating. The Lawson criterion applied to this distribution is described. (MOW)

  6. The Physical Significance of the Synthetic Running Correlation Coefficient and Its Applications in Oceanic and Atmospheric Studies

    NASA Astrophysics Data System (ADS)

    Zhao, Jinping; Cao, Yong; Wang, Xin

    2018-06-01

    In order to study the temporal variation of the correlation between two time series, a running correlation coefficient (RCC) can be used. An RCC is calculated for a given time window, and the window is then moved sequentially through time. The current calculation method for RCCs is based on the general definition of the Pearson product-moment correlation coefficient, calculated from the data within the time window, which we call the local running correlation coefficient (LRCC). The LRCC is calculated from the two anomalies relative to the two local means, while the local means themselves also vary. It is shown that the LRCC reflects only the correlation between the two anomalies within the time window and fails to exhibit the contributions of the two varying means. To address this problem, two fixed means obtained from all available data are adopted to calculate an RCC, which is called the synthetic running correlation coefficient (SRCC). When the anomaly variations are dominant, the two RCCs are similar. However, when the variations of the means are dominant, the difference between the two RCCs becomes obvious. The SRCC reflects the correlations of both the anomaly variations and the variations of the means; therefore, SRCCs from different time points are intercomparable. A criterion for the superiority of an RCC algorithm is that the average value of the RCC should be close to the global correlation coefficient calculated from all data. The SRCC always meets this criterion, while the LRCC sometimes fails. The SRCC is therefore better suited than the LRCC for running correlations, and we suggest using the SRCC to calculate RCCs.
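    The LRCC/SRCC distinction can be sketched directly: the only difference is which means the anomalies are taken about. The toy series below (a shared trend plus opposite-signed oscillations) is invented to make the varying means dominant, the regime where the two coefficients diverge.

```python
import math

def pearson_about(x, y, mx, my):
    """Correlation of x and y with anomalies taken about the supplied
    means (local means -> LRCC, global means -> SRCC)."""
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def running_corrs(x, y, window):
    gx, gy = sum(x) / len(x), sum(y) / len(y)
    lrcc, srcc = [], []
    for s in range(len(x) - window + 1):
        xs, ys = x[s:s + window], y[s:s + window]
        lrcc.append(pearson_about(xs, ys, sum(xs) / window, sum(ys) / window))
        srcc.append(pearson_about(xs, ys, gx, gy))
    return lrcc, srcc

# Shared rising trend (varying means) plus anti-correlated anomalies.
x = [0.5 * i + math.sin(i) for i in range(20)]
y = [0.5 * i - math.sin(i) for i in range(20)]
lrcc, srcc = running_corrs(x, y, window=5)
global_r = pearson_about(x, y, sum(x) / len(x), sum(y) / len(y))
mean_lrcc = sum(lrcc) / len(lrcc)
mean_srcc = sum(srcc) / len(srcc)
```

    Here the average SRCC sits much closer to the global correlation than the average LRCC does, illustrating the superiority criterion stated in the abstract.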

  7. Criterion Predictability: Identifying Differences Between r-Squares

    ERIC Educational Resources Information Center

    Malgady, Robert G.

    1976-01-01

    An analysis of variance procedure for testing differences in r-squared, the coefficient of determination, across independent samples is proposed and briefly discussed. The principal advantage of the procedure is to minimize Type I error for follow-up tests of pairwise differences. (Author/JKS)

  8. Identification of hereditary cancer in the general population: development and validation of a screening questionnaire for obtaining the family history of cancer.

    PubMed

    Campacci, Natalia; de Lima, Juliana O; Carvalho, André L; Michelli, Rodrigo D; Haikel, Rafael; Mauad, Edmundo; Viana, Danilo V; Melendez, Matias E; Vazquez, Fabiana de L; Zanardo, Cleyton; Reis, Rui M; Rossi, Benedito M; Palmero, Edenir I

    2017-12-01

    One of the challenges for Latin American countries is to include in their healthcare systems technologies that can be applied to hereditary cancer detection and management. The aim of the study is to create and validate a questionnaire to identify individuals with possible risk for hereditary cancer predisposition syndromes (HCPS), using different strategies in a Cancer Prevention Service in Brazil. The primary screening questionnaire (PSQ) was developed to identify families at-risk for HCPS. The PSQ was validated using discrimination measures, and the reproducibility was estimated through kappa coefficient. Patients with at least one affirmative answer had the pedigree drawn using three alternative interview approaches: in-person, by telephone, or letter. Validation of these approaches was done. Kappa and intraclass correlation coefficients were used to analyze data's reproducibility considering the presence of clinical criteria for HCPS. The PSQ was applied to a convenience sample of 20,000 women of which 3121 (15.6%) answered at least one affirmative question and 1938 had their pedigrees drawn. The PSQ showed sensitivity and specificity scores of 94.4% and 75%, respectively, and a kappa of 0.64. The strategies for pedigree drawing had reproducibility coefficients of 0.976 and 0.850 for the telephone and letter approaches, respectively. Pedigree analysis allowed us to identify 465 individuals (24.0%) fulfilling at least one clinical criterion for HCPS. The PSQ fulfills its function, allowing the identification of HCPS at-risk families. The use of alternative screening methods may reduce the number of excluded at-risk individuals/families who live in locations where oncogenetic services are not established. © 2017 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.
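    The discrimination and agreement measures used above come from a 2x2 table of questionnaire result versus reference standard. The counts below are hypothetical, chosen to give roughly the reported 94.4% sensitivity and 75% specificity; the kappa computed here is Cohen's kappa for a single table, so it will not match the study's test-retest kappa of 0.64.

```python
def screening_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, and Cohen's kappa from a 2x2 table of
    questionnaire result (positive/negative) vs reference standard."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    po = (tp + tn) / n                                   # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, kappa

# Hypothetical counts: 85/90 diseased flagged, 75/100 healthy cleared.
sens, spec, kappa = screening_performance(tp=85, fp=25, fn=5, tn=75)
```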

  9. Numerical Modeling of Earthquake-Induced Landslide Using an Improved Discontinuous Deformation Analysis Considering Dynamic Friction Degradation of Joints

    NASA Astrophysics Data System (ADS)

    Huang, Da; Song, Yixiang; Cen, Duofeng; Fu, Guoyang

    2016-12-01

    Discontinuous deformation analysis (DDA) is an efficient technique that has been extensively applied to the dynamic simulation of discontinuous rock masses. In the original DDA (ODDA), the Mohr-Coulomb failure criterion is employed as the judgment principle for failure between contact blocks, and the friction coefficient is assumed to be constant throughout the calculation. However, numerous shear tests have confirmed that the dynamic friction of rock joints degrades. Therefore, the friction coefficient should be gradually reduced during the numerical simulation of an earthquake-induced rockslide. In this paper, based on the experimental results of cyclic shear tests on limestone joints, exponential regression formulas are fitted for dynamic friction degradation as a function of the relative velocity, the amplitude of cyclic shear displacement, and the number of its cycles between blocks in edge-to-edge contact. An improved DDA (IDDA) is then developed by implementing into the ODDA the fitted regression formulas together with a modified technique for removing joint cohesion, in which the cohesion is removed once the 'sliding' or 'open' state between blocks appears for the first time. The IDDA is first validated by comparison with the theoretical solutions for the kinematic behavior of a sliding block on an inclined plane under dynamic loading. The program is then applied to model the Donghekou landslide triggered by the 2008 Wenchuan earthquake in China. The simulation results demonstrate that the dynamic friction degradation of joints has a strong influence on the runout and velocity of the sliding mass. Moreover, the friction coefficient has a greater impact than the joint cohesion on the kinematic behavior of the sliding mass.

  10. Aerodynamic Characteristics of a 14-Percent-Thick NASA Supercritical Airfoil Designed for a Normal-Force Coefficient of 0.7

    NASA Technical Reports Server (NTRS)

    Harris, C. D.

    1975-01-01

This report documents the experimental aerodynamic characteristics of a 14-percent-thick supercritical airfoil based on an off-design sonic-pressure-plateau criterion. The design normal-force coefficient was 0.7. The results are compared with those of the family-related 10-percent-thick supercritical airfoil 33. Comparisons are also made between experimental and theoretical characteristics and composite drag-rise characteristics derived for a full-scale Reynolds number of 40 million.

  11. Determination of the mass transfer limiting step of dye adsorption onto commercial adsorbent by using mathematical models.

    PubMed

    Marin, Pricila; Borba, Carlos Eduardo; Módenes, Aparecido Nivaldo; Espinoza-Quiñones, Fernando R; de Oliveira, Silvia Priscila Dias; Kroumov, Alexander Dimitrov

    2014-01-01

Reactive blue 5G dye removal in a fixed-bed column packed with Dowex Optipore SD-2 adsorbent was modelled. Three mathematical models were tested in order to determine the limiting step of the mass transfer of the dye adsorption process onto the adsorbent; the location of the mass transfer resistance (external, internal, or surface adsorption) was the criterion distinguishing the models. In the model development procedure, two hypotheses were applied to describe the internal mass transfer resistance: first, that the mass transfer coefficient is constant; second, that the mass transfer coefficient is a function of the dye concentration in the adsorbent. The experimental breakthrough curves were obtained for different particle diameters of the adsorbent, flow rates, and feed dye concentrations in order to evaluate the predictive power of the models. The values of the mass transfer parameters of the mathematical models were estimated by using the downhill simplex optimization method. The results showed that the model considering internal resistance with a variable mass transfer coefficient was more flexible than the others and better described the dynamics of the dye adsorption process in the fixed-bed column. Hence, this model can be used for optimization and column design purposes for the investigated systems and similar ones.
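The downhill simplex method named in the abstract is the Nelder-Mead algorithm. A minimal sketch of this style of parameter estimation, using synthetic data and a hypothetical first-order uptake model in place of the paper's actual column models:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic uptake data from a hypothetical first-order model
# q(t) = q_max * (1 - exp(-k * t)); true parameters to be recovered.
t = np.linspace(0, 10, 50)
true_k, true_qmax = 0.8, 2.5
q_obs = true_qmax * (1.0 - np.exp(-true_k * t))

def sse(params):
    """Sum of squared errors between model and observations."""
    k, qmax = params
    q_model = qmax * (1.0 - np.exp(-k * t))
    return np.sum((q_obs - q_model) ** 2)

# Downhill simplex (Nelder-Mead) search for the mass transfer parameters.
result = minimize(sse, x0=[0.1, 1.0], method='Nelder-Mead')
k_fit, qmax_fit = result.x
```

With noiseless synthetic data the simplex search recovers the true parameters to within a small tolerance; on real breakthrough curves the same loop would minimize the misfit of the column model instead.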

  12. Excellent vacuum tribological properties of Pb/PbS film deposited by RF magnetron sputtering and ion sulfurizing.

    PubMed

    Guozheng, Ma; Binshi, Xu; Haidou, Wang; Shuying, Chen; Zhiguo, Xing

    2014-01-08

Soft metal Pb film 3 μm in thickness was deposited on AISI 440C steel by RF magnetron sputtering, and some of the Pb film samples were then treated by low-temperature ion sulfurizing (LTIS) to form a Pb/PbS composite film. Tribological properties of the Pb and Pb/PbS films were compared in vacuum and in air using a self-developed tribometer (model MSTS-1). Scanning electron microscopy (SEM), X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS) were adopted to analyze the microstructure and chemical composition of the films and their worn surfaces. The results show that a large fraction of the Pb was converted to PbS during LTIS. In air, owing to severe oxidation, the pure Pb film showed relatively high friction coefficients (about 0.6), and the Pb/PbS composite film also lost its friction-reducing property after sliding for a short time. In vacuum, the average friction coefficient of the Pb film was about 0.1, but the friction coefficient curve fluctuated markedly, whereas the Pb/PbS composite film exhibited excellent tribological properties: its friction coefficient remained stable at a low value of about 0.07 for a long time. Taking a friction coefficient continuously exceeding 0.2 as the criterion of lubrication failure, the sliding friction life of the Pb/PbS film was as long as 3.2 × 10^5 revolutions, 8 times that of the Pb film. It can be concluded that the Pb/PbS film has excellent vacuum tribological properties and promising prospects for application in space solid lubrication.
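The lubrication-failure criterion quoted above (friction coefficient exceeding 0.2 continuously) can be sketched as a simple scan over a measured friction trace; the `run_length` parameter, defining how many consecutive samples count as "continuously", is an assumption:

```python
def sliding_life(friction_trace, threshold=0.2, run_length=5):
    """Return the index (revolution count) at which the friction
    coefficient first exceeds `threshold` for `run_length` consecutive
    samples, i.e. the lubrication-failure point.  Returns the trace
    length if the film never fails."""
    run = 0
    for i, mu in enumerate(friction_trace):
        run = run + 1 if mu > threshold else 0
        if run == run_length:
            return i - run_length + 1
    return len(friction_trace)
```

The run-length requirement makes the criterion robust to isolated friction spikes, which do not count as failure.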

  13. Combined Optimal Control System for excavator electric drive

    NASA Astrophysics Data System (ADS)

    Kurochkin, N. S.; Kochetkov, V. P.; Platonova, E. V.; Glushkin, E. Y.; Dulesov, A. S.

    2018-03-01

The article presents a synthesis of combined optimal control algorithms for the AC drive of the excavator rotation mechanism. The synthesis consists in regulating the external coordinates, based on the theory of optimal systems, and correcting the internal coordinates of the electric drive using the "technical optimum" method. The research shows the advantage of combined optimal control systems for the electric rotary drive over classical systems of subordinate regulation. The paper presents a method for selecting the optimality criterion coefficients so as to find the intersection of the ranges of permissible values of the coordinates of the control object. The system can be tuned by choosing the optimality criterion coefficients, which allows one to obtain the required characteristics of the drive: the dynamic moment (M) and the transient time (tpp). Due to the use of combined optimal control systems, it was possible to significantly reduce the maximum value of the dynamic moment (M) and, at the same time, reduce the transient time (tpp).

  14. Spatial scan statistics for detection of multiple clusters with arbitrary shapes.

    PubMed

    Lin, Pei-Sheng; Kung, Yi-Hung; Clayton, Murray

    2016-12-01

    In applying scan statistics for public health research, it would be valuable to develop a detection method for multiple clusters that accommodates spatial correlation and covariate effects in an integrated model. In this article, we connect the concepts of the likelihood ratio (LR) scan statistic and the quasi-likelihood (QL) scan statistic to provide a series of detection procedures sufficiently flexible to apply to clusters of arbitrary shape. First, we use an independent scan model for detection of clusters and then a variogram tool to examine the existence of spatial correlation and regional variation based on residuals of the independent scan model. When the estimate of regional variation is significantly different from zero, a mixed QL estimating equation is developed to estimate coefficients of geographic clusters and covariates. We use the Benjamini-Hochberg procedure (1995) to find a threshold for p-values to address the multiple testing problem. A quasi-deviance criterion is used to regroup the estimated clusters to find geographic clusters with arbitrary shapes. We conduct simulations to compare the performance of the proposed method with other scan statistics. For illustration, the method is applied to enterovirus data from Taiwan. © 2016, The International Biometric Society.
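The Benjamini-Hochberg step the authors use to threshold p-values across candidate clusters is standard; a minimal implementation of the FDR-controlling procedure:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean rejection decision for each p-value under the
    Benjamini-Hochberg false-discovery-rate controlling procedure."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest 1-based rank k with p_(k) <= (k / m) * alpha.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank * alpha / m:
            k_max = rank
    # Reject all hypotheses ranked at or below k_max.
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            reject[idx] = True
    return reject
```

Each rejected hypothesis here corresponds to a candidate cluster surviving the multiple-testing correction before the regrouping step.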

  15. Validation of the Tuebingen CD-25 Inventory as a Measure of Postoperative Health-Related Quality of Life in Patients Treated for Cushing's Disease.

    PubMed

    Milian, Monika; Kreitschmann-Andermahr, Ilonka; Siegel, Sonja; Kleist, Bernadette; Führer-Sakel, Dagmar; Honegger, Juergen; Buchfelder, Michael; Psaras, Tsambika

    2015-01-01

To evaluate the construct and criterion validity of the Tuebingen Cushing's disease quality of life inventory (Tuebingen CD-25) for application in patients treated for Cushing's disease (CD). A total of 176 patients with adrenocorticotropin hormone-dependent CD (144 of them female, overall mean age 46.1 ± 13.7 years) treated at 3 large tertiary referral centers in Germany were studied. Construct validity was assessed by hypothesis testing (self-perceived symptom reduction assessment) and contrasted groups (patients with vs. without hypercortisolism). For this purpose, already existing data from 55 CD patients were used, representing the hypercortisolemic group. Criterion validity (concurrent validity) was assessed in relation to the Cushing's quality of life questionnaire (CushingQoL), the Short Form 36 health survey (SF-36), and the body mass index (BMI). Patients with self-perceived remarkable symptom reduction had significantly lower Tuebingen CD-25 scores (i.e. better health-related quality of life) than patients with self-perceived insufficient symptom reduction (p < 0.05). Similarly, the mean scores of the Tuebingen CD-25 scales were lower in patients without hypercortisolism (total score 27.0 ± 17.2) compared to those with hypercortisolism (total score 45.3 ± 22.1; each p < 0.05), providing evidence for construct validity. Criterion validity was confirmed by the correlations between the Tuebingen CD-25 total score and the CushingQoL (Spearman's coefficient -0.733), as well as all scales of the SF-36 (Spearman's coefficients between -0.447 and -0.700). The analyses presented in this large-sample study provide robust evidence for the construct and criterion validity of the Tuebingen CD-25. © 2015 S. Karger AG, Basel.

  16. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    PubMed

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that, besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for sample size considerations in binary logistic regression analysis.
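The EPV quantity itself is simple to compute; a sketch following the common convention of counting the less frequent outcome class as the "events":

```python
def events_per_variable(outcomes, n_predictors):
    """EPV = (count of the less frequent binary outcome class) /
    (number of candidate predictor variables in the model)."""
    n_events = sum(outcomes)                      # count of 1s
    n_events = min(n_events, len(outcomes) - n_events)
    return n_events / n_predictors
```

Under the contested 10-EPV rule, a model with 5 candidate predictors would call for at least 50 events; the paper's point is that this threshold alone is a weak guide.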

  18. The Role of Testing in Affirmative Action.

    ERIC Educational Resources Information Center

    Manning, Winton H.

    Graphs and charts pertaining to testing in affirmative action are presented. Data concern the following: the predictive validity of College Board admissions tests using freshman grade point average as the criterion; validity coefficients of undergraduate grade point average (UGPA) alone, Law School Admission Test (LSAT) scores, and undergraduate…

  19. A Monte Carlo Program for Simulating Selection Decisions from Personnel Tests

    ERIC Educational Resources Information Center

    Petersen, Calvin R.; Thain, John W.

    1976-01-01

    Relative to test and criterion parameters and cutting scores, the correlation coefficient, sample size, and number of samples to be drawn (all inputs), this program calculates decision classification rates across samples and for combined samples. Several other related indices are also computed. (Author)

  20. An adapted yield criterion for the evolution of subsequent yield surfaces

    NASA Astrophysics Data System (ADS)

    Küsters, N.; Brosius, A.

    2017-09-01

    In numerical analysis of sheet metal forming processes, the anisotropic material behaviour is often modelled with isotropic work hardening and an average Lankford coefficient. In contrast, experimental observations show an evolution of the Lankford coefficients, which can be associated with a yield surface change due to kinematic and distortional hardening. Commonly, extensive efforts are carried out to describe these phenomena. In this paper an isotropic material model based on the Yld2000-2d criterion is adapted with an evolving yield exponent in order to change the yield surface shape. The yield exponent is linked to the accumulative plastic strain. This change has the effect of a rotating yield surface normal. As the normal is directly related to the Lankford coefficient, the change can be used to model the evolution of the Lankford coefficient during yielding. The paper will focus on the numerical implementation of the adapted material model for the FE-code LS-Dyna, mpi-version R7.1.2-d. A recently introduced identification scheme [1] is used to obtain the parameters for the evolving yield surface and will be briefly described for the proposed model. The suitability for numerical analysis will be discussed for deep drawing processes in general. Efforts for material characterization and modelling will be compared to other common yield surface descriptions. Besides experimental efforts and achieved accuracy, the potential of flexibility in material models and the risk of ambiguity during identification are of major interest in this paper.

  1. Symbolic computation of the Birkhoff normal form in the problem of stability of the triangular libration points

    NASA Astrophysics Data System (ADS)

    Shevchenko, I. I.

    2008-05-01

    The problem of stability of the triangular libration points in the planar circular restricted three-body problem is considered. A software package, intended for normalization of autonomous Hamiltonian systems by means of computer algebra, is designed so that normalization problems of high analytical complexity could be solved. It is used to obtain the Birkhoff normal form of the Hamiltonian in the given problem. The normalization is carried out up to the 6th order of expansion of the Hamiltonian in the coordinates and momenta. Analytical expressions for the coefficients of the normal form of the 6th order are derived. Though intermediary expressions occupy gigabytes of the computer memory, the obtained coefficients of the normal form are compact enough for presentation in typographic format. The analogue of the Deprit formula for the stability criterion is derived in the 6th order of normalization. The obtained floating-point numerical values for the normal form coefficients and the stability criterion confirm the results by Markeev (1969) and Coppola and Rand (1989), while the obtained analytical and exact numeric expressions confirm the results by Meyer and Schmidt (1986) and Schmidt (1989). The given computational problem is solved without constructing a specialized algebraic processor, i.e., the designed computer algebra package has a broad field of applicability.

  2. Genetic parameters for stayability to consecutive calvings in Zebu cattle.

    PubMed

    Silva, D O; Santana, M L; Ayres, D R; Menezes, G R O; Silva, L O C; Nobre, P R C; Pereira, R J

    2017-12-22

    Longer-lived cows tend to be more profitable and the stayability trait is a selection criterion correlated to longevity. An alternative to the traditional approach to evaluate stayability is its definition based on consecutive calvings, whose main advantage is the more accurate evaluation of young bulls. However, no study using this alternative approach has been conducted for Zebu breeds. Therefore, the objective of this study was to compare linear random regression models to fit stayability to consecutive calvings of Guzerá, Nelore and Tabapuã cows and to estimate genetic parameters for this trait in the respective breeds. Data up to the eighth calving were used. The models included the fixed effects of age at first calving and year-season of birth of the cow and the random effects of contemporary group, additive genetic, permanent environmental and residual. Random regressions were modeled by orthogonal Legendre polynomials of order 1 to 4 (2 to 5 coefficients) for contemporary group, additive genetic and permanent environmental effects. Using Deviance Information Criterion as the selection criterion, the model with 4 regression coefficients for each effect was the most adequate for the Nelore and Tabapuã breeds and the model with 5 coefficients is recommended for the Guzerá breed. For Guzerá, heritabilities ranged from 0.05 to 0.08, showing a quadratic trend with a peak between the fourth and sixth calving. For the Nelore and Tabapuã breeds, the estimates ranged from 0.03 to 0.07 and from 0.03 to 0.08, respectively, and increased with increasing calving number. The additive genetic correlations exhibited a similar trend among breeds and were higher for stayability between closer calvings. Even between more distant calvings (second v. eighth), stayability showed a moderate to high genetic correlation, which was 0.77, 0.57 and 0.79 for the Guzerá, Nelore and Tabapuã breeds, respectively. 
For Guzerá, when the models with 4 or 5 regression coefficients were compared, the rank correlations between predicted breeding values for the intercept were always higher than 0.99, indicating the possibility of practical application of the least parameterized model. In conclusion, the model with 4 random regression coefficients is recommended for the genetic evaluation of stayability to consecutive calvings in Zebu cattle.
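The Legendre covariates used in such random regression models can be generated with NumPy; a sketch assuming calving numbers 1-8 standardized to the interval [-1, 1]:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(calving, c_min=1, c_max=8, n_coef=4):
    """Row of Legendre-polynomial covariates P_0..P_{n_coef-1} for one
    calving number, standardized to [-1, 1] as is usual in random
    regression models."""
    x = 2.0 * (calving - c_min) / (c_max - c_min) - 1.0
    # legval with the k-th unit coefficient vector evaluates P_k(x).
    return np.array([legendre.legval(x, np.eye(n_coef)[k])
                     for k in range(n_coef)])
```

Each random effect (contemporary group, additive genetic, permanent environmental) then regresses on these covariates, with 4 or 5 coefficients per effect as compared in the study.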

  3. Portuguese-language version of the Chronic Respiratory Questionnaire: a validity and reproducibility study.

    PubMed

    Moreira, Graciane Laender; Pitta, Fábio; Ramos, Dionei; Nascimento, Cinthia Sousa Carvalho; Barzon, Danielle; Kovelis, Demétria; Colange, Ana Lúcia; Brunetto, Antonio Fernando; Ramos, Ercy Mara Cipulo

    2009-08-01

To determine the validity and reproducibility of a Portuguese-language version of the Chronic Respiratory Questionnaire (CRQ) in patients with COPD. A Portuguese-language version of the CRQ (provided by McMaster University, the holder of the questionnaire copyright) was applied to 50 patients with COPD (70 ± 8 years of age; 32 males; FEV1 = 47 ± 18% of predicted) on two occasions, one week apart. The CRQ has four domains (dyspnea, fatigue, emotional function, and mastery) and was applied as an interviewer-administered instrument. The Saint George's Respiratory Questionnaire (SGRQ), already validated for use in Brazil, was used as the criterion for validation. Spirometry and the six-minute walk test (6MWT) were performed to analyze the correlations with the CRQ scores. There were no significant CRQ test-retest differences (p > 0.05 for all domains). The test-retest intraclass correlation coefficient was 0.98, 0.97, 0.98 and 0.95 for the dyspnea, fatigue, emotional function and mastery domains, respectively. The Cronbach's alpha coefficient was 0.91. The CRQ domains correlated significantly with the SGRQ domains (-0.30 < r < -0.67; p < 0.05). There were no significant correlations between spirometric variables and the CRQ domains or between the CRQ domains and the 6MWT, with the exception of the fatigue domain (r = 0.30; p = 0.04). The Portuguese-language version of the CRQ proved to be reproducible and valid for use in Brazilian patients with COPD.
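For reference, a Cronbach's alpha like the 0.91 reported above is computed from the item-level score matrix; a minimal sketch:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)
```

Perfectly consistent items give alpha = 1; weakly related items pull the coefficient down.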

  4. Corrigendum

    NASA Astrophysics Data System (ADS)

    Faghihi, M.; Scheffel, J.

    1988-12-01

A minor correction, having no major influence on our results, is reported here. The coefficients in the equations of state (16) and (17) should read [equation not reproduced]. The set of equations (13)-(20) now comprises the correct, linearized and Fourier-decomposed double adiabatic equations in cylindrical geometry. In addition, there is a printing error in (15): a factor bz should multiply the last term of the left-hand side. Our results are only slightly modified, and the discussion remains unchanged. We wish, however, to point out that the correct stability criterion for isotropic pressure, (26), should be [equation not reproduced]. This is the double adiabatic counterpart to the m ≠ 0 Kadomtsev criterion of ideal MHD.

  5. Reliability Estimates for Undergraduate Grade Point Average

    ERIC Educational Resources Information Center

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  6. An approximate spin design criterion for monoplanes, 1 May 1939

    NASA Technical Reports Server (NTRS)

    Seidman, O.; Donlan, C. J.

    1976-01-01

    An approximate empirical criterion, based on the projected side area and the mass distribution of the airplane, was formulated. The British results were analyzed and applied to American designs. A simpler design criterion, based solely on the type and the dimensions of the tail, was developed; it is useful in a rapid estimation of whether a new design is likely to comply with the minimum requirements for safety in spinning.

  7. Noise correlation in CBCT projection data and its application for noise reduction in low-dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hua; Ouyang, Luo; Wang, Jing, E-mail: jhma@smu.edu.cn, E-mail: jing.wang@utsouthwestern.edu

    2014-03-15

Purpose: To study the noise correlation properties of cone-beam CT (CBCT) projection data and to incorporate the noise correlation information into a statistics-based projection restoration algorithm for noise reduction in low-dose CBCT. Methods: In this study, the authors systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam onboard CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data at six different dose levels from 0.1 to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. Results: The analyses of the repeated measurements show that noise correlation coefficients are nonzero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second-order neighbors are 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation (PWLS-Cor) results in a lower noise level as compared to the PWLS criterion without considering the noise correlation (PWLS-Dia) at the matched resolution. At the 2.0 mm resolution level in the axial-plane noise resolution tradeoff analysis, the noise level of the PWLS-Cor reconstruction is 6.3% lower than that of the PWLS-Dia reconstruction.
Conclusions: Noise is correlated among nearest neighboring detector bins of CBCT projection data. An accurate noise model of CBCT projection data can improve the performance of the statistics-based projection restoration algorithm for low-dose CBCT.
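The bin-to-bin correlation estimate described in the Methods can be sketched as follows, assuming a `(n_repeats, n_bins)` array holding repeated projection readings for one detector row:

```python
import numpy as np

def neighbor_correlation(repeats, order=1):
    """Average noise correlation coefficient between detector bins
    `order` positions apart, estimated from repeated measurements.
    `repeats` has shape (n_repeats, n_bins)."""
    x = np.asarray(repeats, dtype=float)
    noise = x - x.mean(axis=0)          # remove the mean signal per bin
    n_bins = x.shape[1]
    coefs = [np.corrcoef(noise[:, b], noise[:, b + order])[0, 1]
             for b in range(n_bins - order)]
    return float(np.mean(coefs))
```

Estimates like these would populate the off-diagonals of the PWLS covariance matrix, which the PWLS-Dia variant leaves at zero.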

  8. The construct and criterion validity of the multi-source feedback process to assess physician performance: a meta-analysis

    PubMed Central

    Al Ansari, Ahmed; Donnon, Tyrone; Al Khalifa, Khalid; Darwish, Abdulla; Violato, Claudio

    2014-01-01

    Background The purpose of this study was to conduct a meta-analysis on the construct and criterion validity of multi-source feedback (MSF) to assess physicians and surgeons in practice. Methods In this study, we followed the guidelines for the reporting of observational studies included in a meta-analysis. In addition to PubMed and MEDLINE databases, the CINAHL, EMBASE, and PsycINFO databases were searched from January 1975 to November 2012. All articles listed in the references of the MSF studies were reviewed to ensure that all relevant publications were identified. All 35 articles were independently coded by two authors (AA, TD), and any discrepancies (eg, effect size calculations) were reviewed by the other authors (KA, AD, CV). Results Physician/surgeon performance measures from 35 studies were identified. A random-effects model of weighted mean effect size differences (d) resulted in: construct validity coefficients for the MSF system on physician/surgeon performance across different levels in practice ranged from d=0.14 (95% confidence interval [CI] 0.40–0.69) to d=1.78 (95% CI 1.20–2.30); construct validity coefficients for the MSF on physician/surgeon performance on two different occasions ranged from d=0.23 (95% CI 0.13–0.33) to d=0.90 (95% CI 0.74–1.10); concurrent validity coefficients for the MSF based on differences in assessor group ratings ranged from d=0.50 (95% CI 0.47–0.52) to d=0.57 (95% CI 0.55–0.60); and predictive validity coefficients for the MSF on physician/surgeon performance across different standardized measures ranged from d=1.28 (95% CI 1.16–1.41) to d=1.43 (95% CI 0.87–2.00). Conclusion The construct and criterion validity of the MSF system is supported by small to large effect size differences based on the MSF process and physician/surgeon performance across different clinical and nonclinical domain measures. PMID:24600300

  9. Defining applied behavior analysis: An historical analogy

    PubMed Central

    Deitz, Samuel M.

    1982-01-01

This article examines two criteria for a definition of applied behavior analysis. The criteria are derived from a 19th century attempt to establish medicine as a scientific field. The first criterion, experimental determinism, specifies the methodological boundaries of an experimental science. The second criterion, philosophic doubt, clarifies the tentative nature of facts and theories derived from those facts. Practices which will advance the science of behavior are commented upon within each criterion. To conclude, the problems of a 19th century form of empiricism in medicine are related to current practices in applied behavior analysis. PMID:22478557

  10. Bayesian Decision Tree for the Classification of the Mode of Motion in Single-Molecule Trajectories

    PubMed Central

    Türkcan, Silvan; Masson, Jean-Baptiste

    2013-01-01

Membrane proteins move in heterogeneous environments with spatially (sometimes temporally) varying friction and with biochemical interactions with various partners. It is important to reliably distinguish different modes of motion to improve our knowledge of the membrane architecture and to understand the nature of interactions between membrane proteins and their environments. Here, we present an analysis technique for single molecule tracking (SMT) trajectories that can determine the preferred model of motion that best matches observed trajectories. The method is based on Bayesian inference to calculate the posterior probability of an observed trajectory according to a certain model. Information theory criteria, such as the Bayesian information criterion (BIC), the Akaike information criterion (AIC), and modified AIC (AICc), are used to select the preferred model. The considered group of models includes free Brownian motion, and confined motion in 2nd or 4th order potentials. We determine the best information criteria for classifying trajectories. We tested its limits through simulations matching large sets of experimental conditions and we built a decision tree. This decision tree first uses the BIC to distinguish between free Brownian motion and confined motion. In a second step, it classifies the confining potential further using the AIC. We apply the method to experimental Clostridium perfringens ε-toxin (CPεT) receptor trajectories to show that these receptors are confined by a spring-like potential. An adaptation of this technique was applied on a sliding window in the temporal dimension along the trajectory. We applied this adaptation to experimental CPεT trajectories that lose confinement due to disaggregation of confining domains. This new technique adds another dimension to the discussion of SMT data.
The mode of motion of a receptor might hold more biologically relevant information than the diffusion coefficient or domain size and may be a better tool to classify and compare different SMT experiments. PMID:24376584
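The information criteria named above are computed from each candidate model's maximized log-likelihood L, parameter count k, and number of observations n; a minimal sketch (the candidate scores in the test are hypothetical):

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * log_lik

def aicc(log_lik, k, n):
    """AIC with small-sample correction term."""
    return aic(log_lik, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(log_lik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L."""
    return k * math.log(n) - 2 * log_lik

def preferred_model(scores):
    """Pick the model name with the lowest criterion value."""
    return min(scores, key=scores.get)
```

In the paper's decision tree, BIC first separates free Brownian motion from confined motion, and AIC then ranks the 2nd- versus 4th-order confining potentials.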

  11. An improved partial least-squares regression method for Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Momenpour Tehran Monfared, Ali; Anis, Hanan

    2017-10-01

It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve BVSPLS based on a novel selection mechanism. The proposed method is based on sorting the weighted regression coefficients, and the importance of each variable in the sorted list is then evaluated using the root mean square error of prediction (RMSEP) criterion at each iteration step. Our improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. Our IBVSPLS was also compared to the jack-knifing (simpler) and genetic algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed either a similar or a better performance compared to the genetic algorithm.
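The general shape of backward variable selection can be sketched as below. Note this generic loop drops whichever variable most improves validation RMSEP, rather than the authors' coefficient-sorting mechanism, and it uses ordinary least squares in place of PLS so the sketch stays self-contained:

```python
import numpy as np

def rmsep(X_tr, y_tr, X_te, y_te, cols):
    """Validation root mean square error of prediction for a
    least-squares fit on the selected columns (OLS stands in for PLS)."""
    coef, *_ = np.linalg.lstsq(X_tr[:, cols], y_tr, rcond=None)
    resid = y_te - X_te[:, cols] @ coef
    return float(np.sqrt(np.mean(resid ** 2)))

def backward_select(X_tr, y_tr, X_te, y_te):
    """Iteratively drop the variable whose removal most reduces the
    validation RMSEP; stop when no removal helps."""
    cols = list(range(X_tr.shape[1]))
    best = rmsep(X_tr, y_tr, X_te, y_te, cols)
    while len(cols) > 1:
        scores = [(rmsep(X_tr, y_tr, X_te, y_te, cols[:i] + cols[i + 1:]), i)
                  for i in range(len(cols))]
        score, i = min(scores)
        if score >= best:
            break
        best = score
        cols.pop(i)
    return cols
```

A variable that only fits noise in the training set raises the validation RMSEP and gets eliminated, which is the effect the BVSPLS family exploits.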

  12. Feature and contrast enhancement of mammographic image based on multiscale analysis and morphology.

    PubMed

    Wu, Shibin; Yu, Shaode; Yang, Yuhan; Xie, Yaoqin

    2013-01-01

    A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transform and mathematical morphology. First of all, the Laplacian Gaussian pyramid operator is applied to transform the mammogram into different scale subband images. In addition, the detail or high frequency subimages are equalized by contrast limited adaptive histogram equalization (CLAHE) and the low-pass subimages are processed by mathematical morphology. Finally, the enhanced image of feature and contrast is reconstructed from the Laplacian Gaussian pyramid coefficients modified at one or more levels by contrast limited adaptive histogram equalization and mathematical morphology, respectively. The enhanced image is then processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, signal-to-noise ratio (SNR), and contrast improvement index (CII).
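    The multiscale decomposition underlying this approach can be illustrated with a numpy-only Laplacian pyramid. Here a simple per-level gain stands in for the CLAHE and morphology operators of the paper, and all function names are assumptions of this sketch; reconstruction with unit gains recovers the input exactly because each Laplacian level stores the residual against the same upsampling operator.

```python
import numpy as np

def blur(img):
    """Separable 5-tap binomial blur with a [1,4,6,4,1]/16 kernel."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    return img

def up(img, shape):
    """Nearest-neighbour upsample by 2, crop to target shape, then smooth."""
    u = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]
    return blur(u)

def build_pyramid(img, levels):
    gp = [img]
    for _ in range(levels):
        gp.append(blur(gp[-1])[::2, ::2])          # Gaussian pyramid
    lp = [gp[i] - up(gp[i + 1], gp[i].shape) for i in range(levels)]
    return lp, gp[-1]                              # Laplacian levels + base

def reconstruct(lp, base, gains=None):
    """Rebuild the image; gains[i] scales Laplacian level i (0 = finest)."""
    gains = gains or [1.0] * len(lp)
    img = base
    for lap, g in zip(reversed(lp), reversed(gains)):
        img = up(img, lap.shape) + g * lap
    return img

rng = np.random.default_rng(11)
img = rng.random((64, 64))
lp, base = build_pyramid(img, 3)
rec = reconstruct(lp, base)                         # exact reconstruction
enhanced = reconstruct(lp, base, [1.8, 1.4, 1.1])   # boost detail levels
print(np.max(np.abs(rec - img)))
```
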

  13. Feature and Contrast Enhancement of Mammographic Image Based on Multiscale Analysis and Morphology

    PubMed Central

    Wu, Shibin; Xie, Yaoqin

    2013-01-01

    A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transform and mathematical morphology. First of all, the Laplacian Gaussian pyramid operator is applied to transform the mammogram into different scale subband images. In addition, the detail or high frequency subimages are equalized by contrast limited adaptive histogram equalization (CLAHE) and the low-pass subimages are processed by mathematical morphology. Finally, the enhanced image of feature and contrast is reconstructed from the Laplacian Gaussian pyramid coefficients modified at one or more levels by contrast limited adaptive histogram equalization and mathematical morphology, respectively. The enhanced image is then processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, signal-to-noise ratio (SNR), and contrast improvement index (CII). PMID:24416072

  14. Functional feature embedded space mapping of fMRI data.

    PubMed

    Hu, Jin; Tian, Jie; Yang, Lei

    2006-01-01

    We have proposed a new method for fMRI data analysis called Functional Feature Embedded Space Mapping (FFESM). Our work mainly focuses on experimental designs with periodic stimuli, which can be described by a number of Fourier coefficients in the frequency domain. A nonlinear dimensionality reduction technique, Isomap, is applied for the first time to the high dimensional features obtained from the frequency domain of the fMRI data. Finally, the presence of activated time series is identified by a clustering method in which the information theoretic criterion of minimum description length (MDL) is used to estimate the number of clusters. The feasibility of our algorithm is demonstrated by real human experiments. Although we focus on analyzing periodic fMRI data, the approach can be extended to analyze non-periodic (event-related) fMRI data by replacing the Fourier analysis with a wavelet analysis.

  15. Evaluation of the best fit distribution for partial duration series of daily rainfall in Madinah, western Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.

    2014-09-01

    Rainfall frequency analysis is an essential tool for the design of water related infrastructure. It can be used to predict future flood magnitudes from the magnitude and frequency of extreme rainfall events. This study analyses the application of rainfall partial duration series (PDS) in the fast-growing city of Madinah, located in the western part of Saudi Arabia. Different statistical distributions were applied (i.e. Normal, Log Normal, Extreme Value type I, Generalized Extreme Value, Pearson Type III, Log Pearson Type III) and their distribution parameters were estimated using L-moments methods. Several model selection criteria were also applied, e.g. the Akaike Information Criterion (AIC), corrected Akaike Information Criterion (AICc), Bayesian Information Criterion (BIC) and Anderson-Darling Criterion (ADC). The analysis indicated that the Generalized Extreme Value distribution is the best fit for the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
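    The criterion-based comparison can be illustrated with a small sketch that fits two candidate distributions, Normal and Extreme Value type I (Gumbel), and scores them with AIC, AICc and BIC. Moment estimators are used here in place of the L-moments fitting of the study, and the synthetic data and helper names are assumptions of this sketch.

```python
import numpy as np

def aic_aicc_bic(ll, k, n):
    """Return (AIC, AICc, BIC) for log-likelihood ll, k parameters, n samples."""
    aic = 2 * k - 2 * ll
    return aic, aic + 2 * k * (k + 1) / (n - k - 1), k * np.log(n) - 2 * ll

def normal_ll(x):
    mu, s = x.mean(), x.std()
    return float(np.sum(-0.5 * np.log(2 * np.pi * s**2) - (x - mu)**2 / (2 * s**2)))

def gumbel_ll(x):
    # Extreme Value type I (Gumbel), fitted by method of moments
    beta = x.std() * np.sqrt(6) / np.pi
    mu = x.mean() - 0.5772 * beta
    z = (x - mu) / beta
    return float(np.sum(-np.log(beta) - z - np.exp(-z)))

rng = np.random.default_rng(2)
x = rng.gumbel(10.0, 3.0, 500)      # synthetic extreme-rainfall-like sample
scores = {name: aic_aicc_bic(ll, 2, len(x))
          for name, ll in [("normal", normal_ll(x)), ("ev1", gumbel_ll(x))]}
print(scores)                        # lower AIC/AICc/BIC = preferred fit
```
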

  16. Experimental-Analytical Procedure Of Definition Of Thermal Fluxes On The Head Fairing Of Launch Vehicles

    NASA Astrophysics Data System (ADS)

    Yurchenko, I.; Karakotin, I.; Kudinov, A.

    2011-05-01

    Minimization of the head fairing heat protection shield weight during injection of a spacecraft through the dense layers of the atmosphere is a complicated task. Identification of the heat transfer coefficient on the heat protection shield surface during injection can be considered the primary task, to be solved with sufficient accuracy both to minimize heat shield weight and to meet reliability requirements. The roughness height around the sonic point on the spherical nose tip of the head fairing has a great influence on the calculated heat transfer coefficient. As found during flight tests, the roughness height makes it possible to formulate a boundary layer transition criterion for the head fairing in flight. The second task is therefore to assess how the roughness height influences the total incoming heat flux to the head fairing. The third task is associated with correct application of the first task's results, since boundary conditions change during flight, for instance through bubbling of the heat shield surface paint and ablation of the thermal protection. In this article we consider results of flight tests carried out on launch vehicles which allowed us to measure heat fluxes in flight and to estimate the dispersion of the heat transfer coefficient. An experimental-analytical procedure for defining heat fluxes on LV head fairings is presented. The procedure includes: - calculation of a general-purpose dimensionless heat transfer coefficient, the Nusselt number Nu_eff, based on the proposed effective temperature (T_eff) method; the method allows calculation of Nusselt number values for cylindrical surfaces as well as of the dispersion of the heat transfer coefficient; - a universal criterion of turbulent-laminar transition for blunted head fairings, the roughness Reynolds number Re_k = (ρ_e U_e k / μ_e)_TR = const, which gives the best correlation of all flight experiment data processed per the Reda procedure for defining turbulent-laminar transition in the boundary layer. 
The criterion allows defining the time interval during which turbulent flow exists on the head fairing surfaces. It was found that when high background free-stream disturbances during main LV engine operation combine with the influence of distributed roughness, the critical value of the Reynolds number is an order of magnitude lower than values obtained in wind tunnels and in free flight. The influence of minimizing the surface roughness height near the sonic point on the head fairing nose has been estimated. It was found that for smooth head fairing elements the transition criterion, the momentum-thickness Reynolds number, reaches a limiting value of about 200 as the roughness height approaches zero. Thus the turbulent-laminar transition occurs earlier, decreasing the duration of exposure of the heat shield to high turbulent heat fluxes. This allows the heat shield thickness to be decreased by up to 30%.

  17. Three-Dimensional Dynamic Rupture in Brittle Solids and the Volumetric Strain Criterion

    NASA Astrophysics Data System (ADS)

    Uenishi, K.; Yamachi, H.

    2017-12-01

    As pointed out by Uenishi (2016 AGU Fall Meeting), the source dynamics of ordinary earthquakes is often studied in the framework of 3D rupture in brittle solids, but our knowledge of the mechanics of actual 3D rupture is limited. Typically, criteria derived from 1D frictional observations of sliding materials or post-failure behavior of solids are applied in seismic simulations, and although mode-I cracks are frequently encountered in earthquake-induced ground failures, rupture in tension is in most cases ignored. Even when it is included in analyses, the classical maximum principal tensile stress rupture criterion is repeatedly used. Our recent basic experiments on dynamic rupture of spherical or cylindrical monolithic brittle solids, using high-voltage electric discharge impulses or impact loads, have indicated generation of surprisingly simple and often flat rupture surfaces in 3D specimens even without pre-existing planes of weakness. However, at the same time, snapshots taken by a high-speed digital video camera have shown rather complicated histories of rupture development in these 3D solid materials, which seem difficult to explain by, for example, the maximum principal stress criterion. Instead, a (tensile) volumetric strain criterion, where the volumetric strain (dilatation, or the first invariant of the strain tensor) is the decisive parameter for rupture, seems more effective in computationally reproducing the multi-directionally propagating waves and rupture. In this study, we try to show the connection between this volumetric strain criterion and other classical rupture criteria or physical parameters employed in continuum mechanics, and indicate that the criterion has, to some degree, physical meaning. First, we mathematically illustrate that the criterion is equivalent to a criterion based on the mean normal stress, a crucial parameter in plasticity. 
Then, we mention the relation between the volumetric strain criterion and the failure envelope of the Mohr-Coulomb criterion that describes shear-related rupture. The critical value of the volumetric strain for rupture may be controlled by the apparent cohesion and apparent angle of internal friction of the Mohr-Coulomb criterion.

  18. Accuracy of State-of-the-Art Actuator-Line Modeling for Wind Turbine Wakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jha, Pankaj; Churchfield, Matthew; Moriarty, Patrick

    The current actuator line method (ALM) within an OpenFOAM computational fluid dynamics (CFD) solver was used to perform simulations of the NREL Phase VI rotor under rotating and parked conditions, two fixed-wing designs both with an elliptic spanwise loading, and the NREL 5-MW turbine. The objective of this work is to assess and improve the accuracy of the state-of-the-art ALM in predicting rotor blade loads, particularly by focusing on the method used to project the actuator forces onto the flow field as body forces. Results obtained for sectional normal and tangential force coefficients were compared to available experimental data and to the in-house performance code XTurb-PSU. It was observed that the ALM results agree well with measured data and results obtained from XTurb-PSU except in the root and tip regions if a three-dimensional Gaussian of width, ε, constant along the blade span is used to project the actuator force onto the flow field. A new method is proposed where the Gaussian width, ε, varies along the blade span following an elliptic distribution. A general criterion is derived that applies to any planform shape. It is found that the new criterion for ε leads to improved prediction of blade tip loads for a variety of blade planforms and rotor conditions considered.
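    A spanwise-varying Gaussian width following an elliptic distribution can be sketched as below. The exact scaling of the proposed criterion is not given in this record, so this sketch simply shapes ε with an elliptic profile and floors it at a value tied to grid resolution; the function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def elliptic_epsilon(r, span, eps_max, eps_min):
    """Gaussian projection width following an elliptic spanwise distribution,
    floored at eps_min (e.g. tied to the local grid spacing)."""
    s = 2.0 * r / span - 1.0                        # map station to [-1, 1]
    ell = np.sqrt(np.clip(1.0 - s**2, 0.0, 1.0))    # elliptic shape function
    return np.maximum(eps_min, eps_max * ell)

r = np.linspace(0.0, 63.0, 10)   # stations along a ~63 m blade (illustrative)
print(elliptic_epsilon(r, 63.0, 4.0, 0.5))
```
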

  19. High pressure studies on structural and secondary relaxation dynamics in silyl derivative of D-glucose

    NASA Astrophysics Data System (ADS)

    Minecka, Aldona; Kamińska, Ewa; Tarnacka, Magdalena; Dzienia, Andrzej; Madejczyk, Olga; Waliłko, Patrycja; Kasprzycka, Anna; Kamiński, Kamil; Paluch, Marian

    2017-08-01

    In this paper, broadband dielectric spectroscopy was applied to investigate molecular dynamics of 1,2,3,4,6-penta-O-(trimethylsilyl)-D-glucopyranose (S-GLU) at ambient and elevated pressures. Our studies showed that apart from the structural relaxation, one well resolved asymmetric secondary process (initially labeled as β) is observed in the spectra measured at p = 0.1 MPa. Analysis with the use of the coupling model and criterion proposed by Ngai and Capaccioli indicated that the β-process in S-GLU is probably a Johari-Goldstein relaxation of intermolecular origin. Further high pressure experiments demonstrated that there are in fact two secondary processes contributing to the β-relaxation. Therefore, one can postulate that the coupling model is a necessary, but not sufficient criterion to identify the true nature of the given secondary relaxation process. The role of pressure experiments in better understanding of the molecular origin of local mobility seems to be much more important. Interestingly, our research also revealed that the structural relaxation in S-GLU is very sensitive to compression. It was reflected in an extremely high pressure coefficient of the glass transition temperature (dTg/dp = 412 K/GPa). According to the literature data, such a high value of dTg/dp has not been obtained so far for any H-bonded, van der Waals, or polymeric glass-formers.

  20. Factors Determining Success in Youth Judokas

    PubMed Central

    Krstulović, Saša; Caput, Petra Đapić

    2017-01-01

    Abstract The aim of this study was to compare two models of determining factors for success in judo. The first model (Model A) included testing motor abilities of high-level Croatian judokas in the cadet age category. The sample in Model A consisted of 71 male and female judokas aged 16 ± 0.6 years who were divided into four subsamples according to sex and weight category. The second model (Model B) consisted of interviewing 40 top-level judo experts on the importance of motor abilities for cadets’ success in judo. According to Model A, the greatest impact on the criterion variable of success in males and females of heavier weight categories came from variables assessing maximum strength, coordination and jumping ability. In the lighter weight male categories, the highest correlation with the criterion variable of success was found for the variable assessing agility. In the lighter weight female categories, however, the variable assessing muscular endurance had the greatest impact on success. In Model B, specific endurance was crucial for success in judo, while flexibility was the least important, regardless of sex and weight category. Spearman’s rank correlation coefficients showed that there were no significant correlations between the results obtained in Models A and B for any of the observed subsamples. Although no significant correlations between the factors for success obtained through Models A and B were found, common determinants of success, regardless of the applied model, were identified. PMID:28469759

  1. Generalized Majority Logic Criterion to Analyze the Statistical Strength of S-Boxes

    NASA Astrophysics Data System (ADS)

    Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan

    2012-05-01

    The majority logic criterion is applicable in the evaluation process of substitution boxes used in the advanced encryption standard (AES). The performance of modified or advanced substitution boxes is predicted by processing the results of statistical analysis by the majority logic criteria. In this paper, we use the majority logic criteria to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, the majority logic criterion is applied to AES, affine power affine (APA), Gray, Liu J, residue prime, S8 AES, Skipjack, and Xyi substitution boxes. The majority logic criterion is further extended into a generalized majority logic criterion which offers a broader spectrum for analyzing the effectiveness of substitution boxes in image encryption applications. The integral components of the statistical analyses used for the generalized majority logic criterion are derived from results of entropy analysis, contrast analysis, correlation analysis, homogeneity analysis, energy analysis, and mean of absolute deviation (MAD) analysis.
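    Two of the component analyses, adjacent-pixel correlation and MAD, can be illustrated with a toy S-box. The random permutation box and the gradient test image are assumptions of this sketch, not any of the cited S-boxes; byte-wise substitution destroys the spatial correlation of a smooth image even though it acts pixel by pixel.

```python
import numpy as np

def adjacent_correlation(img):
    """Pearson correlation between horizontally adjacent pixel pairs."""
    a = img[:, :-1].ravel().astype(float)
    b = img[:, 1:].ravel().astype(float)
    return float(np.corrcoef(a, b)[0, 1])

def mean_absolute_deviation(plain, enc):
    """MAD between the plain and substituted images."""
    return float(np.mean(np.abs(plain.astype(int) - enc.astype(int))))

rng = np.random.default_rng(3)
sbox = rng.permutation(256).astype(np.uint8)               # toy 8-bit substitution box
plain = np.tile(np.arange(256, dtype=np.uint8), (16, 1))   # smooth gradient image
enc = sbox[plain]                                          # byte-wise substitution
print(adjacent_correlation(plain))   # close to 1: strong pixel correlation
print(adjacent_correlation(enc))     # near 0: substitution decorrelates
print(mean_absolute_deviation(plain, enc))
```
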

  2. Identification of confounders in the assessment of the relationship between lead exposure and child development.

    PubMed

    Tong, I S; Lu, Y

    2001-01-01

    To explore the best approach to identify and adjust for confounders in epidemiologic practice. In the Port Pirie cohort study, the selection of covariates was based on both a priori and an empirical consideration. In an assessment of the relationship between exposure to environmental lead and child development, change-in-estimate (CE) and significance testing (ST) criteria were compared in identifying potential confounders. The Pearson correlation coefficients were used to evaluate the potential for collinearity between pairs of major quantitative covariates. In multivariate analyses, the effects of confounding factors were assessed with multiple linear regression models. The nature and number of covariates selected varied with different confounder selection criteria and different cutoffs. Four covariates (i.e., quality of home environment, socioeconomic status (SES), maternal intelligence, and parental smoking behaviour) met the conventional CE criterion (> or =10%), whereas 14 variables met the ST criterion (p < or = 0.25). However, the magnitude of the relationship between blood lead concentration and children's IQ differed slightly after adjustment for confounding, using either the CE (partial regression coefficient: -4.4; 95% confidence interval (CI): -0.5 to -8.3) or ST criterion (-4.3; 95% CI: -0.2 to -8.4). Identification and selection of confounding factors need to be viewed cautiously in epidemiologic studies. Either the CE (e.g., > or = 10%) or ST (e.g., p < or = 0.25) criterion may be implemented in identification of a potential confounder if a study sample is sufficiently large, and both the methods are subject to arbitrariness of selecting a cut-off point. In this study, the CE criterion (i.e., > or = 10%) appears to be more stringent than the ST method (i.e., p < or = 0.25) in the identification of confounders. 
However, the ST rule cannot be used to determine the presence of confounding because it does not reflect the causal relationship between the confounder and the outcome. This study shows the complexities one can expect to encounter in the identification of and adjustment for confounders.
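    The change-in-estimate criterion can be sketched on synthetic data: fit the exposure effect with and without the candidate covariate, and flag the covariate as a confounder when the relative change in the exposure coefficient reaches the 10% cutoff described above. The data-generating model and variable names here are assumptions of this sketch.

```python
import numpy as np

def exposure_coef(y, cols):
    """OLS fit with intercept; returns the coefficient of the first column (exposure)."""
    A = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1]

rng = np.random.default_rng(4)
n = 1000
c = rng.normal(size=n)                        # candidate confounder (e.g. SES)
e = 0.8 * c + rng.normal(size=n)              # exposure associated with confounder
y = 1.0 * e + 1.0 * c + rng.normal(size=n)    # outcome depends on both
b_crude = exposure_coef(y, [e])               # unadjusted exposure effect
b_adj = exposure_coef(y, [e, c])              # adjusted for the covariate
ce = abs((b_adj - b_crude) / b_crude)         # change-in-estimate
print(b_crude, b_adj, ce)                     # CE >= 0.10 flags c as a confounder
```
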

  3. SU-E-T-20: A Correlation Study of 2D and 3D Gamma Passing Rates for Prostate IMRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, D; Sun Yat-sen University Cancer Center, Guangzhou, Guangdong; Wang, B

    2015-06-15

    Purpose: To investigate the correlation between the two-dimensional gamma passing rate (2D %GP) and three-dimensional gamma passing rate (3D %GP) in prostate IMRT quality assurance. Methods: Eleven prostate IMRT plans were randomly selected from the clinical database and were used to obtain dose distributions in the phantom and patient. Three types of delivery errors (MLC bank sag errors, central MLC errors and monitor unit errors) were intentionally introduced to modify the clinical plans through an in-house Matlab program. This resulted in 187 modified plans. The 2D %GP and 3D %GP were analyzed using different dose-difference and distance-to-agreement criteria (1%/1mm, 2%/2mm and 3%/3mm) and a 20% dose threshold. The 2D %GP and 3D %GP were then compared not only for the whole region, but also for the PTVs and critical structures using the statistical Pearson’s correlation coefficient (γ). Results: For different delivery errors, the average comparison of 2D %GP and 3D %GP showed different conclusions. The statistical correlation coefficients between 2D %GP and 3D %GP for the whole dose distribution showed that except for the 3%/3mm criterion, the 2D %GP and 3D %GP of the 1%/1mm criterion and 2%/2mm criterion had strong correlations (Pearson’s γ value >0.8). Compared with the whole region, the correlations of 2D %GP and 3D %GP for the PTV were better (the γ value for the 1%/1mm, 2%/2mm and 3%/3mm criterion was 0.959, 0.931 and 0.855, respectively). However for the rectum, there was no correlation between 2D %GP and 3D %GP. Conclusion: For prostate IMRT, the correlation between 2D %GP and 3D %GP for the PTV is better than that for normal structures. The lower dose-difference and DTA criterion shows less difference between 2D %GP and 3D %GP. Other factors such as the dosimeter characteristics and TPS algorithm bias may also influence the correlation between 2D %GP and 3D %GP.

  4. The psychometric properties of an Iranian translation of the Work Ability Index (WAI) questionnaire.

    PubMed

    Abdolalizadeh, M; Arastoo, A A; Ghsemzadeh, R; Montazeri, A; Ahmadi, K; Azizi, A

    2012-09-01

    This study was carried out to evaluate the psychometric properties of an Iranian translation of the Work Ability Index (WAI) questionnaire. In this methodological study, nurses and healthcare workers aged 40 years and older who worked in educational hospitals in Ahvaz (236 workers) in 2010 completed the questionnaire, and 60 of the workers filled out the WAI questionnaire a second time to ensure test-retest reliability. The forward-backward method was applied to translate the questionnaire from English into Persian. The psychometric properties of the Iranian translation of the WAI were assessed using the following tests: internal consistency (to test reliability), test-retest analysis, exploratory factor analysis (construct validity), discriminant validity by comparing the mean WAI score in two groups of employees with different levels of sick leave, and criterion validity by determining the correlation between the Persian version of the short form health survey (SF-36) and the WAI score. Cronbach's alpha coefficient was estimated to be 0.79 and it was concluded that the internal consistency was high enough. The intraclass correlation coefficient was 0.92. Factor analysis indicated three factors in the structure of work ability: self-perceived work ability (24.5% of the variance), mental resources (22.23% of the variance), and presence of disease and health related limitation (18.55% of the variance). Statistical tests showed that this questionnaire was capable of discriminating between two groups of employees who had different levels of sick leave. Criterion validity analysis showed that this instrument and all dimensions of the Iranian version of the SF-36 were correlated significantly. Item correlation corrected for overlap showed that the items had good correlations except for one. 
The finding of the study showed that the Iranian version of the WAI is a reliable and valid measure of work ability and can be used both in research and practical activities.

  5. Fine-Scale Skeletal Banding Can Distinguish Symbiotic from Asymbiotic Species among Modern and Fossil Scleractinian Corals.

    PubMed

    Frankowiak, Katarzyna; Kret, Sławomir; Mazur, Maciej; Meibom, Anders; Kitahara, Marcelo V; Stolarski, Jarosław

    2016-01-01

    Understanding the evolution of scleractinian corals on geological timescales is key to predict how modern reef ecosystems will react to changing environmental conditions in the future. Important to such efforts has been the development of several skeleton-based criteria to distinguish between the two major ecological groups of scleractinians: zooxanthellates, which live in symbiosis with dinoflagellate algae, and azooxanthellates, which lack endosymbiotic dinoflagellates. Existing criteria are based on overall skeletal morphology and bio/geo-chemical indicators-none of them being particularly robust. Here we explore another skeletal feature, namely fine-scale growth banding, which differs between these two groups of corals. Using various ultra-structural imaging techniques (e.g., TEM, SEM, and NanoSIMS) we have characterized skeletal growth increments, composed of doublets of optically light and dark bands, in a broad selection of extant symbiotic and asymbiotic corals. Skeletons of zooxanthellate corals are characterized by regular growth banding, whereas in skeletons of azooxanthellate corals the growth banding is irregular. Importantly, the regularity of growth bands can be easily quantified with a coefficient of variation obtained by measuring bandwidths on SEM images of polished and etched skeletal surfaces of septa and/or walls. We find that this coefficient of variation (lower values indicate higher regularity) ranges from ~40 to ~90% in azooxanthellate corals and from ~5 to ~15% in symbiotic species. With more than 90% (28 out of 31) of the studied corals conforming to this microstructural criterion, it represents an easy and robust method to discriminate between zooxanthellate and azooxanthellate corals. This microstructural criterion has been applied to the exceptionally preserved skeleton of the Triassic (Norian, ca. 
215 Ma) scleractinian Volzeia sp., which contains the first example of regular, fine-scale banding of thickening deposits in a fossil coral of this age. The regularity of its growth banding strongly suggests that the coral was symbiotic (zooxanthellate).
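    The coefficient-of-variation criterion is simple to apply. In this sketch the 25% decision threshold (chosen between the reported ~5-15% and ~40-90% ranges) and the synthetic band widths are assumptions; only the CV formula itself is standard.

```python
import numpy as np

def band_cv(widths):
    """Coefficient of variation (%) of growth-band widths."""
    w = np.asarray(widths, dtype=float)
    return 100.0 * w.std(ddof=1) / w.mean()

def classify_banding(widths, threshold=25.0):
    # threshold chosen between the reported ~5-15% and ~40-90% CV ranges
    return "zooxanthellate" if band_cv(widths) < threshold else "azooxanthellate"

rng = np.random.default_rng(5)
regular = rng.normal(10.0, 1.0, 50)       # ~10% CV: regular banding
irregular = rng.normal(10.0, 6.0, 50)     # ~60% CV: irregular banding
print(classify_banding(regular), classify_banding(irregular))
```
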

  6. Fine-Scale Skeletal Banding Can Distinguish Symbiotic from Asymbiotic Species among Modern and Fossil Scleractinian Corals

    PubMed Central

    Frankowiak, Katarzyna; Kret, Sławomir; Mazur, Maciej; Meibom, Anders; Kitahara, Marcelo V.; Stolarski, Jarosław

    2016-01-01

    Understanding the evolution of scleractinian corals on geological timescales is key to predict how modern reef ecosystems will react to changing environmental conditions in the future. Important to such efforts has been the development of several skeleton-based criteria to distinguish between the two major ecological groups of scleractinians: zooxanthellates, which live in symbiosis with dinoflagellate algae, and azooxanthellates, which lack endosymbiotic dinoflagellates. Existing criteria are based on overall skeletal morphology and bio/geo-chemical indicators—none of them being particularly robust. Here we explore another skeletal feature, namely fine-scale growth banding, which differs between these two groups of corals. Using various ultra-structural imaging techniques (e.g., TEM, SEM, and NanoSIMS) we have characterized skeletal growth increments, composed of doublets of optically light and dark bands, in a broad selection of extant symbiotic and asymbiotic corals. Skeletons of zooxanthellate corals are characterized by regular growth banding, whereas in skeletons of azooxanthellate corals the growth banding is irregular. Importantly, the regularity of growth bands can be easily quantified with a coefficient of variation obtained by measuring bandwidths on SEM images of polished and etched skeletal surfaces of septa and/or walls. We find that this coefficient of variation (lower values indicate higher regularity) ranges from ~40 to ~90% in azooxanthellate corals and from ~5 to ~15% in symbiotic species. With more than 90% (28 out of 31) of the studied corals conforming to this microstructural criterion, it represents an easy and robust method to discriminate between zooxanthellate and azooxanthellate corals. This microstructural criterion has been applied to the exceptionally preserved skeleton of the Triassic (Norian, ca. 
215 Ma) scleractinian Volzeia sp., which contains the first example of regular, fine-scale banding of thickening deposits in a fossil coral of this age. The regularity of its growth banding strongly suggests that the coral was symbiotic (zooxanthellate). PMID:26751803

  7. A stopping criterion for the iterative solution of partial differential equations

    NASA Astrophysics Data System (ADS)

    Rao, Kaustubh; Malan, Paul; Perot, J. Blair

    2018-01-01

    A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
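    One way to estimate the remaining solution error from prior solution changes, assuming roughly linear (geometric) convergence, is sketched below with a Jacobi iteration: the contraction rate is estimated from successive update norms and the remaining error is bounded by the geometric series it implies. This illustrates the idea only and is not the authors' estimator.

```python
import numpy as np

def jacobi_with_error_estimate(A, b, tol=1e-8, maxit=500):
    """Jacobi iteration; stops when an error estimate built from successive
    solution changes (assuming linear convergence) drops below tol."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b)
    prev_dx = None
    for it in range(maxit):
        x_new = (b - R @ x) / D
        dx = np.linalg.norm(x_new - x)
        if prev_dx is not None and dx < prev_dx:
            rho = dx / prev_dx                  # estimated contraction rate
            err_est = dx * rho / (1.0 - rho)    # geometric-series error bound
            if err_est < tol:
                return x_new, it, err_est
        prev_dx, x = dx, x_new
    return x, maxit, np.inf

A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters, est = jacobi_with_error_estimate(A, b)
print(np.linalg.norm(A @ x - b), iters, est)
```

    The point of stopping on the error estimate rather than the raw residual is that the residual can over- or under-state the true solution error depending on the conditioning of A.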

  8. New Stopping Criteria for Segmenting DNA Sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Wentian

    2001-06-18

    We propose a solution to the stopping criterion problem in segmenting inhomogeneous DNA sequences with complex statistical patterns. This new stopping criterion is based on the Bayesian information criterion in the model selection framework. When this criterion is applied to the telomere of S. cerevisiae and the complete sequence of E. coli, borders of biologically meaningful units were identified, and a more reasonable number of domains was obtained. We also introduce a measure called segmentation strength which can be used to control the delineation of large domains. The relationship between the average domain size and the threshold of segmentation strength is determined for several genome sequences.
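    The BIC-based stopping rule for recursive segmentation can be sketched as follows: split at the position maximizing the likelihood gain, and recurse only while twice the gain exceeds the BIC penalty for the extra parameters (three composition probabilities plus one border for a 4-letter model). The toy two-domain sequence, the minimum segment length, and the penalty's parameter count are assumptions of this sketch.

```python
import numpy as np

def loglik(counts):
    """Multinomial log-likelihood of a segment from its letter counts."""
    n = counts.sum()
    p = counts[counts > 0] / n
    return float(np.sum(counts[counts > 0] * np.log(p)))

def best_split(seq_idx):
    """seq_idx: integer-coded sequence (0..3). Returns (gain, position)."""
    n = len(seq_idx)
    total = np.bincount(seq_idx, minlength=4)
    ll_whole = loglik(total)
    left = np.zeros(4)
    best = (-np.inf, None)
    for i in range(1, n):
        left[seq_idx[i - 1]] += 1
        if i < 20 or n - i < 20:      # skip segments that are too short
            continue
        gain = loglik(left) + loglik(total - left) - ll_whole
        if gain > best[0]:
            best = (gain, i)
    return best

def segment(seq_idx, offset=0, borders=None):
    if borders is None:
        borders = []
    gain, pos = best_split(seq_idx)
    n = len(seq_idx)
    # BIC stopping rule: 2*gain must exceed (3 composition params + 1 border)*ln(n)
    if pos is not None and 2 * gain > 4 * np.log(n):
        segment(seq_idx[:pos], offset, borders)
        borders.append(offset + pos)
        segment(seq_idx[pos:], offset + pos, borders)
    return borders

rng = np.random.default_rng(6)
# Two-domain toy sequence: AT-rich half followed by GC-rich half
left_half = rng.choice(4, 600, p=[0.4, 0.1, 0.1, 0.4])
right_half = rng.choice(4, 600, p=[0.1, 0.4, 0.4, 0.1])
seq = np.concatenate([left_half, right_half])
borders = segment(seq)
print(borders)    # a border near position 600 is expected
```
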

  9. [Integral assessment of learning subjects difficulties].

    PubMed

    Grebniak, N P; Shchudro, S A

    2010-01-01

    The integral criterion for subject difficulties in senior classes is substantiated in terms of progress in studies, variation coefficient, and subjective and expert appraisals of the difficulty of subjects. The compiled regression models adequately determine the difficulty of academic subjects. According to the root-mean-square deviation, all subjects were found to have 3 degrees of difficulty.

  10. Modeling Group Differences in OLS and Orthogonal Regression: Implications for Differential Validity Studies

    ERIC Educational Resources Information Center

    Kane, Michael T.; Mroch, Andrew A.

    2010-01-01

    In evaluating the relationship between two measures across different groups (i.e., in evaluating "differential validity") it is necessary to examine differences in correlation coefficients and in regression lines. Ordinary least squares (OLS) regression is the standard method for fitting lines to data, but its criterion for optimal fit…

  11. A Note on the Incremental Validity of Aggregate Predictors.

    ERIC Educational Resources Information Center

    Day, H. D.; Marshall, David

    Three computer simulations were conducted to show that very high aggregate predictive validity coefficients can occur when the across-case variability in absolute score stability occurring in both the predictor and criterion matrices is quite small. In light of the increase in internal consistency reliability achieved by the method of aggregation…

  12. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
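    The classical log-variance bit allocation rule that underlies such algorithms can be sketched as follows: each coefficient gets the average rate plus half the log-ratio of its variance to the geometric mean, with negative allocations clamped to zero and the freed bits redistributed among the remaining coefficients. The variance values are illustrative; this is the textbook rule, not the dissertation's refined algorithm.

```python
import numpy as np

def bit_allocation(variances, total_bits):
    """Log-variance rule: b_i = R + 0.5*log2(var_i / geometric_mean),
    with negative allocations clamped to zero and bits redistributed."""
    var = np.asarray(variances, dtype=float)
    active = np.ones(len(var), dtype=bool)
    b = np.zeros(len(var))
    while True:
        R = total_bits / active.sum()                    # average rate per active coeff
        gm = np.exp(np.mean(np.log(var[active])))        # geometric mean of variances
        b[active] = R + 0.5 * np.log2(var[active] / gm)
        b[~active] = 0.0
        neg = active & (b < 0)
        if not neg.any():
            return b
        active &= ~neg                                   # drop coeffs allocated < 0 bits

var = np.array([100.0, 25.0, 4.0, 1.0, 0.01])
b = bit_allocation(var, total_bits=8.0)
print(b, b.sum())   # allocations sum to the total bit budget
```
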

  13. Home Healthcare Nurses' Job Satisfaction Scale: refinement and psychometric testing.

    PubMed

    Ellenbecker, Carol H; Byleckie, James J

    2005-10-01

    This paper describes a study to further develop and test the psychometric properties of the Home Healthcare Nurses' Job Satisfaction Scale, including reliability, construct validity, and criterion validity. Numerous scales have been developed to measure nurses' job satisfaction. Only one, the Home Healthcare Nurses' Job Satisfaction Scale, has been designed specifically to measure job satisfaction of home healthcare nurses. The Home Healthcare Nurses' Job Satisfaction Scale is based on a theoretical model that integrates the findings of empirical research related to job satisfaction. A convenience sample of 340 home healthcare nurses completed the Home Healthcare Nurses' Job Satisfaction Scale and the Mueller and McCloskey Satisfaction Scale, which was used to test criterion validity. Factor analysis was used for testing and refinement of the theory-based assignment of items to constructs. Reliability was assessed by Cronbach's alpha internal consistency reliability coefficients. The data were collected in 2003. Nine factors contributing to home healthcare nurses' job satisfaction emerged from the factor analysis and were strongly supported by the underlying theory. Factor loadings were all above 0.4. Cronbach's alpha coefficients for each of the nine subscales ranged from 0.64 to 0.83; the alpha for the global scale was 0.89. The correlation between the Home Healthcare Nurses' Job Satisfaction Scale and the Mueller and McCloskey Satisfaction Scale was 0.79, indicating good criterion-related validity. The Home Healthcare Nurses' Job Satisfaction Scale has potential as a reliable and valid scale for measurement of job satisfaction of home healthcare nurses.
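Cronbach's alpha, used here for internal consistency, can be computed directly from raw item scores. A generic sketch with illustrative data, not the study's dataset:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha from item score columns.

    items: list of k sequences, one per item, all the same length
    (one entry per respondent). alpha = k/(k-1) * (1 - sum of item
    variances / variance of total scores). Illustrative sketch only.
    """
    k = len(items)
    item_vars = sum(statistics.pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - item_vars / statistics.pvariance(totals))
```

For two perfectly correlated items the statistic approaches 1; unrelated items drive it toward 0.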

  14. Evaluation of the influence of the definition of an isolated hip fracture as an exclusion criterion for trauma system benchmarking: a multicenter cohort study.

    PubMed

    Tiao, J; Moore, L; Porgo, T V; Belcaid, A

    2016-06-01

    To assess whether the definition of an IHF used as an exclusion criterion influences the results of trauma center benchmarking. We conducted a multicenter retrospective cohort study with data from an integrated Canadian trauma system. The study population included all patients admitted between 1999 and 2010 to any of the 57 adult trauma centers. Seven definitions of IHF based on diagnostic codes, age, mechanism of injury, and secondary injuries, identified in a systematic review, were used. Trauma centers were benchmarked using risk-adjusted mortality estimates generated using the Trauma Risk Adjustment Model. The agreement between benchmarking results generated under different IHF definitions was evaluated with correlation coefficients on adjusted mortality estimates. Correlation coefficients >0.95 were considered to convey acceptable agreement. The study population consisted of 172,872 patients before exclusion of IHF and between 128,094 and 139,588 patients after exclusion. Correlation coefficients between risk-adjusted mortality estimates generated in populations including and excluding IHF varied between 0.86 and 0.90. Correlation coefficients of estimates generated under different definitions of IHF varied between 0.97 and 0.99, even when analyses were restricted to patients aged ≥65 years. Although the exclusion of patients with IHF has an influence on the results of trauma center benchmarking based on mortality, the definition of IHF in terms of diagnostic codes, age, mechanism of injury and secondary injury has no significant impact on benchmarking results. Results suggest that there is no need to obtain formal consensus on the definition of IHF for benchmarking activities.
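The agreement check described above reduces to correlating two vectors of risk-adjusted mortality estimates and comparing the coefficient against the 0.95 threshold. A minimal sketch; the helper names are hypothetical, since the paper publishes no code:

```python
def pearson_r(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def acceptable_agreement(est_a, est_b, threshold=0.95):
    """Agreement rule from the study: a correlation above 0.95 between
    two sets of risk-adjusted estimates counts as acceptable.
    Hypothetical helper, illustrative only."""
    return pearson_r(est_a, est_b) > threshold
```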

  15. A Similarity Criterion for Supersonic Flow Past a Cylinder with a Frontal High-Porosity Cellular Insert

    NASA Astrophysics Data System (ADS)

    Mironov, S. G.; Poplavskaya, T. V.; Kirilovskiy, S. V.; Maslov, A. A.

    2018-03-01

    We have experimentally and numerically studied the influence of the ratio of the diameter of a cylinder with a frontal gas-permeable porous insert made of nickel sponge to the average pore diameter in the insert on the aerodynamic drag of this model body in supersonic airflow ( M ∞ = 4.85, 7, and 21). The analytical dependence of the normalized drag coefficient on a parameter involving the Mach number and the ratio of cylinder radius to average pore radius in the insert is obtained. It is suggested to use this parameter as a similarity criterion in the problem of supersonic airflow past a cylinder with a frontal high-porosity cellular insert.

  16. Reliability and validity of the Daily Cognitive-Communication and Sleep Profile: a new instrument for monitoring sleep, wakefulness and daytime function.

    PubMed

    Fung, Christina Hoi Ling; Nguyen, Michelle; Moineddin, Rahim; Colantonio, Angela; Wiseman-Hakes, Catherine

    2014-06-01

    The Daily Cognitive Communicative and Sleep Profile (DCCASP) is a seven-item instrument that captures daily subjective sleep quality and perceived mood, cognitive, and communication functions. The objective of this study was to evaluate the reliability and validity of the DCCASP. The DCCASP was self-administered daily to a convenience sample of young adults (n = 54) for two two-week blocks, separated by a two-week rest period. Afterwards, participants completed the Pittsburgh Sleep Quality Index (PSQI). Internal consistency and criterion validity were assessed with Cronbach's α coefficient, the concordance correlation coefficient (CCC), and the Spearman rank correlation coefficient (rs). Results indicated high internal consistency (Cronbach's α = 0.864-0.938) among mean ratings of sleep quality on the DCCASP. There were significant correlations between mean ratings of sleep quality and all domains (rs=0.38-0.55, p<0.0001). Criterion validity was established between mean sleep quality ratings on the DCCASP and PSQI (rs=0.40, p<0.001). The DCCASP is a reliable and valid self-report instrument to monitor daily sleep quality and perceived mood, cognitive, and communication functions over time, amongst a normative sample of young adults. Further studies on its psychometric properties are necessary to clarify its utility in a clinical population. Copyright © 2014 John Wiley & Sons, Ltd.
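The Spearman rank correlation used for criterion validity here is simply the Pearson correlation computed on ranks. A self-contained sketch with tie handling (illustrative only, not the study's analysis code):

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    with ties receiving average ranks. Illustrative helper."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # extend j over the whole tie group
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank of the group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Any monotonically increasing relationship yields rho = 1, regardless of linearity.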

  17. Validity and reliability of the Japanese version of the FIM + FAM in patients with cerebrovascular accident.

    PubMed

    Miki, Emi; Yamane, Shingo; Yamaoka, Mai; Fujii, Hiroe; Ueno, Hiroka; Kawahara, Toshie; Tanaka, Keiko; Tamashiro, Hiroaki; Inoue, Eiji; Okamoto, Takatsugu; Kuriyama, Masaru

    2016-09-01

    The study aim was to investigate the validity and reliability of the Functional Independence Measure and Functional Assessment Measure (FIM + FAM), which is unfamiliar in Japan, by using its Japanese version (FIM + FAM-j) in patients with cerebrovascular accident (CVA). Forty-two CVA patients participated. Criterion validity was examined by correlating the full scale and subscales of FIM + FAM-j with several well-established measurements using Spearman's correlation coefficient. Reliability was evaluated by internal consistency (tested by Cronbach's alpha coefficient) and intra-rater reliability (tested by Kendall's tau correlation coefficient). Good-to-excellent criterion validity was found between the full scale and motor subscales of the FIM + FAM-j and the Barthel Index, National Institutes of Health Stroke Scale, modified Rankin Scale, and lower extremity Brunnstrom Recovery Stage. High internal consistency was observed within the full-scale FIM + FAM-j and the motor and cognitive subscales (Cronbach's alphas were 0.968, 0.954, and 0.948, respectively). Additionally, good intra-rater reliability was observed within the full scale and motor subscales, and excellent reliability for the cognitive subscales (taus were 0.83, 0.80, and 0.98, respectively). This study showed that the FIM + FAM-j demonstrated acceptable levels of validity and reliability when used for CVA as a measure of disability.

  18. What can be learned from optical two-color diffusion and thermodiffusion experiments on ternary fluid mixtures?

    NASA Astrophysics Data System (ADS)

    Gebhardt, M.; Köhler, W.

    2015-02-01

    A number of optical techniques have been developed in recent years for the investigation of diffusion and thermodiffusion in ternary fluid mixtures, both on the ground and on board the International Space Station. All these methods are based on the simultaneous measurement of refractive index changes at two different wavelengths. Here, we discuss and compare different techniques with emphasis on optical beam deflection (OBD), optical digital interferometry, and thermal diffusion forced Rayleigh scattering (TDFRS). We suggest formally splitting the data evaluation into a phenomenological parameterization of the measured transients and a subsequent transformation from the refractive index into the concentration space. In all experiments, the transients measured at two different detection wavelengths can be described by four amplitudes and two eigenvalues of the diffusion coefficient matrix. It turns out that these six parameters are subject to large errors and cannot be determined reliably. Five quantities that can be determined with high accuracy are the stationary amplitudes, the initial slopes as defined in TDFRS experiments, and, by application of a heuristic criterion for similar curves, a certain mean diffusion coefficient. These amplitudes and slopes are directly linked to the Soret and thermodiffusion coefficients after transformation with the inverse contrast factor matrix, which is frequently ill-conditioned. Since only five out of six free parameters are reliably determined, including the single mean diffusion coefficient, the determination of the four entries of the diffusion matrix is not possible. We apply our results to new OBD measurements of the symmetric (mass fractions 0.33/0.33/0.33) ternary benchmark mixture n-dodecane/isobutylbenzene/1,2,3,4-tetrahydronaphthalene and to existing literature data for the same system.
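The transformation from refractive-index amplitudes to concentration amplitudes amounts to solving a 2x2 linear system with the contrast factor matrix; when that matrix is ill-conditioned (determinant near zero), small amplitude errors are strongly amplified. A minimal sketch with made-up numbers, not measured contrast factors:

```python
def solve_2x2(m, b):
    """Solve m @ x = b for a 2x2 matrix by Cramer's rule.

    Mimics applying the inverse contrast factor matrix to two
    refractive-index amplitudes; entries are illustrative only.
    A determinant near zero signals an ill-conditioned matrix."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    x0 = (b[0] * m[1][1] - b[1] * m[0][1]) / det
    x1 = (m[0][0] * b[1] - m[1][0] * b[0]) / det
    return [x0, x1]
```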

  20. Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model

    NASA Astrophysics Data System (ADS)

    Song, Yunquan; Lin, Lu; Jian, Ling

    2016-07-01

    The single-index varying-coefficient model is an important mathematical modeling method for nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, owing to the robustness of the check loss function to outliers in finite samples, the proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
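The robustness argument above rests on the check (quantile) loss, which penalizes residuals linearly rather than quadratically, so single outliers contribute far less than under least squares. A one-line sketch of the standard definition, not the paper's estimation code:

```python
def check_loss(u, tau=0.5):
    """Quantile (check) loss rho_tau(u) = u * (tau - 1[u < 0]).

    At tau = 0.5 this is half the absolute error, which is what makes
    a check-loss criterion robust to outliers compared with the
    squared-error criterion. Illustrative only."""
    return u * (tau - (1.0 if u < 0 else 0.0))
```

Note the asymmetry for tau != 0.5: negative residuals are weighted by (1 - tau), positive ones by tau.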

  1. The near optimality of the stabilizing control in a weakly nonlinear system with state-dependent coefficients

    NASA Astrophysics Data System (ADS)

    Dmitriev, Mikhail G.; Makarov, Dmitry A.

    2016-08-01

    We analyze the near-optimality of a computationally efficient nonlinear stabilizing control constructed for weakly nonlinear systems whose coefficients depend on the state and on a formal small parameter. The problem was first investigated in [M. G. Dmitriev and D. A. Makarov, "The suboptimality of stabilizing regulator in a quasi-linear system with state-depended coefficients," in 2016 International Siberian Conference on Control and Communications (SIBCON) Proceedings, National Research University, Moscow, 2016]. In this paper, different representations of the optimal control and of the gain matrix are used, and theoretical results analogous to those of the work cited above are obtained. As in that work, the form of the quality criterion for which this closed-loop control is optimal is also constructed.

  2. A hedging point strategy--balancing effluent quality, economy and robustness in the control of wastewater treatment plants.

    PubMed

    Ingildsen, P; Olsson, G; Yuan, Z

    2002-01-01

    An operational space map is an efficient tool for comparing a large number of operational strategies to find an optimal choice of setpoints based on a multicriterion. Typically, such a multicriterion includes a weighted sum of the cost of operation and effluent quality. Due to the relatively high cost of aeration, such a definition of optimality results in a relatively high fraction of the effluent total nitrogen being in the form of ammonium. Such a strategy may, however, introduce a risk into operation, because a low degree of ammonium removal leads to a low amount of nitrifiers. This in turn leads to a reduced ability to reject event disturbances, such as large variations in the ammonium load, drops in temperature, the presence of toxic/inhibitory compounds in the influent, etc. Hedging is a risk minimisation tool, with the aim to "reduce one's risk of loss on a bet or speculation by compensating transactions on the other side" (The Concise Oxford Dictionary (1995)). In wastewater treatment plant operation, hedging can be applied by choosing a higher level of ammonium removal to increase the amount of nitrifiers. This is a sensible way to introduce disturbance rejection ability into the multicriterion. In practice, this is done by deciding upon an internal effluent ammonium criterion. In some countries, such as Germany, a separate criterion already applies to the level of ammonium in the effluent. However, in most countries the effluent criterion applies to total nitrogen only. In these cases, an internal effluent ammonium criterion should be selected in order to secure proper disturbance rejection ability.

  3. Non-equilibrium diffusion combustion of a fuel droplet

    NASA Astrophysics Data System (ADS)

    Tyurenkova, Veronika V.

    2012-06-01

    A mathematical model for the non-equilibrium combustion of droplets in rocket engines is developed. This model makes it possible to determine the divergence of the combustion rate between the equilibrium and non-equilibrium models. A criterion for the deviation of droplet combustion from equilibrium is introduced; it grows with decreasing droplet radius, accommodation coefficient, and temperature, and decreases with decreasing diffusion coefficient. The droplet burning time increases substantially under non-equilibrium conditions. Comparison of theoretical and experimental data shows that, to obtain an adequate solution for small droplets, it is necessary to use the non-equilibrium model.

  4. Experimental study of Cu-water nanofluid forced convective flow inside a louvered channel

    NASA Astrophysics Data System (ADS)

    Khoshvaght-Aliabadi, M.; Hormozi, F.; Zamzamian, A.

    2015-03-01

    Heat transfer enhancement plays a very important role in energy saving in plate-fin heat exchangers. In the present study, the influences of the simultaneous use of a louvered plate-fin channel and a copper-based deionized water nanofluid on the performance of these exchangers are experimentally explored. The effects of flow rate (2-5 l/min) and nanoparticle weight fraction (0-0.4 %) on heat transfer and pressure drop characteristics are determined. Experimental results indicate that the use of the louvered channel instead of the plain one improves the heat transfer performance. Likewise, the addition of small amounts of copper nanoparticles to the base fluid augments the convective heat transfer coefficient remarkably. The maximum rise of 21.7 % in the convective heat transfer coefficient is observed for the 0.4 % wt nanofluid compared to the base fluid. Also, the pumping power for the base fluid and nanofluids is calculated based on the measured pressure drop in the louvered channel. The average increase in pumping power is 11.8 % for the nanofluid with 0.4 % wt compared to the base fluid. The applied performance criterion shows a maximum performance index of 1.167 for the nanofluid with 0.1 % wt. Finally, two correlations are proposed for the Nusselt number and friction factor which fit the experimental data within ±10 %.

  5. Landing flying qualities evaluation criteria for augmented aircraft

    NASA Technical Reports Server (NTRS)

    Radford, R. C.; Smith, R.; Bailey, R.

    1980-01-01

    The criteria evaluated were: Calspan Neal-Smith; Onstott (Northrop Time Domain); McDonnell-Douglas Equivalent System Approach; R. H. Smith Criterion. Each criterion was applied to the same set of longitudinal approach and landing flying qualities data. A revised version of the Neal-Smith criterion which is applicable to the landing task was developed and tested against other landing flying qualities data. Results indicated that both the revised Neal-Smith criterion and the Equivalent System Approach are good discriminators of pitch landing flying qualities; Neal-Smith has particular merit as a design guide, while the Equivalent System Approach is well suited for development of appropriate military specification requirements applicable to highly augmented aircraft.

  6. Development and psychometric testing of the Cancer Knowledge Scale for Elders.

    PubMed

    Su, Ching-Ching; Chen, Yuh-Min; Kuo, Bo-Jein

    2009-03-01

    To develop the Cancer Knowledge Scale for Elders and test its validity and reliability. The number of elders suffering from cancer is increasing. To facilitate cancer prevention behaviours among elders, they should be educated about cancer. Prior to designing a programme that would respond to the special needs of elders, understanding the cancer-related knowledge within this population was necessary. However, extensive review of the literature revealed a lack of appropriate instruments for measuring cancer-related knowledge. A valid and reliable cancer knowledge scale for elders is therefore necessary. A non-experimental methodological design was used to test the psychometric properties of the Cancer Knowledge Scale for Elders. Item analysis was first performed to screen out items that had low corrected item-total correlation coefficients. Construct validity was examined with a principal component method of exploratory factor analysis. Cancer-related health behaviour was used as the criterion variable to evaluate criterion-related validity. Internal consistency reliability was assessed by the KR-20. Stability was determined by two-week test-retest reliability. The factor analysis yielded a four-factor solution accounting for 49.5% of the variance. For criterion-related validity, cancer knowledge was positively correlated with cancer-related health behaviour (r = 0.78, p < 0.001). The KR-20 coefficients of the four factors were 0.85, 0.76, 0.79 and 0.67, and 0.87 for the total scale. Test-retest reliability over a two-week period was 0.83 (p < 0.001). This study provides evidence for the content validity, construct validity, criterion-related validity, internal consistency and stability of the Cancer Knowledge Scale for Elders. The results show that this scale is an easy-to-use instrument for elders and has adequate validity and reliability. The scale can be used as an assessment instrument when implementing cancer education programmes for elders. It can also be used to evaluate the effects of such programmes.
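The KR-20 coefficient reported above is the internal-consistency statistic for dichotomous (correct/incorrect) items. A generic sketch with made-up responses, not the study's dataset:

```python
def kr20(responses):
    """Kuder-Richardson Formula 20 for dichotomous (0/1) items.

    responses: list of per-person lists of 0/1 item scores.
    KR-20 = k/(k-1) * (1 - sum(p*q) / variance of total scores).
    Illustrative sketch only."""
    n = len(responses)
    k = len(responses[0])
    pq = 0.0
    for i in range(k):
        p = sum(person[i] for person in responses) / n  # item difficulty
        pq += p * (1 - p)
    totals = [sum(person) for person in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # population variance
    return k / (k - 1) * (1 - pq / var)
```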

  7. On possible parent bodies of Innisfree, Lost City and Příbram meteorites.

    NASA Astrophysics Data System (ADS)

    Rozaev, A. E.

    1994-12-01

    Minor planets 1981 ET3 and Seleucus are possible parent bodies of the Innisfree and Lost City meteorites, and the asteroid Mithra is the most probable source of the Příbram meteorite. The conclusions are based on the Southworth-Hawkins criterion, taking into account the motion constants (Tisserand coefficient, etc.) and the minimal distances between the orbits at the present time.
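For reference, the Southworth-Hawkins dissimilarity criterion invoked above is commonly written in its standard 1963 form (subscripts A and B denote the two orbits being compared):

```latex
D_{AB}^2 = \left(q_B - q_A\right)^2 + \left(e_B - e_A\right)^2
         + \left(2\sin\frac{I_{AB}}{2}\right)^2
         + \left(\frac{e_A + e_B}{2}\right)^2 \left(2\sin\frac{\Pi_{AB}}{2}\right)^2
```

where q is the perihelion distance (AU), e the eccentricity, I_AB the angle between the orbital planes, and Pi_AB the difference between the longitudes of perihelion measured from the mutual node; orbits with D_AB below a chosen threshold are considered related.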

  8. Comparison of the Incremental Validity of the Old and New MCAT.

    ERIC Educational Resources Information Center

    Wolf, Fredric M.; And Others

    The predictive and incremental validity of both the Old and New Medical College Admission Test (MCAT) was examined and compared with a sample of over 300 medical students. Results of zero order and incremental validity coefficients, as well as prediction models resulting from all possible subsets regression analyses using Mallow's Cp criterion,…

  9. Estimation of the Invariance of Factor Structures Across Sex and Race with Implications for Hypothesis Testing

    ERIC Educational Resources Information Center

    Katzenmeyer, W. G.; Stenner, A. Jackson

    1977-01-01

    The problem of demonstrating invariance of factor structures across criterion groups is addressed. Procedures are outlined which combine the replication of factor structures across sex-race groups with use of the coefficient of invariance to demonstrate the level of invariance associated with factors identified in a self concept measure.…

  10. Subscores and Validity. Research Report. ETS RR-08-64

    ERIC Educational Resources Information Center

    Haberman, Shelby J.

    2008-01-01

    In educational testing, subscores may be provided based on a portion of the items from a larger test. One consideration in evaluation of such subscores is their ability to predict a criterion score. Two limitations on prediction exist. The first, which is well known, is that the coefficient of determination for linear prediction of the criterion…

  11. Concurrent Validity of K-BIT Using the WISC-III as the Criterion.

    ERIC Educational Resources Information Center

    Seagle, Donna L.; Rust, James O.

    The Kaufman Brief Intelligence Test (K-BIT) was used as a screening instrument to predict Wechsler Intelligence Scale for Children-Third Edition (WISC-III) scores of 94 students referred for psychoeducational evaluations. Although the correlation coefficient between the K-BIT IQ Composite and the WISC-III Full Scale IQ was 0.771 for the entire…

  12. Examining the Reliability of Interval Level Data Using Root Mean Square Differences and Concordance Correlation Coefficients

    ERIC Educational Resources Information Center

    Barchard, Kimberly A.

    2012-01-01

    This article introduces new statistics for evaluating score consistency. Psychologists usually use correlations to measure the degree of linear relationship between 2 sets of scores, ignoring differences in means and standard deviations. In medicine, biology, chemistry, and physics, a more stringent criterion is often used: the extent to which…

  13. From plastic to gold: a unified classification scheme for reference standards in medical image processing

    NASA Astrophysics Data System (ADS)

    Lehmann, Thomas M.

    2002-05-01

    Reliable evaluation of medical image processing is of major importance for routine applications. Nonetheless, evaluation is often omitted or methodically defective when novel approaches or algorithms are introduced. Adopted from medical diagnosis, we define the following criteria to classify reference standards: 1. Reliance, if the generation or capturing of test images for evaluation follows an exactly determined and reproducible protocol. 2. Equivalence, if the image material or relationships considered within an algorithmic reference standard equal real-life data with respect to structure, noise, or other parameters of importance. 3. Independence, if the reference standard relies on a different procedure than that to be evaluated, or on other images or image modalities than those used routinely. This criterion bans the simultaneous use of one image for both the training and the test phase. 4. Relevance, if the algorithm to be evaluated is self-reproducible. If random parameters or optimization strategies are applied, the reliability of the algorithm must be shown before the reference standard is applied for evaluation. 5. Significance, if the number of reference standard images used for evaluation is sufficiently large to enable statistically founded analysis. We demand that a true gold standard satisfy Criteria 1 to 3. Any standard satisfying only two criteria, i.e., Criterion 1 and Criterion 2 or Criterion 1 and Criterion 3, is referred to as a silver standard. Other standards are termed plastic. Before exhaustive evaluation based on gold or silver standards is performed, the algorithm's relevance must be shown (Criterion 4) and sufficient tests must be carried out to support statistically founded analysis (Criterion 5). In this paper, examples are given for each class of reference standards.
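The gold/silver/plastic decision rule can be stated compactly; a sketch encoding Criteria 1-3 exactly as defined in the abstract (the function name is of course hypothetical):

```python
def classify_standard(satisfied):
    """Classify a reference standard from the set of satisfied criteria
    (1 = reliance, 2 = equivalence, 3 = independence), following the
    scheme above: gold needs Criteria 1-3, silver needs Criterion 1
    plus either 2 or 3, anything else is plastic."""
    s = set(satisfied)
    if {1, 2, 3} <= s:
        return "gold"
    if 1 in s and (2 in s or 3 in s):
        return "silver"
    return "plastic"
```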

  14. [Mokken scaling of the Cognitive Screening Test].

    PubMed

    Diesfeldt, H F A

    2009-10-01

    The Cognitive Screening Test (CST) is a twenty-item orientation questionnaire in Dutch that is commonly used to evaluate cognitive impairment. This study applied Mokken Scale Analysis, a non-parametric set of techniques derived from item response theory (IRT), to CST data of 466 consecutive participants in psychogeriatric day care. The full item set and the standard short version of fourteen items both met the assumptions of the monotone homogeneity model, with scalability coefficient H = 0.39, which is considered weak. In order to select items that would fulfil the assumption of invariant item ordering, or the double monotonicity model, the subjects were randomly partitioned into a training set (50% of the sample) and a test set (the remaining half). By means of automated item selection, eleven items were found to measure one latent trait, with H = 0.67 and item H coefficients larger than 0.51. Cross-validation of the item analysis in the remaining half of the subjects gave comparable values (H = 0.66; item H coefficients larger than 0.56). The selected items involve the year, place of residence, birth date, the monarch's and prime minister's names, and their predecessors. Applying optimal discriminant analysis (ODA), it was found that the full set of twenty CST items performed best in distinguishing two predefined groups of patients of lower or higher cognitive ability, as established by an independent criterion derived from the Amsterdam Dementia Screening Test. The chance-corrected predictive value, or prognostic utility, was 47.5% for the full item set, 45.2% for the fourteen items of the standard short version of the CST, and 46.1% for the homogeneous, unidimensional set of eleven selected items. The results of the item analysis support the application of the CST in cognitive assessment, and revealed a more reliable 'short' version of the CST than the standard short version (CST14).
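Loevinger's scalability coefficient H, the quantity behind the reported H = 0.39 and H = 0.67, is the ratio of summed inter-item covariances to their maxima given the item marginals. A sketch for dichotomous items (the study used the full Mokken procedure, which involves considerably more than this definition):

```python
def mokken_H(data):
    """Loevinger's scalability coefficient H for dichotomous items.

    data: list of per-person lists of 0/1 item scores. H is the summed
    observed inter-item covariance divided by the summed maximum
    covariance attainable given the item proportions. Sketch of the
    textbook definition only."""
    n = len(data)
    k = len(data[0])
    p = [sum(person[i] for person in data) / n for i in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            p11 = sum(person[i] * person[j] for person in data) / n
            num += p11 - p[i] * p[j]       # observed covariance
            lo, hi = sorted((p[i], p[j]))
            den += lo * (1 - hi)           # max covariance given marginals
    return num / den
```

A perfect Guttman pattern (everyone who passes a hard item also passes the easier ones) yields H = 1.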

  15. Coefficient of performance and its bounds with the figure of merit for a general refrigerator

    NASA Astrophysics Data System (ADS)

    Long, Rui; Liu, Wei

    2015-02-01

    A general refrigerator model with non-isothermal processes is studied. The coefficient of performance (COP) and its bounds at the maximum χ figure of merit are obtained and analyzed. The model accounts for different heat capacities during the heat transfer processes, so different kinds of refrigerator cycles can be considered. Under the constant heat capacity condition, the upper bound of the COP is the Curzon-Ahlborn (CA) coefficient of performance and is independent of the time durations of the heat exchanging processes. Under the maximum χ criterion, for refrigerator cycles in which the heat capacity in the heat-absorbing process is not less than that in the heat-releasing process, such as the reversed Brayton, reversed Otto, and reversed Atkinson cycles, the COP is bounded by the CA coefficient of performance; otherwise, as for the reversed Diesel refrigerator cycle, the COP can exceed the CA coefficient of performance. Furthermore, refined general upper and lower bounds are proposed.
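In the maximum-χ literature the CA bound for refrigerators is usually written εCA = sqrt(1 + εC) − 1, with εC = Tc/(Th − Tc) the Carnot COP. A numeric sketch; the symbol names follow that common convention and are not necessarily this paper's notation:

```python
import math

def carnot_cop(t_cold, t_hot):
    """Carnot (reversible) refrigerator COP, temperatures in kelvin."""
    return t_cold / (t_hot - t_cold)

def ca_cop(t_cold, t_hot):
    """COP bound at maximum chi figure of merit,
    epsilon_CA = sqrt(1 + epsilon_C) - 1. Sketch of the bound
    discussed above, assuming the common maximum-chi convention."""
    return math.sqrt(1.0 + carnot_cop(t_cold, t_hot)) - 1.0
```

For Tc = 300 K and Th = 400 K, εC = 3 and εCA = 1, illustrating how far the bound sits below the reversible limit.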

  16. Evaluation of the Gini Coefficient in Spatial Scan Statistics for Detecting Irregularly Shaped Clusters

    PubMed Central

    Kim, Jiyu; Jung, Inkyung

    2017-01-01

    Spatial scan statistics with circular or elliptic scanning windows are commonly used for cluster detection in various applications, such as the identification of geographical disease clusters from epidemiological data. It has been pointed out that the method may have difficulty in correctly identifying non-compact, arbitrarily shaped clusters. In this paper, we evaluated the Gini coefficient for detecting irregularly shaped clusters through a simulation study. The Gini coefficient, the use of which in spatial scan statistics was recently proposed, is a criterion measure for optimizing the maximum reported cluster size. Our simulation study results showed that using the Gini coefficient works better than the original spatial scan statistic for identifying irregularly shaped clusters, by reporting an optimized and refined collection of clusters rather than a single larger cluster. We have provided a real data example that seems to support the simulation results. We think that using the Gini coefficient in spatial scan statistics can be helpful for the detection of irregularly shaped clusters. PMID:28129368
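The Gini coefficient itself is the generic inequality statistic; a sketch via the sorted-values (Lorenz-curve) formula. The scan-statistic paper applies it to collections of candidate clusters, which is not reproduced here:

```python
def gini(values):
    """Gini coefficient of non-negative values via the sorted-values
    formula: G = 2 * sum(i * x_(i)) / (n * sum(x)) - (n + 1) / n,
    with 1-based ranks over the sorted values. Generic sketch only."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n
```

Equal values give G = 0; concentrating everything in one observation pushes G toward 1.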

  17. Reliability of pulse waveform separation analysis: effects of posture and fasting.

    PubMed

    Stoner, Lee; Credeur, Daniel; Fryer, Simon; Faulkner, James; Lambrick, Danielle; Gibbs, Bethany Barone

    2017-03-01

    Oscillometric pulse wave analysis devices enable, with relative simplicity and objectivity, the measurement of central hemodynamic parameters. The important parameters are central blood pressures and indices of arterial wave reflection, including wave separation analysis (backward pressure component Pb and reflection magnitude). This study sought to determine whether the measurement precision (between-day reliability) of Pb and reflection magnitude exceeds the criterion for acceptable reliability, and whether it is affected by posture (supine, seated) and fasting state. Twenty healthy adults (50% female, 27.9 years, 24.2 kg/m²) were tested on six different mornings: three days in a fasted and three days in a nonfasted condition. On each occasion, participants were tested in supine and seated postures. Oscillometric pressure waveforms were recorded on the left upper arm. The criterion intra-class correlation coefficient value of 0.75 was exceeded for Pb (0.76) and reflection magnitude (0.77) when participants were assessed under the combined supine-fasted condition. The intra-class correlation coefficient was lowest for Pb in the seated-nonfasted condition (0.57) and lowest for reflection magnitude in the seated-fasted condition (0.56). For Pb, the smallest detectable change that must be exceeded for a significant change to occur in an individual was 2.5 mmHg, and for reflection magnitude the smallest detectable change was 8.5%. Assessments of Pb and reflection magnitude thus exceed the criterion for acceptable reliability and are most reliable when participants are fasted and in a supine position. The demonstrated reliability suggests sufficient precision to detect clinically meaningful changes in reflection magnitude and Pb.
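The smallest detectable change reported above follows from the ICC via the standard error of measurement: SEM = SD * sqrt(1 - ICC), and SDC95 = 1.96 * sqrt(2) * SEM. A sketch with illustrative inputs, not the study's data:

```python
import math

def smallest_detectable_change(sd, icc):
    """Smallest detectable change at the 95% level from reliability.

    SEM = SD * sqrt(1 - ICC); SDC95 = 1.96 * sqrt(2) * SEM.
    Standard formula sketch; the paper's SDC values (2.5 mmHg for Pb,
    8.5% for reflection magnitude) come from its own measurements.
    """
    sem = sd * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem
```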

  18. A systematic review of reliability and objective criterion-related validity of physical activity questionnaires.

    PubMed

    Helmerhorst, Hendrik J F; Brage, Søren; Warren, Janet; Besson, Herve; Ekelund, Ulf

    2012-08-31

Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62-0.71 for existing, and 0.74-0.76 for new PAQs. Median validity coefficients ranged from 0.30-0.39 for existing, and from 0.25-0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument.

  19. A systematic review of reliability and objective criterion-related validity of physical activity questionnaires

    PubMed Central

    2012-01-01

    Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62–0.71 for existing, and 0.74–0.76 for new PAQs. Median validity coefficients ranged from 0.30–0.39 for existing, and from 0.25–0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument. PMID:22938557

  20. Updating the Trainability Tests Literature on Black-White Subgroup Differences and Reconsidering Criterion-Related Validity

    ERIC Educational Resources Information Center

    Roth, Philip L.; Buster, Maury A.; Bobko, Philip

    2011-01-01

    A number of applied psychologists have suggested that trainability test Black-White ethnic group differences are low or relatively low (e.g., Siegel & Bergman, 1975), though data are scarce. Likewise, there are relatively few estimates of criterion-related validity for trainability tests predicting job performance (cf. Robertson & Downs,…

  1. Development and Criterion Validity of Differentiated and Elevated Vocational Interests in Adolescence

    ERIC Educational Resources Information Center

    Hirschi, Andreas

    2009-01-01

    Interest differentiation and elevation are supposed to provide important information about a person's state of interest development, yet little is known about their development and criterion validity. The present study explored these constructs among a group of Swiss adolescents. Study 1 applied a cross-sectional design with 210 students in 11th…

  2. What Is True Halving in the Payoff Matrix of Game Theory?

    PubMed Central

    Hasegawa, Eisuke; Yoshimura, Jin

    2016-01-01

    In game theory, there are two social interpretations of rewards (payoffs) for decision-making strategies: (1) the interpretation based on the utility criterion derived from expected utility theory and (2) the interpretation based on the quantitative criterion (amount of gain) derived from validity in the empirical context. A dynamic decision theory has recently been developed in which dynamic utility is a conditional (state) variable that is a function of the current wealth of a decision maker. We applied dynamic utility to the equal division in dove-dove contests in the hawk-dove game. Our results indicate that under the utility criterion, the half-share of utility becomes proportional to a player’s current wealth. Our results are consistent with studies of the sense of fairness in animals, which indicate that the quantitative criterion has greater validity than the utility criterion. We also find that traditional analyses of repeated games must be reevaluated. PMID:27487194
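The hawk-dove game underlying this analysis has a standard payoff matrix parameterized by resource value V and fight cost C. A minimal sketch of the classical (quantitative-criterion) payoffs and the mixed evolutionarily stable strategy, not the paper's dynamic-utility extension:

```python
def hawk_dove_payoffs(V, C):
    """Row player's payoffs in the classical hawk-dove game.

    V: value of the contested resource; C: cost of an escalated fight
    (C > V gives the interesting mixed-equilibrium case).
    """
    return {
        ("hawk", "hawk"): (V - C) / 2.0,  # escalate: expected value of the fight
        ("hawk", "dove"): V,              # dove retreats, hawk takes everything
        ("dove", "hawk"): 0.0,
        ("dove", "dove"): V / 2.0,        # the equal division the paper examines
    }

def ess_hawk_fraction(V, C):
    """Mixed ESS: play hawk with probability V/C when C > V, else always hawk."""
    return min(1.0, V / C)
```

The paper's point is that the dove-dove half-share V/2 reads differently under a utility criterion, where the value of that share scales with the player's current wealth.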

  3. What Is True Halving in the Payoff Matrix of Game Theory?

    PubMed

    Ito, Hiromu; Katsumata, Yuki; Hasegawa, Eisuke; Yoshimura, Jin

    2016-01-01

    In game theory, there are two social interpretations of rewards (payoffs) for decision-making strategies: (1) the interpretation based on the utility criterion derived from expected utility theory and (2) the interpretation based on the quantitative criterion (amount of gain) derived from validity in the empirical context. A dynamic decision theory has recently been developed in which dynamic utility is a conditional (state) variable that is a function of the current wealth of a decision maker. We applied dynamic utility to the equal division in dove-dove contests in the hawk-dove game. Our results indicate that under the utility criterion, the half-share of utility becomes proportional to a player's current wealth. Our results are consistent with studies of the sense of fairness in animals, which indicate that the quantitative criterion has greater validity than the utility criterion. We also find that traditional analyses of repeated games must be reevaluated.

  4. Validation of the Internet Gaming Disorder Scale - Short-Form (IGDS9-SF) in an Italian-speaking sample.

    PubMed

    Monacis, Lucia; Palo, Valeria de; Griffiths, Mark D; Sinatra, Maria

    2016-12-01

    Background and aims The inclusion of Internet Gaming Disorder (IGD) in Section III of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders has increased the interest of researchers in the development of new standardized psychometric tools for the assessment of such a disorder. To date, the nine-item Internet Gaming Disorder Scale - Short-Form (IGDS9-SF) has only been validated in English, Portuguese, and Slovenian languages. Therefore, the aim of this investigation was to examine the psychometric properties of the IGDS9-SF in an Italian-speaking sample. Methods A total of 757 participants were recruited to the present study. Confirmatory factor analysis and multi-group analyses were applied to assess the construct validity. Reliability analyses comprised the average variance extracted, the standard error of measurement, and the factor determinacy coefficient. Convergent and criterion validities were established through the associations with other related constructs. The receiver operating characteristic curve analysis was used to determine an empirical cut-off point. Results Findings confirmed the single-factor structure of the instrument, its measurement invariance at the configural level, and the convergent and criterion validities. Satisfactory levels of reliability and a cut-off point of 21 were obtained. Discussion and conclusions The present study provides validity evidence for the use of the Italian version of the IGDS9-SF and may foster research into gaming addiction in the Italian context.

  5. Validation of the Internet Gaming Disorder Scale – Short-Form (IGDS9-SF) in an Italian-speaking sample

    PubMed Central

    Monacis, Lucia; de Palo, Valeria; Griffiths, Mark D.; Sinatra, Maria

    2016-01-01

    Background and aims The inclusion of Internet Gaming Disorder (IGD) in Section III of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders has increased the interest of researchers in the development of new standardized psychometric tools for the assessment of such a disorder. To date, the nine-item Internet Gaming Disorder Scale – Short-Form (IGDS9-SF) has only been validated in English, Portuguese, and Slovenian languages. Therefore, the aim of this investigation was to examine the psychometric properties of the IGDS9-SF in an Italian-speaking sample. Methods A total of 757 participants were recruited to the present study. Confirmatory factor analysis and multi-group analyses were applied to assess the construct validity. Reliability analyses comprised the average variance extracted, the standard error of measurement, and the factor determinacy coefficient. Convergent and criterion validities were established through the associations with other related constructs. The receiver operating characteristic curve analysis was used to determine an empirical cut-off point. Results Findings confirmed the single-factor structure of the instrument, its measurement invariance at the configural level, and the convergent and criterion validities. Satisfactory levels of reliability and a cut-off point of 21 were obtained. Discussion and conclusions The present study provides validity evidence for the use of the Italian version of the IGDS9-SF and may foster research into gaming addiction in the Italian context. PMID:27876422

  6. Reliability and criterion validity of measurements using a smart phone-based measurement tool for the transverse rotation angle of the pelvis during single-leg lifting.

    PubMed

    Jung, Sung-Hoon; Kwon, Oh-Yun; Jeon, In-Cheol; Hwang, Ui-Jae; Weon, Jong-Hyuck

    2018-01-01

    The purposes of this study were to determine the intra-rater test-retest reliability of a smart phone-based measurement tool (SBMT) and a three-dimensional (3D) motion analysis system for measuring the transverse rotation angle of the pelvis during single-leg lifting (SLL) and the criterion validity of the transverse rotation angle of the pelvis measurement using SBMT compared with a 3D motion analysis system (3DMAS). Seventeen healthy volunteers performed SLL with their dominant leg without bending the knee until they reached a target placed 20 cm above the table. This study used a 3DMAS, considered the gold standard, to measure the transverse rotation angle of the pelvis to assess the criterion validity of the SBMT measurement. Intra-rater test-retest reliability was determined using the SBMT and 3DMAS using intra-class correlation coefficient (ICC) [3,1] values. The criterion validity of the SBMT was assessed with ICC [3,1] values. Both the 3DMAS (ICC = 0.77) and SBMT (ICC = 0.83) showed excellent intra-rater test-retest reliability in the measurement of the transverse rotation angle of the pelvis during SLL in a supine position. Moreover, the SBMT showed an excellent correlation with the 3DMAS (ICC = 0.99). Measurement of the transverse rotation angle of the pelvis using the SBMT showed excellent reliability and criterion validity compared with the 3DMAS.
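The ICC [3,1] form used here is, in the Shrout and Fleiss taxonomy, the two-way mixed-effects, consistency, single-measurement coefficient. A minimal sketch of its ANOVA-based computation on a hypothetical n × k ratings matrix (not the study's data):

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.

    `ratings` is an (n subjects x k trials/raters) array.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    ms_err = sse / ((n - 1) * (k - 1))
    return float((ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err))
```

Because the consistency form removes systematic rater/trial offsets, perfectly consistent scores with a constant shift between trials still yield an ICC of 1.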

  7. The Geometry of Enhancement in Multiple Regression

    ERIC Educational Resources Information Center

    Waller, Niels G.

    2011-01-01

In linear multiple regression, "enhancement" is said to occur when R² = b′r > r′r, where b is a p x 1 vector of standardized regression coefficients and r is a p x 1 vector of correlations between a criterion y and a set of standardized regressors, x. When p = 1 then b ≅ r and…

  8. Validity, Responsiveness, Minimal Detectable Change, and Minimal Clinically Important Change of "Pediatric Balance Scale" in Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Chen, Chia-ling; Shen, I-hsuan; Chen, Chung-yao; Wu, Ching-yi; Liu, Wen-Yu; Chung, Chia-ying

    2013-01-01

    This study examined criterion-related validity and clinimetric properties of the pediatric balance scale ("PBS") in children with cerebral palsy (CP). Forty-five children with CP (age range: 19-77 months) and their parents participated in this study. At baseline and at follow up, Pearson correlation coefficients were used to determine…

  9. [Cardiorespiratory fitness and cardiometabolic risk in young adults].

    PubMed

    Secchi, Jeremías D; García, Gastón C

    2013-01-01

The assessment of VO₂max allows classifying subjects according to health risk; however, the factors that may affect these classifications have been little studied. The main purpose was to determine whether the type of VO₂max prediction equation and the Fitnessgram criterion-referenced standards modify the proportion of young adults classified with a level of aerobic capacity indicative of cardiometabolic risk. The study design was observational, cross-sectional and relational. Young adults (n = 240) participated voluntarily. The VO₂max was estimated from the 20-m shuttle run test applying 9 predictive equations. The differences in the classifications were analyzed with the Cochran Q and McNemar tests. The proportion with a level of aerobic capacity indicative of cardiometabolic risk ranged between 7.1% and 70.4% depending on the criterion-referenced standards and predictive equation used (p < 0.001). A higher percentage of women were classified with an unhealthy level in all equations (women: 29.4% to 85.3% vs. 4.8% to 51% in men), regardless of the criterion-referenced standards (p < 0.001). In both sexes, and irrespective of the equation applied, the old criterion-referenced standards classified a lower proportion of subjects (men: 4.8% to 48.1%; women: 39.4% to 68.4%) with unhealthy aerobic capacity (p ≤ 0.004). The type of VO₂max prediction equation and the Fitnessgram criterion-referenced standards changed the classification of young adults as having a level of aerobic capacity indicative of cardiometabolic risk.

  10. Estimation of median growth curves for children up two years old based on biresponse local linear estimator

    NASA Astrophysics Data System (ADS)

    Chamidah, Nur; Rifada, Marisa

    2016-03-01

There is a significant correlation between the weight and height of children, so simultaneous estimation of both responses is better than a partial, single-response approach. In this study we investigate the pattern of sex differences in the growth curves of children from birth up to two years of age in Surabaya, Indonesia, based on a biresponse model. The data were collected in a longitudinal representative sample of the Surabaya population of healthy children and consist of two response variables, weight (kg) and height (cm), with age (months) as the predictor variable. Based on the generalized cross-validation criterion, the biresponse model using a local linear estimator gives optimal bandwidths of 1.41 and 1.56 and determination coefficients (R²) of 99.99% and 99.98% for the boys' and girls' growth curves, respectively. Both curves satisfy the goodness-of-fit criterion, i.e., the determination coefficient tends to one. There is also a difference in the pattern of the growth curves: the boys' median growth curve is higher than the girls'.
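The generalized cross-validation criterion used for bandwidth selection can be illustrated with a scalar-response local linear smoother (a simplification of the paper's biresponse model; the Gaussian kernel and the data in the test are assumptions for illustration):

```python
import numpy as np

def local_linear_weights(x, x0, h):
    """Row of the smoother ("hat") matrix for the local linear fit at x0,
    using a Gaussian kernel with bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    XtW = X.T * w  # equivalent to X.T @ diag(w)
    # e1' (X'WX)^{-1} X'W picks out the fitted intercept, i.e. the value at x0
    beta_map = np.linalg.solve(XtW @ X, XtW)
    return beta_map[0]

def gcv_score(x, y, h):
    """Generalized cross-validation score: (RSS/n) / (1 - tr(H)/n)^2."""
    n = len(x)
    H = np.array([local_linear_weights(x, xi, h) for xi in x])
    resid = y - H @ y
    return (np.sum(resid ** 2) / n) / (1.0 - np.trace(H) / n) ** 2

def best_bandwidth(x, y, grid):
    """Pick the bandwidth in `grid` minimizing the GCV score."""
    return min(grid, key=lambda h: gcv_score(x, y, h))
```

A local linear smoother reproduces any straight line exactly, so on linear data the residual term of the GCV score vanishes regardless of bandwidth; on curved data the score trades bias against variance.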

  11. The consequences of ignoring measurement invariance for path coefficients in structural equation models

    PubMed Central

    Guenole, Nigel; Brown, Anna

    2014-01-01

    We report a Monte Carlo study examining the effects of two strategies for handling measurement non-invariance – modeling and ignoring non-invariant items – on structural regression coefficients between latent variables measured with item response theory models for categorical indicators. These strategies were examined across four levels and three types of non-invariance – non-invariant loadings, non-invariant thresholds, and combined non-invariance on loadings and thresholds – in simple, partial, mediated and moderated regression models where the non-invariant latent variable occupied predictor, mediator, and criterion positions in the structural regression models. When non-invariance is ignored in the latent predictor, the focal group regression parameters are biased in the opposite direction to the difference in loadings and thresholds relative to the referent group (i.e., lower loadings and thresholds for the focal group lead to overestimated regression parameters). With criterion non-invariance, the focal group regression parameters are biased in the same direction as the difference in loadings and thresholds relative to the referent group. While unacceptable levels of parameter bias were confined to the focal group, bias occurred at considerably lower levels of ignored non-invariance than was previously recognized in referent and focal groups. PMID:25278911

  12. Diffusion modelling of metamorphic layered coronas with stability criterion and consideration of affinity

    NASA Astrophysics Data System (ADS)

    Ashworth, J. R.; Sheplev, V. S.

    1997-09-01

Layered coronas between two reactant minerals can, in many cases, be attributed to diffusion-controlled growth with local equilibrium. This paper clarifies and unifies the previous approaches of various authors to the simplest form of modelling, which uses no assumed values for thermochemical quantities. A realistic overall reaction must be estimated from measured overall proportions of minerals and their major element compositions. Modelling is not restricted to a particular number of components S, relative to the number of phases Φ. If Φ > S + 1, the overall reaction is a combination of simultaneous reactions. The stepwise method, solving for the local reaction at each boundary in turn, is extended to allow for recurrence of a mineral (its presence in two parts of the layer structure separated by a gap). The equations are also given in matrix form. A thermodynamic stability criterion is derived, determining which layer sequence is truly stable if several are computable from the same inputs. A layer structure satisfying the stability criterion has greater growth rate (and greater rate of entropy production) than the other computable layer sequences. This criterion of greatest entropy production is distinct from Prigogine's theorem of minimum entropy production, which distinguishes the stationary or quasi-stationary state from other states of the same layer sequence. The criterion leads to modification of previous results for coronas comprising hornblende, spinel, and orthopyroxene between olivine (Ol) and plagioclase (Pl). The outcome supports the previous inference that Si, and particularly Al, commonly behave as immobile relative to other cation-forming major elements. The affinity (-ΔG) of a corona-forming reaction is estimated, using previous estimates of diffusion coefficient and the duration t of reaction, together with a new model quantity (-ΔG)*. For an example of the Ol + Pl reaction, a rough calculation gives (-ΔG) > 1.7RT (per mole of Pl consumed, based on a 24-oxygen formula for Pl). At 600-700 °C, this represents (-ΔG) > 10 kJ mol⁻¹ and departure from the equilibrium temperature by at least ~100 °C. The lower end of this range is petrologically reasonable and, for t < 100 Ma, corresponds to a Fick's-law diffusion coefficient for Al, D_Al > 10⁻²⁵ m² s⁻¹, larger than expected for lattice diffusion but consistent with fluid-absent grain-boundary diffusion and small concentration gradients.

  13. A new scale for the assessment of performance and capacity of hand function in children with hemiplegic cerebral palsy: reliability and validity studies.

    PubMed

    Rosa-Rizzotto, M; Visonà Dalla Pozza, L; Corlatti, A; Luparia, A; Marchi, A; Molteni, F; Facchin, P; Pagliano, E; Fedrizzi, E

    2014-10-01

In hemiplegic children, recognition of the activity limitation pattern and the possibility of grading its severity are relevant for clinicians when planning interventions, monitoring results, and predicting outcomes. The aim of the study is to examine the reliability and validity of the Besta Scale, an instrument used to measure, in hemiplegic children from 18 months to 12 years of age, both grasp on request (capacity) and spontaneous use of the upper limb (performance) in bimanual play activities and in ADL. A psychometric analysis of the reliability and validity of the Besta scale was performed on an outpatient study sample. Reliability study: a sample of 39 patients was enrolled. The administration of the Besta scale was video-recorded in a standardized manner. All videos were scored by 20 independent raters on subsequent viewing. Three raters randomly selected from the 20-rater group rescored the same videos two years later for intra-rater reliability. Intra- and inter-rater reliability were calculated using the intraclass correlation coefficient (ICC) and Kendall's coefficient (K), respectively. Internal consistency reliability was assessed using Cronbach's alpha coefficient. Validity study: a sample of 105 children was assessed 5 times (at t0 and 2, 3, 6 and 12 months later) by 20 independent raters. Each patient underwent QUEST and Besta scale administration and assessment at the same time. Criterion validity was calculated using Pearson's correlation coefficient. Reliability study: the inter-rater reliability calculated with Kendall's coefficient was moderate (K = 0.47). The intra-rater (test-retest) reliability for the 3 raters was excellent (ICC = 0.927). Cronbach's alpha for internal consistency was 0.972. Validity study: the Besta scale showed good criterion validity compared with the QUEST, increasing with age and severity of impairment; Pearson's correlation coefficient was r = 0.81 (P < 0.0001). Limitations: in infants, the Besta scale has difficulty distinguishing between mildly and moderately impaired hand function. The Besta scale scoring system is a valid and reliable tool, usable in a clinical setting to monitor the evolution of unimanual and bimanual manipulation and to distinguish hand capacity from performance.
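Cronbach's alpha, the internal-consistency coefficient reported in this and several neighboring records, is computed from the item variances and the variance of the total score. A minimal sketch on a hypothetical respondents × items matrix (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n respondents x k items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)          # per-item sample variances
    total_var = x.sum(axis=1).var(ddof=1)      # variance of the summed score
    return float((k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var))
```

Perfectly parallel items give alpha = 1; values near the 0.97 reported here indicate that the items rank respondents almost identically.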

  14. Reliability and Validity of a Japanese-language and Culturally Adapted Version of the Musculoskeletal Tumor Society Scoring System for the Lower Extremity.

    PubMed

    Iwata, Shintaro; Uehara, Kosuke; Ogura, Koichi; Akiyama, Toru; Shinoda, Yusuke; Yonemoto, Tsukasa; Kawai, Akira

    2016-09-01

    The Musculoskeletal Tumor Society (MSTS) scoring system is a widely used functional evaluation tool for patients treated for musculoskeletal tumors. Although the MSTS scoring system has been validated in English and Brazilian Portuguese, a Japanese version of the MSTS scoring system has not yet been validated. We sought to determine whether a Japanese-language translation of the MSTS scoring system for the lower extremity had (1) sufficient reliability and internal consistency, (2) adequate construct validity, and (3) reasonable criterion validity compared with the Toronto Extremity Salvage Score (TESS) and SF-36 using psychometric analysis. The Japanese version of the MSTS scoring system was developed using accepted guidelines, which included translation of the English version of the MSTS into Japanese by five native Japanese bilingual musculoskeletal oncology surgeons and integrated into one document. One hundred patients with a diagnosis of intermediate or malignant bone or soft tissue tumors located in the lower extremity and who had undergone tumor resection with or without reconstruction or amputation participated in this study. Reliability was evaluated by test-retest analysis, and internal consistency was established by Cronbach's alpha coefficient. Construct validity was evaluated using the principal factor analysis and Akaike information criterion network. Criterion validity was evaluated by comparing the MSTS scoring system with the TESS and SF-36. Test-retest analysis showed a high intraclass correlation coefficient (0.92; 95% CI, 0.88-0.95), indicating high reliability of the Japanese version of the MSTS scoring system, although a considerable ceiling effect was observed, with 23 patients (23%) given the maximum score. Cronbach's alpha coefficient was 0.87 (95% CI, 0.82-0.90), suggesting a high level of internal consistency. 
Factor analysis revealed that all items had high loading values and communalities; we identified a central role for the items "walking" and "gait" according to the Akaike information criterion network. The total MSTS score was correlated with that of the TESS (r = 0.81; 95% CI, 0.73-0.87; p < 0.001) and the physical component summary and physical functioning of the SF-36. The Japanese-language translation of the MSTS scoring system for the lower extremity has sufficient reliability and reasonable validity. Nevertheless, the observation of a ceiling effect suggests poor ability of this system to discriminate from among patients who have a high level of function.
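The Akaike information criterion that underlies the network analysis mentioned here is AIC = 2k − 2 ln L, with lower values preferred. A minimal sketch of AIC-based model comparison (the model names and values are hypothetical, for illustration only):

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: penalizes fit by model complexity."""
    return 2.0 * n_params - 2.0 * log_likelihood

def best_model(models):
    """Pick the model with the lowest AIC.

    `models` maps a name to a (log_likelihood, n_params) pair.
    """
    return min(models, key=lambda name: aic(*models[name]))
```

The penalty term means a model must improve the log-likelihood by more than one unit per extra parameter to be preferred.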

  15. A Case for Transforming the Criterion of a Predictive Validity Study

    ERIC Educational Resources Information Center

    Patterson, Brian F.; Kobrin, Jennifer L.

    2011-01-01

    This study presents a case for applying a transformation (Box and Cox, 1964) of the criterion used in predictive validity studies. The goals of the transformation were to better meet the assumptions of the linear regression model and to reduce the residual variance of fitted (i.e., predicted) values. Using data for the 2008 cohort of first-time,…
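The Box and Cox (1964) transformation referred to here is the power family y(λ) = (yᵏ − 1)/λ for λ ≠ 0 and ln y for λ = 0, applied to a positive criterion variable. A minimal sketch with its inverse (in practice λ is chosen by profile maximum likelihood, which is omitted here):

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox power transformation of a positive response vector."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)
    return (y ** lam - 1.0) / lam

def inv_box_cox(z, lam):
    """Back-transform to the original scale (e.g., for predicted values)."""
    z = np.asarray(z, dtype=float)
    if lam == 0:
        return np.exp(z)
    return (lam * z + 1.0) ** (1.0 / lam)
```

Transforming the criterion, fitting the regression, and back-transforming fitted values via the inverse is the pattern a predictive validity study of this kind would follow.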

  16. Approach to numerical safety guidelines based on a core melt criterion. [PWR; BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azarm, M.A.; Hall, R.E.

    1982-01-01

A plausible approach is proposed for translating a single-level criterion into a set of numerical guidelines. The criterion for core melt probability is used to set numerical guidelines for various core melt sequences, systems, and component unavailabilities. These guidelines can be used as a means for making decisions regarding the necessity of replacing a component or improving part of a safety system. This approach is applied to estimate a set of numerical guidelines for various core melt sequences analyzed in the Reactor Safety Study for the Peach Bottom Nuclear Power Plant.

  17. Short forms of the Child Perceptions Questionnaire for 11–14-year-old children (CPQ11–14): Development and initial evaluation

    PubMed Central

    Jokovic, Aleksandra; Locker, David; Guyatt, Gordan

    2006-01-01

    Background The Child Perceptions Questionnaire for children aged 11 to 14 years (CPQ11–14) is a 37-item measure of oral-health-related quality of life (OHRQoL) encompassing four domains: oral symptoms, functional limitations, emotional and social well-being. To facilitate its use in clinical settings and population-based health surveys, it was shortened to 16 and 8 items. Item impact and stepwise regression methods were used to produce each version. This paper describes the developmental process, compares the discriminative properties of the resulting four short-forms and evaluates their precision relative to the original CPQ11–14. Methods The item impact method used data from the CPQ11–14 item reduction study to select the questions with the highest impact scores in each domain. The regression method, where the dependent variable was the overall CPQ11–14 score and the independent variables its individual questions, was applied to the data collected in the validity study for the CPQ11–14. The measurement properties (i.e. criterion validity, construct validity, internal consistency reliability and test-retest reliability) of all 4 short-forms were evaluated using the data from the validity and reliability studies for the CPQ11–14. Results All short forms detected substantial variability in children's OHRQoL. The mean scores on the two 16-item questionnaires were almost identical, while on the two 8-item questionnaires they differed by only one score point. The mean scores standardized to 0–100 were higher on the short forms than the original CPQ11–14 (p < 0.001). There were strong significant correlations between all short-form scores and CPQ11–14 scores (0.87–0.98; p < 0.001). 
Hypotheses concerning construct validity were confirmed: the short-forms' scores were highest in the oro-facial, lower in the orthodontic and lowest in the paediatric dentistry group; all short-form questionnaires were positively correlated with the ratings of oral health and overall well-being, with the correlation coefficient being higher for the latter. The relative validity coefficients were 0.85 to 1.18. Cronbach's alpha and intraclass correlation coefficients ranged 0.71–0.83 and 0.71–0.77, respectively. Conclusion All short forms demonstrated excellent criterion validity and good construct validity. The reliability coefficients exceeded standards for group-level comparisons. However, these are preliminary findings based on the convenience sampling and further testing in replicated studies involving clinical and general samples of children in various settings is necessary to establish measurement sensitivity and discriminative properties of these questionnaires. PMID:16423298

  18. The ratio between corner frequencies of source spectra of P- and S-waves—a new discriminant between earthquakes and quarry blasts

    NASA Astrophysics Data System (ADS)

    Ataeva, G.; Gitterman, Y.; Shapira, A.

    2017-01-01

This study analyzes and compares the P- and S-wave displacement spectra from local earthquakes and explosions of similar magnitudes. We propose a new approach to discrimination between low-magnitude shallow earthquakes and explosions by using the ratio of P- to S-wave corner frequencies as a criterion. We have explored 2430 digital records of the Israeli Seismic Network (ISN) from 456 local events (226 earthquakes, 230 quarry blasts, and a few underwater explosions) of magnitudes Md = 1.4-3.4, which occurred at distances up to 250 km during 2001-2013. P-wave and S-wave displacement spectra were computed for all events following Brune's source model of earthquakes (1970, 1971) and applying the distance correction coefficients (Shapira and Hofstetter, Tectonophysics 217:217-226, 1993; Ataeva G, Shapira A, Hofstetter A, J Seismol 19:389-401, 2015). The corner frequencies and moment magnitudes were determined using multiple stations for each event, and then the comparative analysis was performed.
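Corner frequencies in studies of this kind are commonly obtained by fitting Brune's source spectrum Ω(f) = Ω₀ / (1 + (f/fc)²) to observed displacement spectra. A hedged sketch of a simple log-domain grid-search fit (an illustration on synthetic data, not the authors' exact procedure):

```python
import numpy as np

def brune_spectrum(f, omega0, fc):
    """Brune (1970) far-field displacement amplitude spectrum."""
    return omega0 / (1.0 + (f / fc) ** 2)

def fit_corner_frequency(f, amp, fc_grid):
    """Grid search for the corner frequency minimizing log-spectral misfit.

    For each trial fc, the plateau level omega0 is profiled out analytically
    as the mean log-offset between data and spectral shape.
    """
    log_amp = np.log(amp)
    best_fc, best_err = None, np.inf
    for fc in fc_grid:
        shape = np.log(1.0 / (1.0 + (f / fc) ** 2))
        log_omega0 = np.mean(log_amp - shape)
        err = np.sum((log_amp - (log_omega0 + shape)) ** 2)
        if err < best_err:
            best_fc, best_err = fc, err
    return best_fc
```

Running the same fit on the P- and S-wave spectra of an event and taking the ratio of the two fitted corner frequencies yields the discriminant the paper proposes.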

  19. Development of failure criterion for Kevlar-epoxy fabric laminates

    NASA Technical Reports Server (NTRS)

    Tennyson, R. C.; Elliott, W. G.

    1984-01-01

The development of the tensor polynomial failure criterion for composite laminate analysis is discussed. In particular, emphasis is given to the fabrication and testing of Kevlar-49 fabric (Style 285)/Narmco 5208 epoxy. The quadratic failure criterion with F(12) = 0 provides accurate estimates of failure stresses for the Kevlar/epoxy investigated. The cubic failure criterion was re-cast into an operationally easier form, providing the engineer with design curves that can be applied to laminates fabricated from unidirectional prepregs. In the form presented, no interaction strength tests are required, although recourse to the quadratic model and the principal strength parameters is necessary. However, insufficient test data exist at present to generalize this approach for all unidirectional prepregs, and its use must be restricted to the generic materials investigated to date.

  20. Non-classical Signature of Parametric Fluorescence and its Application in Metrology

    NASA Astrophysics Data System (ADS)

    Hamar, M.; Michálek, V.; Pathak, A.

    2014-08-01

    The article provides a short theoretical background on what non-classical light means. We applied the criterion for the existence of non-classical effects derived by C.T. Lee to parametric fluorescence. The criterion was originally derived for the study of two light beams with one mode per beam. We checked, through numerical simulations, whether the criterion still works for two multimode beams of parametric down-conversion. The theoretical results were tested by measuring the photon number statistics of twin beams emitted by a nonlinear BBO crystal pumped by an intense femtosecond UV pulse. We used an ICCD camera as the photon detector in both beams. It appears that the criterion can be used for the measurement of the quantum efficiencies of ICCD cameras.

  1. Unified Bohm criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kos, L.; Tskhakaya, D. D.; Jelić, N.

    2015-09-15

    Recent decades have seen research, in the fluid and kinetic approximations separately, into the conditions necessary for the formation of the monotonic potential shape in the sheath appearing at plasma boundaries such as walls. Although either of these approaches yields a formulation commonly known as the much-acclaimed Bohm criterion (BC), the respective results involve essentially different physical quantities that describe the ion gas behavior. In the fluid approach, such a quantity is clearly identified as the ion directional velocity. In the kinetic approach, the ion behavior is formulated via a quantity (the squared inverse velocity averaged over the ion distribution function) without any clear physical significance, which is, moreover, impractical. In the present paper, we try to explain this difference by deriving a condition, called here the unified Bohm criterion, which combines an advanced fluid model with an upgraded explicit kinetic formula in a new form of the BC. By introducing a generalized polytropic coefficient function, the unified BC can be interpreted in a form that holds irrespective of whether the ions are described kinetically or in the fluid approximation.

  2. Technical issues affecting the implementation of US Environmental Protection Agency's proposed fish tissue-based aquatic criterion for selenium.

    PubMed

    Lemly, A Dennis; Skorupa, Joseph P

    2007-10-01

    The US Environmental Protection Agency is developing a national water quality criterion for selenium that is based on concentrations of the element in fish tissue. Although this approach offers advantages over the current water-based regulations, it also presents new challenges with respect to implementation. A comprehensive protocol that answers the "what, where, and when" is essential with the new tissue-based approach in order to ensure proper acquisition of data that apply to the criterion. Dischargers will need to understand selenium transport, cycling, and bioaccumulation in order to effectively monitor for the criterion and, if necessary, develop site-specific standards. This paper discusses 11 key issues that affect the implementation of a tissue-based criterion, ranging from the selection of fish species to the importance of hydrological units in the sampling design. It also outlines a strategy that incorporates both water column and tissue-based approaches. A national generic safety-net water criterion could be combined with a fish tissue-based criterion for site-specific implementation. For the majority of waters nationwide, National Pollutant Discharge Elimination System permitting and other activities associated with the Clean Water Act could continue without the increased expense of sampling and interpreting biological materials. Dischargers would do biotic sampling intermittently (not a routine monitoring burden) on fish tissue relative to the fish tissue criterion. Only when the fish tissue criterion is exceeded would a full site-specific analysis including development of intermedia translation factors be necessary.

  3. Estimating the Effect of Changes in Criterion Score Reliability on the Power of the "F" Test of Equality of Means

    ERIC Educational Resources Information Center

    Feldt, Leonard S.

    2011-01-01

    This article presents a simple, computer-assisted method of determining the extent to which increases in reliability increase the power of the "F" test of equality of means. The method uses a derived formula that relates the changes in the reliability coefficient to changes in the noncentrality of the relevant "F" distribution. A readily available…

  4. Validation of the Chinese Version of the Quality of Nursing Work Life Scale

    PubMed Central

    Fu, Xia; Xu, Jiajia; Song, Li; Li, Hua; Wang, Jing; Wu, Xiaohua; Hu, Yani; Wei, Lijun; Gao, Lingling; Wang, Qiyi; Lin, Zhanyi; Huang, Huigen

    2015-01-01

    Quality of Nursing Work Life (QNWL) serves as a predictor of a nurse’s intent to leave and of hospital nurse turnover. However, QNWL measurement tools that have been validated for use in China are lacking. The present study evaluated the construct validity of the QNWL scale in China. A cross-sectional study using convenience sampling was conducted from June 2012 to January 2013 at five hospitals in Guangzhou, which employ 1938 nurses. The participants were asked to complete the QNWL scale and the World Health Organization Quality of Life abbreviated version (WHOQOL-BREF). A total of 1922 nurses provided the final data used for analyses. Sixty-five nurses from the first investigated division were re-measured two weeks later to assess the test-retest reliability of the scale. The internal consistency reliability of the QNWL scale was assessed using Cronbach’s α. Test-retest reliability was assessed using the intraclass correlation coefficient (ICC). Criterion-related validity was assessed using the correlation of the total scores of the QNWL and the WHOQOL-BREF. Construct validity was assessed with the following indices: χ2 statistics and degrees of freedom; root mean square error of approximation (RMSEA); the Akaike information criterion (AIC); the consistent Akaike information criterion (CAIC); the goodness-of-fit index (GFI); the adjusted goodness-of-fit index; and the comparative fit index (CFI). The findings demonstrated high internal consistency (Cronbach’s α = 0.912) and test-retest reliability (ICC = 0.74) for the QNWL scale. The chi-square test (χ2 = 13879.60, df = 813, P = 0.0001) was significant. The RMSEA value was 0.091, and AIC = 1806.00, CAIC = 7730.69, CFI = 0.93, and GFI = 0.74. The correlation coefficient between the QNWL total scores and the WHOQOL-BREF total scores was 0.605 (P < 0.01). 
The QNWL scale was reliable and valid in Chinese-speaking nurses and could be used as a clinical and research instrument for measuring work-related factors among nurses in China. PMID:25950838
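    The internal-consistency statistic reported above can be made concrete with a small numeric sketch. Everything below is illustrative only: the item-score matrix is invented for demonstration and is not the study's data.

```python
# Minimal sketch of Cronbach's alpha: alpha = (k/(k-1)) * (1 - sum(item
# variances) / variance of total scores). Rows are respondents, columns
# are scale items; all values are hypothetical.

def cronbach_alpha(scores):
    """scores: list of respondent rows, each a list of item scores."""
    k = len(scores[0])                      # number of items
    def var(xs):                            # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
]
alpha = cronbach_alpha(data)   # about 0.92 for this toy matrix
```

    An alpha near or above 0.9, as in the study (0.912), indicates high internal consistency.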

  5. Extracting the exponential behaviors in the market data

    NASA Astrophysics Data System (ADS)

    Watanabe, Kota; Takayasu, Hideki; Takayasu, Misako

    2007-08-01

    We introduce a mathematical criterion defining bubbles and crashes in financial market price fluctuations by considering exponential fitting of the given data. By applying this criterion we can automatically extract the periods in which bubbles and crashes are identified. From stock market data of the so-called Internet bubble, it is found that the characteristic length of a bubble period is about 100 days.
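    The exponential-fitting idea can be sketched numerically: fit log(price) against time by least squares and read the growth rate off the slope; a sustained positive rate over a window would flag a bubble, a sustained negative one a crash. The price series below is synthetic (exact exponential growth), not market data, and the threshold logic is left out.

```python
import math

def exp_growth_rate(prices):
    """Least-squares slope of log(price) vs. time index."""
    n = len(prices)
    xs = range(n)
    ys = [math.log(p) for p in prices]
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic series growing at 2% per step: the fitted rate recovers 0.02.
prices = [100 * math.exp(0.02 * t) for t in range(50)]
rate = exp_growth_rate(prices)
```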

  6. Study to determine the criterion validity of the SenseWear Armband as a measure of physical activity in people with rheumatoid arthritis.

    PubMed

    Tierney, Marie; Fraser, Alexander; Purtill, Helen; Kennedy, Norelee

    2013-06-01

    Measuring physical activity in people with rheumatoid arthritis (RA) is of great importance in light of the increased mortality in this population due to cardiovascular disease. Validation of activity monitors in specific populations is recommended to ensure the accuracy of physical activity measurement. Thus, the purpose of this study was to determine the validity of the SenseWear Pro3 Armband (SWA) as a measure of physical activity during activities of daily living (ADL) in people with RA. Fourteen subjects (8 men and 6 women) with a diagnosis of RA were recruited from rheumatology clinics at the Mid-Western Regional Hospitals, Limerick, Ireland. Participants undertook a series of ADL of varying intensities. The SWA was compared to the criterion measures of the Oxycon Mobile indirect calorimetry system (energy expenditure in kJ) and of manual video observation (step count). Bland and Altman, intraclass correlation coefficient (ICC), and correlation analyses were done using SPSS, version 19.0. The SWA showed substantial agreement (ICC 0.717, P < 0.001) and a strong relationship (Pearson's correlation coefficient = 0.852) compared with the criterion measure when estimating energy expenditure during ADL. However, it was found that the SWA overestimated energy expenditure, particularly at higher intensity levels. The ability of the SWA to estimate step counts during ADL was poor (ICC 0.304, P = 0.038). The SWA can be considered a valid tool to estimate energy expenditure during ADL in the RA population; however, attention should be paid to its tendency to overestimate energy expenditure. Copyright © 2013 by the American College of Rheumatology.
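    The agreement analysis reported above (Bland and Altman) can be illustrated with a short sketch: the bias is the mean of the paired differences and the limits of agreement are bias ± 1.96 SD. The paired energy-expenditure values below are invented for demonstration and are not the study's measurements.

```python
# Bland-Altman sketch for two methods measuring the same quantity.

def bland_altman(method_a, method_b):
    """Return (bias, lower_loa, upper_loa) for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

armband   = [410, 385, 520, 610, 455, 498]   # hypothetical SWA estimates (kJ)
criterion = [395, 390, 500, 575, 450, 480]   # hypothetical indirect calorimetry (kJ)
bias, lo, hi = bland_altman(armband, criterion)
```

    A positive bias, as in this toy data, corresponds to the overestimation tendency the study describes.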

  7. Moderation analysis using a two-level regression model.

    PubMed

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
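    The baseline MMR model the paper argues against can be sketched directly: the criterion y is regressed on a predictor x, a moderator m, and their product term, estimated by least squares. The data below are synthetic and noise-free so the coefficients are recovered exactly; all names and values are illustrative, not the paper's NML estimator.

```python
# Moderated multiple regression by least squares via the normal equations.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Least-squares coefficients from X'X b = X'y."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

# y = 1 + 2*x + 3*m + 0.5*x*m  (moderation: the slope of x depends on m)
data = [(x, m) for x in range(4) for m in (0, 1)]
X = [[1.0, x, m, x * m] for x, m in data]
y = [1 + 2 * x + 3 * m + 0.5 * x * m for x, m in data]
b0, b1, b2, b3 = ols(X, y)
```

    A nonzero product-term coefficient (b3 here) is what signals a moderation effect in the MMR framework.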

  8. Automatic discovery of optimal classes

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John; Freeman, Don; Self, Matthew

    1986-01-01

    A criterion, based on Bayes' theorem, is described that defines the optimal set of classes (a classification) for a given set of examples. This criterion is transformed into an equivalent minimum message length criterion with an intuitive information interpretation. This criterion does not require that the number of classes be specified in advance; the number is determined by the data. The minimum message length criterion includes the message length required to describe the classes, so there is a built-in bias against adding new classes unless they lead to a reduction in the message length required to describe the data. Unfortunately, the search space of possible classifications is too large to search exhaustively, so heuristic search methods, such as simulated annealing, are applied. Tutored learning and probabilistic prediction in particular cases are an important indirect result of optimal class discovery. Extensions to the basic class induction program include the ability to combine category and real-valued data, hierarchical classes, independent classifications, and deciding for each class which attributes are relevant.

  9. Rethinking AASM guideline for split-night polysomnography in Asian patients with obstructive sleep apnea.

    PubMed

    Kim, Dong-Kyu; Choi, Jihye; Kim, Kyung Rae; Hwang, Kyung-Gyun; Ryu, Seungho; Cho, Seok Hyun

    2015-12-01

    Split-night polysomnography (SN-PSG) provides both a diagnosis and titration of continuous positive airway pressure over a single night in patients with suspected obstructive sleep apnea (OSA). However, in Asian patients, the diagnostic validity of American Academy of Sleep Medicine (AASM) guidelines for SN-PSG remains uncertain. Therefore, we examined whether the current criteria for SN-PSG are pertinent for Asian patients. We investigated 134 consecutive patients who were diagnosed with OSA (apnea-hypopnea index (AHI) ≥ 5). We divided the raw data (full-night study) into two parts and compared the first 2 h of sleep with the full night of sleep to evaluate the diagnostic precision and accuracy of the first 2 h of sleep. No difference in AHI was observed between the first 2 h and the full night of sleep. A significant correlation of AHI was observed between the first 2 h and the full night of sleep for severe OSA patients (AHI ≥ 30). The correlation coefficient of AHI was higher under the criterion AHI ≥ 30 than under AHI ≥ 40 (r = 0.831 and r = 0.778, respectively), the latter being the current AASM criterion for SN-PSG. Moreover, the criterion AHI ≥ 30 showed better diagnostic accuracy than the criterion AHI ≥ 40 (89.3% and 88.7%, respectively). This study found possible evidence supporting different diagnostic criteria for SN-PSG in an Asian population. We suggest further studies in other Asian populations to confirm these findings.

  10. Analyses of S-Box in Image Encryption Applications Based on Fuzzy Decision Making Criterion

    NASA Astrophysics Data System (ADS)

    Rehman, Inayatur; Shah, Tariq; Hussain, Iqtadar

    2014-06-01

    In this manuscript, we put forward a standard based on a fuzzy decision-making criterion to examine current substitution boxes and study their strengths and weaknesses in order to decide their appropriateness for image encryption applications. The proposed standard utilizes the results of correlation analysis, entropy analysis, contrast analysis, homogeneity analysis, energy analysis, and mean of absolute deviation analysis. These analyses are applied to well-known substitution boxes. The outcomes of these analyses are then examined further, and a fuzzy soft set decision-making criterion is used to decide the suitability of an S-box for image encryption applications.

  11. Controls of channel morphology and sediment concentration on flow resistance in a large sand-bed river: A case study of the lower Yellow River

    NASA Astrophysics Data System (ADS)

    Ma, Yuanxu; Huang, He Qing

    2016-07-01

    Accurate estimation of flow resistance is crucial for flood routing, flow discharge and velocity estimation, and engineering design. Various empirical and semiempirical flow resistance models have been developed during the past century; however, a universal flow resistance model for varying types of rivers has remained elusive to date. In this study, hydrometric data sets from six stations in the lower Yellow River during 1958-1959 are used to calibrate three empirical flow resistance models (Eqs. (5)-(7)) and evaluate their predictability. A group of statistical measures was used to evaluate the goodness of fit of these models, including root mean square error (RMSE), coefficient of determination (CD), the Nash coefficient (NA), mean relative error (MRE), mean symmetry error (MSE), percentage of data with a relative error ≤ 50% and 25% (P50, P25), and percentage of data with overestimated error (POE). Three model selection criteria are also employed to assess model predictability: the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and a modified model selection criterion (MSC). The results show that mean flow depth (d) and water surface slope (S) can explain only a small proportion of the variance in flow resistance. When channel width (w) and suspended sediment concentration (SSC) are included, the new model (7) achieves a better performance than the previous ones. The MRE of model (7) is generally < 20%, which is apparently better than that reported by previous studies. This model is validated using the data sets from the corresponding stations during 1965-1966, and the results show larger uncertainties than in calibration. This probably resulted from a temporal shift of the dominant controls caused by channel change under a varying flow regime. 
With the advancements of earth observation techniques, information about channel width, mean flow depth, and suspended sediment concentration can be effectively extracted from multisource satellite images. We expect that the empirical methods developed in this study can be used as an effective surrogate in estimation of flow resistance in the large sand-bed rivers like the lower Yellow River.
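    The AIC/BIC comparison used above can be sketched for a least-squares fit, where AIC = n·ln(RSS/n) + 2k and BIC = n·ln(RSS/n) + k·ln(n), with k the number of fitted parameters. The residual sums of squares and sample size below are invented for illustration; they are not the study's values.

```python
import math

def aic_bic(rss, n, k):
    """AIC and BIC for a least-squares fit with residual sum of squares rss."""
    aic = n * math.log(rss / n) + 2 * k
    bic = n * math.log(rss / n) + k * math.log(n)
    return aic, bic

# Hypothetical RSS for two competing resistance models fitted to the same
# n = 120 observations: a 2-parameter model (d, S only) versus a 4-parameter
# model that adds channel width and suspended sediment concentration.
aic2, bic2 = aic_bic(rss=8.4, n=120, k=2)
aic4, bic4 = aic_bic(rss=5.1, n=120, k=4)
better_by_aic = aic4 < aic2   # extra parameters justified by the fit gain
```

    Both criteria penalize added parameters, BIC more strongly for large n, so a richer model is preferred only when its fit improvement outweighs the penalty.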

  12. Reliability and Validity of the Musculoskeletal Tumor Society Scoring System for the Upper Extremity in Japanese Patients.

    PubMed

    Uehara, Kosuke; Ogura, Koichi; Akiyama, Toru; Shinoda, Yusuke; Iwata, Shintaro; Kobayashi, Eisuke; Tanzawa, Yoshikazu; Yonemoto, Tsukasa; Kawano, Hirotaka; Kawai, Akira

    2017-09-01

    The Musculoskeletal Tumor Society (MSTS) scoring system developed in 1993 is a widely used disease-specific evaluation tool for assessment of physical function in patients with musculoskeletal tumors; however, only a few studies have confirmed its reliability and validity. The aim of this study was to validate the MSTS scoring system for the upper extremity (MSTS-UE) in Japanese patients with musculoskeletal tumors for use by others in research. Does the MSTS-UE have: (1) sufficient reliability and internal consistency; (2) adequate construct validity; and (3) reasonable criterion validity in comparison with the Toronto Extremity Salvage Score (TESS) and SF-36? Reliability was assessed using test-retest analysis, and internal consistency was evaluated with Cronbach's alpha coefficient. Construct validity was evaluated using a scree plot to confirm the number of constructs and the Akaike information criterion network. Criterion validity was evaluated by comparing the MSTS-UE with the TESS and SF-36. The test-retest reliability with intraclass correlation coefficient (0.95; 95% CI, 0.91-0.97) was excellent, and internal consistency with Cronbach's α (0.7; 95% CI, 0.53-0.81) was acceptable. There were no ceiling or floor effects. The Akaike information criterion network showed that lifting ability, pain, and dexterity played central roles among the components. The MSTS-UE showed substantial correlation with the TESS scoring scale (r = 0.75; p < 0.001) and fair correlation with the SF-36 physical component summary (r = 0.37; p = 0.007). Although the MSTS-UE showed only slight correlation with the SF-36 mental component summary, the emotional acceptance component of the MSTS-UE showed fair correlation (r = 0.29; p = 0.039). We conclude that the MSTS is not an adequate measure of general health-related quality of life; however, this system was designed mainly to be a simple measure of function in a single extremity. 
To evaluate the mental state of patients with musculoskeletal tumors in the upper extremity, further study is needed.

  13. Thermoelectric energy converters under a trade-off figure of merit with broken time-reversal symmetry

    NASA Astrophysics Data System (ADS)

    Iyyappan, I.; Ponmurugan, M.

    2017-09-01

    We study the performance of a three-terminal thermoelectric device, such as a heat engine or refrigerator, with broken time-reversal symmetry by applying the unified trade-off figure of merit (the Ω̇ criterion), which accounts for both useful energy and losses. For the heat engine, we find that a thermoelectric device working under the maximum Ω̇ criterion gives a significantly better performance than a device working at maximum power output. Within the framework of linear irreversible thermodynamics such a direct comparison is not possible for refrigerators; however, our study indicates that, for refrigerators, the maximum cooling load gives a better performance than the maximum Ω̇ criterion for larger asymmetry. Our results can be useful for choosing a suitable optimization criterion for operating a real thermoelectric device with broken time-reversal symmetry.

  14. Determination Of Slitting Criterion Parameter During The Multi Slit Rolling Process

    NASA Astrophysics Data System (ADS)

    Stefanik, Andrzej; Mróz, Sebastian; Szota, Piotr; Dyja, Henryk

    2007-05-01

    The rolling of rods with slitting of the strip calls for the use of special mathematical models that allow for the separation of metal. Within this study, a theoretical analysis was carried out of the effect of the gap of the slitting rollers on the process of strip slitting during the rolling of 20 mm and 16 mm-diameter ribbed rods rolled according to the two-strand technology. For the numerical modeling of strip slitting the Forge3® computer program was applied. Strip slitting in the simulation is implemented by an algorithm that removes elements in which the critical value of the normalized Cockcroft-Latham criterion has been exceeded. To determine the value of the criterion, the inverse method was applied. The distance between the point where the crack begins and the point of contact between the metal and the slitting rollers was the parameter for the analysis. The power and rolling torque during slit rolling are presented, along with the distribution and change of the stress in the strand during slitting.

  15. Failure Criteria for FRP Laminates in Plane Stress

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.

    2003-01-01

    A new set of six failure criteria for fiber reinforced polymer laminates is described. Derived from Dvorak's fracture mechanics analyses of cracked plies and from Puck's action plane concept, the physically-based criteria, denoted LaRC03, predict matrix and fiber failure accurately without requiring curve-fitting parameters. For matrix failure under transverse compression, the fracture plane is calculated by maximizing the Mohr-Coulomb effective stresses. A criterion for fiber kinking is obtained by calculating the fiber misalignment under load, and applying the matrix failure criterion in the coordinate frame of the misalignment. Fracture mechanics models of matrix cracks are used to develop a criterion for matrix in tension and to calculate the associated in-situ strengths. The LaRC03 criteria are applied to a few examples to predict failure load envelopes and to predict the failure mode for each region of the envelope. The analysis results are compared to the predictions using other available failure criteria and with experimental results. Predictions obtained with LaRC03 correlate well with the experimental results.

  16. Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.

    PubMed

    Ćwik, Michał; Józefczyk, Jerzy

    2018-01-01

    An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as this calculation is itself an NP-hard problem. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop problem. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with previously elaborated heuristic algorithms based on the evolutionary and middle-interval approaches. The computational experiments conducted showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the computation time. The Wilcoxon signed-rank test confirmed this conclusion.
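    The makespan criterion underlying the problem can be sketched with the classic permutation flow-shop recurrence: a job's completion time on machine m is max(previous job's finish on m, this job's finish on m-1) plus its processing time. The 3-job, 2-machine instance below is invented (e.g., interval midpoints); the regret machinery itself is omitted.

```python
from itertools import permutations

def makespan(perm, p):
    """p[job][machine] = processing time; perm = job order."""
    n_machines = len(p[0])
    prev = [0] * n_machines            # completion times of the previous job
    for job in perm:
        cur = [0] * n_machines
        for m in range(n_machines):
            ready = max(prev[m], cur[m - 1] if m else 0)
            cur[m] = ready + p[job][m]
        prev = cur
    return prev[-1]

p = [[3, 2], [1, 4], [2, 2]]           # hypothetical processing times
best = min((makespan(perm, p), perm) for perm in permutations(range(3)))
```

    Exhaustive enumeration works only for toy instances; the heuristics in the paper exist precisely because this search explodes factorially.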

  17. Using the Existential Criterion for Assessing the Personality of Overprotective and Overly Demanding Parents in the Families of Patients Who Have Sought Psychological Counseling for Parent-Child Problems

    ERIC Educational Resources Information Center

    Kapustin, S. A.

    2016-01-01

    The article presents the results of applying the existential criterion of normal and abnormal personalities for assessing the personality of overprotective and overly demanding parents in 176 families of patients who have sought psychological counseling. It is shown that the position of overprotective parents is one-sided in relation to the…

  18. The transformation of the tender evaluation process in public procurement in Poland

    NASA Astrophysics Data System (ADS)

    Plebankiewicz, E.; Kozik, R.

    2017-10-01

    Procedures regarding the evaluation of tenders have changed since the public procurement law was enacted (it came into force on January 1, 1995). The contracting authority could apply both criteria related to the qualities of the contractor and those related to the subject matter of the public contract. Two extensive amendments in 2001 and a government project introduced vital regulations and excluded the possibility of applying criteria related to the qualities of the contractor. The Act of 29 January 2004, Public Procurement Law, allowed price to be used as the sole contract award criterion. The changes in the Law in 2014 restricted that possibility to situations in which the subject matter of a contract is commonly available and has established quality standards. The Act of 22 June 2016, amending the Public Procurement Law Act and some other laws, introduced a new list of criteria and limited the importance of the price criterion in certain situations. Instead of price, cost can also be a criterion for tender evaluation; the cost criterion can be determined using life cycle costing. In the paper, based on contract notices of open tendering published in the Public Procurement Bulletin, the criteria for construction contract award are analysed. In particular, the effectiveness of the changes in the Procurement Law is examined.

  19. Selection of effective cocrystals former for dissolution rate improvement of active pharmaceutical ingredients based on lipoaffinity index.

    PubMed

    Cysewski, Piotr; Przybyłek, Maciej

    2017-09-30

    A new theoretical screening procedure is proposed for the selection of potential cocrystal formers with the ability to enhance the dissolution rates of drugs. The procedure relies on a training set comprising 102 positive and 17 negative cases of cocrystals found in the literature. Despite the fact that the only available data were of qualitative character, statistical analysis using binary classification allowed quantitative criteria to be formulated. Among the 3679 molecular descriptors considered, the relative value of the lipoaffinity index, expressed as the difference between the values calculated for the active compound and the excipient, was found to be the measure best suited for discriminating positive and negative cases. Assuming 5% precision, the applied classification criterion led to inclusion of 70% of positive cases in the final prediction. Since the lipoaffinity index is a molecular descriptor computed using only 2D information about a chemical structure, its estimation is straightforward and computationally inexpensive. The inclusion of an additional criterion quantifying the cocrystallization probability leads to the conjunction of criteria H_mix < -0.18 and ΔLA > 3.61, allowing for identification of dissolution rate enhancers. The screening procedure was applied to finding the most promising coformers of drugs such as Iloperidone, Ritonavir, Carbamazepine, and Ethenzamide. Copyright © 2017 Elsevier B.V. All rights reserved.
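    The stated screening rule reduces to a simple conjunction test, sketched below. The thresholds are the ones quoted in the abstract; the candidate (H_mix, ΔLA) pairs are invented for illustration.

```python
# Conjunction screening rule from the abstract: flag a drug-coformer pair
# as a likely dissolution-rate enhancer when H_mix < -0.18 and deltaLA > 3.61.

def is_candidate(h_mix, delta_la):
    return h_mix < -0.18 and delta_la > 3.61

# Hypothetical candidate pairs: (H_mix, deltaLA)
candidates = [(-0.25, 4.2), (-0.10, 5.0), (-0.30, 2.1)]
flags = [is_candidate(h, d) for h, d in candidates]
```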

  20. Variation among conventional cultivars could be used as a criterion for environmental safety assessment of Bt rice on nontarget arthropods

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Dang, Cong; Chang, Xuefei; Tian, Junce; Lu, Zengbin; Chen, Yang; Ye, Gongyin

    2017-02-01

    The current difficulty facing risk evaluations of Bacillus thuringiensis (Bt) crops on nontarget arthropods (NTAs) is the lack of criteria for determining what represents unacceptable risk. In this study, we investigated the biological parameters in the laboratory and the field population abundance of Nilaparvata lugens (Hemiptera: Delphacidae) on two Bt rice lines and their non-Bt parent, together with 14 other conventional rice cultivars. Significant differences were found in the nymphal duration and fecundity of N. lugens fed on Bt rice KMD2, as well as in the field population density on 12 October, compared with the non-Bt parent. However, compared with the variation among conventional rice cultivars, the variation of each parameter between Bt rice and the non-Bt parent was much smaller, as can easily be seen from low-high bar graphs and from the coefficient of variation (CV). The variation among conventional cultivars is proposed as a criterion for the safety assessment of Bt rice on NTAs, particularly when statistically significant differences in several parameters are found between Bt rice and its non-Bt parent. The coefficient of variation is suggested as a promising parameter for ecological risk judgement of IRGM rice on NTAs.

  1. Volatilization of organic compounds from streams

    USGS Publications Warehouse

    Rathburn, R.E.; Tai, D.Y.

    1982-01-01

    Mass-transfer coefficients for the volatilization of ethylene and propane were correlated with the hydraulic and geometric properties of seven streams, and predictive equations were developed. The equations were evaluated using a normalized root-mean-square error as the criterion of comparison. The two best equations were a two-variable equation containing the energy dissipated per unit mass per unit time and the average depth of flow and a three-variable equation containing the average velocity, the average depth of flow, and the slope of the stream. Procedures for adjusting the ethylene and propane coefficients for other organic compounds were evaluated. These procedures are based on molecular diffusivity, molecular diameter, or molecular weight. Because of limited data, none of these procedures have been extensively verified. Therefore, until additional data become available, it is suggested that the mass-transfer coefficient be assumed to be inversely proportional to the square root of the molecular weight.
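    The suggested molecular-weight adjustment is simple arithmetic: if the mass-transfer coefficient scales inversely with the square root of molecular weight, a coefficient measured for propane (MW ≈ 44.1) can be rescaled to another compound. The propane coefficient and the target compound (benzene, MW ≈ 78.1) below are chosen only to illustrate the scaling, not taken from the study.

```python
# k_target / k_ref = sqrt(MW_ref / MW_target) under the inverse-sqrt assumption.

def adjust_coefficient(k_ref, mw_ref, mw_target):
    """Scale a mass-transfer coefficient by the inverse square root of MW."""
    return k_ref * (mw_ref / mw_target) ** 0.5

k_propane = 2.0                                      # hypothetical value, per day
k_benzene = adjust_coefficient(k_propane, 44.1, 78.1)
```

    Heavier compounds get smaller coefficients, consistent with the inverse proportionality the authors suggest as an interim rule.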

  2. Noise and sleep - A literature review and a proposed criterion for assessing effect

    NASA Technical Reports Server (NTRS)

    Lukas, J. S.

    1975-01-01

    Results of a number of studies on the effects of various types of noise on the sleep of subjects of both sexes and a wide range of age groups are reviewed to develop a tentative criterion for assessing these effects. Available data suggest that reasonably accurate predictions of sleep disruption may be made if the interfering noise is specified in units (EPNdB or EdBA) which account for its spectral characteristics and duration. When EPNdB units are used as the measure of noise intensity, the correlation coefficient between intensity and the probability of no sleep disturbance is -0.86. Because of the paucity of data on the long-term results of frequent behavioral wakings or arousals, it is suggested that disturbance of sleep be defined as an electroencephalographic change of one or more sleep stages.

  3. An application of model-fitting procedures for marginal structural models.

    PubMed

    Mortimer, Kathleen M; Neugebauer, Romain; van der Laan, Mark; Tager, Ira B

    2005-08-15

    Marginal structural models (MSMs) are being used more frequently to obtain causal effect estimates in observational studies. Although the principal estimator of MSM coefficients has been the inverse probability of treatment weight (IPTW) estimator, there are few published examples that illustrate how to apply IPTW or discuss the impact of model selection on effect estimates. The authors applied IPTW estimation of an MSM to observational data from the Fresno Asthmatic Children's Environment Study (2000-2002) to evaluate the effect of asthma rescue medication use on pulmonary function and compared their results with those obtained through traditional regression methods. Akaike's Information Criterion and cross-validation methods were used to fit the MSM. In this paper, the influence of model selection and evaluation of key assumptions such as the experimental treatment assignment assumption are discussed in detail. Traditional analyses suggested that medication use was not associated with an improvement in pulmonary function--a finding that is counterintuitive and probably due to confounding by symptoms and asthma severity. The final MSM estimated that medication use was causally related to a 7% improvement in pulmonary function. The authors present examples that should encourage investigators who use IPTW estimation to undertake and discuss the impact of model-fitting procedures to justify the choice of the final weights.
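
    The IPTW estimator referenced above weights each subject by the inverse of the probability of receiving the treatment they actually received; stabilized weights place the marginal treatment probability in the numerator. A minimal per-subject sketch (the propensity values in the test are invented for illustration, and the treatment model that would produce them is omitted):

```python
def stabilized_iptw(treated, p_marginal, p_conditional):
    """Stabilized inverse-probability-of-treatment weight for one subject.
    treated: 1 if treated, 0 otherwise
    p_marginal: P(A=1) ignoring covariates
    p_conditional: P(A=1 | covariates) from the treatment model"""
    if treated:
        return p_marginal / p_conditional
    return (1.0 - p_marginal) / (1.0 - p_conditional)
```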

  4. Evaluation of failure criterion for graphite/epoxy fabric laminates

    NASA Technical Reports Server (NTRS)

    Tennyson, R. C.; Wharram, G. E.

    1985-01-01

    The development and application of the tensor polynomial failure criterion for composite laminate analysis is described. Emphasis is given to the fabrication and testing of Narmco Rigidite 5208-WT300, a plain weave fabric of Thornel 300 graphite fibers impregnated with Narmco 5208 resin. The quadratic failure criterion with F12 = 0 provides accurate estimates of failure stresses for the graphite/epoxy investigated. The cubic failure criterion was recast into an operationally easier form, providing design curves that can be applied to laminates fabricated from orthotropic woven fabric prepregs. In the form presented, no interaction strength tests are required, although recourse to the quadratic model and the principal strength parameters is necessary. However, insufficient test data exist at present to generalize this approach for all prepreg constructions, and its use must be restricted to the generic materials and configurations investigated to date.
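
    With F12 = 0, the quadratic tensor polynomial (Tsai-Wu) criterion reduces to terms built from the principal strength parameters alone. A sketch of the resulting failure index under that assumption (the strength values in the test are generic, not Rigidite 5208-WT300 data):

```python
def tsai_wu_quadratic(s1, s2, s6, Xt, Xc, Yt, Yc, S):
    """Quadratic tensor polynomial (Tsai-Wu) failure index with the
    interaction term F12 set to zero. Failure is predicted when the
    returned index reaches 1. Compressive strengths Xc, Yc are entered
    as positive magnitudes; s1, s2, s6 are the in-plane stresses."""
    F1 = 1.0 / Xt - 1.0 / Xc
    F2 = 1.0 / Yt - 1.0 / Yc
    F11 = 1.0 / (Xt * Xc)
    F22 = 1.0 / (Yt * Yc)
    F66 = 1.0 / S ** 2
    return F1 * s1 + F2 * s2 + F11 * s1 ** 2 + F22 * s2 ** 2 + F66 * s6 ** 2
```

    By construction the index is exactly 1 when a single stress component equals the corresponding strength (e.g. s1 = Xt).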

  5. Muscle Fiber Orientation Angle Dependence of the Tensile Fracture Behavior of Frozen Fish Muscle

    NASA Astrophysics Data System (ADS)

    Hagura, Yoshio; Okamoto, Kiyoshi; Suzuki, Kanichi; Kubota, Kiyoshi

    We have proposed a new cutting method for frozen fish named "cryo-cutting". This method applies tensile or bending fracture force to the frozen fish at appropriately low temperatures. In this paper, to clarify the cryo-cutting mechanism, we analyzed the tensile fracture behavior of frozen fish muscle. In the analysis, the frozen fish muscle was treated as a unidirectionally fiber-reinforced composite material consisting of fibers (muscle fibers) and a matrix (connective tissue). Fracture criteria for unidirectionally fiber-reinforced composite materials (the maximum stress criterion and the Tsai-Hill criterion) were used. The following results were obtained: (1) By using the Tsai-Hill criterion, the muscle fiber orientation angle dependence of the tensile fracture stress could be calculated. (2) By using the maximum stress criterion jointly with the Tsai-Hill criterion, the muscle fiber orientation angle dependence of the fracture mode of the frozen fish muscle could be estimated.
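
    The Tsai-Hill criterion used in the analysis predicts off-axis tensile strength of a unidirectional composite from its longitudinal strength X, transverse strength Y, and in-plane shear strength S. A sketch of that calculation (the strength values in the test below are generic, not the paper's fish-muscle measurements):

```python
import math

def tsai_hill_strength(theta_deg, X, Y, S):
    """Off-axis tensile strength predicted by the Tsai-Hill criterion for
    a unidirectional composite loaded at angle theta_deg to the fibers:
    1/sigma^2 = cos^4/X^2 + (1/S^2 - 1/X^2) sin^2 cos^2 + sin^4/Y^2."""
    t = math.radians(theta_deg)
    c2, s2 = math.cos(t) ** 2, math.sin(t) ** 2
    inv_sq = (c2 ** 2) / X ** 2 + (1 / S ** 2 - 1 / X ** 2) * s2 * c2 + (s2 ** 2) / Y ** 2
    return 1.0 / math.sqrt(inv_sq)
```

    The limits behave as expected: at 0 degrees the strength equals X, and at 90 degrees it equals Y.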

  6. Convective instabilities in SN 1987A

    NASA Technical Reports Server (NTRS)

    Benz, Willy; Thielemann, Friedrich-Karl

    1990-01-01

    Following Bandiera (1984), it is shown that the relevant criterion to determine the stability of a blast wave, propagating through the layers of a massive star in a supernova explosion, is the Schwarzschild (or Ledoux) criterion rather than the Rayleigh-Taylor criterion. Both criteria coincide only in the incompressible limit. Results of a linear stability analysis are presented for a one-dimensional (spherical) explosion in a realistic model for the progenitor of SN 1987A. When applying the Schwarzschild criterion, unstable regions get extended considerably. Convection is found to develop behind the shock, with a characteristic growth rate corresponding to a time scale much smaller than the shock traversal time. This ensures that efficient mixing will take place. Since the entire ejected mass is found to be convectively unstable, Ni can be transported outward, even into the hydrogen envelope, while hydrogen can be mixed deep into the helium core.

  7. Does optimal partitioning of color space account for universal color categorization?

    PubMed Central

    2017-01-01

    A 2007 study by Regier, Kay, and Khetarpal purports to show that universal categories emerge as a result of optimal partitioning of color space. Regier, Kay, and Khetarpal only consider color categorizations of up to six categories. However, in most industrialized societies eleven color categories are observed. This paper shows that when applied to the case of eleven categories, Regier, Kay, and Khetarpal’s optimality criterion yields unsatisfactory results. Applications of the criterion to the intermediate cases of seven, eight, nine, and ten color categories are also briefly considered and are shown to yield mixed results. We consider a number of possible explanations of the failure of the criterion in the case of eleven categories, and suggest that, as color categorizations get more complex, further criteria come to play a role, alongside Regier, Kay, and Khetarpal’s optimality criterion. PMID:28570598

  8. Nurses' Empowerment Scale for ICU patients' families: an instrument development study.

    PubMed

    Li, Hong; Liu, Ya-Lan; Qiu, Li; Chen, Qiao-Ling; Wu, Jing-Bing; Chen, Li-Li; Li, Na

    2016-09-01

    Family members provide essential support for ICU patients, contributing to their mental and physical recovery. Empowering ICU patients' families may help them overcome inadequacies and meet their own and patients' acknowledged needs. Nurses should understand and address the empowerment status of patients' families. To develop a tool, the Nurses' Empowerment Scale for Intensive Care Unit (ICU) Patients' Families (NESIPF), to help ICU nursing staff assess the empowerment status of patients' families. Four-phase instrument development study. A 19-item instrument was initially generated based on literature review and interviews with family members of ICU patients. The Delphi research method was applied to gain expert opinion and consensus via rounds of questionnaires. A panel of 27 experts experienced in critical care medicine, nursing and psychology participated in two Delphi rounds, and their input helped formulate an 18-item pretest instrument. Families of 20 patients were recruited to examine instrument readability. After a 2-week interval, another 20 patients' families were recruited to examine test-retest reliability. Two hundred questionnaires were then administered and analysed to examine the instrument's construct validity, criterion-related validity and internal consistency. Expert authority coefficients of the two Delphi rounds reached 0·89 and 0·91. Kendall's W coefficients of 0·113 (P < 0·001) in round 1 and 0·220 (P < 0·001) in round 2 indicated slight to fair agreement among experts. Content validity index (CVI) reached 1·0 for 12 items; the CVI for item 13 was <0·7 so it was excluded. Cronbach's α coefficient was 0·92, indicating acceptable internal consistency reliability. The coefficient of internal consistency of each dimension was 0·717-0·921. The Pearson correlation coefficient >0·9 (P < 0·05) showed an acceptable test-retest reliability. 
The instrument has acceptable reliability and validity and can assess the empowerment status of families of critically ill patients. Knowledge of families' empowerment status may help to address their psychological needs and their ability to provide family support. © 2014 British Association of Critical Care Nurses.
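
    For reference, a Cronbach's α like the one reported above is computed from per-item score columns as k/(k-1) · (1 - Σ item variances / variance of total scores). A minimal sketch using invented item data, not the study's questionnaires:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item score columns (one list per
    item, same respondents in the same order). Uses population variances
    consistently for items and totals."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```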

  9. Dynamics of an HBV/HCV infection model with intracellular delay and cell proliferation

    NASA Astrophysics Data System (ADS)

    Zhang, Fengqin; Li, Jianquan; Zheng, Chongwu; Wang, Lin

    2017-01-01

    A new mathematical model of hepatitis B/C virus (HBV/HCV) infection which incorporates the proliferation of healthy hepatocyte cells and the latent period of infected hepatocyte cells is proposed and studied. The dynamics is analyzed via Pontryagin's method and a newly proposed alternative geometric stability switch criterion. Sharp conditions ensuring stability of the infection persistent equilibrium are derived by applying Pontryagin's method. Using the intracellular delay as the bifurcation parameter and applying an alternative geometric stability switch criterion, we show that the HBV/HCV infection model undergoes stability switches. Furthermore, numerical simulations illustrate that the intracellular delay can induce complex dynamics such as persistence bubbles and chaos.

  10. [GSH fermentation process modeling using entropy-criterion based RBF neural network model].

    PubMed

    Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng

    2008-05-01

    The prediction accuracy and generalization of GSH fermentation process models are often degraded by noise in the corresponding experimental data. To avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Compared with traditional MSE-criterion based parameter learning, it considers the whole distribution structure of the training data set in the parameter learning process, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization and robustness, and thus offers potential merit for application to GSH fermentation process modeling.

  11. Optimal low symmetric dissipation Carnot engines and refrigerators

    NASA Astrophysics Data System (ADS)

    de Tomás, C.; Hernández, A. Calvo; Roco, J. M. M.

    2012-01-01

    A unified optimization criterion for Carnot engines and refrigerators is proposed. It consists of maximizing the product of the heat absorbed by the working system and the efficiency per unit time of the device, either the engine or the refrigerator. This criterion can be applied to both low symmetric dissipation Carnot engines and refrigerators. For engines the criterion coincides with the maximum power criterion, and the Curzon-Ahlborn efficiency ηCA = 1 − √(Tc/Th) is recovered, where Th and Tc are the temperatures of the hot and cold reservoirs, respectively [Esposito, Kawai, Lindenberg, and Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. For refrigerators the criterion provides the counterpart of the Curzon-Ahlborn efficiency for refrigerators, εCA = [1/√(1 − Tc/Th)] − 1, first derived by Yan and Chen for the particular case of an endoreversible Carnot-type refrigerator with linear (Newtonian) finite heat transfer laws [Yan and Chen, J. Phys. D: Appl. Phys. 23, 136 (1990)].
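
    Both figures of merit follow directly from the reservoir temperatures; a quick numerical check (the temperatures are chosen arbitrarily):

```python
import math

def eta_ca(t_cold, t_hot):
    """Curzon-Ahlborn efficiency of an engine at maximum power:
    eta = 1 - sqrt(Tc/Th)."""
    return 1.0 - math.sqrt(t_cold / t_hot)

def eps_ca(t_cold, t_hot):
    """Yan-Chen counterpart of the Curzon-Ahlborn result for a
    refrigerator: eps = 1/sqrt(1 - Tc/Th) - 1."""
    return 1.0 / math.sqrt(1.0 - t_cold / t_hot) - 1.0
```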

  12. Creation and Initial Validation of the International Dysphagia Diet Standardisation Initiative Functional Diet Scale

    PubMed Central

    Steele, Catriona M.; Namasivayam-MacDonald, Ashwini M.; Guida, Brittany T.; Cichero, Julie A.; Duivestein, Janice; Hanson, Ben; Lam, Peter; Riquelme, Luis F.

    2018-01-01

    Objective To assess consensual validity, interrater reliability, and criterion validity of the International Dysphagia Diet Standardisation Initiative Functional Diet Scale, a new functional outcome scale intended to capture the severity of oropharyngeal dysphagia, as represented by the degree of diet texture restriction recommended for the patient. Design Participants assigned International Dysphagia Diet Standardisation Initiative Functional Diet Scale scores to 16 clinical cases. Consensual validity was measured against reference scores determined by an author reference panel. Interrater reliability was measured overall and across quartile subsets of the dataset. Criterion validity was evaluated versus Functional Oral Intake Scale (FOIS) scores assigned by survey respondents to the same case scenarios. Feedback was requested regarding ease and likelihood of use. Setting Web-based survey. Participants Respondents (NZ170) from 29 countries. Interventions Not applicable. Main Outcome Measures Consensual validity (percent agreement and Kendall t), criterion validity (Spearman rank correlation), and interrater reliability (Kendall concordance and intraclass coefficients). Results The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed strong consensual validity, criterion validity, and interrater reliability. Scenarios involving liquid-only diets, transition from nonoral feeding, or trial diet advances in therapy showed the poorest consensus, indicating a need for clear instructions on how to score these situations. The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed greater sensitivity than the FOIS to specific changes in diet. Most (>70%) respondents indicated enthusiasm for implementing the International Dysphagia Diet Standardisation Initiative Functional Diet Scale. 
Conclusions This initial validation study suggests that the International Dysphagia Diet Standardisation Initiative Functional Diet Scale has strong consensual and criterion validity and can be used reliably by clinicians to capture diet texture restriction and progression in people with dysphagia. PMID:29428348
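
    The Kendall concordance statistic used for interrater reliability measures agreement among m raters ranking the same n items: W = 12S / (m²(n³ − n)), where S is the sum of squared deviations of the rank sums from their mean. A minimal no-ties sketch (the rankings are invented, not survey data):

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance for m raters each ranking the
    same n items (rankings: one rank list per rater, ranks 1..n, no ties).
    Returns 1.0 for complete agreement, 0.0 for no agreement."""
    m, n = len(rankings), len(rankings[0])
    rank_sums = [sum(r) for r in zip(*rankings)]
    mean = m * (n + 1) / 2.0
    s = sum((rs - mean) ** 2 for rs in rank_sums)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))
```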

  13. Creation and Initial Validation of the International Dysphagia Diet Standardisation Initiative Functional Diet Scale.

    PubMed

    Steele, Catriona M; Namasivayam-MacDonald, Ashwini M; Guida, Brittany T; Cichero, Julie A; Duivestein, Janice; Hanson, Ben; Lam, Peter; Riquelme, Luis F

    2018-05-01

    To assess consensual validity, interrater reliability, and criterion validity of the International Dysphagia Diet Standardisation Initiative Functional Diet Scale, a new functional outcome scale intended to capture the severity of oropharyngeal dysphagia, as represented by the degree of diet texture restriction recommended for the patient. Participants assigned International Dysphagia Diet Standardisation Initiative Functional Diet Scale scores to 16 clinical cases. Consensual validity was measured against reference scores determined by an author reference panel. Interrater reliability was measured overall and across quartile subsets of the dataset. Criterion validity was evaluated versus Functional Oral Intake Scale (FOIS) scores assigned by survey respondents to the same case scenarios. Feedback was requested regarding ease and likelihood of use. Web-based survey. Respondents (N=170) from 29 countries. Not applicable. Consensual validity (percent agreement and Kendall τ), criterion validity (Spearman rank correlation), and interrater reliability (Kendall concordance and intraclass coefficients). The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed strong consensual validity, criterion validity, and interrater reliability. Scenarios involving liquid-only diets, transition from nonoral feeding, or trial diet advances in therapy showed the poorest consensus, indicating a need for clear instructions on how to score these situations. The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed greater sensitivity than the FOIS to specific changes in diet. Most (>70%) respondents indicated enthusiasm for implementing the International Dysphagia Diet Standardisation Initiative Functional Diet Scale. 
This initial validation study suggests that the International Dysphagia Diet Standardisation Initiative Functional Diet Scale has strong consensual and criterion validity and can be used reliably by clinicians to capture diet texture restriction and progression in people with dysphagia. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  14. Adequacy of the Ultra-Short-Term HRV to Assess Adaptive Processes in Youth Female Basketball Players.

    PubMed

    Nakamura, Fabio Y; Pereira, Lucas A; Cal Abad, Cesar C; Cruz, Igor F; Flatt, Andrew A; Esco, Michael R; Loturco, Irineu

    2017-02-01

    Heart rate variability has been widely used to monitor athletes' cardiac autonomic control changes induced by training and competition, and recently shorter recording times have been sought to improve its practicality. The aim of this study was to test the agreement between the (ultra-short-term) natural log of the root-mean-square difference of successive normal RR intervals (lnRMSSD - measured in only 1 min post-1 min stabilization) and the criterion lnRMSSD (measured in the last 5 min out of 10 min of recording) in young female basketball players. Furthermore, the correlation between training-induced delta change in the ultra-short-term lnRMSSD and the criterion lnRMSSD was calculated. Seventeen players were assessed at rest pre- and post-eight weeks of training. Trivial effect sizes (-0.03 in the pre- and 0.10 in the post-treatment) were found in the comparison between the ultra-short-term lnRMSSD (3.29 ± 0.45 and 3.49 ± 0.35 ms, in the pre- and post-, respectively) and the criterion lnRMSSD (3.30 ± 0.40 and 3.45 ± 0.41 ms, in the pre- and post-, respectively) (intraclass correlation coefficient = 0.95 and 0.93). In both cases, the response to training was significant, with a Pearson's correlation of 0.82 between the delta changes of the ultra-short-term lnRMSSD and the criterion lnRMSSD. In conclusion, the lnRMSSD can be calculated within only 2 min of data acquisition (the 1st min discarded) in young female basketball players, with the ultra-short-term measure presenting similar sensitivity to training effects as the standard criterion measure.
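
    The lnRMSSD index itself is straightforward to compute from a series of RR intervals; a sketch (the interval values in the test are illustrative only, not study data):

```python
import math

def ln_rmssd(rr_intervals_ms):
    """Natural log of the root-mean-square of successive differences of
    normal RR intervals (in ms), the HRV index used in the study."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return math.log(rmssd)
```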

  15. Adequacy of the Ultra-Short-Term HRV to Assess Adaptive Processes in Youth Female Basketball Players

    PubMed Central

    Nakamura, Fabio Y; Pereira, Lucas A; Cal Abad, Cesar C; Cruz, Igor F; Flatt, Andrew A; Esco, Michael R; Loturco, Irineu

    2017-01-01

    Abstract Heart rate variability has been widely used to monitor athletes’ cardiac autonomic control changes induced by training and competition, and recently shorter recording times have been sought to improve its practicality. The aim of this study was to test the agreement between the (ultra-short-term) natural log of the root-mean-square difference of successive normal RR intervals (lnRMSSD - measured in only 1 min post-1 min stabilization) and the criterion lnRMSSD (measured in the last 5 min out of 10 min of recording) in young female basketball players. Furthermore, the correlation between training induced delta change in the ultra-short-term lnRMSSD and the criterion lnRMSSD was calculated. Seventeen players were assessed at rest pre- and post-eight weeks of training. Trivial effect sizes (-0.03 in the pre- and 0.10 in the post- treatment) were found in the comparison between the ultra-short-term lnRMSSD (3.29 ± 0.45 and 3.49 ± 0.35 ms, in the pre- and post-, respectively) and the criterion lnRMSSD (3.30 ± 0.40 and 3.45 ± 0.41 ms, in the pre- and post-, respectively) (intraclass correlation coefficient = 0.95 and 0.93). In both cases, the response to training was significant, with Pearson’s correlation of 0.82 between the delta changes of the ultra-short-term lnRMSSD and the criterion lnRMSSD. In conclusion, the lnRMSSD can be calculated within only 2 min of data acquisition (the 1st min discarded) in young female basketball players, with the ultra-short-term measure presenting similar sensitivity to training effects as the standard criterion measure. PMID:28469745

  16. The research of Raman spectra measurement system based on tiled-grating monochromator

    NASA Astrophysics Data System (ADS)

    Liu, Li-na; Zhang, Yin-chao; Chen, Si-ying; Chen, He; Guo, Pan; Wang, Yuan

    2013-09-01

    A Raman spectrum measurement system, essentially a Raman spectrometer, has been independently designed and built by our research group. The system adopts a tiled-grating structure: two 50 mm × 50 mm holographic gratings are tiled to form one large grating, which improves the resolution while reducing cost. This article outlines the system's composition, structure, and performance parameters. The corresponding instrument resolutions under different criteria were then deduced through experiments and data fitting. The results show that the system's minimum resolution reaches 0.02 nm, equivalent to a wavenumber of 0.5 cm-1, under the Rayleigh criterion, and 0.007 nm, equivalent to 0.19 cm-1, under the Sparrow criterion. Raman spectra of CCl4 and alcohol were then obtained with the spectrometer, each agreeing well with the corresponding standard spectrum. Finally, we measured the spectra of alcohol solutions of different concentrations and extracted the intensities of characteristic peaks from the smoothed spectra. A linear fit between characteristic peak intensity and alcohol solution concentration gave a linear correlation coefficient of 0.96.
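
    The final calibration step, relating characteristic-peak intensity to concentration, reduces to a Pearson correlation coefficient on the fitted data; a minimal sketch (the data points are made up, not the measured spectra):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences,
    as used to judge the linearity of an intensity-concentration fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```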

  17. [Criterion and Construct Validity in Nursing Diagnosis "Sedentary Lifestyle" in People over 50 Years Old].

    PubMed

    Guirao-Goris, Silamani J; Ferrer Ferrandis, Esperanza; Montejano Lozoya, Raimunda

    2016-02-18

    The aim of this study is to assess the construct and criterion validity of the nursing diagnosis label "Sedentary Lifestyle". A cross-sectional study was conducted in a primary health care nursing consultation. Participants were all people over 50 attended at the consultation during one year who voluntarily wished to participate (n=85). Objective weekly physical activity was measured in METs with an accelerometer, and objective performance was measured by gait speed from the EPESE battery (both measures were used as the gold standard); the RAPA physical activity questionnaire and the COOP-WONCA physical fitness chart were also administered. Spearman correlation coefficients, mean comparison tests, and analysis of sensitivity and specificity were used for the statistical analysis. The diagnosis "Sedentary Lifestyle" showed a positive correlation between its manifestations and physical activity measured in METs (r=0.39) and EPESE gait speed (r=0.35). The diagnosis showed a sensitivity of 85.1% and a specificity of 65.2%, and demonstrated the ability to discriminate active people from inactive people when METs were used as the measure of physical activity (t=-4.4). The diagnosis "Sedentary Lifestyle" shows criterion and construct validity.
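
    Sensitivity and specificity figures like those above follow directly from confusion-matrix counts; a sketch (the counts below are hypothetical, not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of true positives detected.
    Specificity = TN / (TN + FP): fraction of true negatives detected."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 85 of 100 sedentary people flagged, 65 of 100 active
# people correctly not flagged
sens, spec = sensitivity_specificity(tp=85, fn=15, tn=65, fp=35)
```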

  18. Effects of the injected plasma on the breakdown process of the trigatron gas switch under low working coefficient

    NASA Astrophysics Data System (ADS)

    Chen, Li; Yang, Lanjun; Qiu, Aici; Huang, Dong; Liu, Shuai

    2018-01-01

    Based on surface flashover discharge, injected plasma was generated, and its effects on the breakdown process of the trigatron gas switch were studied in this paper. A breakdown model caused by the injected plasma under a low working coefficient (<0.7) was established. The captured framing images showed that the injected plasma distorted the electric field of the gap between the frontier of the injected plasma and the opposite electrode, making it easier to reach the critical breakdown criterion. The calculation results indicated that the breakdown delay time was mainly determined by the development of the injected plasma: without considering the effects of photo-ionization and the invisible expansion process, the calculated breakdown delay time was 20% higher than the experimental results. The morphology of the injected plasma generated by polyethylene surface flashover was more stable and regular than that generated by ceramic, leading to a 30% lower breakdown delay time when the working coefficient was larger than 0.2; the difference increased sharply when the working coefficient was lower than 0.2. These results are significant for improving the trigger performance of the trigatron gas switch under a low working coefficient.

  19. Comparison of case note review methods for evaluating quality and safety in health care.

    PubMed

    Hutchinson, A; Coster, J E; Cooper, K L; McIntosh, A; Walters, S J; Bath, P A; Pearson, M; Young, T A; Rantell, K; Campbell, M J; Ratcliffe, J

    2010-02-01

    To determine which of two methods of case note review--holistic (implicit) and criterion-based (explicit)--provides the most useful and reliable information for quality and safety of care, and the level of agreement within and between groups of health-care professionals when they use the two methods to review the same record. To explore the process-outcome relationship between holistic and criterion-based quality-of-care measures and hospital-level outcome indicators. Case notes of patients at randomly selected hospitals in England. In the first part of the study, retrospective multiple reviews of 684 case notes were undertaken at nine acute hospitals using both holistic and criterion-based review methods. Quality-of-care measures included evidence-based review criteria and a quality-of-care rating scale. Textual commentary on the quality of care was provided as a component of holistic review. Review teams comprised combinations of: doctors (n = 16), specialist nurses (n = 10) and clinically trained audit staff (n = 3) and non-clinical audit staff (n = 9). In the second part of the study, process (quality and safety) of care data were collected from the case notes of 1565 people with either chronic obstructive pulmonary disease (COPD) or heart failure in 20 hospitals. Doctors collected criterion-based data from case notes and used implicit review methods to derive textual comments on the quality of care provided and score the care overall. Data were analysed for intrarater consistency, inter-rater reliability between pairs of staff using intraclass correlation coefficients (ICCs) and completeness of criterion data capture, and comparisons were made within and between staff groups and between review methods. To explore the process-outcome relationship, a range of publicly available health-care indicator data were used as proxy outcomes in a multilevel analysis. 
Overall, 1473 holistic and 1389 criterion-based reviews were undertaken in the first part of the study. When same staff-type reviewer pairs/groups reviewed the same record, holistic scale score inter-rater reliability was moderate within each of the three staff groups [intraclass correlation coefficient (ICC) 0.46-0.52], and inter-rater reliability for criterion-based scores was moderate to good (ICC 0.61-0.88). When different staff-type pairs/groups reviewed the same record, agreement between the reviewer pairs/groups was weak to moderate for overall care (ICC 0.24-0.43). Comparison of holistic review score and criterion-based score of case notes reviewed by doctors and by non-clinical audit staff showed a reasonable level of agreement (p-values for difference 0.406 and 0.223, respectively), although results from all three staff types showed no overall level of agreement (p-value for difference 0.057). Detailed qualitative analysis of the textual data indicated that the three staff types tended to provide different forms of commentary on quality of care, although there was some overlap between some groups. In the process-outcome study there generally were high criterion-based scores for all hospitals, whereas there was more interhospital variation between the holistic review overall scale scores. Textual commentary on the quality of care verified the holistic scale scores. Differences among hospitals with regard to the relationship between mortality and quality of care were not statistically significant. Using the holistic approach, the three groups of staff appeared to interpret the recorded care differently when they each reviewed the same record. When the same clinical record was reviewed by doctors and non-clinical audit staff, there was no significant difference between the assessments of quality of care generated by the two groups. 
All three staff groups performed reasonably well when using criterion-based review, although the quality and type of information provided by doctors was of greater value. Therefore, when measuring quality of care from case notes, consideration needs to be given to the method of review, the type of staff undertaking the review, and the methods of analysis available to the review team. Review can be enhanced using a combination of both criterion-based and structured holistic methods with textual commentary, and variation in quality of care can best be identified from a combination of holistic scale scores and textual data review.
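
    The intraclass correlation coefficients used throughout this study come in several forms; as an illustration, a one-way random-effects ICC(1,1) can be computed from between-target and within-target mean squares. This is a simplified sketch (the study's exact ICC variant is not specified in the abstract, and the ratings below are invented):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1): n targets each scored by the same
    number k of raters (ratings: one list of k scores per target).
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    target_means = [sum(r) / k for r in ratings]
    # between-target mean square
    msb = k * sum((m - grand) ** 2 for m in target_means) / (n - 1)
    # within-target mean square
    msw = sum((x - m) ** 2 for r, m in zip(ratings, target_means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

    Perfect agreement among raters yields an ICC of 1; disagreement pushes it toward (and below) zero.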

  20. Calculation method for steady-state pollutant concentration in mixing zones considering variable lateral diffusion coefficient.

    PubMed

    Wu, Wen; Wu, Zhouhu; Song, Zhiwen

    2017-07-01

    Prediction of the pollutant mixing zone (PMZ) near the discharge outfall in Huangshaxi shows large error when using the methods based on the constant lateral diffusion assumption. The discrepancy is due to the lack of consideration of the diffusion coefficient variation. The variable lateral diffusion coefficient is proposed to be a function of the longitudinal distance from the outfall. Analytical solution of the two-dimensional advection-diffusion equation of a pollutant is derived and discussed. Formulas to characterize the geometry of the PMZ are derived based on this solution, and a standard curve describing the boundary of the PMZ is obtained by proper choices of the normalization scales. The change of PMZ topology due to the variable diffusion coefficient is then discussed using these formulas. The criterion of assuming the lateral diffusion coefficient to be constant without large error in PMZ geometry is found. It is also demonstrated how to use these analytical formulas in the inverse problems including estimating the lateral diffusion coefficient in rivers by convenient measurements, and determining the maximum allowable discharge load based on the limitations of the geometrical scales of the PMZ. Finally, applications of the obtained formulas to onsite PMZ measurements in Huangshaxi present excellent agreement.

  1. A new tracer‐density criterion for heterogeneous porous media

    USGS Publications Warehouse

    Barth, Gilbert R.; Illangasekare, Tissa H.; Hill, Mary C.; Rajaram, Harihar

    2001-01-01

    Tracer experiments provide information about aquifer material properties vital for accurate site characterization. Unfortunately, density-induced sinking can distort tracer movement, leading to an inaccurate assessment of material properties. Yet existing criteria for selecting appropriate tracer concentrations are based on analysis of homogeneous media instead of media with heterogeneities typical of field sites. This work introduces a hydraulic-gradient correction for heterogeneous media and applies it to a criterion previously used to indicate density-induced instabilities in homogeneous media. The modified criterion was tested using a series of two-dimensional heterogeneous intermediate-scale tracer experiments and data from several detailed field tracer tests. The intermediate-scale experimental facility (10.0 × 1.2 × 0.06 m) included both homogeneous and heterogeneous (σ²_ln K = 1.22) zones. The field tracer tests were less heterogeneous (0.24 < σ²_ln K < 0.37), but measurements were sufficient to detect density-induced sinking. Evaluation of the modified criterion using the experiments and field tests demonstrates that the new criterion appears to account for the change in density-induced sinking due to heterogeneity. The criterion demonstrates the importance of accounting for heterogeneity to predict density-induced sinking and differences in the onset of density-induced sinking in two- and three-dimensional systems.

  2. An analytic expression for the sheath criterion in magnetized plasmas with multi-charged ion species

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatami, M. M., E-mail: m-hatami@kntu.ac.ir

    2015-04-15

    The generalized Bohm criterion in magnetized multi-component plasmas consisting of multi-charged positive and negative ion species and electrons is investigated analytically using the hydrodynamic model. It is assumed that the electron and negative-ion density distributions are Boltzmann distributions with different temperatures and that the positive ions enter the sheath region obliquely. Our results show that the positive- and negative-ion temperatures, the orientation of the applied magnetic field, and the charge numbers of the positive and negative ions strongly affect the Bohm criterion in these multi-component plasmas. To check the validity of the derived generalized Bohm criterion, it is reduced to some familiar physical limits, and it is shown that the monotonic reduction of the positive-ion density distribution leading to sheath formation occurs only when the entrance velocity of the ions into the sheath satisfies the obtained Bohm criterion. Also, as a practical application of the obtained Bohm criterion, the effects of ionic temperature and concentration as well as the magnetic field on the behavior of the charged-particle density distributions, and hence on the sheath thickness, of a magnetized plasma consisting of electrons and singly charged positive and negative ion species are studied numerically.
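
    The generalized criterion derived in the paper is not given in the abstract. As a point of reference only, the classical single-ion Bohm condition, with the cos θ projection that commonly appears for an obliquely inclined magnetic field, can be sketched as follows (the θ-dependence shown is a common simplified form, not the paper's multi-species result):

```python
import math

M_P = 1.67262192e-27        # proton mass, kg
E_CHARGE = 1.602176634e-19  # J per eV

def bohm_speed(Te_eV, ion_mass_amu, charge=1):
    """Ion-acoustic (Bohm) speed sqrt(Z*k*Te/mi) for cold ions."""
    return math.sqrt(charge * Te_eV * E_CHARGE / (ion_mass_amu * M_P))

def satisfies_bohm(u_entry, Te_eV, ion_mass_amu, charge=1, theta_deg=0.0):
    """Sheath-entrance check u >= cs*cos(theta) for a magnetic field
    inclined at theta to the wall normal. The paper's generalized
    criterion additionally folds in ion temperatures and negative-ion
    concentrations, which this sketch omits."""
    cs = bohm_speed(Te_eV, ion_mass_amu, charge)
    return u_entry >= cs * math.cos(math.radians(theta_deg))
```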

  3. Competency Test Items for Applied Principles of Agribusiness and Natural Resources Occupations. Agricultural Mechanics Component. A Report of Research.

    ERIC Educational Resources Information Center

    Cheek, Jimmy G.; McGhee, Max B.

    An activity was undertaken to develop written criterion-referenced tests for the agricultural mechanics component of the Applied Principles of Agribusiness and Natural Resources. Intended for tenth grade students who have completed Fundamentals of Agribusiness and Natural Resources Occupations, applied principles were designed to consist of three…

  4. Competency Test Items for Applied Principles of Agribusiness and Natural Resources Occupations. Common Core Component. A Report of Research.

    ERIC Educational Resources Information Center

    Cheek, Jimmy G.; McGhee, Max B.

    An activity was undertaken to develop written criterion-referenced tests for the common core component of Applied Principles of Agribusiness and Natural Resources Occupations. Intended for tenth grade students who have completed Fundamentals of Agribusiness and Natural Resources Occupations, applied principles were designed to consist of three…

  5. Competency Test Items for Applied Principles of Agribusiness and Natural Resources Occupations. Forestry Component. A Report of Research.

    ERIC Educational Resources Information Center

    Cheek, Jimmy G.; McGhee, Max B.

    An activity was undertaken to develop written criterion-referenced tests for the forestry component of Applied Principles of Agribusiness and Natural Resources. Intended for tenth grade students who have completed Fundamentals of Agribusiness and Natural Resources Occupations, applied principles were designed to consist of three components, with…

  6. Competency Test Items for Applied Principles of Agribusiness and Natural Resources Occupations. Agricultural Resources Component. A Report of Research.

    ERIC Educational Resources Information Center

    Cheek, Jimmy G.; McGhee, Max B.

    An activity was undertaken to develop written criterion-referenced tests for the agricultural resources component of Applied Principles of Agribusiness and Natural Resources. Intended for tenth grade students who have completed Fundamentals of Agribusiness and Natural Resources Occupations, applied principles were designed to consist of three…

  7. Competency Test Items for Applied Principles of Agribusiness and Natural Resources Occupations. Agricultural Production Component. A Report of Research.

    ERIC Educational Resources Information Center

    Cheek, Jimmy G.; McGhee, Max B.

    An activity was undertaken to develop written criterion-referenced tests for the agricultural production component of Applied Principles of Agribusiness and Natural Resources Occupations. Intended for tenth grade students who have completed Fundamentals of Agribusiness and Natural Resources Occupations, applied principles were designed to consist…

  8. Complex symmetric matrices with strongly stable iterates

    NASA Technical Reports Server (NTRS)

    Tadmor, E.

    1985-01-01

    Complex-valued symmetric matrices are studied. A simple expression for the spectral norm of such matrices is obtained by utilizing a unitarily congruent invariant form. A sharp criterion is provided for identifying those symmetric matrices whose spectral norm does not exceed one: such strongly stable matrices are usually sought in connection with convergent difference approximations to partial differential equations. As an example, the derived criterion is applied to conclude the strong stability of a Lax-Wendroff scheme.
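
    Tadmor's closed-form expression is not quoted in the abstract. As a generic stand-in, the spectral norm (largest singular value) can be computed by power iteration on A^H A and checked against one; a pure-Python sketch (illustrative only, and not exploiting the symmetric structure the paper uses):

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def conj_transpose(M):
    n, m = len(M), len(M[0])
    return [[M[i][j].conjugate() for i in range(n)] for j in range(m)]

def spectral_norm(A, iters=200):
    """Largest singular value of A via power iteration on A^H A."""
    AH = conj_transpose(A)
    v = [complex(1, 0.3 * k + 1) for k in range(len(A[0]))]  # arbitrary start
    for _ in range(iters):
        w = matvec(AH, matvec(A, v))
        norm = sum(abs(x) ** 2 for x in w) ** 0.5
        v = [x / norm for x in w]
    Av = matvec(A, v)
    return sum(abs(x) ** 2 for x in Av) ** 0.5

def strongly_stable(A, tol=1e-12):
    """Strong stability in the paper's sense: spectral norm <= 1."""
    return spectral_norm(A) <= 1 + tol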

  9. Impact of the DSM-V Attention Deficit Hyperactivity Disorder Criteria for Diagnosing Children With High IQ.

    PubMed

    Thongseiratch, Therdpong; Worachotekamjorn, Juthamas

    2016-10-01

    This study compared the number of attention deficit hyperactivity disorder (ADHD) cases defined by Diagnostic and Statistical Manual (DSM)-IV versus DSM-V criterion in children who have learning or behavioral problems with high IQ. The medical records of children ≤15 years of age who presented with learning or behavioral problems and underwent a Wechsler Intelligence Scale for Children (WISC)-III IQ test at the Pediatric Outpatient Clinic unit between 2010 and 2015 were reviewed. Information on DSM-IV and DSM-V criteria for ADHD were derived from computer-based medical records. Twenty-eight children who had learning or behavioral problems were identified to have a full-scale IQ ≥120. Sixteen of these high-IQ children met the DSM-IV criteria diagnosis for ADHD. Applying the extension of the age-of-onset criterion from 7 to 12 years in DSM-V led to an increase of three cases, all of which were the inattentive type ADHD. Including the pervasive developmental disorder criterion led to an increase of one case. The total number of ADHD cases also increased from 16 to 20 in this group. The data supported the hypothesis that applying the extension of the age-of-onset ADHD criterion and enabling the diagnosis of children with pervasive developmental disorders will increase the number of ADHD diagnoses among children with high IQ. © The Author(s) 2016.

  10. Introducing the Professionalism Mini-Evaluation Exercise (P-MEX) in Japan: results from a multicenter, cross-sectional study.

    PubMed

    Tsugawa, Yusuke; Ohbu, Sadayoshi; Cruess, Richard; Cruess, Sylvia; Okubo, Tomoya; Takahashi, Osamu; Tokuda, Yasuharu; Heist, Brian S; Bito, Seiji; Itoh, Toshiyuki; Aoki, Akiko; Chiba, Tsutomu; Fukui, Tsuguya

    2011-08-01

    Despite the growing importance of and interest in medical professionalism, there is no standardized tool for its measurement. The authors sought to verify the validity, reliability, and generalizability of the Professionalism Mini-Evaluation Exercise (P-MEX), a previously developed and tested tool, in the context of Japanese hospitals. A multicenter, cross-sectional evaluation study was performed to investigate the validity, reliability, and generalizability of the P-MEX in seven Japanese hospitals. In 2009-2010, 378 evaluators (attending physicians, nurses, peers, and junior residents) completed 360-degree assessments of 165 residents and fellows using the P-MEX. The content validity and criterion-related validity were examined, and the construct validity of the P-MEX was investigated by performing confirmatory factor analysis through a structural equation model. The reliability was tested using generalizability analysis. The contents of the P-MEX achieved good acceptance in a preliminary working group, and the poststudy survey revealed that 302 (79.9%) evaluators rated the P-MEX items as appropriate, indicating good content validity. The correlation coefficient between P-MEX scores and external criteria was 0.78 (P < .001), demonstrating good criterion-related validity. Confirmatory factor analysis verified high path coefficient (0.60-0.99) and adequate goodness of fit of the model. The generalizability analysis yielded a high dependability coefficient, suggesting good reliability, except when evaluators were peers or junior residents. Findings show evidence of adequate validity, reliability, and generalizability of the P-MEX in Japanese hospital settings. The P-MEX is the only evaluation tool for medical professionalism verified in both a Western and East Asian cultural context.

  11. Development and validation of the irritable bowel syndrome scale under the system of quality of life instruments for chronic diseases QLICD-IBS: combinations of classical test theory and generalizability theory.

    PubMed

    Lei, Pingguang; Lei, Guanghe; Tian, Jianjun; Zhou, Zengfen; Zhao, Miao; Wan, Chonghua

    2014-10-01

    This paper is aimed to develop the irritable bowel syndrome (IBS) scale of the system of Quality of Life Instruments for Chronic Diseases (QLICD-IBS) by the modular approach and validate it by both classical test theory and generalizability theory. The QLICD-IBS was developed based on programmed decision procedures with multiple nominal and focus group discussions, in-depth interview, and quantitative statistical procedures. One hundred twelve inpatients with IBS were used to provide the data measuring QOL three times before and after treatments. The psychometric properties of the scale were evaluated with respect to validity, reliability, and responsiveness employing correlation analysis, factor analyses, multi-trait scaling analysis, t tests and also G studies and D studies of generalizability theory analysis. Multi-trait scaling analysis, correlation, and factor analyses confirmed good construct validity and criterion-related validity when using SF-36 as a criterion. Test-retest reliability coefficients (Pearson r and intra-class correlation (ICC)) for the overall score and all domains were higher than 0.80; the internal consistency α for all domains at two measurements were higher than 0.70 except for the social domain (0.55 and 0.67, respectively). The overall score and scores for all domains/facets had statistically significant changes after treatments with moderate or higher effect size standardized response mean (SRM) ranging from 0.72 to 1.02 at domain levels. G coefficients and index of dependability (Ф coefficients) confirmed the reliability of the scale further with more exact variance components. The QLICD-IBS has good validity, reliability, responsiveness, and some highlights and can be used as the quality of life instrument for patients with IBS.

  12. Monte Carlo simulations on marker grouping and ordering.

    PubMed

    Wu, J; Jenkins, J; Zhu, J; McCarty, J; Watson, C

    2003-08-01

    Four global algorithms, maximum likelihood (ML), sum of adjacent LOD score (SALOD), sum of adjacent recombinant fractions (SARF) and product of adjacent recombinant fraction (PARF), and one approximation algorithm, seriation (SER), were used to compare the marker ordering efficiencies for correctly given linkage groups based on doubled haploid (DH) populations. The Monte Carlo simulation results indicated the marker ordering powers for the five methods were almost identical. High correlation coefficients were greater than 0.99 between grouping power and ordering power, indicating that all these methods for marker ordering were reliable. Therefore, the main problem for linkage analysis was how to improve the grouping power. Since the SER approach provided the advantage of speed without losing ordering power, this approach was used for detailed simulations. For more generality, multiple linkage groups were employed, and population size, linkage cutoff criterion, marker spacing pattern (even or uneven), and marker spacing distance (close or loose) were considered for obtaining acceptable grouping powers. Simulation results indicated that the grouping power was related to population size, marker spacing distance, and cutoff criterion. Generally, a large population size provided higher grouping power than small population size, and closely linked markers provided higher grouping power than loosely linked markers. The cutoff criterion range for achieving acceptable grouping power and ordering power differed for varying cases; however, combining all situations in this study, a cutoff criterion ranging from 50 cM to 60 cM was recommended for achieving acceptable grouping power and ordering power for different cases.

  13. Confirmatory factor analysis of different versions of the Body Shape Questionnaire applied to Brazilian university students.

    PubMed

    da Silva, Wanderson Roberto; Dias, Juliana Chioda Ribeiro; Maroco, João; Campos, Juliana Alvares Duarte Bonini

    2014-09-01

    This study aimed at evaluating the validity, reliability, and factorial invariance of the complete (34-item) and shortened (8-item and 16-item) versions of the Body Shape Questionnaire (BSQ) when applied to Brazilian university students. A total of 739 female students with a mean age of 20.44 (standard deviation=2.45) years participated. Confirmatory factor analysis was conducted to verify the degree to which the one-factor structure satisfies the proposal for the BSQ's expected structure. Two items of the 34-item version were excluded because they had factor weights (λ)<40. All models had adequate convergent validity (average variance extracted=.43-.58; composite reliability=.85-.97) and internal consistency (α=.85-.97). The 8-item B version was considered the best shortened BSQ version (Akaike information criterion=84.07, Bayes information criterion=157.75, Browne-Cudeck criterion=84.46), with strong invariance for independent samples (Δχ(2)λ(7)=5.06, Δχ(2)Cov(8)=5.11, Δχ(2)Res(16)=19.30). Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Territories typification technique with use of statistical models

    NASA Astrophysics Data System (ADS)

    Galkin, V. I.; Rastegaev, A. V.; Seredin, V. V.; Andrianov, A. V.

    2018-05-01

    Territories typification is required for solution of many problems. The results of geological zoning received by means of various methods do not always agree. That is why the main goal of the research given is to develop a technique of obtaining a multidimensional standard classified indicator for geological zoning. In the course of the research, the probabilistic approach was used. In order to increase the reliability of geological information classification, the authors suggest using complex multidimensional probabilistic indicator P K as a criterion of the classification. The second criterion chosen is multidimensional standard classified indicator Z. These can serve as characteristics of classification in geological-engineering zoning. Above mentioned indicators P K and Z are in good correlation. Correlation coefficient values for the entire territory regardless of structural solidity equal r = 0.95 so each indicator can be used in geological-engineering zoning. The method suggested has been tested and the schematic map of zoning has been drawn.

  15. Is internal friction friction?

    USGS Publications Warehouse

    Savage, J.C.; Byerlee, J.D.; Lockner, D.A.

    1996-01-01

    Mogi [1974] proposed a simple model of the incipient rupture surface to explain the Coulomb failure criterion. We show here that this model can plausibly be extended to explain the Mohr failure criterion. In Mogi's model the incipient rupture surface immediately before fracture consists of areas across which material integrity is maintained (intact areas) and areas across which it is not (cracks). The strength of the incipient rupture surface is made up of the inherent strength of the intact areas plus the frictional resistance to sliding offered by the cracked areas. Although the coefficient of internal friction (slope of the strength versus normal stress curve) depends upon both the frictional and inherent strengths, the phenomenon of internal friction can be identified with the frictional part. The curvature of the Mohr failure envelope is interpreted as a consequence of differences in damage (cracking) accumulated in prefailure loading at different confining pressures.

  16. Real-time flutter boundary prediction based on time series models

    NASA Astrophysics Data System (ADS)

    Gu, Wenjing; Zhou, Li

    2018-03-01

    For the purpose of predicting the flutter boundary in real time during flutter flight tests, two time series models accompanied with corresponding stability criterion are adopted in this paper. The first method simplifies a long nonstationary response signal as many contiguous intervals and each is considered to be stationary. The traditional AR model is then established to represent each interval of signal sequence. While the second employs a time-varying AR model to characterize actual measured signals in flutter test with progression variable speed (FTPVS). To predict the flutter boundary, stability parameters are formulated by the identified AR coefficients combined with Jury's stability criterion. The behavior of the parameters is examined using both simulated and wind-tunnel experiment data. The results demonstrate that both methods show significant effectiveness in predicting the flutter boundary at lower speed level. A comparison between the two methods is also given in this paper.

  17. A non-destructive selection criterion for fibre content in jute : II. Regression approach.

    PubMed

    Arunachalam, V; Iyer, R D

    1974-01-01

    An experiment with ten populations of jute, comprising varieties and mutants of the two species Corchorus olitorius and C.capsularis was conducted at two different locations with the object of evolving an effective criterion for selecting superior single plants for fibre yield. At Delhi, variation existed only between varieties as a group and mutants as a group, while at Pusa variation also existed among the mutant populations of C. capsularis.A multiple regression approach was used to find the optimum combination of characters for prediction of fibre yield. A process of successive elimination of characters based on the coefficient of determination provided by individual regression equations was employed to arrive at the optimal set of characters for predicting fibre yield. It was found that plant height, basal and mid-diameters and basal and mid-dry fibre weights would provide such an optimal set.

  18. Development of Reliable and Validated Tools to Evaluate Technical Resuscitation Skills in a Pediatric Simulation Setting: Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics.

    PubMed

    Faudeux, Camille; Tran, Antoine; Dupont, Audrey; Desmontils, Jonathan; Montaudié, Isabelle; Bréaud, Jean; Braun, Marc; Fournier, Jean-Paul; Bérard, Etienne; Berlengi, Noémie; Schweitzer, Cyril; Haas, Hervé; Caci, Hervé; Gatin, Amélie; Giovannini-Chami, Lisa

    2017-09-01

    To develop a reliable and validated tool to evaluate technical resuscitation skills in a pediatric simulation setting. Four Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics (RESCAPE) evaluation tools were created, following international guidelines: intraosseous needle insertion, bag mask ventilation, endotracheal intubation, and cardiac massage. We applied a modified Delphi methodology evaluation to binary rating items. Reliability was assessed comparing the ratings of 2 observers (1 in real time and 1 after a video-recorded review). The tools were assessed for content, construct, and criterion validity, and for sensitivity to change. Inter-rater reliability, evaluated with Cohen kappa coefficients, was perfect or near-perfect (>0.8) for 92.5% of items and each Cronbach alpha coefficient was ≥0.91. Principal component analyses showed that all 4 tools were unidimensional. Significant increases in median scores with increasing levels of medical expertise were demonstrated for RESCAPE-intraosseous needle insertion (P = .0002), RESCAPE-bag mask ventilation (P = .0002), RESCAPE-endotracheal intubation (P = .0001), and RESCAPE-cardiac massage (P = .0037). Significantly increased median scores over time were also demonstrated during a simulation-based educational program. RESCAPE tools are reliable and validated tools for the evaluation of technical resuscitation skills in pediatric settings during simulation-based educational programs. They might also be used for medical practice performance evaluations. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Psychometric evaluation of the Persian version of the quality of life in epilepsy inventory-31

    PubMed Central

    Mohammadi, Navid; Kian, Shiva; Nia, Seyed Mohammad Ali Akbarian; Nojomi, Marzieh

    2013-01-01

    Background Health assessment in patients with epilepsy (PWE) should include both clinical outcomes and health related quality of life (HRQOL) measures. The quality of life (QoL) in epilepsy-31 inventory (QOLIE-31) is widely used for QOL studies in epilepsy. This study aims to evaluate psychometrics of the Persian version of the inventory (QOLIE-31-P). Methods Following a standard forward-backward translation and cultural adaptation, the construct validity of the QOLIE-31-P was assessed by explanatory factor analysis, multi-trait scaling analysis, and known group comparison. The criterion validity was assessed by calculating the Pearson correlation to SF-36 (36-item short-form health survey). The reliability was assessed by calculating Cronbach's alpha and test-retest study. Results The factor analysis extracted from 8 factors explaining 70.35% of the variations. Item-scale correlations revealed that individual items significantly had the strongest association with the domain they were loaded on. The Pearson coefficient of correlation between QOLIE-31-P and the overall scores of SF-36 was 0.876 (P < 0.0001). Patient with medically controlled seizures scored higher than those who experienced seizures during the previous year to study date (P < 0.0001). The Cronbach's α of overall QOLIE-31-P inventory was 0.9. The overall test-retest coefficient of correlation was 0.68 (P = 0.003). Conclusion QOLIE-31-P is a valid and reliable tool to be applied in health assessment of patients with epilepsy. PMID:24250924

  20. Effects of geometry and fluid properties during condensation in minichannels: experiments and simulations

    NASA Astrophysics Data System (ADS)

    Toninelli, Paolo; Bortolin, Stefano; Azzolin, Marco; Del, Davide, Col

    2017-10-01

    The present paper aims at investigating the condensation process inside minichannels, at low mass fluxes, where bigger discrepancies from conventional channels can be expected. At high mass flux, the condensation in minichannels is expected to be shear stress dominated. Therefore, models originally developed for conventional channels could still do a good job in predicting the heat transfer coefficient. When the mass flow rate decreases, the condensation process in minichannels starts to display differences with the same process in macro-channels. With the purpose of investigating condensation at these operating conditions, new experimental data are here reported and compared with data already published in the literature. In particular, heat transfer coefficients have been measured during R134a and R1234ze(E) condensation inside circular and square cross section minichannels at mass flux ranging between 65 and 200 kg m-2 s-1. These new data are compared with those of R32, R717, R290, R152a to show the effect of channel shape and fluid properties and to assess the applicability of correlations developed for macroscale condensation. For this purpose, a new criterion based on the Weber number is presented to decide when the macroscale condensation correlation can be applied. The present experimental data are also compared against three-dimensional Volume of Fluid (VOF) simulations of condensation in minichannels with circular and square cross section. This comparison allows to get an insight into the process and evaluate the main heat transfer mechanisms.

  1. Time Series Analysis and Forecasting of Wastewater Inflow into Bandar Tun Razak Sewage Treatment Plant in Selangor, Malaysia

    NASA Astrophysics Data System (ADS)

    Abunama, Taher; Othman, Faridah

    2017-06-01

    Analysing the fluctuations of wastewater inflow rates in sewage treatment plants (STPs) is essential to guarantee a sufficient treatment of wastewater before discharging it to the environment. The main objectives of this study are to statistically analyze and forecast the wastewater inflow rates into the Bandar Tun Razak STP in Kuala Lumpur, Malaysia. A time series analysis of three years’ weekly influent data (156weeks) has been conducted using the Auto-Regressive Integrated Moving Average (ARIMA) model. Various combinations of ARIMA orders (p, d, q) have been tried to select the most fitted model, which was utilized to forecast the wastewater inflow rates. The linear regression analysis was applied to testify the correlation between the observed and predicted influents. ARIMA (3, 1, 3) model was selected with the highest significance R-square and lowest normalized Bayesian Information Criterion (BIC) value, and accordingly the wastewater inflow rates were forecasted to additional 52weeks. The linear regression analysis between the observed and predicted values of the wastewater inflow rates showed a positive linear correlation with a coefficient of 0.831.

  2. [Domestic and international trends concerning allowable limits of error in external quality assessment scheme].

    PubMed

    Hosogaya, Shigemi; Ozaki, Yukio

    2005-06-01

    Many external quality assessment schemes (EQAS) are performed to support quality improvement of the services provided by participating laboratories for the benefits of patients. The EQAS organizer shall be responsible for ensuring that the method of evaluation is appropriate for maintenance of the credibility of the schemes. Procedures to evaluate each participating laboratory are gradually being standardized. In most cases of EQAS, the peer group mean is used as a target of accuracy, and the peer group standard deviation is used as a criterion for inter-laboratory variation. On the other hand, Fraser CG, et al. proposed desirable quality specifications for any imprecision and inaccuracies, which were derived from inter- and intra-biologic variations. We also proposed allowable limits of analytical error, being less than one-half of the average intra-individual variation for evaluation of imprecision, and less than one-quarter of the inter- plus intra-individual variation for evaluation of inaccuracy. When expressed in coefficient of variation terms, these allowable limits may be applied at a wide range of levels of quantity.

  3. High-Frequency Ultrasound M-mode Imaging for Identifying Lesion and Bubble Activity during High-Intensity Focused Ultrasound Ablation

    PubMed Central

    Kumon, R. E.; Gudur, M. S. R.; Zhou, Y.; Deng, C. X.

    2012-01-01

    Effective real-time monitoring of high-intensity focused ultrasound (HIFU) ablation is important for application of HIFU technology in interventional electrophysiology. This study investigated rapid, high-frequency M-mode ultrasound imaging for monitoring spatiotemporal changes during HIFU application. HIFU (4.33 MHz, 1 kHz PRF, 50% duty cycle, 1 s, 2600 – 6100 W/cm2) was applied to ex-vivo porcine cardiac tissue specimens with a confocally and perpendicularly aligned high-frequency imaging system (Visualsonics Vevo 770, 55 MHz center frequency). Radiofrequency (RF) data from M-mode imaging (1 kHz PRF, 2 s × 7 mm) was acquired before, during, and after HIFU treatment (n = 12). Among several strategies, the temporal maximum integrated backscatter with a threshold of +12 dB change showed the best results for identifying final lesion width (receiver-operating characteristic curve area 0.91 ± 0.04, accuracy 85 ± 8%, as compared to macroscopic images of lesions). A criterion based on a line-to-line decorrelation coefficient is proposed for identification of transient gas bodies. PMID:22341055

  4. Wavelet based detection of manatee vocalizations

    NASA Astrophysics Data System (ADS)

    Gur, Berke M.; Niezrecki, Christopher

    2005-04-01

    The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of watercraft collisions in Florida's coastal waterways. Several boater warning systems, based upon manatee vocalizations, have been proposed to reduce the number of collisions. Three detection methods based on the Fourier transform (threshold, harmonic content and autocorrelation methods) were previously suggested and tested. In the last decade, the wavelet transform has emerged as an alternative to the Fourier transform and has been successfully applied in various fields of science and engineering including the acoustic detection of dolphin vocalizations. As of yet, no prior research has been conducted in analyzing manatee vocalizations using the wavelet transform. Within this study, the wavelet transform is used as an alternative to the Fourier transform in detecting manatee vocalizations. The wavelet coefficients are analyzed and tested against a specified criterion to determine the existence of a manatee call. The performance of the method presented is tested on the same data previously used in the prior studies, and the results are compared. Preliminary results indicate that using the wavelet transform as a signal processing technique to detect manatee vocalizations shows great promise.

  5. Responsivity-based criterion for accurate calibration of FTIR emission spectra: theoretical development and bandwidth estimation.

    PubMed

    Rowe, Penny M; Neshyba, Steven P; Walden, Von P

    2011-03-14

    An analytical expression for the variance of the radiance measured by Fourier-transform infrared (FTIR) emission spectrometers exists only in the limit of low noise. Outside this limit, the variance needs to be calculated numerically. In addition, a criterion for low noise is needed to identify properly calibrated radiances and optimize the instrument bandwidth. In this work, the variance and the magnitude of a noise-dependent spectral bias are calculated as a function of the system responsivity (r) and the noise level in its estimate (σr). The criterion σr/r<0.3, applied to downwelling and upwelling FTIR emission spectra, shows that the instrument bandwidth is specified properly for one instrument but needs to be restricted for another.

  6. Model of Nanostructuring Burnishing by a Spherical Indenter Taking into Consideration Plastic Deformations

    NASA Astrophysics Data System (ADS)

    Lyashenko, Ya. A.; Popov, V. L.

    2018-01-01

    A dynamic model of the nanostructuring burnishing of a surface of metallic details taking into consideration plastic deformations has been suggested. To describe the plasticity, the ideology of dimension reduction method supplemented with the plasticity criterion is used. The model considers the action of the normal burnishing force and the tangential friction force. The effect of the coefficient of friction and the periodical oscillation of the burnishing force on the burnishing kinetics are investigated.

  7. Development, pilot testing and psychometric validation of a short version of the coronary artery disease education questionnaire: The CADE-Q SV.

    PubMed

    Ghisi, Gabriela Lima de Melo; Sandison, Nicole; Oh, Paul

    2016-03-01

To develop, pilot test and psychometrically validate a shorter version of the coronary artery disease education questionnaire (CADE-Q), called CADE-Q SV. Based on previous versions of the CADE-Q, cardiac rehabilitation (CR) experts developed 20 items divided into 5 knowledge domains to comprise the first version of the CADE-Q SV. To establish content validity, the items were reviewed by an expert panel (N=12). Refined items were pilot-tested in 20 patients, who confirmed the clarity of the items. A final version was generated and psychometrically tested in 132 CR patients. Test-retest reliability was assessed via the intraclass correlation coefficient (ICC), internal consistency using Cronbach's alpha, and criterion validity with regard to patients' education and duration in CR. All ICC coefficients met the minimum recommended standard. All domains were considered internally consistent (α>0.7). Criterion validity was supported by significant differences in mean scores by educational level (p<0.01) and duration in CR (p<0.05). Knowledge about exercise and nutrition was higher than knowledge about the medical condition. The CADE-Q SV was demonstrated to have good reliability and validity. This is a short, quick and appropriate tool for application in clinical and research settings, assessing patients' knowledge during CR and as part of education programming. Copyright © 2015. Published by Elsevier Ireland Ltd.
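For readers unfamiliar with the internal-consistency statistic used here, Cronbach's alpha can be computed directly from an item-score matrix; the patient scores below are hypothetical, not CADE-Q SV data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical scores of 5 patients on a 4-item knowledge domain
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 1],
    [3, 3, 4, 3],
])
alpha = cronbach_alpha(scores)
```

Values above 0.7, as reported for all CADE-Q SV domains, are conventionally read as acceptable internal consistency.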

  8. A classification of the galaxy groups

    NASA Technical Reports Server (NTRS)

    Anosova, Joanna P.

    1990-01-01

    A statistical criterion has been proposed to reveal the random and physical clusterings among stars, galaxies and other objects. This criterion has been applied to the galaxy triples of the list by Karachentseva, Karaschentsev and Scherbanovsky, and the double galaxies of the list by Dahari where the primary components are the Seyfert galaxies. The confident physical, probable physical, probable optical and confident optical groups have been identified. The limit difference of radial velocities of components for the confident physical multiple galaxies has also been estimated.

  9. A criterion autoscheduler for long range planning

    NASA Technical Reports Server (NTRS)

    Sponsler, Jeffrey L.

    1994-01-01

A constraint-based scheduling system called SPIKE is used to create long-term schedules for the Hubble Space Telescope. A meta-level scheduler called the Criterion Autoscheduler for Long range planning (CASL) was created to guide SPIKE's schedule generation according to the agenda of the planning scientists. It is proposed that sufficient flexibility exists in a schedule to allow high-level planning heuristics to be applied without adversely affecting crucial constraints such as spacecraft efficiency. This hypothesis is supported by the test data described.

  10. Quantifier variables of the back surface deformity obtained with a noninvasive structured light method: evaluation of their usefulness in idiopathic scoliosis diagnosis

    PubMed Central

    Buendía, Mateo; Cibrián, Rosa M.; Salvador, Rosario; Laguía, Manuel; Martín, Antonio; Gomar, Francisco

    2006-01-01

New noninvasive techniques, amongst them structured light methods, have been applied to study rachis deformities, providing a way to evaluate external back deformities in the three planes of space. These methods are aimed at reducing the number of radiographic examinations necessary to diagnose and follow up patients with scoliosis. By projecting a grid over the patient’s back, the corresponding software for image treatment provides a topography of the back in a color or gray scale. Visual inspection of back topographic images using this method immediately provides information about back deformity, but it is important to determine quantifier variables of the deformity to establish diagnostic criteria. In this paper, two topographic variables [deformity in the axial plane index (DAPI) and posterior trunk symmetry index (POTSI)] that quantify deformity in two different planes are analyzed. Although other authors have reported the POTSI variable, the DAPI variable proposed in this paper is innovative. The upper normality limit of these variables in a nonpathological group was determined. These two variables have different and complementary diagnostic characteristics; therefore, we devised a combined diagnostic criterion: cases with normal DAPI and POTSI (DAPI ≤ 3.9% and POTSI ≤ 27.5%) were diagnosed as nonpathologic, but cases with high DAPI or POTSI were diagnosed as pathologic. When we used this criterion to analyze all the cases in the sample (56 nonpathologic and 30 with idiopathic scoliosis), we obtained 76.6% sensitivity, 91% specificity, and a positive predictive value of 82%. The interobserver, intraobserver, and interassay variability were studied by determining the variation coefficient. There was good correlation between topographic variables (DAPI and POTSI) and clinical variables (Cobb’s angle and vertebral rotation angle). PMID:16609858
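The combined criterion reported in the abstract (pathologic if DAPI > 3.9% or POTSI > 27.5%) can be sketched directly; the example cases below are invented:

```python
def scoliosis_screen(dapi, potsi, dapi_cut=3.9, potsi_cut=27.5):
    """Combined topographic criterion from the study: a back is flagged as
    pathologic if either DAPI (%) or POTSI (%) exceeds its upper normality
    limit determined in the nonpathological group."""
    return dapi > dapi_cut or potsi > potsi_cut

result1 = scoliosis_screen(2.1, 20.0)  # both within normal limits -> nonpathologic
result2 = scoliosis_screen(5.2, 18.0)  # DAPI above its cut-off -> pathologic
```

Because the two variables describe different planes, the OR combination catches deformity that either index alone would miss, which is the complementarity the authors exploit.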

  11. Development of the Japanese version of the Council on Nutrition Appetite Questionnaire and its simplified versions, and evaluation of their reliability, validity, and reproducibility.

    PubMed

    Tokudome, Yuko; Okumura, Keiko; Kumagai, Yoshiko; Hirano, Hirohiko; Kim, Hunkyung; Morishita, Shiho; Watanabe, Yutaka

    2017-11-01

    Because few Japanese questionnaires assess the elderly's appetite, there is an urgent need to develop an appetite questionnaire with verified reliability, validity, and reproducibility. We translated and back-translated the Council on Nutrition Appetite Questionnaire (CNAQ), which has eight items, into Japanese (CNAQ-J), as well as the Simplified Nutritional Appetite Questionnaire (SNAQ-J), which includes four CNAQ-J-derived items. Using structural equation modeling, we examined the CNAQ-J structure based on data of 649 Japanese elderly people in 2013, including individuals having a certain degree of cognitive impairment, and we developed the SNAQ for the Japanese elderly (SNAQ-JE) according to an exploratory factor analysis. Confirmatory factor analyses on the appetite questionnaires were conducted to probe fitting to the model. We computed Cronbach's α coefficients and criterion-referenced/-related validity figures examining associations of the three appetite battery scores with body mass index (BMI) values and with nutrition-related questionnaire values. Test-retest reproducibility of appetite tools was scrutinized over an approximately 2-week interval. An exploratory factor analysis demonstrated that the CNAQ-J was constructed of one factor (appetite), yielding the SNAQ-JE, which includes four questions derived from the CNAQ-J. The three appetite instruments showed almost equivalent fitting to the model and reproducibility. The CNAQ-J and SNAQ-JE demonstrated satisfactory reliability and significant criterion-referenced/-related validity values, including BMIs, but the SNAQ-J included a low factor-loading item, exhibited less satisfactory reliability and had a non-significant relationship to BMI. The CNAQ-J and SNAQ-JE may be applied to assess the appetite of Japanese elderly, including persons with some cognitive impairment. Copyright © 2017 The Authors. Production and hosting by Elsevier B.V. All rights reserved.

  12. Ductile Crack Initiation Criterion with Mismatched Weld Joints Under Dynamic Loading Conditions.

    PubMed

    An, Gyubaek; Jeong, Se-Min; Park, Jeongung

    2018-03-01

    Brittle failure of high toughness steel structures tends to occur after ductile crack initiation/propagation. Damages to steel structures were reported in the Hanshin Great Earthquake. Several brittle failures were observed in beam-to-column connection zones with geometrical discontinuity. It is widely known that triaxial stresses accelerate the ductile fracture of steels. The study examined the effects of geometrical heterogeneity and strength mismatches (both of which elevate plastic constraints due to heterogeneous plastic straining) and loading rate on critical conditions initiating ductile fracture. This involved applying the two-parameter criterion (involving equivalent plastic strain and stress triaxiality) to estimate ductile cracking for strength mismatched specimens under static and dynamic tensile loading conditions. Ductile crack initiation testing was conducted under static and dynamic loading conditions using circumferentially notched specimens (Charpy type) with/without strength mismatches. The results indicated that the condition for ductile crack initiation using the two parameter criterion was a transferable criterion to evaluate ductile crack initiation independent of the existence of strength mismatches and loading rates.

  13. Information hidden in the velocity distribution of ions and the exact kinetic Bohm criterion

    NASA Astrophysics Data System (ADS)

    Tsankov, Tsanko V.; Czarnetzki, Uwe

    2017-05-01

    Non-equilibrium distribution functions of electrons and ions play an important role in plasma physics. A prominent example is the kinetic Bohm criterion. Since its first introduction it has been controversial for theoretical reasons and due to the lack of experimental data, in particular on the ion distribution function. Here we resolve the theoretical as well as the experimental difficulties by an exact solution of the kinetic Boltzmann equation including charge exchange collisions and ionization. This also allows for the first time non-invasive measurement of spatially resolved ion velocity distributions, absolute values of the ion and electron densities, temperatures, and mean energies as well as the electric field and the plasma potential in the entire plasma. The non-invasive access to the spatially resolved distribution functions of electrons and ions is applied to the problem of the kinetic Bohm criterion. Theoretically a so far missing term in the criterion is derived and shown to be of key importance. With the new term the validity of the kinetic criterion at high collisionality and its agreement with the fluid picture are restored. All findings are supported by experimental data, theory and a numerical model with excellent agreement throughout.

  14. Effects of dynamic heterogeneity and density scaling of molecular dynamics on the relationship among thermodynamic coefficients at the glass transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koperwas, K., E-mail: kkoperwas@us.edu.pl; Grzybowski, A.; Grzybowska, K.

    2015-07-14

In this paper, we define and experimentally verify thermodynamic characteristics of the liquid-glass transition, taking into account a kinetic origin of the process. Using the density scaling law and the four-point measure of the dynamic heterogeneity of molecular dynamics of glass forming liquids, we investigate contributions of enthalpy, temperature, and density fluctuations to spatially heterogeneous molecular dynamics at the liquid-glass transition, finding an equation for the pressure coefficient of the glass transition temperature, dTg/dp. This equation combined with our previous formula for dTg/dp, derived solely from the density scaling criterion, implies a relationship among thermodynamic coefficients at Tg. Since this relationship and both the equations for dTg/dp are very well validated using experimental data at Tg, they are promising alternatives to the classical Prigogine-Defay ratio and both the Ehrenfest equations in the case of the liquid-glass transition.

  15. Reliability and Validity of the Japanese Version of the Kinesthetic and Visual Imagery Questionnaire (KVIQ)

    PubMed Central

    Nakano, Hideki; Kodama, Takayuki; Ukai, Kazumasa; Kawahara, Satoru; Horikawa, Shiori; Murata, Shin

    2018-01-01

    In this study, we aimed to (1) translate the English version of the Kinesthetic and Visual Imagery Questionnaire (KVIQ), which assesses motor imagery ability, into Japanese, and (2) investigate the reliability and validity of the Japanese KVIQ. We enrolled 28 healthy adults in this study. We used Cronbach’s alpha coefficients to assess reliability reflected by the internal consistency. Additionally, we assessed validity reflected by the criterion-related validity between the Japanese KVIQ and the Japanese version of the Movement Imagery Questionnaire-Revised (MIQ-R) with Spearman’s rank correlation coefficients. The Cronbach’s alpha coefficients for the KVIQ-20 were 0.88 (Visual) and 0.91 (Kinesthetic), which indicates high reliability. There was a significant positive correlation between the Japanese KVIQ-20 (Total) and the Japanese MIQ-R (Total) (r = 0.86, p < 0.01). Our results suggest that the Japanese KVIQ is an assessment that is a reliable and valid index of motor imagery ability. PMID:29724042

  16. Reliability and Validity of the Japanese Version of the Kinesthetic and Visual Imagery Questionnaire (KVIQ).

    PubMed

    Nakano, Hideki; Kodama, Takayuki; Ukai, Kazumasa; Kawahara, Satoru; Horikawa, Shiori; Murata, Shin

    2018-05-02

    In this study, we aimed to (1) translate the English version of the Kinesthetic and Visual Imagery Questionnaire (KVIQ), which assesses motor imagery ability, into Japanese, and (2) investigate the reliability and validity of the Japanese KVIQ. We enrolled 28 healthy adults in this study. We used Cronbach’s alpha coefficients to assess reliability reflected by the internal consistency. Additionally, we assessed validity reflected by the criterion-related validity between the Japanese KVIQ and the Japanese version of the Movement Imagery Questionnaire-Revised (MIQ-R) with Spearman’s rank correlation coefficients. The Cronbach’s alpha coefficients for the KVIQ-20 were 0.88 (Visual) and 0.91 (Kinesthetic), which indicates high reliability. There was a significant positive correlation between the Japanese KVIQ-20 (Total) and the Japanese MIQ-R (Total) (r = 0.86, p < 0.01). Our results suggest that the Japanese KVIQ is an assessment that is a reliable and valid index of motor imagery ability.

  17. Eye aberration analysis with Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl V.; Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.

    1998-06-01

New horizons for accurate photorefractive sight correction, afforded by novel flying-spot technologies, require adequate measurements of the photorefractive properties of an eye. Proposed techniques of eye refraction mapping present measurement results for a finite number of points of the eye aperture, requiring these data to be approximated by a 3D surface. A technique of wavefront approximation with Zernike polynomials is described, using optimization of the number of polynomial coefficients. The optimization criterion is the closest proximity of the resulting continuous surface to the values calculated for the given discrete points. The methodology includes statistical evaluation of the minimal root mean square deviation (RMSD) of transverse aberrations: the values of the maximal coefficient indices of the Zernike polynomials are varied consecutively, the coefficients are recalculated, and the value of RMSD is computed. Optimization finishes at the minimal value of RMSD. Formulas are given for computing ametropia and the size of the spot of light on the retina caused by spherical aberration, coma, and astigmatism. Results are illustrated by experimental data that could be of interest for other applications where detailed evaluation of eye parameters is needed.
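A simplified one-dimensional analogue of the coefficient-selection idea, assuming a refraction profile sampled along a single meridian: increase the polynomial order until the RMSD of the fit stops improving. This is a sketch of the selection loop only, not the authors' Zernike implementation:

```python
import numpy as np

def select_order(x, y, max_order=6, tol=1e-8):
    """Increase the number of polynomial coefficients until the root mean
    square deviation (RMSD) between fit and data falls below tol; return
    the selected order and its RMSD."""
    best = None
    for order in range(max_order + 1):
        coeffs = np.polyfit(x, y, order)
        rmsd = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
        if rmsd < tol:          # fit is already as close as required
            return order, rmsd
        best = (order, rmsd)
    return best                 # fall back to the largest order tried

x = np.linspace(-1.0, 1.0, 41)
y = x**3 - x                    # stand-in "wavefront" profile along one meridian
order, rmsd = select_order(x, y)
```

In the full 2D problem the same loop runs over Zernike radial/azimuthal indices instead of a 1D polynomial order, but the stopping rule (minimal RMSD) is identical in spirit.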

  18. Simultaneous estimation of multiple phases in digital holographic interferometry using state space analysis

    NASA Astrophysics Data System (ADS)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-05-01

A new approach is proposed for multiple phase estimation from a multicomponent exponential phase signal recorded in multi-beam digital holographic interferometry. It is capable of providing multidimensional measurements in a simultaneous manner from a single recording of the exponential phase signal encoding multiple phases. Each phase within a small window around each pixel is approximated with a first-order polynomial function of the spatial coordinates. The problem of accurate estimation of the polynomial coefficients, and in turn the unwrapped phases, is formulated as a state space analysis wherein the coefficients and signal amplitudes are set as the elements of a state vector. The state estimation is performed using the extended Kalman filter. An amplitude discrimination criterion is utilized in order to unambiguously estimate the coefficients associated with the individual signal components. The performance of the proposed method is stable over a wide range of the ratio of signal amplitudes. The pixelwise phase estimation approach of the proposed method allows it to handle fringe patterns that may contain invalid regions.

  19. Modeling temperature variations in a pilot plant thermophilic anaerobic digester.

    PubMed

    Valle-Guadarrama, Salvador; Espinosa-Solares, Teodoro; López-Cruz, Irineo L; Domaschko, Max

    2011-05-01

    A model that predicts temperature changes in a pilot plant thermophilic anaerobic digester was developed based on fundamental thermodynamic laws. The methodology utilized two simulation strategies. In the first, model equations were solved through a searching routine based on a minimal square optimization criterion, from which the overall heat transfer coefficient values, for both biodigester and heat exchanger, were determined. In the second, the simulation was performed with variable values of these overall coefficients. The prediction with both strategies allowed reproducing experimental data within 5% of the temperature span permitted in the equipment by the system control, which validated the model. The temperature variation was affected by the heterogeneity of the feeding and extraction processes, by the heterogeneity of the digestate recirculation through the heating system and by the lack of a perfect mixing inside the biodigester tank. The use of variable overall heat transfer coefficients improved the temperature change prediction and reduced the effect of a non-ideal performance of the pilot plant modeled.
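The first simulation strategy, determining the overall heat transfer coefficient by a minimal-square criterion, can be sketched with a lumped-capacitance cooling model and a simple grid search; the model form, parameter values, and grid below are illustrative assumptions, not the pilot-plant model itself:

```python
import numpy as np

def cooling_curve(k, t, T0, T_amb):
    """Lumped-capacitance temperature model T(t) = T_amb + (T0 - T_amb)e^{-kt},
    where k bundles the overall heat transfer coefficient with the exchange
    area and heat capacity of the tank contents."""
    return T_amb + (T0 - T_amb) * np.exp(-k * t)

def fit_k(t, T_obs, T0, T_amb, k_grid):
    """Pick the k minimising the sum of squared deviations between model
    and observations (the minimal-square optimization criterion)."""
    errors = [np.sum((cooling_curve(k, t, T0, T_amb) - T_obs) ** 2) for k in k_grid]
    return k_grid[int(np.argmin(errors))]

t = np.linspace(0.0, 10.0, 50)                          # hours (illustrative)
T_obs = cooling_curve(0.30, t, T0=55.0, T_amb=20.0)      # synthetic "digester" data
k_hat = fit_k(t, T_obs, 55.0, 20.0, np.linspace(0.05, 1.0, 96))
```

The paper's second strategy would then let the fitted coefficient vary in time rather than holding the single searched value fixed.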

  20. Exploiting structure: Introduction and motivation

    NASA Technical Reports Server (NTRS)

    Xu, Zhong Ling

    1993-01-01

Research activities performed during the period of 29 June 1993 through 31 Aug. 1993 are summarized. The robust stability of systems whose transfer function or characteristic polynomial is a multilinear affine function of the parameters of interest was developed in two directions, algorithmic and theoretical. In the algorithmic direction, a new approach was found that reduces the computational burden of checking the robust stability of a system with multilinear uncertainty. This technique is called 'stability by linear process', and it yields an algorithm. In analysis, we obtained a robustness criterion for the family of polynomials whose coefficients are multilinear affine functions in the coefficient space, and we also obtained a result for the robust stability of diamond families of polynomials with complex coefficients. We obtained limited results for SPR design, and we provide a framework for solving ACS. Finally, copies of the outline of our results are provided in the appendix, along with an administrative item.

  1. Isolating and Examining Sources of Suppression and Multicollinearity in Multiple Linear Regression.

    PubMed

    Beckstead, Jason W

    2012-03-30

    The presence of suppression (and multicollinearity) in multiple regression analysis complicates interpretation of predictor-criterion relationships. The mathematical conditions that produce suppression in regression analysis have received considerable attention in the methodological literature but until now nothing in the way of an analytic strategy to isolate, examine, and remove suppression effects has been offered. In this article such an approach, rooted in confirmatory factor analysis theory and employing matrix algebra, is developed. Suppression is viewed as the result of criterion-irrelevant variance operating among predictors. Decomposition of predictor variables into criterion-relevant and criterion-irrelevant components using structural equation modeling permits derivation of regression weights with the effects of criterion-irrelevant variance omitted. Three examples with data from applied research are used to illustrate the approach: the first assesses child and parent characteristics to explain why some parents of children with obsessive-compulsive disorder accommodate their child's compulsions more so than do others, the second examines various dimensions of personal health to explain individual differences in global quality of life among patients following heart surgery, and the third deals with quantifying the relative importance of various aptitudes for explaining academic performance in a sample of nursing students. The approach is offered as an analytic tool for investigators interested in understanding predictor-criterion relationships when complex patterns of intercorrelation among predictors are present and is shown to augment dominance analysis.

  2. Seismic Rheological Model and Reflection Coefficients of the Brittle-Ductile Transition

    NASA Astrophysics Data System (ADS)

    Carcione, José M.; Poletto, Flavio

    2013-12-01

    It is well established that the upper—cooler—part of the crust is brittle, while deeper zones present ductile behaviour. In some cases, this brittle-ductile transition is a single seismic reflector with an associated reflection coefficient. We first develop a stress-strain relation including the effects of crust anisotropy, seismic attenuation and ductility in which deformation takes place by shear plastic flow. Viscoelastic anisotropy is based on the eigenstrain model and the Zener and Burgers mechanical models are used to model the effects of seismic attenuation, velocity dispersion, and steady-state creep flow, respectively. The stiffness components of the brittle and ductile media depend on stress and temperature through the shear viscosity, which is obtained by the Arrhenius equation and the octahedral stress criterion. The P- and S-wave velocities decrease as depth and temperature increase due to the geothermal gradient, an effect which is more pronounced for shear waves. We then obtain the reflection and transmission coefficients of a single brittle-ductile interface and of a ductile thin layer. The PP scattering coefficient has a Brewster angle (a sign change) in both cases, and there is substantial PS conversion at intermediate angles. The PP coefficient is sensitive to the layer thickness, unlike the SS coefficient. Thick layers have a well-defined Brewster angle and show higher reflection amplitudes. Finally, we compute synthetic seismograms in a homogeneous medium as a function of temperature.
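The temperature dependence entering the model through the Arrhenius equation can be illustrated in a few lines; the prefactor and activation energy below are placeholder values, not the paper's calibrated crustal parameters:

```python
import numpy as np

R = 8.314  # J/(mol K), universal gas constant

def arrhenius_viscosity(T, A=1.0e4, E=2.0e5):
    """Shear viscosity eta = A * exp(E / (R T)). A (Pa s) and E (J/mol) are
    illustrative placeholders; in the paper they would be tied to the
    octahedral stress criterion and rock rheology."""
    return A * np.exp(E / (R * np.asarray(T, dtype=float)))

# With a geothermal gradient, temperature rises with depth, so viscosity
# drops monotonically: this is what softens the deeper, ductile part of the
# crust and lowers the seismic velocities there, especially for S waves.
T = np.array([400.0, 600.0, 800.0, 1000.0])  # K, increasing with depth
eta = arrhenius_viscosity(T)
```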

  3. Multiscale analysis of potential fields by a ridge consistency criterion: the reconstruction of the Bishop basement

    NASA Astrophysics Data System (ADS)

    Fedi, M.; Florio, G.; Cascone, L.

    2012-01-01

We use a multiscale approach as a semi-automated interpreting tool for potential fields. The depth to the source and the structural index are estimated in two steps: first the depth to the source, as the intersection of the field ridges (lines built by joining the extrema of the field at various altitudes), and secondly the structural index, by the scale function. We introduce a new criterion, called 'ridge consistency', into this strategy. The criterion is based on the principle that the structural index estimations on all the ridges converging towards the same source should be consistent. If these estimates are significantly different, field differentiation is used to lessen the interference effects from nearby sources or regional fields, to obtain a consistent set of estimates. In our multiscale framework, vertical differentiation is naturally joined to the low-pass filtering properties of the upward continuation, and so is a stable process. Before applying our criterion, we carefully studied the errors in upward continuation caused by the finite size of the survey area. To this end, we analysed the complex magnetic synthetic case known as the Bishop model, and evaluated the best extrapolation algorithm and the optimal width of the area extension needed to obtain accurate upward continuation. Afterwards, we applied the method to the depth estimation of the whole Bishop basement bathymetry. The result is a good reconstruction of the complex basement and of the shape properties of the source at the estimated points.

  4. Promoted Combustion Test Data Re-Examined

    NASA Technical Reports Server (NTRS)

    Lewis, Michelle; Jeffers, Nathan; Stoltzfus, Joel

    2010-01-01

Promoted combustion testing of metallic materials has been performed by NASA since the mid-1980s to determine the burn resistance of materials in oxygen-enriched environments. As the technology has advanced, the method of interpreting, presenting, and applying the promoted combustion data has advanced as well. Recently NASA changed the burn criterion from 15 cm (6 in.) to 3 cm (1.2 in.). This new burn criterion was adopted for ASTM G 124, Standard Test Method for Determining the Combustion Behavior of Metallic Materials in Oxygen-Enriched Atmospheres. Its effect on the test data and the latest method to display the test data will be discussed. Two specific examples that illustrate how this new criterion affects the burn/no-burn thresholds of metal alloys will also be presented.

  5. Decision Criterion Dynamics in Animals Performing an Auditory Detection Task

    PubMed Central

    Mill, Robert W.; Alves-Pinto, Ana; Sumner, Christian J.

    2014-01-01

    Classical signal detection theory attributes bias in perceptual decisions to a threshold criterion, against which sensory excitation is compared. The optimal criterion setting depends on the signal level, which may vary over time, and about which the subject is naïve. Consequently, the subject must optimise its threshold by responding appropriately to feedback. Here a series of experiments was conducted, and a computational model applied, to determine how the decision bias of the ferret in an auditory signal detection task tracks changes in the stimulus level. The time scales of criterion dynamics were investigated by means of a yes-no signal-in-noise detection task, in which trials were grouped into blocks that alternately contained easy- and hard-to-detect signals. The responses of the ferrets implied both long- and short-term criterion dynamics. The animals exhibited a bias in favour of responding “yes” during blocks of harder trials, and vice versa. Moreover, the outcome of each single trial had a strong influence on the decision at the next trial. We demonstrate that the single-trial and block-level changes in bias are a manifestation of the same criterion update policy by fitting a model, in which the criterion is shifted by fixed amounts according to the outcome of the previous trial and decays strongly towards a resting value. The apparent block-level stabilisation of bias arises as the probabilities of outcomes and shifts on single trials mutually interact to establish equilibrium. To gain an intuition into how stable criterion distributions arise from specific parameter sets we develop a Markov model which accounts for the dynamic effects of criterion shifts. Our approach provides a framework for investigating the dynamics of decisions at different timescales in other species (e.g., humans) and in other psychological domains (e.g., vision, memory). PMID:25485733
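The criterion update policy described (fixed outcome-dependent shifts plus strong decay towards a resting value) can be sketched as a one-line recursion; the shift sizes and decay rate below are illustrative, not the parameters fitted to the ferret data:

```python
def update_criterion(c, outcome, c_rest=0.0, decay=0.8, shifts=None):
    """One trial of the model: the criterion decays towards its resting value
    and is then shifted by a fixed amount set by the last trial's outcome.
    A lower criterion means a more liberal 'yes' bias."""
    if shifts is None:
        shifts = {"hit": +0.1, "miss": -0.3,
                  "false_alarm": +0.3, "correct_reject": -0.1}
    return c_rest + decay * (c - c_rest) + shifts[outcome]

# A run of misses (as in a block of hard trials) drags the criterion down,
# producing the "yes" bias seen in the harder blocks; once misses stop, the
# shift/decay equilibrium settles at a less extreme level.
c = 0.0
for _ in range(20):
    c = update_criterion(c, "miss")
biased = c
for _ in range(20):
    c = update_criterion(c, "correct_reject")
```

Because each outcome's probability depends on the current criterion, repeated application of this rule produces exactly the kind of stable bias distribution the authors analyse with their Markov model.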

  6. Maximum number of live births per donor in artificial insemination.

    PubMed

    Wang, Charlotte; Tsai, Miao-Yu; Lee, Mei-Hsien; Huang, Su-Yun; Kao, Chen-Hung; Ho, Hong-Nerng; Hsiao, Chuhsing Kate

    2007-05-01

The maximal number of live births (k) per donor has usually been determined from cultural and social perspectives. It has rarely been decided on the basis of scientific evidence or discussed from a mathematical or probabilistic viewpoint. To recommend a value for k, we propose three criteria to evaluate its impact on consanguinity and disease incidence due to artificial insemination by donor (AID). The first approach considers the optimization of k under the criterion of a fixed tolerable number of consanguineous matings due to AID. The second approach optimizes k under a fixed allowable average coefficient of inbreeding. This approach is particularly helpful when the impact on the public is of interest. The third criterion considers specific inheritance diseases. This approach is useful when evaluating an individual's risk of genetic diseases. When different diseases are considered, this criterion can be easily adopted. All these derivations are based on the assumption of a shortage of gamete donors due to great demand and insufficient supply. Our results indicate that a strong degree of assortative mating, small population size and an insufficient supply of gamete donors will lead to a greater risk of consanguinity. Recommendations under other settings are also tabulated for reference. A web site for calculating the limit for live births per donor is available.

  7. Putting the Biological Species Concept to the Test: Using Mating Networks to Delimit Species

    PubMed Central

    Lagache, Lélia; Leger, Jean-Benoist; Daudin, Jean-Jacques; Petit, Rémy J.; Vacher, Corinne

    2013-01-01

Although interfertility is the key criterion upon which Mayr’s biological species concept is based, it has never been applied directly to delimit species under natural conditions. Our study fills this gap. We used the interfertility criterion to delimit two closely related oak species in a forest stand by analyzing the network of natural mating events between individuals. The results reveal two groups of interfertile individuals connected by only a few mating events. These two groups were largely congruent with those determined using other criteria (morphological similarity, genotypic similarity and individual relatedness). Our study, therefore, shows that the analysis of mating networks is an effective method to delimit species based on the interfertility criterion, provided that adequate network data can be assembled. Our study also shows that although species boundaries are highly congruent across methods of species delimitation, they are not exactly the same. Most of the differences stem from assignment of individuals to an intermediate category. The discrepancies between methods may reflect a biological reality. Indeed, the interfertility criterion is an environment-dependent criterion, as species abundances typically affect rates of hybridization under natural conditions. Thus, the methods of species delimitation based on the interfertility criterion are expected to give results slightly different from those based on environment-independent criteria (such as the genotypic similarity criteria). However, whatever the criterion chosen, the challenge we face when delimiting species is to summarize continuous but non-uniform variations in biological diversity. The grade of membership model that we use in this study appears as an appropriate tool. PMID:23818990

  8. Criterion-Related Validity of Sit-and-Reach Tests for Estimating Hamstring and Lumbar Extensibility: a Meta-Analysis

    PubMed Central

    Mayorga-Vega, Daniel; Merino-Marban, Rafael; Viciana, Jesús

    2014-01-01

The main purpose of the present meta-analysis was to examine the scientific literature on the criterion-related validity of sit-and-reach tests for estimating hamstring and lumbar extensibility. For this purpose relevant studies were searched from seven electronic databases dated up through December 2012. Primary outcomes of criterion-related validity were Pearson's zero-order correlation coefficients (r) between sit-and-reach tests and hamstring and/or lumbar extensibility criterion measures. Then, from the included studies, the Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of sit-and-reach tests. Firstly, the corrected correlation mean (rp), unaffected by statistical artefacts (i.e., sampling error and measurement error), was calculated separately for each sit-and-reach test. Subsequently, three potential moderator variables (sex of participants, age of participants, and level of hamstring extensibility) were examined by a partially hierarchical analysis. Of the 34 studies included in the present meta-analysis, 99 correlation values across eight sit-and-reach tests and 51 across seven sit-and-reach tests were retrieved for hamstring and lumbar extensibility, respectively. The overall results showed that all sit-and-reach tests had a moderate mean criterion-related validity for estimating hamstring extensibility (rp = 0.46-0.67), but a low mean validity for estimating lumbar extensibility (rp = 0.16-0.35). Generally, females, adults and participants with high levels of hamstring extensibility tended to have greater mean values of criterion-related validity for estimating hamstring extensibility. When the use of angular tests is limited, such as in a school setting or in large-scale studies, scientists and practitioners could use the sit-and-reach tests as a useful alternative for hamstring extensibility estimation, but not for estimating lumbar extensibility.
Key Points: Overall, sit-and-reach tests have a moderate mean criterion-related validity for estimating hamstring extensibility, but they have a low mean validity for estimating lumbar extensibility. Among all the sit-and-reach test protocols, the Classic sit-and-reach test seems to be the best option to estimate hamstring extensibility. End scores (e.g., the Classic sit-and-reach test) are a better indicator of hamstring extensibility than the modifications that incorporate fingers-to-box distance (e.g., the Modified sit-and-reach test). When angular tests such as straight leg raise or knee extension tests cannot be used, sit-and-reach tests seem to be a useful field test alternative to estimate hamstring extensibility, but not to estimate lumbar extensibility. PMID:24570599
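The Hunter-Schmidt approach used in the meta-analysis above corrects observed correlations for attenuation due to measurement error and pools them weighted by sample size. A minimal sketch of both steps, assuming a simple two-measure attenuation correction (the reliability and correlation values below are illustrative, not taken from the study):

```python
import math

def corrected_correlation(r_xy, rel_x, rel_y):
    # Correct an observed validity coefficient for measurement error
    # (attenuation) in both the test and the criterion measure.
    return r_xy / math.sqrt(rel_x * rel_y)

def weighted_mean_r(rs, ns):
    # Sample-size-weighted mean correlation across studies, the
    # bare-bones meta-analytic mean before artefact corrections.
    return sum(r * n for r, n in zip(rs, ns)) / sum(ns)
```

For example, an observed r of 0.5 with reliabilities of 0.8 on both measures corrects to 0.5 / 0.8 = 0.625.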

  9. Importance of Tensile Strength on the Shear Behavior of Discontinuities

    NASA Astrophysics Data System (ADS)

    Ghazvinian, A. H.; Azinfar, M. J.; Geranmayeh Vaneghi, R.

    2012-05-01

In this study, the shear behavior of discontinuities possessing two different rock wall types with distinct compressive strengths was investigated. The designed profiles consisted of regular artificial joints molded with five types of plaster mortars, each representing a distinct uniaxial compressive strength. The compressive strengths of the plaster specimens ranged from 5.9 to 19.5 MPa. These specimens were molded with a regular triangular asperity profile and were designed so as to achieve joint walls with different strength material combinations. The results showed that the shear behavior of discontinuities possessing different joint wall compressive strengths (DDJCS) tested under constant normal load (CNL) conditions is the same as that of discontinuities possessing identical joint wall strengths, but the shear strength of DDJCS is governed by the minor joint wall compressive strength. In addition, it was found that the values predicted by Barton's empirical criterion are greater than the experimental results. This finding indicates that there is a correlation between the joint roughness coefficient (JRC), normal stress, and mechanical strength. It was observed that the mode of failure of asperities is either pure tensile, pure shear, or a combination of both. Therefore, Barton's strength criterion, which considers the compressive strength of joint walls, was modified by substituting the compressive strength with the tensile strength. The validity of the modified criterion was examined by comparing the predicted shear values with the laboratory shear test results reported by Grasselli (Ph.D. thesis n.2404, Civil Engineering Department, EPFL, Lausanne, Switzerland, 2001). These comparisons indicate that the modified criterion can predict the shear strength of joints more precisely.

  10. Anthropometry as a predictor of bench press performance done at different loads.

    PubMed

    Caruso, John F; Taylor, Skyler T; Lutz, Brant M; Olson, Nathan M; Mason, Melissa L; Borgsmiller, Jake A; Riner, Rebekah D

    2012-09-01

    The purpose of our study was to examine the ability of anthropometric variables (body mass, total arm length, biacromial width) to predict bench press performance at both maximal and submaximal loads. Our methods required 36 men to visit our laboratory and submit to anthropometric measurements, followed by lifting as much weight as possible in good form one time (1 repetition maximum, 1RM) in the exercise. They made 3 more visits in which they performed 4 sets of bench presses to volitional failure at 1 of 3 (40, 55, or 75% 1RM) submaximal loads. An accelerometer (Myotest Inc., Royal Oak MI) measured peak force, velocity, and power after each submaximal load set. With stepwise multivariate regression, our 3 anthropometric variables attempted to explain significant amounts of variance for 13 bench press performance indices. For criterion measures that reached significance, separate Pearson product moment correlation coefficients further assessed if the strength of association each anthropometric variable had with the criterion was also significant. Our analyses showed that anthropometry explained significant amounts (p < 0.05) of variance for 8 criterion measures. It was concluded that body mass had strong univariate correlations with 1RM and force-related measures, total arm length was moderately associated with 1RM and criterion variables at the lightest load, whereas biacromial width had an inverse relationship with the peak number of repetitions performed per set at the 2 lighter loads. Practical applications suggest results may help coaches and practitioners identify anthropometric features that may best predict various measures of bench press prowess in athletes.

  11. Reliability and criterion validity of two applications of the iPhone™ to measure cervical range of motion in healthy participants

    PubMed Central

    2013-01-01

Summary of background data Recent smartphones, such as the iPhone, are often equipped with an accelerometer and magnetometer, which, through software applications, can perform various inclinometric functions. Although these applications are intended for recreational use, they have the potential to measure and quantify range of motion. The purpose of this study was to estimate the intra- and inter-rater reliability as well as the criterion validity of the clinometer and compass applications of the iPhone in the assessment of cervical range of motion in healthy participants. Methods The sample consisted of 28 healthy participants. Two examiners measured cervical range of motion of each participant twice using the iPhone (for the estimation of intra- and inter-rater reliability) and once with the CROM (for the estimation of criterion validity). Estimates of reliability and validity were then established using the intraclass correlation coefficient (ICC). Results We observed a moderate intra-rater reliability for each movement (ICC = 0.65-0.85) but a poor inter-rater reliability (ICC < 0.60). For the criterion validity, the ICCs were moderate (>0.50) to good (>0.65) for movements of flexion, extension, lateral flexions and right rotation, but poor (<0.50) for left rotation. Conclusion We found good intra-rater reliability and lower inter-rater reliability. When compared to the gold standard, these applications showed moderate to good validity. However, before using the iPhone as an outcome measure in clinical settings, studies should be done on patients presenting with cervical problems. PMID:23829201
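The ICC reported above can take several forms; a minimal sketch of a two-way random-effects, absolute-agreement ICC(2,1) computed from an n-subjects × k-raters score matrix follows (the specific ICC form used by the study is an assumption for illustration):

```python
def icc_2_1(scores):
    # scores: one row per subject, one column per rater.
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ms_r = ss_rows / (n - 1)                   # between-subjects mean square
    ms_c = ss_cols / (k - 1)                   # between-raters mean square
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Perfect agreement between raters yields an ICC of 1.0, while a constant offset between raters (a systematic bias) lowers the absolute-agreement ICC even when the scores are perfectly correlated.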

  12. Measuring physical activity in young people with cerebral palsy: validity and reliability of the ActivPAL™ monitor.

    PubMed

    Bania, Theofani

    2014-09-01

We determined the criterion validity and the retest reliability of the ActivPAL™ monitor in young people with diplegic cerebral palsy (CP). Activity monitor data were compared with the criterion of video recording for 10 participants. For the retest reliability, activity monitor data were collected from 24 participants on two occasions. Participants had to have diplegic CP and be between 14 and 22 years of age. They also had to be of Gross Motor Function Classification System level II or III. Outcomes were time spent in standing, number of steps (physical activity) and time spent in sitting (sedentary behaviour). For criterion validity, coefficients of determination were all high (r² ≥ 0.96), and limits of group agreement were relatively narrow, but limits of agreement for individuals were narrow only for number of steps (≥5.5%). Relative reliability was high for number of steps (intraclass correlation coefficient = 0.87) and moderate for time spent in sitting and lying, and time spent in standing (intraclass correlation coefficients = 0.60-0.66). For groups, changes of up to 7% could be due to measurement error with 95% confidence, but for individuals, changes as high as 68% could be due to measurement error. The results support the criterion validity and the retest reliability of the ActivPAL™ to measure physical activity and sedentary behaviour in groups of young people with diplegic CP but not in individuals. Copyright © 2014 John Wiley & Sons, Ltd.
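The "limits of agreement" reported above are Bland-Altman-style bounds: the mean of the paired differences plus or minus 1.96 standard deviations. A minimal sketch, assuming that interpretation (the numbers in the example are synthetic, not study data):

```python
import statistics

def limits_of_agreement(a, b):
    # Bland-Altman 95% limits: mean difference +/- 1.96 * SD of differences.
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the paired differences
    return bias - 1.96 * sd, bias + 1.96 * sd
```

A monitor that reads exactly one unit above the criterion on every trial has zero difference spread, so both limits collapse onto the bias of 1.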

  13. Evidence for Response Bias as a Source of Error Variance in Applied Assessment

    ERIC Educational Resources Information Center

    McGrath, Robert E.; Mitchell, Matthew; Kim, Brian H.; Hough, Leaetta

    2010-01-01

    After 100 years of discussion, response bias remains a controversial topic in psychological measurement. The use of bias indicators in applied assessment is predicated on the assumptions that (a) response bias suppresses or moderates the criterion-related validity of substantive psychological indicators and (b) bias indicators are capable of…

  14. A Void Growth Failure Criterion Applied to Dynamically and Statically Loaded Thin Rings.

    DTIC Science & Technology

    1980-06-01

the physical evidences, several other investigators (Berg, 1969; Nagpal, et al., 1972) working on the continuum aspect of failure, considered plastic...by the Growth of Holes", J. of Applied Mechanics, Vol. 35, 1968, p. 363. 23.) Nagpal, V., McClintock, F. A., Berg, C. A., and Subudhi, M., "Traction

  15. 49 CFR Appendix D to Part 192 - Criteria for Cathodic Protection and Determination of Measurements

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... I. Criteria for cathodic protection— A. Steel, cast iron, and ductile iron structures. (1) A... accordance with sections II and IV of this appendix. This criterion of voltage shift applies to structures... into the structure surface as measured by an earth current technique applied at predetermined current...

  16. 49 CFR Appendix D to Part 192 - Criteria for Cathodic Protection and Determination of Measurements

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... I. Criteria for cathodic protection— A. Steel, cast iron, and ductile iron structures. (1) A... accordance with sections II and IV of this appendix. This criterion of voltage shift applies to structures... into the structure surface as measured by an earth current technique applied at predetermined current...

  17. 49 CFR Appendix D to Part 192 - Criteria for Cathodic Protection and Determination of Measurements

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... I. Criteria for cathodic protection— A. Steel, cast iron, and ductile iron structures. (1) A... accordance with sections II and IV of this appendix. This criterion of voltage shift applies to structures... into the structure surface as measured by an earth current technique applied at predetermined current...

  18. 49 CFR Appendix D to Part 192 - Criteria for Cathodic Protection and Determination of Measurements

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... I. Criteria for cathodic protection— A. Steel, cast iron, and ductile iron structures. (1) A... accordance with sections II and IV of this appendix. This criterion of voltage shift applies to structures... into the structure surface as measured by an earth current technique applied at predetermined current...

  19. Super Resolution and Interference Suppression Technique applied to SHARAD Radar Data

    NASA Astrophysics Data System (ADS)

    Raguso, M. C.; Mastrogiuseppe, M.; Seu, R.; Piazzo, L.

    2017-12-01

We will present a super resolution and interference suppression technique applied to the data acquired by the SHAllow RADar (SHARAD) on board the NASA's 2005 Mars Reconnaissance Orbiter (MRO) mission, currently operating around Mars [1]. The algorithms improve the range resolution by roughly a factor of 3 and the Signal to Noise Ratio (SNR) by several decibels. Range compression algorithms usually adopt conventional Fourier transform techniques, which are limited in resolution by the transmitted signal bandwidth, analogous to the Rayleigh criterion in optics. In this work, we investigate a super resolution method based on autoregressive models and linear prediction techniques [2]. Starting from the estimation of the linear prediction coefficients from the spectral data, the algorithm performs radar bandwidth extrapolation (BWE), thereby improving the range resolution of the pulse-compressed coherent radar data. Moreover, the EMIs (ElectroMagnetic Interferences) are detected and the spectrum is interpolated in order to reconstruct an interference-free spectrum, thereby improving the SNR. The algorithm can be applied to the single complex look image after synthetic aperture processing (SAR). We apply the proposed algorithm to simulated as well as to real radar data. We will demonstrate the effective enhancement in vertical resolution with respect to the classical spectral estimator. We will show that the imaging of the subsurface layered structures observed in radargrams is improved, allowing additional insights for the scientific community in the interpretation of the SHARAD radar data, which will help to further our understanding of the formation and evolution of known geological features on Mars. References: [1] Seu et al. 2007, Science, 2007, 317, 1715-1718 [2] K.M. Cuomo, "A Bandwidth Extrapolation Technique for Improved Range Resolution of Coherent Radar Data", Project Report CJP-60, Revision 1, MIT Lincoln Laboratory (4 Dec. 1992).
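Bandwidth extrapolation rests on fitting linear-prediction (autoregressive) coefficients to the measured samples and then running the predictor past the band edge. A minimal numpy sketch under that assumption, using a least-squares fit on real-valued samples (actual radar BWE operates on complex spectral data and typically uses the Burg estimator):

```python
import numpy as np

def ar_extrapolate(x, order, n_extra):
    # Fit AR coefficients a so that x[n] ~ sum_k a[k] * x[n-1-k],
    # then run the predictor n_extra samples past the end of x.
    x = np.asarray(x, dtype=float)
    rows = [x[n - 1::-1][:order] for n in range(order, len(x))]
    a, *_ = np.linalg.lstsq(np.array(rows), x[order:], rcond=None)
    out = list(x)
    for _ in range(n_extra):
        out.append(float(np.dot(a, out[-1:-order - 1:-1])))
    return np.array(out)
```

On a noiseless sinusoid an order-2 predictor is exact, so the extrapolated samples continue the oscillation rather than decaying the way a zero-padded Fourier reconstruction would.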

  20. A Deeper Understanding of Stability in the Solar Wind: Applying Nyquist's Instability Criterion to Wind Faraday Cup Data

    NASA Astrophysics Data System (ADS)

    Alterman, B. L.; Klein, K. G.; Verscharen, D.; Stevens, M. L.; Kasper, J. C.

    2017-12-01

Long duration, in situ data sets enable large-scale statistical analysis of free-energy-driven instabilities in the solar wind. The plasma beta and temperature anisotropy plane provides a well-defined parameter space in which a single-fluid plasma's stability can be represented. Because this reduced parameter space can only represent instability thresholds due to the free energy of one ion species - typically the bulk protons - the true impact of instabilities on the solar wind is underestimated. Nyquist's instability criterion allows us to systematically account for other sources of free energy including beams, drifts, and additional temperature anisotropies. Utilizing over 20 years of Wind Faraday cup and magnetic field observations, we have resolved the bulk parameters for three ion populations: the bulk protons, beam protons, and alpha particles. Applying Nyquist's criterion, we calculate the number of linearly growing modes supported by each spectrum and provide a more nuanced consideration of solar wind stability. Using collisional age measurements, we predict the stability of the solar wind close to the Sun. Accounting for the free energy from the three most common ion populations in the solar wind, our approach provides a more complete characterization of solar wind stability.

  1. Irwin's conjecture: Crack shape adaptability in transversely isotropic solids

    NASA Astrophysics Data System (ADS)

    Laubie, Hadrien; Ulm, Franz-Josef

    2014-08-01

    The planar crack propagation problem of a flat elliptical crack embedded in a brittle elastic anisotropic solid is investigated. We introduce the concept of crack shape adaptability: the ability of three-dimensional planar cracks to shape with the mechanical properties of a cracked body. A criterion based on the principle of maximum dissipation is suggested in order to determine the most stable elliptical shape. This criterion is applied to the specific case of vertical cracks in transversely isotropic solids. It is shown that contrary to the isotropic case, the circular shape (i.e. penny-shaped cracks) is not the most stable one. Upon propagation, the crack first grows non-self-similarly before it reaches a stable shape. This stable shape can be approximated by an ellipse of an aspect ratio that varies with the degree of elastic anisotropy. By way of example, we apply the so-derived crack shape adaptability criterion to shale materials. For this class of materials it is shown that once the stable shape is reached, the crack propagates at a higher rate in the horizontal direction than in the vertical direction. We also comment on the possible implications of these findings for hydraulic fracturing operations.

  2. Identifying dyspepsia in the Greek population: translation and validation of a questionnaire.

    PubMed

    Anastasiou, Foteini; Antonakis, Nikos; Chaireti, Georgia; Theodorakis, Pavlos N; Lionis, Christos

    2006-03-04

Studies on clinical issues, including diagnostic strategies, are considered to be the core content of general practice research. The use of standardised instruments is regarded as an important component for the development of Primary Health Care research capacity. Demand for cross-cultural epidemiological comparisons in the international setting, using common instruments and definitions valid in each culture, is greater than ever. Dyspepsia is a common complaint in primary practice but little is known with respect to its incidence in Greece. There are some references about Helicobacter pylori infection in patients with functional dyspepsia or gastric ulcer in Greece but there is no specific instrument for the identification of dyspepsia. This paper reports on the validation and translation into Greek of an English questionnaire for the identification of dyspepsia in the general population and discusses several possibilities of its use in Greek primary care. The selected English postal questionnaire for the identification of people with dyspepsia in the general population consists of 30 items and was developed in 1995. The translation and cultural adaptation of the questionnaire has been performed according to international standards. For the validation of the instrument the internal consistency of the items was established using Cronbach's alpha coefficient, the reproducibility (test-retest reliability) was measured by the kappa correlation coefficient, and the criterion validity was calculated against the diagnosis in the patients' records, also using the kappa correlation coefficient. The final Greek version of the postal questionnaire for the identification of dyspepsia in the general population was reliably translated. The internal consistency of the questionnaire was good, Cronbach's alpha was found to be 0.88 (95% CI: 0.81-0.93), suggesting that all items were appropriate measures.
The kappa coefficient for reproducibility (test-retest reliability) was found to be 0.66 (95% CI: 0.62-0.71), whereas the kappa analysis for criterion validity was 0.63 (95% CI: 0.36-0.89). This study indicates that the Greek translation is comparable with the English-language version in terms of validity and reliability, and is suitable for epidemiological research within the Greek primary health care setting.
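Cronbach's alpha, as used above for internal consistency, compares the sum of the item variances with the variance of the total score across respondents. A minimal sketch with illustrative data (not the questionnaire's actual item scores):

```python
import statistics

def cronbach_alpha(items):
    # items: one list of scores per questionnaire item,
    # each of length n_respondents.
    k = len(items)
    sum_item_var = sum(statistics.variance(it) for it in items)
    totals = [sum(vals) for vals in zip(*items)]
    return k / (k - 1) * (1 - sum_item_var / statistics.variance(totals))
```

Two perfectly agreeing items give alpha = 1.0; as items become less correlated, the total-score variance shrinks toward the sum of item variances and alpha falls.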

  3. Discrete-time BAM neural networks with variable delays

    NASA Astrophysics Data System (ADS)

    Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi

    2007-07-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  4. Condition for confinement in non-Abelian gauge theories

    NASA Astrophysics Data System (ADS)

    Chaichian, Masud; Frasca, Marco

    2018-06-01

    We show that a criterion for confinement, based on the BRST invariance, holds in four dimensions, by solving a non-Abelian gauge theory with a set of exact solutions. The confinement condition we consider was obtained by Kugo and Ojima some decades ago. The current understanding of gauge theories permits us to apply the techniques straightforwardly for checking the validity of this criterion. In this way, we are able to show that the non-Abelian gauge theory is confining and that confinement is rooted in the BRST invariance and asymptotic freedom.

  5. Universal first-order reliability concept applied to semistatic structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1994-01-01

A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and a reliability selection criterion. The method provides a reliability design factor derived from the reliability criterion which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to the semistatic structures of air and surface vehicles.

  6. Universal first-order reliability concept applied to semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-07-01

A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and a reliability selection criterion. The method provides a reliability design factor derived from the reliability criterion which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to the semistatic structures of air and surface vehicles.

  7. Enhanced production of laccase from Coriolus versicolor NCIM 996 by nutrient optimization using response surface methodology.

    PubMed

    Arockiasamy, Santhiagu; Krishnan, Indira Packialakshmi Gurusamy; Anandakrishnan, Nimalanandan; Seenivasan, Sabitha; Sambath, Agalya; Venkatasubramani, Janani Priya

    2008-12-01

Plackett and Burman design criterion and central composite design were applied successfully for enhanced production of laccase by Coriolus versicolor NCIM 996 for the first time. Plackett and Burman design criterion was applied to screen the significance of ten nutrients on laccase production by C. versicolor NCIM 996. Out of the ten nutrients tested, starch, yeast extract, MnSO4, MgSO4·7H2O, and phenol were found to have significant effect on laccase production. A central composite design was applied to determine the optimum concentrations of the significant variables obtained from Plackett-Burman design. The optimized medium composition for production of laccase was (g/l): starch, 30.0; yeast extract, 4.53; MnSO4, 0.002; MgSO4·7H2O, 0.755; and phenol, 0.026, and the optimum laccase production was 6,590.26 (U/l), which was 7.6 times greater than the control.
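Screening designs such as Plackett-Burman estimate each factor's main effect as the difference between the mean response at its high (+1) and low (-1) coded levels. A minimal sketch of that effect calculation, using a tiny two-factor full factorial for illustration rather than the actual ten-nutrient Plackett-Burman matrix:

```python
def main_effects(design, y):
    # design: rows of coded factor levels (+1 / -1); y: measured responses.
    n_factors = len(design[0])
    effects = []
    for j in range(n_factors):
        hi = [yi for row, yi in zip(design, y) if row[j] == 1]
        lo = [yi for row, yi in zip(design, y) if row[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects
```

Factors with effects large relative to the noise would then be carried forward into the central composite design for optimization.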

  8. A gradient-based model parametrization using Bernstein polynomials in Bayesian inversion of surface wave dispersion

    NASA Astrophysics Data System (ADS)

    Gosselin, Jeremy M.; Dosso, Stan E.; Cassidy, John F.; Quijano, Jorge E.; Molnar, Sheri; Dettmer, Jan

    2017-10-01

This paper develops and applies a Bernstein-polynomial parametrization to efficiently represent general, gradient-based profiles in nonlinear geophysical inversion, with application to ambient-noise Rayleigh-wave dispersion data. Bernstein polynomials provide a stable parametrization in that small perturbations to the model parameters (basis-function coefficients) result in only small perturbations to the geophysical parameter profile. A fully nonlinear Bayesian inversion methodology is applied to estimate shear wave velocity (VS) profiles and uncertainties from surface wave dispersion data extracted from ambient seismic noise. The Bayesian information criterion is used to determine the appropriate polynomial order consistent with the resolving power of the data. Data error correlations are accounted for in the inversion using a parametric autoregressive model. The inversion solution is defined in terms of marginal posterior probability profiles for VS as a function of depth, estimated using Metropolis-Hastings sampling with parallel tempering. This methodology is applied to synthetic dispersion data as well as data processed from passive array recordings collected on the Fraser River Delta in British Columbia, Canada. Results from this work are in good agreement with previous studies, as well as with co-located invasive measurements. The approach considered here is better suited than 'layered' modelling approaches in applications where smooth gradients in geophysical parameters are expected, such as soil/sediment profiles. Further, the Bernstein polynomial representation is more general than smooth models based on a fixed choice of gradient type (e.g. power-law gradient) because the form of the gradient is determined objectively by the data, rather than by a subjective parametrization choice.
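The stability property claimed above follows from the Bernstein basis functions being non-negative and summing to one, so a perturbation to any coefficient perturbs the profile by at most that amount. A minimal sketch of evaluating such a profile at a normalized depth t in [0, 1] (coefficient values are illustrative):

```python
import math

def bernstein_profile(coeffs, t):
    # Profile value at normalized depth t, given n+1 basis coefficients:
    # sum_k c_k * C(n, k) * t^k * (1 - t)^(n - k).
    n = len(coeffs) - 1
    return sum(c * math.comb(n, k) * t ** k * (1 - t) ** (n - k)
               for k, c in enumerate(coeffs))
```

Because the basis forms a partition of unity, equal coefficients reproduce a constant profile exactly, and the order-1 case reduces to linear interpolation between the two coefficients.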

  9. Predicting the incidence of hand, foot and mouth disease in Sichuan province, China using the ARIMA model.

    PubMed

    Liu, L; Luan, R S; Yin, F; Zhu, X P; Lü, Q

    2016-01-01

Hand, foot and mouth disease (HFMD) is an infectious disease caused by enteroviruses, which usually occurs in children aged <5 years. In China, the HFMD situation is worsening, with increasing number of cases nationwide. Therefore, monitoring and predicting HFMD incidence are urgently needed to make control measures more effective. In this study, we applied an autoregressive integrated moving average (ARIMA) model to forecast HFMD incidence in Sichuan province, China. HFMD infection data from January 2010 to June 2014 were used to fit the ARIMA model. The coefficient of determination (R²), normalized Bayesian Information Criterion (BIC) and mean absolute percentage of error (MAPE) were used to evaluate the goodness-of-fit of the constructed models. The fitted ARIMA model was applied to forecast the incidence of HFMD from April to June 2014. The goodness-of-fit test generated the optimum general multiplicative seasonal ARIMA (1,0,1) × (0,1,0)12 model (R² = 0.692, MAPE = 15.982, BIC = 5.265), which also showed non-significant autocorrelations in the residuals of the model (P = 0.893). The forecast incidence values of the ARIMA (1,0,1) × (0,1,0)12 model from July to December 2014 were 4103-9987, which were reasonably close forecasts. The ARIMA model could be applied to forecast the HFMD incidence trend and provide support for HFMD prevention and control. Further observations should be carried out continually into the time sequence, and the parameters of the models could be adjusted because HFMD incidence will not be absolutely stationary in the future.
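The seasonal (0,1,0)12 part of the fitted model corresponds to differencing the series once at lag 12, and MAPE is the average absolute percentage error of the forecasts. A minimal sketch of both operations with illustrative numbers (not the HFMD data):

```python
def seasonal_difference(series, period=12):
    # One seasonal difference at the given lag, as in the (0,1,0)12 term.
    return [series[i] - series[i - period] for i in range(period, len(series))]

def mape(actual, forecast):
    # Mean absolute percentage error of a set of forecasts.
    return 100.0 / len(actual) * sum(abs((a - f) / a)
                                     for a, f in zip(actual, forecast))
```

A series with a steady monthly trend becomes constant after seasonal differencing, which is why the transformed series is easier to model with low-order AR and MA terms.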

  10. A general rough-surface inversion algorithm: Theory and application to SAR data

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.

    1993-01-01

Rough-surface inversion has significant applications in interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. The least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over some bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason, it is not limited to inversion of rough surfaces, and can be applied to any parameterized scattering process.
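Newton-type least-squares minimization of the kind compared above can be sketched with a plain Gauss-Newton loop. The exponential model below is a toy stand-in for the SPM forward model, chosen only so the example is self-contained; the study's actual forward model and stochastic formulation are not reproduced here:

```python
import numpy as np

def gauss_newton(f, jac, x, y, theta0, iters=20):
    # Minimize sum((y - f(x, theta))**2) by repeatedly solving the
    # linearized least-squares problem J * step = residual.
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        residual = y - f(x, theta)
        step, *_ = np.linalg.lstsq(jac(x, theta), residual, rcond=None)
        theta = theta + step
    return theta

# Toy forward model: y = a * exp(-b * x), theta = (a, b).
def model(x, th):
    return th[0] * np.exp(-th[1] * x)

def jacobian(x, th):
    e = np.exp(-th[1] * x)
    return np.column_stack([e, -th[0] * x * e])
```

On noiseless synthetic data the iteration recovers the generating parameters; with noisy data the same loop returns the least-squares estimate, and damped variants (Levenberg-Marquardt) improve robustness far from the solution.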

  11. Accounting for uncertainty in health economic decision models by using model averaging.

    PubMed

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-04-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
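Weighting competing models by an information criterion is commonly done with Akaike (or analogous BIC) weights: each model gets a weight proportional to exp(-Δ/2), where Δ is its criterion value minus the minimum. A minimal sketch with illustrative values:

```python
import math

def ic_weights(ic_values):
    # Model-averaging weights from information-criterion values
    # (AIC or BIC): w_i proportional to exp(-0.5 * (IC_i - IC_min)).
    best = min(ic_values)
    raw = [math.exp(-0.5 * (v - best)) for v in ic_values]
    total = sum(raw)
    return [r / total for r in raw]
```

Averaged quantities (e.g. incremental cost-effectiveness estimates) are then weighted sums across models, so structural uncertainty propagates into the decision output rather than being hidden by a single model choice.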

  12. A characterization of positive linear maps and criteria of entanglement for quantum states

    NASA Astrophysics Data System (ADS)

    Hou, Jinchuan

    2010-09-01

Let H and K be (finite- or infinite-dimensional) complex Hilbert spaces. A characterization of positive completely bounded normal linear maps from B(H) into B(K) is given, which particularly gives a characterization of positive elementary operators including all positive linear maps between matrix algebras. This characterization is then applied to give a representation of quantum channels (operations) between infinite-dimensional systems. A necessary and sufficient criterion of separability is given which shows that a state ρ on H⊗K is separable if and only if (Φ⊗I)ρ ≥ 0 for all positive finite-rank elementary operators Φ. Examples of NCP and indecomposable positive linear maps are given and are used to recognize some entangled states that cannot be recognized by the PPT criterion and the realignment criterion.

  13. Interest in Aesthetic Rhinoplasty Scale.

    PubMed

    Naraghi, Mohsen; Atari, Mohammad

    2017-04-01

Interest in cosmetic surgery is increasing, with rhinoplasty being one of the most popular surgical procedures. It is essential that surgeons identify patients with existing psychological conditions before any procedure. This study aimed to develop and validate the Interest in Aesthetic Rhinoplasty Scale (IARS). Four studies were conducted to develop the IARS and to evaluate different indices of validity (face, content, construct, criterion, and concurrent validities) and reliability (internal consistency, split-half coefficient, and temporal stability) of the scale. The four study samples included a total of 463 participants. Statistical analysis revealed satisfactory psychometric properties in all samples. Scores on the IARS were negatively correlated with self-esteem scores (r = -0.296; p < 0.01) and positively associated with scores for psychopathologic symptoms (r = 0.164; p < 0.05), social dysfunction (r = 0.268; p < 0.01), and depression (r = 0.308; p < 0.01). The internal and test-retest coefficients of consistency were found to be high (α = 0.93; intraclass coefficient = 0.94). Rhinoplasty patients were found to have significantly higher IARS scores than nonpatients (p < 0.001). Findings of the present studies provided evidence for face, content, construct, criterion, and concurrent validities and internal and test-retest reliability of the IARS. This evidence supports the use of the scale in clinical and research settings.

  14. Quantitative Evaluation of Head and Neck Cancer Treatment-Related Dysphagia in the Development of a Personalized Treatment Deintensification Paradigm.

    PubMed

    Quon, Harry; Hui, Xuan; Cheng, Zhi; Robertson, Scott; Peng, Luke; Bowers, Michael; Moore, Joseph; Choflet, Amanda; Thompson, Alex; Muse, Mariah; Kiess, Ana; Page, Brandi; Fakhry, Carole; Gourin, Christine; O'Hare, Jolyne; Graham, Peter; Szczesniak, Michal; Maclean, Julia; Cook, Ian; McNutt, Todd

    2017-12-01

    To test the hypothesis that quantifying swallow function with multiple patient-reported outcome (PRO) instruments is an important strategy to yield insights in the development of personalized deintensified therapies seeking to reduce the risk of head and neck cancer (HNC) treatment-related dysphagia (HNCTD). Irradiated HNC subjects seen in follow-up care (April 2015 to December 2015) who prospectively completed the Sydney Swallow Questionnaire (SSQ) and the MD Anderson Dysphagia Inventory (MDADI) concurrently on the web interface to our Oncospace database were evaluated. A correlation matrix quantified the relationship between the SSQ and MDADI. Unsupervised machine-learning cluster analysis was performed using the elbow criterion, with CLUSPLOT analysis to establish its validity. We identified 89 subjects. The MDADI and SSQ scores were moderately but significantly correlated (correlation coefficient -0.69). K-means cluster analysis demonstrated that 3 unique statistical cohorts could be identified (elbow criterion), with CLUSPLOT analysis confirming that 100% of variances were accounted for. Correlation coefficients between the individual items in the SSQ and the MDADI demonstrated weak to moderate negative correlation, except for SSQ17 (quality-of-life question). This pilot analysis demonstrates that the MDADI and SSQ are complementary. Three unique clusters of patients can be defined, suggesting that a unique dysphagia signature for HNCTD may be definable. Longitudinal studies relying on only a single PRO, such as the MDADI, may be inadequate for classifying HNCTD. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. An enhanced version of a bone-remodelling model based on the continuum damage mechanics theory.

    PubMed

    Mengoni, M; Ponthot, J P

    2015-01-01

    The purpose of this work was to propose an enhancement of Doblaré and García's internal bone remodelling model based on the continuum damage mechanics (CDM) theory. In their paper, they stated that the evolution of the internal variables of the bone microstructure, and its incidence on the modification of the elastic constitutive parameters, may be formulated following the principles of CDM, although no actual damage was considered. The resorption and apposition criteria (similar to the damage criterion) were expressed in terms of a mechanical stimulus. However, the resorption criterion lacks dimensional consistency with the remodelling rate. We propose here an enhancement to this resorption criterion, ensuring dimensional consistency while retaining the physical properties of the original remodelling model. We then analyse the change in the resorption criterion hypersurface in the stress space for a two-dimensional (2D) analysis. We finally apply the new formulation to analyse the structural evolution of a 2D femur. This analysis gives results consistent with the original model but with a faster and more stable convergence rate.

  16. Applications of ENF criterion in forensic audio, video, computer and telecommunication analysis.

    PubMed

    Grigoras, Catalin

    2007-04-11

    This article reports on the electric network frequency (ENF) criterion as a means of assessing the integrity of digital audio/video evidence and supporting forensic IT and telecommunication analysis. A brief description is given of the different ENF types and the phenomena that determine ENF variations. In most situations, to reach a non-authenticity opinion, visual inspection of spectrograms and comparison with an ENF database are enough. A more detailed investigation, in the time domain, requires short-time-window measurements and analyses. The stability of the ENF over geographical distances has been established by comparison of synchronized recordings made at different locations on the same network. Real cases are presented in which the ENF criterion was used to investigate audio and video files created with secret surveillance systems, a digitized audio/video recording, and a TV broadcast report. By applying the ENF criterion in forensic audio/video analysis, one can determine whether and where a digital recording has been edited, establish whether it was made at the time claimed, and identify the time and date of the recording operation.

  17. Using Bayesian hierarchical models to better understand nitrate sources and sinks in agricultural watersheds.

    PubMed

    Xia, Yongqiu; Weller, Donald E; Williams, Meghan N; Jordan, Thomas E; Yan, Xiaoyuan

    2016-11-15

    Export coefficient models (ECMs) are often used to predict nutrient sources and sinks in watersheds because ECMs can flexibly incorporate processes and have minimal data requirements. However, ECMs do not quantify uncertainties in model structure, parameters, or predictions; nor do they account for spatial and temporal variability in land characteristics, weather, and management practices. We applied Bayesian hierarchical methods to address these problems in ECMs used to predict nitrate concentration in streams. We compared four model formulations: a basic ECM and three models with additional terms to represent competing hypotheses about the sources of error in ECMs and about spatial and temporal variability of coefficients: an ADditive Error Model (ADEM), a SpatioTemporal Parameter Model (STPM), and a Dynamic Parameter Model (DPM). The DPM incorporates a first-order random walk to represent spatial correlation among parameters and a dynamic linear model to accommodate temporal correlation. We tested the modeling approach in a proof of concept using watershed characteristics and nitrate export measurements from watersheds in the Coastal Plain physiographic province of the Chesapeake Bay drainage. Among the four models, the DPM was the best: it had the lowest mean error, explained the most variability (R² = 0.99), had the narrowest prediction intervals, and provided the most effective tradeoff between fit and complexity (its deviance information criterion, DIC, was 45.6 units lower than any other model, indicating overwhelming support for the DPM). The superiority of the DPM supports its underlying hypothesis that the main source of error in ECMs is their failure to account for parameter variability rather than structural error. 
Analysis of the fitted DPM coefficients for cropland export and instream retention revealed some of the factors controlling nitrate concentration: cropland nitrate exports were positively related to stream flow and watershed average slope, while instream nitrate retention was positively correlated with nitrate concentration. By quantifying spatial and temporal variability in sources and sinks, the DPM provides new information to better target management actions to the most effective times and places. Given the wide use of ECMs as research and management tools, our approach can be broadly applied in other watersheds and to other materials. Copyright © 2016 Elsevier Ltd. All rights reserved.
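The DIC used to rank the four models has a standard closed form: DIC = D̄ + pD, where D̄ is the posterior mean deviance and pD = D̄ − D(θ̄) is the effective number of parameters (Spiegelhalter et al. formulation). A minimal sketch with illustrative numbers, not the study's data:

```python
import numpy as np

def dic(deviance_samples, deviance_at_posterior_mean):
    """Deviance Information Criterion: DIC = mean deviance + pD,
    with pD = mean deviance - deviance evaluated at the posterior mean."""
    d_bar = np.mean(deviance_samples)
    p_d = d_bar - deviance_at_posterior_mean
    return d_bar + p_d  # equivalently 2*d_bar - deviance_at_posterior_mean

# Hypothetical posterior deviance draws for two competing models;
# the lower-DIC model is preferred, as with the DPM in the study.
dic_basic = dic(np.array([512.0, 508.0, 515.0]), 506.0)
dic_dpm = dic(np.array([460.0, 455.0, 462.0]), 450.0)
assert dic_dpm < dic_basic
```

A DIC difference of 45.6 units, as reported, is far beyond the ~10-unit threshold usually taken as decisive support.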

  18. Comparison of Selection Procedures and Validation of Criterion Used in Selection of Significant Control Variates of a Simulation Model

    DTIC Science & Technology

    1990-03-01

    and M.H. Kutner. Applied Linear Regression Models. Homewood, IL: Richard D. Irwin Inc., 1983. Pritsker, A. Alan B. Introduction to Simulation and SLAM...Control Variates in Simulation," European Journal of Operational Research, 42: (1989). Neter, J., W. Wasserman, and M.H. Kutner. Applied Linear Regression Models

  19. Sample similarity analysis of angles of repose based on experimental results for DEM calibration

    NASA Astrophysics Data System (ADS)

    Tan, Yuan; Günthner, Willibald A.; Kessler, Stephan; Zhang, Lu

    2017-06-01

    As a fundamental material property, the particle-particle friction coefficient is usually calculated from the angle of repose, which can be obtained experimentally. In the present study, the bottomless cylinder test was carried out to investigate this friction coefficient for a biomass material, willow chips. Because of the irregular particle shape and varying particle size distribution, calculating a single angle is neither straightforward nor decisive. In previous studies, only one section of the uneven slope is chosen in most cases, although standard methods for defining a representative section are barely found. Hence, we present an efficient and reliable method based on 3D scanning, which digitizes the surface of a heap and generates its point cloud. Two tangential lines of any selected cross-section are then calculated through linear least-squares regression (LLSR), from which the left and right angles of repose of a pile are derived. Next, a number of sections are stochastically selected and the calculation is repeated for each, yielding a sample of angles that is plotted in Cartesian coordinates as a scatter diagram. Subsequently, different samples are acquired through different selections of sections. By analyzing the similarities and differences of these samples, the reliability of the proposed method is verified. These initial results provide a realistic criterion for reducing the deviation between experiment and simulation caused by the random selection of a single angle, and will be compared with simulation results in future work.
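The tangent-line step described above, fitting each flank of a heap cross-section by linear least-squares and converting the fitted slope to an angle, can be sketched as follows (synthetic symmetric profile, not the authors' scan data):

```python
import numpy as np

def repose_angles(x, z, apex_x):
    """Estimate left and right angles of repose for one heap cross-section.
    x, z: horizontal position and height of profile points (e.g. a slice
    of a 3D-scan point cloud); apex_x splits the two flanks. Each flank
    is fit with a least-squares line; the angle of repose is the
    arctangent of the slope magnitude, in degrees."""
    left = x < apex_x
    slope_l, _ = np.polyfit(x[left], z[left], 1)
    slope_r, _ = np.polyfit(x[~left], z[~left], 1)
    return (np.degrees(np.arctan(abs(slope_l))),
            np.degrees(np.arctan(abs(slope_r))))

# Synthetic heap with 30-degree flanks on both sides.
x = np.linspace(-1.0, 1.0, 201)
z = 1.0 - np.tan(np.radians(30.0)) * np.abs(x)
a_l, a_r = repose_angles(x, z, apex_x=0.0)
```

Repeating this over stochastically chosen sections yields the sample of angles whose spread the authors analyze.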

  20. Extreme magnitude earthquakes and their economical impact: The Mexico City case

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Mario, C.

    2005-12-01

    The consequences (estimated by the human and economic losses) of the recent occurrence worldwide of extreme magnitude (for the region under consideration) earthquakes, such as the 19 09 1985 event in Mexico (Richter magnitude Ms 8.1, moment magnitude Mw 8.01) or the 26 12 2004 event in Indonesia (Ms 9.4, Mw 9.3), stress the importance of performing seismic hazard analyses that specifically incorporate this possibility. Herewith, we present and apply a methodology, based on plausible extreme seismic scenarios and the computation of their associated synthetic accelerograms, to estimate the seismic hazard on Mexico City (MC) stiff and compressible surficial soils. The uncertainties about the characteristics of the potential finite seismic sources, as well as those related to the dynamic properties of MC compressible soils, are taken into account. The economic consequences (i.e. the seismic risk = seismic hazard x economic cost) implicit in the seismic coefficients proposed in MC seismic codes before (1976) and after the 1985 earthquake (2004) are analyzed. Based on the latter and on an acceptable-risk criterion, a maximum seismic coefficient (MSC) of 1.4g (g = 9.81 m/s²) of the elastic acceleration design spectra (5 percent damping), which has a probability of exceedance of 2.4 x 10⁻⁴, seems appropriate for analyzing the seismic behavior of infrastructure located on MC compressible soils, if extreme Mw 8.5 subduction thrust earthquakes (similar to the one that occurred on 19 09 1985, with an observed equivalent MSC of 1g) occur in the next 50 years.
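If the quoted 2.4 x 10⁻⁴ exceedance probability is read as an annual rate (an assumption; the abstract does not state the reference period), the chance of at least one exceedance over a 50-year horizon under a memoryless Poisson occurrence model is:

```python
import math

def prob_exceedance(annual_rate, years):
    """Probability of at least one exceedance in `years`, assuming a
    Poisson (memoryless) occurrence model: P = 1 - exp(-rate * t)."""
    return 1.0 - math.exp(-annual_rate * years)

# Assumed annual exceedance probability of 2.4e-4 over 50 years:
p50 = prob_exceedance(2.4e-4, 50)  # roughly 1.2%
```

This is the standard hazard-curve conversion; the paper's actual probability computation may differ.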

  1. Cross multivariate correlation coefficients as screening tool for analysis of concurrent EEG-fMRI recordings.

    PubMed

    Ji, Hong; Petro, Nathan M; Chen, Badong; Yuan, Zejian; Wang, Jianji; Zheng, Nanning; Keil, Andreas

    2018-02-06

    Over the past decade, the simultaneous recording of electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) data has garnered growing interest because it may provide an avenue towards combining the strengths of both imaging modalities. Given their pronounced differences in temporal and spatial statistics, the combination of EEG and fMRI data is however methodologically challenging. Here, we propose a novel screening approach that relies on a Cross Multivariate Correlation Coefficient (xMCC) framework. This approach accomplishes three tasks: (1) it provides a measure for testing multivariate correlation and multivariate uncorrelation of the two modalities; (2) it provides a criterion for the selection of EEG features; (3) it performs a screening of relevant EEG information by grouping the EEG channels into clusters to improve efficiency and to reduce computational load when searching for the best predictors of the BOLD signal. The present report applies this approach to a data set with concurrent recordings of steady-state visual evoked potentials (ssVEPs) and fMRI, recorded while observers viewed phase-reversing Gabor patches. We test the hypothesis that fluctuations in visuo-cortical mass potentials systematically covary with BOLD fluctuations not only in visual cortical, but also in anterior temporal and prefrontal areas. Results supported the hypothesis and showed that the xMCC-based analysis provides straightforward identification of neurophysiologically plausible brain regions with EEG-fMRI covariance. Furthermore, xMCC converged with other extant methods for EEG-fMRI analysis. © 2018 The Authors Journal of Neuroscience Research Published by Wiley Periodicals, Inc.

  2. Mixed-Mode Decohesion Finite Elements for the Simulation of Delamination in Composite Materials

    NASA Technical Reports Server (NTRS)

    Camanho, Pedro P.; Davila, Carlos G.

    2002-01-01

    A new decohesion element with mixed-mode capability is proposed and demonstrated. The element is used at the interface between solid finite elements to model the initiation and non-self-similar growth of delaminations. A single relative displacement-based damage parameter is applied in a softening law to track the damage state of the interface and to prevent the restoration of the cohesive state during unloading. The softening law for mixed-mode delamination propagation can be applied to any mode interaction criterion such as the two-parameter power law or the three-parameter Benzeggagh-Kenane criterion. To demonstrate the accuracy of the predictions and the irreversibility capability of the constitutive law, steady-state delamination growth is simulated for quasistatic loading-unloading cycles of various single mode and mixed-mode delamination test specimens.
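The three-parameter Benzeggagh-Kenane criterion named above interpolates the critical energy release rate between the pure-mode toughnesses as a power of the mode-mixity ratio; a minimal sketch:

```python
def bk_critical_energy(G_I, G_II, G_Ic, G_IIc, eta):
    """Benzeggagh-Kenane mixed-mode criterion: the critical energy release
    rate Gc = G_Ic + (G_IIc - G_Ic) * (G_II / G_T)**eta, where
    G_T = G_I + G_II is the total energy release rate."""
    G_T = G_I + G_II
    mixity = G_II / G_T if G_T > 0.0 else 0.0
    return G_Ic + (G_IIc - G_Ic) * mixity ** eta

def propagates(G_I, G_II, G_Ic, G_IIc, eta):
    """Delamination grows when the total energy release rate reaches Gc."""
    return (G_I + G_II) >= bk_critical_energy(G_I, G_II, G_Ic, G_IIc, eta)
```

Pure mode I (G_II = 0) recovers G_Ic and pure mode II recovers G_IIc, which is the consistency property that makes the criterion attractive inside a softening law.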

  3. LC-PROM: Validation of a patient reported outcomes measure for liver cirrhosis patients.

    PubMed

    Zhang, Ying; Yang, Yuanyuan; Lv, Jing; Zhang, Yanbo

    2016-05-10

    The aim of the study is to develop a specific patient-reported scale of liver cirrhosis according to the Patient Reported Outcome guidelines of the Food and Drug Administration (FDA), and to examine its capacity to fill gaps in this field. A conceptual framework was developed and a preliminary item pool developed through literature review and interviews of 10 patients with liver cirrhosis. With the preliminary items, we performed a pilot survey that included a cognitive test with patients and interviews with experts; the focus was on content and language of the scale. In the item selection stage, seven statistical methods (the discrete trends method, discrimination analysis, exploratory factor analysis, Cronbach's α coefficient, correlation coefficient, test-retest reliability, and Item Response Theory) were applied to survey data from 200 subjects (150 liver cirrhosis patients and 50 controls). This produced the preliminary Liver Cirrhosis Patient-reported Outcome Measure (LC-PROM). In the next stage, we conducted the survey with 620 subjects (500 patients and 120 controls) to validate reliability, validity and acceptability of this scale. The 55 items and 13 dimensions addressed four domains: physical, psychological, social, and therapeutic. Cronbach's α coefficient was 0.921 for the total scale; the confirmatory factor analysis, t-tests and ANOVA supported scale validity; model fit indices such as the Root Mean Square Error of Approximation (RMSEA), Root Mean Square Residual (RMR), Normed Fit Index (NFI), Non-Normed Fit Index (NNFI), Comparative Fit Index (CFI) and Incremental Fit Index (IFI) generally met the criteria. The acceptance ratio and response rate indicated good feasibility. 
This study developed an accurate and stable patient-reported outcome scale of liver cirrhosis, which is able to evaluate clinical effects effectively, is helpful to patients in recognizing their health condition, and contributes to clinical decision making both for patients and physicians. Additionally, the LC-PROM can perform as an ultimate assessment of medical and health care effects and can inform clinical trials of new drugs for liver cirrhosis.

  4. Failure prediction during backward flow forming of Ti6Al4V alloy

    NASA Astrophysics Data System (ADS)

    Singh, Abhishek Kumar; Narasimhan, K.; Singh, Ramesh

    2018-05-01

    Flow forming is a tube spinning process in which the thickness of a tube is reduced with the help of spinning roller/s while keeping the internal diameter unchanged. A 3-D finite element model of the flow-formability test has been developed using Abaqus/Explicit software. A coupled damage criterion based on continuum damage mechanics (CDM) has been studied in this research. The damage model is introduced through a FORTRAN-based VUMAT subroutine developed around a stress integration algorithm. Further, the effect of reduction angle, friction coefficient, and coolant heat transfer coefficient on fracture has been studied. The results show that formability improves with increasing reduction angle. Both the equivalent plastic strain and the damage variable increase from the inner to the outer surface of the flow-formed tube.
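The coupled-CDM idea can be illustrated with the standard effective-stress relation σ_eff = σ/(1 − D). The accumulation rule below is a deliberately simple linear law for illustration, not the paper's VUMAT implementation:

```python
def effective_stress(nominal_stress, damage):
    """CDM effective-stress concept: the stress carried by the remaining
    undamaged ligament, sigma_eff = sigma / (1 - D), with D in [0, 1)."""
    if not 0.0 <= damage < 1.0:
        raise ValueError("damage must lie in [0, 1)")
    return nominal_stress / (1.0 - damage)

def update_damage(damage, d_eq_plastic_strain, strain_at_failure):
    """Illustrative linear accumulation: damage grows with each equivalent
    plastic strain increment and saturates at 1 (fully failed material)."""
    return min(damage + d_eq_plastic_strain / strain_at_failure, 1.0)
```

In a coupled model such as the one studied, the growing D degrades the stress response at each integration point, which is why damage and equivalent plastic strain rise together toward the tube's outer surface.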

  5. Digital Materials - Evaluation of the Possibilities of using Selected Hyperelastic Models to Describe Constitutive Relations

    NASA Astrophysics Data System (ADS)

    Mańkowski, J.; Lipnicki, J.

    2017-08-01

    The authors tried to identify the parameters of numerical models of digital materials, which are a kind of composite resulting from manufacturing the product in a 3D printer. With an arrangement of several printer heads, the new material can result from mixing materials with radically different properties during the production of a single layer of the product. The new material has properties dependent on the base materials' properties and their proportions. The tensile characteristics of digital materials are often non-linear and qualify to be described by hyperelastic material models. The identification was conducted based on the results of tensile tests, fitting the coefficients of polynomials of various degrees. Drucker's stability criterion was also examined. Fourteen different materials were analyzed.

  6. The Italian version of the Inventory of Interpersonal Problems Personality Disorders Scales (IIP-47): psychometric properties and clinical usefulness as a screening measure.

    PubMed

    Ubbiali, Alessandro; Chiorri, Carlo; Donati, Deborah

    2011-08-01

    The Inventory of Interpersonal Problems-47 (IIP-47) is a brief and valid self-report measure for screening Personality Disorders (PDs). This study examined internal consistency, factor structure, criterion validity, temporal stability, and operating characteristics of the Italian version of the IIP-47 in two independent samples: PD subjects (n = 120) and nonclinical subjects (n = 475). Alpha coefficients ranged from .70 to .90. Multiple-Group Confirmatory Factor Analyses showed that the five-correlated-factor model reported in the literature had the highest measurement invariance across the two groups. Criterion validity was supported by correlations among IIP-47 scale scores and scores on established measures of personality dimensions and pathology. Test-retest indices ranged from .71 to .95. PD subjects scored significantly higher than nonclinical subjects on all IIP-47 scales, and cut-off scores for different levels of specificity and sensitivity are reported. It is concluded that the psychometric properties of the original IIP-47 were preserved in its Italian version.

  7. Validation of a Portuguese version of the Information Needs in Cardiac Rehabilitation (INCR) scale in Brazil.

    PubMed

    Ghisi, Gabriela Lima de Melo; Dos Santos, Rafaella Zulianello; Bonin, Christiani Batista Decker; Roussenq, Suellen; Grace, Sherry L; Oh, Paul; Benetti, Magnus

    2014-01-01

    To translate, culturally adapt and psychometrically validate the Information Needs in Cardiac Rehabilitation (INCR) tool into Portuguese. The identification of information needs is considered the first step to improve knowledge that ultimately could improve health outcomes. The Portuguese version generated was tested in 300 cardiac rehabilitation (CR) patients (34% women; mean age = 61.3 ± 2.1 years). Test-retest reliability was assessed using the intraclass correlation coefficient (ICC), internal consistency using Cronbach's alpha, and criterion validity with regard to patients' education and duration in CR. All 9 subscales were considered internally consistent (α > 0.7). Significant differences between mean total needs and educational level (p < 0.05) and duration in CR (p = 0.03) supported criterion validity. The overall mean (4.6 ± 0.4), as well as the means of the 9 subscales, were high (emergency/safety was the greatest need). The Portuguese INCR was demonstrated to have sufficient reliability, consistency and validity. Copyright © 2014 Elsevier Inc. All rights reserved.
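Cronbach's alpha, used here (and in several of the surrounding records) for internal consistency, has a simple closed form: α = k/(k−1) · (1 − Σ item variances / variance of the total score). A minimal sketch:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Perfectly correlated items give alpha = 1; the > 0.7 threshold quoted
# in the abstract is the conventional cutoff for acceptable consistency.
```

The 0.7 cutoff is a convention, not a theorem; very high values (e.g. > 0.95) can also signal item redundancy.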

  8. Measuring quality of life in low-income, Spanish-speaking Puerto Ricans with type 2 diabetes residing in the mainland U.S.

    PubMed

    Lemon, Stephenie C; Rosal, Milagros C; Welch, Garry

    2011-11-01

    This study assessed the psychometric properties of the Audit of Diabetes-Dependent Quality of Life (ADDQoL) modified for low-income, low-education, Spanish-speaking Puerto Ricans with type 2 diabetes residing in the northeastern United States. Cross-sectional data from 226 patients were analyzed. Scale modifications included simplification of instructions, question wording and response format, and oral administration. Reliability was assessed with Cronbach's alpha coefficient and internal structure by exploratory factor analysis. Criterion validity was assessed using correlation analysis and linear and logistic regression models assessing the association of the ADDQoL with standardized physical health status, mental health status, depression, and comorbidity indices. Two ADDQoL items were dropped. The modified scale had excellent internal consistency and supported the original scale factor structure. Criterion validity results supported the validity of this measure. The modified ADDQoL showed psychometric properties that support its use in low-income, Spanish-speaking Puerto Ricans with type 2 diabetes who reside in mainland U.S.

  9. Identification and analysis of damaged or porous hair.

    PubMed

    Hill, Virginia; Loni, Elvan; Cairns, Thomas; Sommer, Jonathan; Schaffer, Michael

    2014-06-01

    Cosmetic hair treatments have been referred to as 'the pitfall' of hair analysis. However, most cosmetic treatments, when applied to the hair as instructed by the product vendors, do not interfere with analysis, provided such treatments can be identified by the laboratory and the samples analyzed and reported appropriately for the condition of the hair. This paper provides methods for identifying damaged or porous hair samples using digestion rates of hair in dithiothreitol with and without proteinase K, as well as a protein measurement method applied to dithiothreitol-digested samples. Extremely damaged samples may be unsuitable for analysis. Aggressive and extended aqueous washing of hair samples is a proven method for removing or identifying externally derived drug contamination of hair. In addition to this wash procedure, we have developed an alternative wash procedure using 90% ethanol for washing damaged or porous hair. The procedure, like the aqueous wash procedure, requires analysis of the last of five washes to evaluate the effectiveness of the washing procedure. This evaluation, termed the Wash Criterion, is derived from studies of the kinetics of washing of hair samples that have been experimentally contaminated and of hair from drug users. To study decontamination methods, in vitro contaminated drug-negative hair samples were washed by both the aqueous buffer method and a 90% ethanol method. Analysis of cocaine and methamphetamine was by liquid chromatography-tandem mass spectrometry (LC/MS/MS). Porous hair samples from drug users, when washed in 90% ethanol, pass the wash criterion although they may fail the aqueous wash criterion. Those samples that fail both the ethanolic and aqueous wash criterion are not reported as positive for ingestion. Similar ratios of the metabolite amphetamine relative to methamphetamine in the last wash and the hair is an additional criterion for assessing contamination vs. ingestion of methamphetamine. 
Copyright © 2014 John Wiley & Sons, Ltd.

  10. DSM-IV and DSM-5 Prevalence of Social Anxiety Disorder in a Population Sample of Older People.

    PubMed

    Karlsson, Björn; Sigström, Robert; Östling, Svante; Waern, Margda; Börjesson-Hanson, Anne; Skoog, Ingmar

    2016-12-01

    To examine the prevalence of social anxiety disorders (SAD) with (DSM-IV) and without (DSM-5) the person's own assessment that the fear was unreasonable, in a population sample of older adults. Further, to determine whether clinical and sociodemographic correlates of SAD differ depending on the criteria applied. Cross-sectional. General population in Gothenburg, Sweden. A random population-based sample of 75- and 85-year olds (N = 1200) without dementia. Psychiatric research nurses carried out a semi-structured psychiatric examination including the Comprehensive Psychopathological Rating Scale. DSM-IV SAD was diagnosed with the Mini International Neuropsychiatric Interview. SAD was diagnosed according to DSM-IV and DSM-5 criteria. The 6-month duration criterion in DSM-5 was not applied because of lack of information. Other assessments included the Global Assessment of Functioning (GAF), the Brief Scale for Anxiety (BSA), and the Montgomery Åsberg Depression Rating Scale (MADRS). The 1-month prevalence of SAD was 2.5% (N = 30) when the unreasonable fear criterion was defined in accordance with DSM-IV and 5.1% (N = 61) when the DSM-5 criterion was applied. Clinical correlates (GAF, MADRS, and BSA) were worse in SAD cases identified by either procedure compared with all others, and ratings for those reporting unreasonable fear suggested greater (albeit nonsignificant) overall psychopathology. Shifting the judgment of how reasonable the fear was, from the individual to the clinician, doubled the prevalence of SAD. This indicates that the DSM-5 version might increase prevalence rates of SAD in the general population. Further studies strictly applying all DSM-5 criteria are needed in order to confirm these findings. Copyright © 2016 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  11. Are Centers for Disease Control and Prevention Guidelines for Preexposure Prophylaxis Specific Enough? Formulation of a Personalized HIV Risk Score for Pre-Exposure Prophylaxis Initiation.

    PubMed

    Beymer, Matthew R; Weiss, Robert E; Sugar, Catherine A; Bourque, Linda B; Gee, Gilbert C; Morisky, Donald E; Shu, Suzanne B; Javanbakht, Marjan; Bolan, Robert K

    2017-01-01

    Preexposure prophylaxis (PrEP) has emerged as a human immunodeficiency virus (HIV) prevention tool for populations at highest risk for HIV infection. Current US Centers for Disease Control and Prevention (CDC) guidelines for identifying PrEP candidates may not be specific enough to identify gay, bisexual, and other men who have sex with men (MSM) at the highest risk for HIV infection. We created an HIV risk score for HIV-negative MSM based on Syndemics Theory to develop a more targeted criterion for assessing PrEP candidacy. Behavioral risk assessment and HIV testing data were analyzed for HIV-negative MSM attending the Los Angeles LGBT Center between January 2009 and June 2014 (n = 9481). Syndemics Theory informed the selection of variables for a multivariable Cox proportional hazards model. Estimated coefficients were summed to create an HIV risk score, and model fit was compared between our model and CDC guidelines using the Akaike Information Criterion and Bayesian Information Criterion. Approximately 51% of MSM were above a cutpoint that we chose as an illustrative risk score to qualify for PrEP, identifying 75% of all seroconverting MSM. Our model demonstrated a better overall fit when compared with the CDC guidelines (Akaike Information Criterion Difference = 68) in addition to identifying a greater proportion of HIV infections. Current CDC PrEP guidelines should be expanded to incorporate substance use, partner-level, and other Syndemic variables that have been shown to contribute to HIV acquisition. Deployment of such personalized algorithms may better hone PrEP criteria and allow providers and their patients to make a more informed decision prior to PrEP use.
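The scoring-and-comparison logic, summing fitted Cox coefficients into a linear risk score and ranking models by information criteria, can be sketched as follows (coefficient names and values are hypothetical, not the study's fitted model):

```python
import math

def hiv_risk_score(covariates, coefficients):
    """Linear predictor from a Cox model: sum of fitted coefficients times
    each subject's covariate indicators (names here are illustrative)."""
    return sum(coefficients[name] * value for name, value in covariates.items())

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: AIC = 2k - 2 ln L; lower is better,
    and a difference of ~68 units (as reported) is decisive."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical subject scored against hypothetical coefficients.
coefs = {"meth_use": 0.8, "condomless_anal": 0.5}
score = hiv_risk_score({"meth_use": 1, "condomless_anal": 1}, coefs)
```

Subjects with scores above a chosen cutpoint would be flagged as PrEP candidates, which is how the 51%/75% figures in the abstract arise.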

  12. Rupture model based on non-associated plasticity

    NASA Astrophysics Data System (ADS)

    Pradeau, Adrien; Yoon, Jeong Whan; Thuillier, Sandrine; Lou, Yanshan; Zhang, Shunying

    2018-05-01

    This research work is about modeling the mechanical behavior of metallic sheets of AA6016 up to rupture using a non-associated flow rule. Experiments were performed at room temperature in uniaxial tension and simple shear in different directions with respect to the rolling direction, plus an additional hydraulic bulge test. The anisotropy of the material is described by a Yld2000-2d yield surface [1], calibrated by stress ratios, and a plastic potential represented by Hill1948 [2], calibrated using Lankford coefficients. That way, the former is able to reproduce the yield stresses in different directions and the latter is able to reproduce the deformations in different directions as well [3], [4]. Indeed, the non-associated flow rule allows the direction of the plastic flow not to be necessarily normal to the yield surface. Concerning rupture, the macroscopic ductile fracture criterion DF2014 was used [5]. It indirectly uses the three invariants of the stress tensor through the three following parameters: the stress triaxiality η, the Lode parameter L, and the equivalent plastic strain to fracture εf. In order to be consistent with the plastic model and to add more flexibility to the criterion, the equivalent stress σ̄ and the equivalent plastic strain to fracture εf have been substituted by Yld2000-2d and Hill1948, respectively, in the DF2014 fracture criterion. The parameters for the fracture criterion were obtained by optimization, and the fracture locus can be plotted in the (η, L, εf) space. The damage indicator D is then numerically predicted with respect to average strain values. A good correlation with the experimental results is obtained.

  13. Validity, responsiveness, and minimal clinically important difference of EQ-5D-5L in stroke patients undergoing rehabilitation.

    PubMed

    Chen, Poyu; Lin, Keh-Chung; Liing, Rong-Jiuan; Wu, Ching-Yi; Chen, Chia-Ling; Chang, Ku-Chou

    2016-06-01

    To examine the criterion validity, responsiveness, and minimal clinically important difference (MCID) of the EuroQoL 5-Dimensions Questionnaire (EQ-5D-5L) and visual analog scale (EQ-VAS) in people receiving rehabilitation after stroke. The EQ-5D-5L, along with four criterion measures (the Medical Research Council scales for muscle strength, the Fugl-Meyer assessment, the functional independence measure, and the Stroke Impact Scale), was administered to 65 patients with stroke before and after 3- to 4-week therapy. Criterion validity was estimated using the Spearman correlation coefficient. Responsiveness was analyzed by the effect size, standardized response mean (SRM), and criterion responsiveness. The MCID was determined by anchor-based and distribution-based approaches. The percentage of patients exceeding the MCID was also reported. Concurrent validity of the EQ-Index was better compared with the EQ-VAS. The EQ-Index has better power for predicting the rehabilitation outcome in the activities of daily living than other motor-related outcome measures. The EQ-Index was moderately responsive to change (SRM = 0.63), whereas the EQ-VAS was only mildly responsive to change. The MCID estimation of the EQ-Index (and the percentage of patients exceeding it) was 0.10 (33.8%) under both the anchor-based and distribution-based approaches, and the estimation for the EQ-VAS was 8.61 (41.5%) and 10.82 (32.3%), respectively. The EQ-Index has shown reasonable concurrent validity, limited predictive validity, and acceptable responsiveness for detecting the health-related quality of life in stroke patients undergoing rehabilitation, but the EQ-VAS has not. Future research considering different recovery stages after stroke is warranted to validate these estimations.
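Distribution-based MCID estimates like those reported are commonly computed as half the baseline standard deviation or as one standard error of measurement (SEM); a minimal sketch of these generic formulas (not necessarily the study's exact computation):

```python
import math

def mcid_half_sd(baseline_sd):
    """Common distribution-based MCID heuristic: 0.5 * baseline SD."""
    return 0.5 * baseline_sd

def standard_error_of_measurement(baseline_sd, reliability):
    """SEM = SD * sqrt(1 - reliability); one SEM is another widely used
    distribution-based MCID estimate."""
    return baseline_sd * math.sqrt(1.0 - reliability)
```

Anchor-based approaches instead tie the change score to an external criterion (e.g. a global rating of improvement), which is why the abstract reports both estimates side by side.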

  14. Translating and validating a Training Needs Assessment tool into Greek

    PubMed Central

    Markaki, Adelais; Antonakis, Nikos; Hicks, Carolyn M; Lionis, Christos

    2007-01-01

    Background The translation and cultural adaptation of widely accepted, psychometrically tested tools is regarded as an essential component of effective human resource management in the primary care arena. The Training Needs Assessment (TNA) is a widely used, valid instrument, designed to measure professional development needs of health care professionals, especially in primary health care. This study aims to describe the translation, adaptation and validation of the TNA questionnaire into the Greek language and discuss possibilities of its use in primary care settings. Methods A modified version of the English self-administered questionnaire consisting of 30 items was used. Internationally recommended methodology, mandating forward translation, backward translation, reconciliation and pretesting steps, was followed. Tool validation included assessing item internal consistency, using Cronbach's alpha coefficient. Reproducibility (test-retest reliability) was measured by the kappa correlation coefficient. Criterion validity was calculated for selected parts of the questionnaire by correlating respondents' research experience with relevant research item scores. An exploratory factor analysis highlighted how the items group together, using a Varimax (orthogonal) rotation and subsequent Cronbach's alpha assessment. Results The psychometric properties of the Greek version of the TNA questionnaire for nursing staff employed in primary care were good. Internal consistency of the instrument was very good, Cronbach's alpha was found to be 0.985 (p < 0.001) and the kappa coefficient for reproducibility was found to be 0.928 (p < 0.0001). Significant positive correlations were found between respondents' current performance levels on each of the research items and amount of research involvement, indicating good criterion validity in the areas tested. 
Factor analysis revealed seven factors with eigenvalues of > 1.0, KMO (Kaiser-Meyer-Olkin) measure of sampling adequacy = 0.680 and Bartlett's test of sphericity, p < 0.001. Conclusion The translated and adapted Greek version is comparable with the original English instrument in terms of validity and reliability and it is suitable to assess professional development needs of nursing staff in Greek primary care settings. PMID:17474989
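Cronbach's alpha, the internal-consistency statistic reported above, can be computed directly from an item-response matrix; the data below are hypothetical and only illustrate the formula:

```python
# Hypothetical item-response matrix: rows = respondents, columns = items
scores = [
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
]

def variance(xs):
    # Sample variance (n - 1 denominator)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(scores[0])                                   # number of items
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])   # variance of total scores

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```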

  15. Nucleation-controlled microstructures and anomalous eutectic formation in undercooled Co-Sn and Ni-Si eutectic melts

    NASA Astrophysics Data System (ADS)

    Li, Mingjun; Kuribayashi, Kazuhiko

    2003-12-01

    Co-20.5 at. pct Sn and Ni-21.4 at. pct Si eutectic alloys have been levitated and undercooled in an electromagnetic levitator (EML) and then solidified spontaneously at different undercoolings. The original surface and cross-sectional morphologies of these solidified samples consist of separate eutectic colonies regardless of melt undercooling, indicating that microstructures in the free solidification of the eutectic systems are nucleation controlled. Regular lamellae always grow from the periphery of an independent anomalous eutectic grain in each eutectic colony. This typical morphology shows that the basic unit for discussing the solidification behavior should be a single eutectic colony. Special emphasis is placed on anomalous eutectic formation once a significant difference in linear kinetic coefficients is recognized for the terminal eutectic phases, in particular when a eutectic reaction contains a nonfaceted disordered solid solution and a faceted ordered intermetallic compound as the terminal eutectic phases. It is this remarkable difference in the linear kinetic coefficients that leads to a pronounced difference in kinetic undercoolings. The sluggish kinetics of interfacial atomic attachment at the intermetallic compound gives rise to decoupled growth of the two eutectic phases. Hence, the current eutectic models are modified to incorporate kinetic undercooling, in order to account for the competitive growth behavior of eutectic phases in a single eutectic colony. The critical condition for generating the decoupled growth of eutectic phases is proposed. Further analysis reveals that a dimensionless critical undercooling may be appropriate to show the tendency for the anomalous eutectic-forming ability when considering the difference in linear kinetic coefficients of terminal eutectic phases. 
This qualitative criterion, albeit crude with several approximations and assumptions, can elucidate most of the published experimental results with the correct order of magnitude. Solidification modes in some eutectic alloys are predicted on the basis of the present criterion. Directions for future work to reduce probable sources of error and improve the model are briefly indicated.

  16. Criterion and Concurrent Validity of the activPAL™ Professional Physical Activity Monitor in Adolescent Females

    PubMed Central

    Dowd, Kieran P.; Harrington, Deirdre M.; Donnelly, Alan E.

    2012-01-01

    Background The activPAL has been identified as an accurate and reliable measure of sedentary behaviour. However, only limited information is available on the accuracy of the activPAL activity count function as a measure of physical activity, while no unit calibration of the activPAL has been completed to date. This study aimed to investigate the criterion validity of the activPAL, examine the concurrent validity of the activPAL, and perform and validate a value calibration of the activPAL in an adolescent female population. The performance of the activPAL in estimating posture was also compared with sedentary thresholds used with the ActiGraph accelerometer. Methodologies Thirty adolescent females (15 developmental; 15 cross-validation) aged 15–18 years performed 5 activities while wearing the activPAL, ActiGraph GT3X, and the Cosmed K4B2. A random coefficient statistics model examined the relationship between metabolic equivalent (MET) values and activPAL counts. Receiver operating characteristic analysis was used to determine activity thresholds and for cross-validation. The random coefficient statistics model showed a concordance correlation coefficient of 0.93 (standard error of the estimate = 1.13). An optimal moderate threshold of 2997 was determined using mixed regression, while an optimal vigorous threshold of 8229 was determined using receiver operating statistics. The activPAL count function demonstrated very high concurrent validity (r = 0.96, p<0.01) with the ActiGraph count function. Levels of agreement for sitting, standing, and stepping between direct observation and the activPAL and ActiGraph were 100%, 98.1%, 99.2% and 100%, 0%, 100%, respectively. Conclusions These findings suggest that the activPAL is a valid, objective measurement tool that can be used for both the measurement of physical activity and sedentary behaviours in an adolescent female population. PMID:23094069
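Threshold selection from receiver operating characteristic analysis, as used above to derive the vigorous cut-point, is often done by maximizing the Youden index. This is a minimal sketch with hypothetical count/label pairs, not the study's data:

```python
# Hypothetical (activity_count, is_vigorous) pairs; in a calibration study the
# labels would come from measured MET values (e.g. >= 6 METs is vigorous)
samples = [(1200, 0), (2500, 0), (4000, 0), (6100, 0),
           (5600, 1), (7500, 1), (8400, 1), (9900, 1), (12000, 1)]

def youden(threshold):
    # Sensitivity + specificity - 1 for the rule "count >= threshold => vigorous"
    tp = sum(1 for c, y in samples if y == 1 and c >= threshold)
    fn = sum(1 for c, y in samples if y == 1 and c < threshold)
    tn = sum(1 for c, y in samples if y == 0 and c < threshold)
    fp = sum(1 for c, y in samples if y == 0 and c >= threshold)
    return tp / (tp + fn) + tn / (tn + fp) - 1

# Scan the observed counts as candidate cut-points and keep the best one
candidates = sorted(c for c, _ in samples)
best = max(candidates, key=youden)
```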

  17. Development and validation of the coronary heart disease scale under the system of quality of life instruments for chronic diseases QLICD-CHD: combinations of classical test theory and Generalizability Theory.

    PubMed

    Wan, Chonghua; Li, Hezhan; Fan, Xuejin; Yang, Ruixue; Pan, Jiahua; Chen, Wenru; Zhao, Rong

    2014-06-04

    Quality of life (QOL) for patients with coronary heart disease (CHD) has attracted worldwide attention, yet disease-specific instruments are scarce and none has been developed by the modular approach. This paper aims to develop the CHD scale of the system of Quality of Life Instruments for Chronic Diseases (QLICD-CHD) by the modular approach and to validate it by both classical test theory and Generalizability Theory. The QLICD-CHD was developed based on programmed decision procedures with multiple nominal and focus group discussions, in-depth interviews, pre-testing and quantitative statistical procedures. 146 inpatients with CHD provided the data, with QOL measured three times before and after treatment. The psychometric properties of the scale were evaluated with respect to validity, reliability and responsiveness employing correlation analysis, factor analyses, multi-trait scaling analysis, t-tests and also G studies and D studies of Generalizability Theory analysis. Multi-trait scaling analysis, correlation and factor analyses confirmed good construct validity and criterion-related validity when using the SF-36 as a criterion. The internal consistency α and test-retest reliability coefficients (Pearson r and intra-class correlations, ICC) for the overall instrument and all domains were higher than 0.70 and 0.80, respectively. The overall instrument and all domains except the social domain showed statistically significant changes after treatment, with moderate effect sizes (standardized response mean, SRM) ranging from 0.32 to 0.67. G-coefficients and indices of dependability (Ф coefficients) further confirmed the reliability of the scale with more exact variance components. The QLICD-CHD has good validity and reliability and moderate responsiveness, and can be used as a quality of life instrument for patients with CHD. 
However, in order to obtain better reliability, the numbers of items for social domain should be increased or the items' quality, not quantity, should be improved.
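Test-retest reliability via the Pearson coefficient, one of the statistics reported above, reduces to a short computation; the scores below are hypothetical:

```python
import math

# Hypothetical test-retest domain scores for six patients
t1 = [62, 70, 55, 80, 66, 73]
t2 = [60, 72, 58, 78, 64, 75]

n = len(t1)
m1, m2 = sum(t1) / n, sum(t2) / n
cov = sum((a - m1) * (b - m2) for a, b in zip(t1, t2))
sd1 = math.sqrt(sum((a - m1) ** 2 for a in t1))
sd2 = math.sqrt(sum((b - m2) ** 2 for b in t2))
r = cov / (sd1 * sd2)  # Pearson test-retest reliability coefficient
```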

  18. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    PubMed Central

    Xu, Jing; Wang, Zhongbin; Tan, Chao; Si, Lei; Liu, Xinhua

    2015-01-01

    In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and Probabilistic Neural Network (PNN) is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion to overcome the disadvantages of giant size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and get rid of undesirable intrinsic mode function (IMF) components in the initial signal, IEEMD is conducted on the sound. The end-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next the average correlation coefficient, which is calculated by the correlation of the first IMF with others, is introduced to select essential IMFs. Then the energy and standard deviation of the reminder IMFs are extracted as features and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method. PMID:26528985
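The IMF-selection step described above (keeping IMFs that correlate strongly with the first IMF) can be sketched as follows, using synthetic signals in place of real decomposition output:

```python
import math

def corr(x, y):
    # Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Synthetic stand-ins for IMFs of a cutting-sound signal; imfs[0] plays the
# role of the first (highest-frequency) IMF from the decomposition
t = [i / 50 for i in range(100)]
imfs = [
    [math.sin(40 * x) for x in t],
    [0.8 * math.sin(40 * x) + 0.1 * math.cos(3 * x) for x in t],
    [math.cos(3 * x) for x in t],
]

# Keep the IMFs whose |correlation| with the first IMF reaches the average
rs = [abs(corr(imfs[0], imf)) for imf in imfs[1:]]
threshold = sum(rs) / len(rs)
essential = [i + 1 for i, r in enumerate(rs) if r >= threshold]
```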

  19. Evaluation of the relevance of the glassy state as stability criterion for freeze-dried bacteria by application of the Arrhenius and WLF model.

    PubMed

    Aschenbrenner, Mathias; Kulozik, Ulrich; Foerst, Petra

    2012-12-01

    The aim of this work was to describe the temperature dependence of microbial inactivation for several storage conditions and protective systems (lactose, trehalose and dextran) in relation to the physical state of the sample, i.e. the glassy or non-glassy state. The resulting inactivation rates k were described by applying two models, Arrhenius and Williams-Landel-Ferry (WLF), in order to evaluate the relevance of diffusional limitation as a protective mechanism. The application of the Arrhenius model revealed a significant decrease in activation energy Ea for storage conditions close to Tg. This finding is an indication that the protective effect of a surrounding glassy matrix can at least partly be ascribed to its inherent restricted diffusion and mobility. The application of the WLF model revealed that the temperature dependence of microbial inactivation above Tg is significantly weaker than predicted by the universal coefficients. Thus, it can be concluded that microbial inactivation is not directly linked with the mechanical relaxation behavior of the surrounding matrix, as was reported for viscosity and crystallization phenomena in the case of disaccharide systems. Copyright © 2012. Published by Elsevier Inc.
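The Arrhenius model used above has the familiar form k = A·exp(−Ea/(R·T)); a minimal sketch with hypothetical parameter values shows how strongly rates depend on temperature when Ea is large:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(A, Ea, T):
    # Arrhenius form k = A * exp(-Ea / (R * T)); A and Ea here are hypothetical
    return A * math.exp(-Ea / (R * T))

k_25 = arrhenius_rate(A=1e10, Ea=80_000.0, T=298.15)  # 25 degrees C
k_40 = arrhenius_rate(A=1e10, Ea=80_000.0, T=313.15)  # 40 degrees C
ratio = k_40 / k_25  # rates rise steeply with temperature for large Ea
```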

  20. A Viscoelastic earthquake simulator with application to the San Francisco Bay region

    USGS Publications Warehouse

    Pollitz, Fred F.

    2009-01-01

    Earthquake simulation on synthetic fault networks carries great potential for characterizing the statistical patterns of earthquake occurrence. I present an earthquake simulator based on elastic dislocation theory. It accounts for the effects of interseismic tectonic loading, static stress steps at the time of earthquakes, and postearthquake stress readjustment through viscoelastic relaxation of the lower crust and mantle. Earthquake rupture initiation and termination are determined with a Coulomb failure stress criterion and the static cascade model. The simulator is applied to interacting multifault systems: one, a synthetic two-fault network, and the other, a fault network representative of the San Francisco Bay region. The faults are discretized both along strike and along dip and can accommodate both strike slip and dip slip. Stress and seismicity functions are evaluated over 30,000 yr trial time periods, resulting in a detailed statistical characterization of the fault systems. Seismicity functions such as the coefficient of variation and a- and b-values exhibit systematic patterns with respect to simple model parameters. This suggests that reliable estimation of the controlling parameters of an earthquake simulator is a prerequisite to the interpretation of its output in terms of seismic hazard.
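The coefficient of variation of recurrence intervals, one of the seismicity functions mentioned above, is simple to compute; the event times below are hypothetical:

```python
import math

# Hypothetical occurrence times (years) of earthquakes on one simulated fault
events = [0.0, 210.0, 390.0, 640.0, 820.0, 1070.0]

intervals = [b - a for a, b in zip(events, events[1:])]
mean_t = sum(intervals) / len(intervals)
sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in intervals) / len(intervals))

# Coefficient of variation of recurrence intervals: ~1 for a Poisson process,
# < 1 for quasi-periodic behaviour, > 1 for clustered behaviour
cov = sd_t / mean_t
```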

  1. Molecular classification of pesticides including persistent organic pollutants, phenylurea and sulphonylurea herbicides.

    PubMed

    Torrens, Francisco; Castellano, Gloria

    2014-06-05

    Pesticide residues in wine were analyzed by liquid chromatography-tandem mass spectrometry. Retentions are modelled by structure-property relationships. Bioplastic evolution is an evolutionary perspective conjugating the effect of acquired characters and the evolutionary indeterminacy-morphological determination-natural selection principles; its application to design a co-ordination index barely improves correlations. Fractal dimensions and partition coefficient differentiate pesticides. Classification algorithms are based on information entropy and its production. Pesticides allow a structural classification by nonplanarity, and number of O, S, N and Cl atoms and cycles; different behaviours depend on the number of cycles. The novelty of the approach is that the structural parameters are related to retentions. When applying the procedures to moderate-sized sets, excessive results appear, compatible with data suffering a combinatorial explosion. However, the equipartition conjecture selects the criterion resulting from classification between hierarchical trees. Information entropy permits classifying compounds in agreement with principal component analyses. Periodic classification shows that pesticides in the same group present similar properties; those also in the same period, maximum resemblance. The advantage of the classification is to predict the retentions of molecules not included in the categorization. The classification extends to phenyl/sulphonylureas, and the application will be to predict their retentions.
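The information entropy underlying the classification algorithms above is the Shannon entropy of a discrete distribution; a minimal sketch with hypothetical probabilities:

```python
import math

# Shannon entropy of a discrete probability distribution, the quantity
# underlying the information-entropy classification (values hypothetical)
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

h_uniform = entropy([0.25, 0.25, 0.25, 0.25])  # maximal for four categories
h_skewed = entropy([0.7, 0.1, 0.1, 0.1])       # lower: less uncertainty
```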

  2. A cubic map chaos criterion theorem with applications in generalized synchronization based pseudorandom number generator and image encryption.

    PubMed

    Yang, Xiuping; Min, Lequan; Wang, Xue

    2015-05-01

    This paper sets up a chaos criterion theorem for a kind of cubic polynomial discrete map. Using this theorem, Zhou-Song's chaos criterion theorem on quadratic polynomial discrete maps and a generalized synchronization (GS) theorem, an eight-dimensional chaotic GS system is constructed. Numerical simulations have been carried out to verify the effectiveness of the theoretical results. The chaotic GS system is used to design a chaos-based pseudorandom number generator (CPRNG). Using the FIPS 140-2 test suite/Generalized FIPS 140-2 test suite, the randomness of two sets of 1000 key streams, each consisting of 20 000 bits generated by the CPRNG, is tested. The results show that 99.9%/98.5% of the key streams passed the FIPS 140-2 test suite/Generalized FIPS 140-2 test. Numerical simulations show that different key streams share on average 50.001% of their codes. The key space of the CPRNG is larger than 2^1345. As an application of the CPRNG, this study gives an image encryption example. Experimental results show that the linear coefficients between the plaintext and the ciphertext and the decrypted ciphertexts via the 100 key streams with perturbed keys are less than 0.00428. The result suggests that the texts decrypted via key streams generated from perturbed keys of the CPRNG are almost completely independent of the original image text, and brute-force attacks are needed to break the cryptographic system.
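The monobit test from the FIPS 140-2 suite cited above illustrates the kind of randomness check applied to each 20 000-bit keystream; the stand-in generator below replaces the CPRNG purely to keep the sketch self-contained:

```python
import random

# Stand-in bit generator; a real CPRNG keystream would replace this
random.seed(1)
bits = [random.randint(0, 1) for _ in range(20000)]

ones = sum(bits)
# FIPS 140-2 monobit criterion for a 20,000-bit stream:
# the number of ones must satisfy 9725 < ones < 10275
monobit_pass = 9725 < ones < 10275
```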

  3. A cubic map chaos criterion theorem with applications in generalized synchronization based pseudorandom number generator and image encryption

    NASA Astrophysics Data System (ADS)

    Yang, Xiuping; Min, Lequan; Wang, Xue

    2015-05-01

    This paper sets up a chaos criterion theorem for a kind of cubic polynomial discrete map. Using this theorem, Zhou-Song's chaos criterion theorem on quadratic polynomial discrete maps and a generalized synchronization (GS) theorem, an eight-dimensional chaotic GS system is constructed. Numerical simulations have been carried out to verify the effectiveness of the theoretical results. The chaotic GS system is used to design a chaos-based pseudorandom number generator (CPRNG). Using the FIPS 140-2 test suite/Generalized FIPS 140-2 test suite, the randomness of two sets of 1000 key streams, each consisting of 20 000 bits generated by the CPRNG, is tested. The results show that 99.9%/98.5% of the key streams passed the FIPS 140-2 test suite/Generalized FIPS 140-2 test. Numerical simulations show that different key streams share on average 50.001% of their codes. The key space of the CPRNG is larger than 2^1345. As an application of the CPRNG, this study gives an image encryption example. Experimental results show that the linear coefficients between the plaintext and the ciphertext and the decrypted ciphertexts via the 100 key streams with perturbed keys are less than 0.00428. The result suggests that the texts decrypted via key streams generated from perturbed keys of the CPRNG are almost completely independent of the original image text, and brute-force attacks are needed to break the cryptographic system.

  4. Forming limit strains for non-linear strain path of AA6014 aluminium sheet deformed at room temperature

    NASA Astrophysics Data System (ADS)

    Bressan, José Divo; Liewald, Mathias; Drotleff, Klaus

    2017-10-01

    Forming limit strain curves of conventional aluminium alloy AA6014 sheets after loading with non-linear strain paths are presented and compared with the D-Bressan macroscopic model of sheet metal rupture by a critical shear stress criterion. AA6014 exhibits good formability at room temperature and, thus, is mainly employed in car body external parts manufactured at room temperature. Following Weber et al., experimental bi-linear strain paths were carried out on specimens of 1 mm thickness by pre-stretching in uniaxial and biaxial directions up to 5%, 10% and 20% strain levels before performing Nakajima testing experiments to obtain the forming limit strain curves, FLCs. In addition, FLCs of AA6014 were predicted by employing the D-Bressan critical shear stress criterion for bi-linear strain paths, and comparisons with the experimental FLCs were analyzed and discussed. In order to obtain the material coefficients of plastic anisotropy and of strain and strain rate hardening behavior, and to calibrate the D-Bressan model, tensile tests at two different strain rates on specimens cut at 0°, 45° and 90° to the rolling direction, and also bulge tests, were carried out at room temperature. The correlation of the experimental bi-linear strain path FLCs with the limit strains predicted by the D-Bressan model is reasonably good, assuming equivalent pre-strain calculated by the Hill 1979 yield criterion.

  5. Reliability and criterion-related validity of a new repeated agility test

    PubMed Central

    Makni, E; Jemni, M; Elloumi, M; Chamari, K; Nabli, MA; Padulo, J; Moalla, W

    2016-01-01

    The study aimed to assess the reliability and the criterion-related validity of a new repeated sprint T-test (RSTT) that includes intense multidirectional intermittent efforts. The RSTT consisted of 7 maximal repeated executions of the agility T-test with 25 s of passive recovery rest in between. Forty-five team sports players performed two RSTTs separated by 3 days to assess the reliability of best time (BT) and total time (TT) of the RSTT. The intra-class correlation coefficient analysis revealed a high relative reliability between test and retest for BT and TT (>0.90). The standard error of measurement (<0.50) showed that the RSTT has a good absolute reliability. The minimal detectable change values for BT and TT related to the RSTT were 0.09 s and 0.58 s, respectively. To check the criterion-related validity of the RSTT, players performed a repeated linear sprint (RLS) and a repeated sprint with changes of direction (RSCD). Significant correlations between the BT and TT of the RLS, RSCD and RSTT were observed (p<0.001). The RSTT is, therefore, a reliable and valid measure of the intermittent repeated sprint agility performance. As this ability is required in all team sports, it is suggested that team sports coaches, fitness coaches and sports scientists consider this test in their training follow-up. PMID:27274109
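The standard error of measurement and minimal detectable change reported above are linked by MDC95 = SEM × 1.96 × √2; a minimal sketch with hypothetical test-retest times:

```python
import math

# Hypothetical test-retest total times (seconds) for five players
test1 = [78.2, 80.1, 76.5, 82.3, 79.0]
test2 = [78.5, 79.8, 76.9, 82.0, 79.4]

n = len(test1)
diffs = [b - a for a, b in zip(test1, test2)]
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))

# Standard error of measurement from the SD of the test-retest differences,
# and the minimal detectable change at the 95% level
sem = sd_d / math.sqrt(2)
mdc95 = sem * 1.96 * math.sqrt(2)
```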

  6. A Generic Nonlinear Aerodynamic Model for Aircraft

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2014-01-01

    A generic model of the aerodynamic coefficients was developed using wind tunnel databases for eight different aircraft and multivariate orthogonal functions. For each database and each coefficient, models were determined using polynomials expanded about the state and control variables, and an orthogonalization procedure. A predicted squared-error criterion was used to automatically select the model terms. Modeling terms picked in at least half of the analyses, totalling 45 terms, were retained to form the generic nonlinear aerodynamic (GNA) model. Least squares was then used to estimate the model parameters and associated uncertainty that best fit the GNA model to each database. Nonlinear flight simulations were used to demonstrate that the GNA model produces accurate trim solutions, local behavior (modal frequencies and damping ratios), and global dynamic behavior (91% accurate state histories and 80% accurate aerodynamic coefficient histories) under large-amplitude excitation. This compact aerodynamics model can be used to decrease on-board memory storage requirements, quickly change conceptual aircraft models, provide smooth analytical functions for control and optimization applications, and facilitate real-time parametric system identification.
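The least-squares parameter estimation step described above can be sketched by solving the normal equations for a small polynomial model; the data are synthetic, generated from a known quadratic so the recovered parameters are predictable:

```python
def lstsq(A, y):
    # Solve the normal equations (A^T A) p = A^T y by Gauss-Jordan elimination
    k = len(A[0])
    M = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(k)]
         for i in range(k)]
    v = [sum(A[r][i] * y[r] for r in range(len(A))) for i in range(k)]
    for i in range(k):
        piv = M[i][i]
        M[i] = [m / piv for m in M[i]]
        v[i] /= piv
        for r in range(k):
            if r != i:
                f = M[r][i]
                M[r] = [a - f * b for a, b in zip(M[r], M[i])]
                v[r] -= f * v[i]
    return v

# Synthetic "wind tunnel" data from a known quadratic, so the fit should
# recover the parameters (0.02, -0.15, 0.9)
xs = [-0.2, -0.1, 0.0, 0.1, 0.2, 0.3]
ys = [0.02 - 0.15 * x + 0.9 * x * x for x in xs]

A = [[1.0, x, x * x] for x in xs]  # model terms: 1, x, x^2
p = lstsq(A, ys)
```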

  7. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.

    2001-01-01

    We completed the formulation of the smoothness penalty functional this past quarter. We used a simplified procedure for estimating the statistics of the FCA solution spectral coefficients from the results of the unconstrained, low-truncation FCA (stopping criterion) solutions. During the current reporting period we have completed the calculation of GEOS-2 model-equivalent brightness temperatures for the 6.7 micron and 11 micron window channels used in the GOES imagery for all 10 cases from August 1999. These were simulated using the AER-developed Optimal Spectral Sampling (OSS) model.

  8. Simplified mathematical model of losses in a centrifugal compressor stage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seleznev, K.P.; Galerkin, Yu.B.; Popova, E.Yu.

    1988-05-01

    A mathematical model was developed for optimizing the parameters of the stage which does not require calculation of the flow around grids. The loss coefficients of the stage elements were considered as functions of the flow-through section, the angle of incidence, the compressibility criterion, and the Reynolds number. The relationships were used to calculate losses in all blade components, including blade diffusers, deflectors, and rotors. The model is implemented in a microcomputer and will compute the efficiency of one variant of the flow-through section of a stage in 60 minutes.

  9. A methodology based on reduced complexity algorithm for system applications using microprocessors

    NASA Technical Reports Server (NTRS)

    Yan, T. Y.; Yao, K.

    1988-01-01

    The paper considers a methodology on the analysis and design of a minimum mean-square error criterion linear system incorporating a tapped delay line (TDL) where all the full-precision multiplications in the TDL are constrained to be powers of two. A linear equalizer based on the dispersive and additive noise channel is presented. This microprocessor implementation with optimized power of two TDL coefficients achieves a system performance comparable to the optimum linear equalization with full-precision multiplications for an input data rate of 300 baud.
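The power-of-two constraint on the TDL coefficients described above can be sketched as rounding each coefficient to the nearest signed power of two (nearest in the exponent, i.e. log domain); the coefficient values are hypothetical:

```python
import math

# Hypothetical full-precision TDL coefficients
coeffs = [0.93, -0.27, 0.11, -0.06]

def nearest_power_of_two(c):
    # Round |c| to the nearest power of two in the exponent (log) domain;
    # multiplying by 2**e then reduces to a bit shift in fixed-point hardware
    if c == 0:
        return 0.0
    sign = 1.0 if c > 0 else -1.0
    e = round(math.log2(abs(c)))
    return sign * 2.0 ** e

quantized = [nearest_power_of_two(c) for c in coeffs]
```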

  10. Reasons of Teachers for Applying for Graduate Programs and Their Expectations from Programs

    ERIC Educational Resources Information Center

    Burgaz, Berrin; Kocak, Seval

    2015-01-01

    This study aims to find out teachers' motivation for applying for graduate programs and to explore their expectations from the programs and their ideas regarding the necessity of such programs for teachers. The paper is based on a qualitative research method and draws its data from focus group interviews. The study used the criterion sampling…

  11. English Language Assessment in the Colleges of Applied Sciences in Oman: Thematic Document Analysis

    ERIC Educational Resources Information Center

    Al Hajri, Fatma

    2014-01-01

    Proficiency in English language and how it is measured have become central issues in higher education research as the English language is increasingly used as a medium of instruction and a criterion for admission to education. This study evaluated the English language assessment in the foundation Programme at the Colleges of Applied sciences in…

  12. Competency Test Items for Applied Principles of Agribusiness and Natural Resources Occupations. Ornamental Horticulture Component. A Report of Research.

    ERIC Educational Resources Information Center

    Cheek, Jimmy G.; McGhee, Max B.

    The central purpose of this study was to develop and field test written criterion-referenced tests for the ornamental horticulture component of applied principles of agribusiness and natural resources occupations programs. The test items were to be used by secondary agricultural education students in Florida. Based upon the objectives identified…

  13. Accounting for uncertainty in health economic decision models by using model averaging

    PubMed Central

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-01-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment. PMID:19381329
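Weighting competing models by an information criterion, as discussed above, is commonly done with Akaike weights; the AIC values below are hypothetical:

```python
import math

# Hypothetical AIC values for three competing model structures
aics = {"model_A": 1210.4, "model_B": 1212.1, "model_C": 1218.9}

best = min(aics.values())
# Akaike weights: relative likelihoods exp(-delta/2), normalized to sum to 1
rel = {m: math.exp(-(a - best) / 2.0) for m, a in aics.items()}
total = sum(rel.values())
weights = {m: r / total for m, r in rel.items()}
```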

  14. Vehicle lift-off modelling and a new rollover detection criterion

    NASA Astrophysics Data System (ADS)

    Mashadi, Behrooz; Mostaghimi, Hamid

    2017-05-01

    The modelling and development of a general criterion for the prediction of the rollover threshold is the main purpose of this work. Vehicle dynamics models for conditions after the wheels lift off, when the vehicle moves on two wheels, are derived, and the governing equations are used to develop the rollover threshold. These models include the properties of the suspension and steering systems. In order to study the stability of motion, the steady-state solutions of the equations of motion are obtained. Based on the stability analyses, a new relation is obtained for the rollover threshold in terms of measurable response parameters. The presented criterion predicts the best time for the prevention of vehicle rollover by applying a correcting moment. It is shown that the introduced threshold of vehicle rollover is a proper state of vehicle motion that is best for stabilising the vehicle with a low energy requirement.

  15. Two-component gravitational instability in spiral galaxies

    NASA Astrophysics Data System (ADS)

    Marchuk, A. A.; Sotnikova, N. Y.

    2018-04-01

    We applied a criterion of gravitational instability, valid for two-component and infinitesimally thin discs, to observational data along the major axis for seven spiral galaxies of early types. Unlike most papers, the dispersion equation corresponding to the criterion was solved directly without using any approximation. The velocity dispersion of stars in the radial direction σR was limited by the range of possible values instead of a fixed value. For all galaxies, the outer regions of the disc were analysed up to R ≤ 130 arcsec. The maximal and sub-maximal disc models were used to translate surface brightness into surface density. The largest destabilizing disturbance that stars can exert on a gaseous disc was estimated. It was shown that the two-component criterion differs little from the one-fluid criterion for galaxies with a large surface gas density, but it can explain large-scale star formation in those regions where the gaseous disc is stable. In the galaxy NGC 1167 star formation is entirely driven by the self-gravity of the stars. A comparison is made with the conventional approximations, which also include the thickness effect, and with models for different sound speeds cg. It is shown that values of the effective Toomre parameter correspond to the instability criterion of a two-component disc, Qeff < 1.5-2.5. This result is consistent with previous theoretical and observational studies.
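The effective Toomre parameter discussed above generalizes the one-component stellar criterion Q = σR·κ/(3.36·G·Σ); a minimal sketch with hypothetical disc-region values:

```python
G = 4.301e-3  # gravitational constant in pc * (km/s)^2 / Msun

def toomre_q_star(sigma_R, kappa, surface_density):
    # One-component Toomre parameter for a stellar disc:
    # Q = sigma_R * kappa / (3.36 * G * Sigma)
    return sigma_R * kappa / (3.36 * G * surface_density)

# Hypothetical disc-region values: sigma_R in km/s, kappa in km/s/pc,
# surface density in Msun/pc^2
q = toomre_q_star(sigma_R=30.0, kappa=0.05, surface_density=50.0)
stable = q > 1.0  # Q > 1 means stable against axisymmetric perturbations
```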

  16. Repeated dose 90-day oral toxicity test of G-7% NANA in rats: An application of new criterion for toxicity determination to test article-induced changes.

    PubMed

    Heo, Hye Seon; An, MinJi; Lee, Ji Sun; Kim, Hee Kyong; Park, Yeong-Chul

    2018-06-01

    G-7% NANA is N-acetylneuraminic acid (NANA) containing 7% sialic acid isolated from glycomacropeptide (GMP), a compound of milk. Since NANA is likely to have immunotoxicity, the need to ensure safety for long-term administration has been raised. In this study, a 90-day repeated oral dose toxicity test was performed in rats using G-7% NANA at dosages of 0, 1250, 2500 and 5000 mg/kg/day. A toxicity determination criterion, based on the significant changes caused by administration of the substance, was developed for estimating the NOEL, NOAEL and LOAEL applied to this study. When analyzing the immunological markers, no significant changes were observed, even though other significant changes were observed in the high-dose group. In accordance with the toxicity determination criterion developed, the NOEL in males and females was determined as 2500 mg/kg/day, and the NOAEL in females was determined as 5000 mg/kg/day. The toxicity determination criterion, applied for the first time in repeated dose toxicity tests, could provide a basis for distinguishing the NOEL and NOAEL more clearly; nevertheless, it needs to be supplemented by differentiating adverse effects from non-adverse effects based on more experience with repeated dose toxicity tests. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Genetic parameters for growth characteristics of free-range chickens under univariate random regression models.

    PubMed

    Rovadoscki, Gregori A; Petrini, Juliana; Ramirez-Diaz, Johanna; Pertile, Simone F N; Pertille, Fábio; Salvian, Mayara; Iung, Laiza H S; Rodriguez, Mary Ana P; Zampar, Aline; Gaya, Leila G; Carvalho, Rachel S B; Coelho, Antonio A D; Savino, Vicente J M; Coutinho, Luiz L; Mourão, Gerson B

    2016-09-01

    Repeated measures from the same individual have been analyzed using repeatability and finite-dimension models under univariate or multivariate analyses. In the last decade, however, the use of random regression models for genetic studies with longitudinal data has become more common. Thus, the aim of this research was to estimate genetic parameters for body weight of four experimental chicken lines by using univariate random regression models. Body weight data from hatching to 84 days of age (n = 34,730) from four experimental free-range chicken lines (7P, Caipirão da ESALQ, Caipirinha da ESALQ and Carijó Barbado) were used. The analysis model included the fixed effects of contemporary group (gender and rearing system), fixed regression coefficients for age at measurement, and random regression coefficients for permanent environmental effects and additive genetic effects. Heterogeneous variances for residual effects were considered, with one residual variance assigned to each of six subclasses of age at measurement. Random regression curves were modeled using Legendre polynomials of the second and third orders, with the best model chosen based on the Akaike Information Criterion, Bayesian Information Criterion, and restricted maximum likelihood. Multivariate analyses under the same animal mixed model were also performed to validate the random regression models. Legendre polynomials of second order were better for describing the growth curves of the lines studied. Moderate to high heritabilities (h² = 0.15 to 0.98) were estimated for body weight between one and 84 days of age, suggesting that body weight at any age can be used as a selection criterion. Genetic correlations among body weight records obtained through multivariate analyses ranged from 0.18 to 0.96, 0.12 to 0.89, 0.06 to 0.96, and 0.28 to 0.96 in the 7P, Caipirão da ESALQ, Caipirinha da ESALQ, and Carijó Barbado lines, respectively. Results indicate that genetic gain for body weight can be achieved by selection, and that body weight at 42 days of age can be maintained as a selection criterion. © 2016 Poultry Science Association Inc.
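The Legendre-polynomial basis used in random regression models of this kind can be sketched as follows. This is an illustrative reconstruction, not the authors' code: only the age range (0 to 84 days) and the polynomial order come from the abstract, and the helper name is our own.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(age, age_min=0.0, age_max=84.0, order=2):
    """Design matrix of Legendre polynomials P0..P_order evaluated at
    each age (hypothetical helper for a second-order random regression)."""
    age = np.asarray(age, dtype=float)
    # standardize age to the Legendre domain [-1, 1]
    x = 2.0 * (age - age_min) / (age_max - age_min) - 1.0
    # pseudo-Vandermonde matrix: column j holds P_j(x)
    return legendre.legvander(x, order)
```

Each animal's additive-genetic and permanent-environment growth curves are then linear combinations of these basis columns with animal-specific random coefficients.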

  18. English Cross-Cultural Translation and Validation of the Neuromuscular Score: A System for Motor Function Classification in Patients With Neuromuscular Diseases

    PubMed Central

    Vuillerot, Carole; Meilleur, Katherine G.; Jain, Minal; Waite, Melissa; Wu, Tianxia; Linton, Melody; Datsgir, Jahannaz; Donkervoort, Sandra; Leach, Meganne E.; Rutkowski, Anne; Rippert, Pascal; Payan, Christine; Iwaz, Jean; Hamroun, Dalil; Bérard, Carole; Poirot, Isabelle; Bönnemann, Carsten G.

    2016-01-01

    Objective To develop and validate an English version of the Neuromuscular (NM)-Score, a classification for patients with NM diseases in each of the 3 motor function domains: D1, standing and transfers; D2, axial and proximal motor function; and D3, distal motor function. Design Validation survey. Setting Patients seen at a medical research center between June and September 2013. Participants Consecutive patients (N = 42) aged 5 to 19 years with a confirmed or suspected diagnosis of congenital muscular dystrophy. Interventions Not applicable. Main Outcome Measures An English version of the NM-Score was developed by a 9-person expert panel that assessed its content validity and semantic equivalence. Its concurrent validity was tested against criterion standards (Brooke Scale, Motor Function Measure [MFM], activity limitations for patients with upper and/or lower limb impairments [ACTIVLIM], Jebsen Test, and myometry measurements). Informant agreement between patient/caregiver (P/C)-reported and medical doctor (MD)-reported NM scores was measured by weighted kappa. Results Significant correlation coefficients were found between NM scores and criterion standards. The highest correlations were found between NM-score D1 and MFM score D1 (ρ = −.944, P<.0001), ACTIVLIM (ρ = −.895, P<.0001), and hip abduction strength by myometry (ρ = −.811, P<.0001). Informant agreement between P/C-reported and MD-reported NM scores was high for D1 (κ = .801; 95% confidence interval [CI], .701–.914) but moderate for D2 (κ = .592; 95% CI, .412–.773) and D3 (κ = .485; 95% CI, .290–.680). Correlation coefficients between the NM scores and the criterion standards did not significantly differ between P/C-reported and MD-reported NM scores. Conclusions Patients and physicians completed the English NM-Score easily and accurately. 
The English version is a reliable and valid instrument that can be used in clinical practice and research to describe the functional abilities of patients with NM diseases. PMID:24862765

  19. Severity of illness index for surgical departments in a Cuban hospital: a revalidation study.

    PubMed

    Armas-Bencomo, Amadys; Tamargo-Barbeito, Teddy Osmin; Fuentes-Valdés, Edelberto; Jiménez-Paneque, Rosa Eugenia

    2017-03-08

    In the context of the evaluation of hospital services, the incorporation of severity indices provides an essential control variable for performance comparisons in time and space through risk adjustment. The severity index for surgical services was developed in 1999 and validated as a general index for surgical services. Sixteen years later, the hospital context is different in many ways, and a revalidation was considered necessary to guarantee the index's current usefulness. The aim was to evaluate the validity and reliability of the surgical services severity index to warrant its reasonable use under current conditions. A descriptive study was carried out in the General Surgery service of the "Hermanos Ameijeiras" Clinical Surgical Hospital of Havana, Cuba, during the second half of 2010. We reviewed the medical records of 511 patients discharged from this service. Items were the same as in the original index, as were their weighted values. Conceptual (construct) validity, criterion validity, inter-rater reliability and internal consistency of the proposed index were evaluated. Construct validity was expressed as a significant association between the value of the severity index for surgical services and discharge status. A significant, although weak, association was also found with length of hospital stay. Criterion validity was demonstrated through the correlations between the severity index for surgical services and other similar indices: the Horn index showed a correlation of 0.722 (95% CI: 0.677-0.761) with our index, and the POSSUM score showed correlations of 0.454 (95% CI: 0.388-0.514) with mortality risk and 0.539 (95% CI: 0.462-0.607) with morbidity risk. Internal consistency yielded a standardized Cronbach's alpha of 0.8; inter-rater reliability resulted in a reliability coefficient of 0.98 for the quantitative index and a weighted global kappa coefficient of 0.87 for the ordinal surgical severity index (IGQ).
The validity and reliability of the proposed index was satisfactory in all aspects evaluated. The surgical services severity index may be used in the original context and is easily adaptable to other contexts as well.

  20. Challenge of assessing symptoms in seriously ill intensive care unit patients: can proxy reporters help?

    PubMed

    Puntillo, Kathleen A; Neuhaus, John; Arai, Shoshana; Paul, Steven M; Gropper, Michael A; Cohen, Neal H; Miaskowski, Christine

    2012-10-01

    Determine levels of agreement among intensive care unit patients and their family members, nurses, and physicians (proxies) regarding patients' symptoms and compare levels of mean intensity (i.e., the magnitude of a symptom sensation) and distress (i.e., the degree of emotionality that a symptom engenders) of symptoms among patients and proxy reporters. Prospective study of proxy reporters of symptoms in seriously ill patients. Two intensive care units in a tertiary medical center in the Western United States. Two hundred and forty-five intensive care unit patients, 243 family members, 103 nurses, and 92 physicians. None. On the basis of the magnitude of intraclass correlation coefficients, where coefficients from .35 to .78 are considered to be appropriately robust, correlation coefficients between patients' and family members' ratings met this criterion (≥.35) for intensity in six of ten symptoms. No intensity ratings between patients and nurses had intraclass correlation coefficients >.32. Three symptoms had intensity correlation coefficients of ≥.36 between patients' and physicians' ratings. Correlation coefficients between patients and family members were >.40 for five symptom-distress ratings. No symptoms had distress correlation coefficients of ≥.28 between patients' and nurses' ratings. Two symptoms had symptom-distress correlation coefficients between patients' and physicians' ratings at >.39. Family members, nurses, and physicians reported higher symptom-intensity scores than patients did for 80%, 60%, and 60% of the symptoms, respectively. Family members, nurses, and physicians reported higher symptom-distress scores than patients did for 90%, 70%, and 80% of the symptoms, respectively. Patient-family intraclass correlation coefficients were sufficiently close for us to consider using family members to help assess intensive care unit patients' symptoms. 
Relatively low intraclass correlation coefficients between intensive care unit clinicians' and patients' symptom ratings indicate that some proxy raters overestimate whereas others underestimate patients' symptoms. Proxy overestimation of patients' symptom scores warrants further study because this may influence decisions about treating patients' symptoms.

  1. Comparison of bioactive chemical space networks generated using substructure- and fingerprint-based measures of molecular similarity

    NASA Astrophysics Data System (ADS)

    Zhang, Bijun; Vogt, Martin; Maggiora, Gerald M.; Bajorath, Jürgen

    2015-07-01

    Chemical space networks (CSNs) have recently been introduced as a conceptual alternative to coordinate-based representations of chemical space. CSNs were initially designed as threshold networks using the Tanimoto coefficient as a continuous similarity measure. The analysis of CSNs generated from sets of bioactive compounds revealed that many statistical properties were strongly dependent on their edge density. While it was difficult to compare CSNs at pre-defined similarity threshold values, CSNs with constant edge density were directly comparable. In the current study, alternative CSN representations were constructed by applying the matched molecular pair (MMP) formalism as a substructure-based similarity criterion. For more than 150 compound activity classes, MMP-based CSNs (MMP-CSNs) were compared to corresponding threshold CSNs (THR-CSNs) at a constant edge density by applying different parameters from network science, measures of community structure distributions, and indicators of structure-activity relationship (SAR) information content. MMP-CSNs were found to be an attractive alternative to THR-CSNs, yielding low edge densities and well-resolved topologies. MMP-CSNs and corresponding THR-CSNs often had similar topology and closely corresponding community structures, although there was only limited overlap in similarity relationships. The homophily principle from network science was shown to affect MMP-CSNs and THR-CSNs in different ways, despite the presence of conserved topological features. Moreover, activity cliff distributions in alternative CSN designs markedly differed, which has important implications for SAR analysis.

  2. Suppression of slip and rupture velocity increased by thermal pressurization: Effect of dilatancy

    NASA Astrophysics Data System (ADS)

    Urata, Yumi; Kuge, Keiko; Kase, Yuko

    2013-11-01

    We investigated the effect of dilatancy on dynamic rupture propagation on a fault where thermal pressurization (TP) is in effect, taking into account permeability varying with porosity; the study is based on three-dimensional (3-D) numerical simulations of spontaneous ruptures obeying a slip-weakening friction law and a Coulomb failure criterion. The effects of dilatancy on dynamic ruptures interacting with TP have often been investigated in one- or two-dimensional numerical simulations. The sole previous 3-D numerical simulation considered only the behavior at a single point on a fault. Moreover, with the sole exception of a study based on a single-degree-of-freedom spring-slider model, previous simulations including dilatancy and TP have not considered changes in hydraulic diffusivity. However, the hydraulic diffusivity, which strongly affects TP, can vary as a power of porosity. In this study, we apply a power-law relationship between permeability and porosity. We consider both reversible and irreversible changes in porosity, assuming that the irreversible change is proportional to the slip rate and a dilatancy coefficient ɛ. Our numerical simulations suggest that the effects of dilatancy can suppress the slip and rupture velocity increased by TP. The results reveal that the amount of slip on the fault decreases with increasing ɛ or increasing exponent of the power law, and that the rupture velocity is predominantly suppressed by ɛ. This was observed regardless of whether the applied stresses were high or low. The deficit of final slip in relation to ɛ becomes smaller as the fault size becomes larger.

  3. Bayesian meta-analysis of Cronbach's coefficient alpha to evaluate informative hypotheses.

    PubMed

    Okada, Kensuke

    2015-12-01

    This paper proposes a new method to evaluate informative hypotheses for meta-analysis of Cronbach's coefficient alpha using a Bayesian approach. The coefficient alpha is one of the most widely used reliability indices. In meta-analyses of reliability, researchers typically form specific informative hypotheses beforehand, such as 'alpha of this test is greater than 0.8' or 'alpha of one form of a test is greater than the others.' The proposed method enables direct evaluation of these informative hypotheses. To this end, a Bayes factor is calculated to evaluate the informative hypothesis against its complement. It allows researchers to summarize the evidence provided by previous studies in favor of their informative hypothesis. The proposed approach can be seen as a natural extension of the Bayesian meta-analysis of coefficient alpha recently proposed in this journal (Brannick and Zhang, 2013). The proposed method is illustrated through two meta-analyses of real data that evaluate different kinds of informative hypotheses on superpopulation: one is that alpha of a particular test is above the criterion value, and the other is that alphas among different test versions have ordered relationships. Informative hypotheses are supported from the data in both cases, suggesting that the proposed approach is promising for application. Copyright © 2015 John Wiley & Sons, Ltd.
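Several of the records above revolve around Cronbach's coefficient alpha. For orientation, a minimal sketch of the classical sample estimate is shown below; the Bayes-factor machinery of the paper itself is not reproduced here, and the function name is our own.

```python
import numpy as np

def cronbach_alpha(items):
    """Classical Cronbach's alpha.
    items: (n_subjects, k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)
```

An informative hypothesis such as "alpha of this test is greater than 0.8" would then be assessed against estimates like this one, pooled across studies, via the Bayes factor described in the abstract.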

  4. Evaluating the use of diversity indices to distinguish between microbial communities with different traits.

    PubMed

    Feranchuk, Sergey; Belkova, Natalia; Potapova, Ulyana; Kuzmin, Dmitry; Belikov, Sergei

    2018-05-23

    Several measures of biodiversity are commonly used to describe microbial communities, analyzed using 16S gene sequencing. A wide range of available experiments on 16S gene sequencing allows us to present a framework for a comparison of various diversity indices. The criterion for the comparison is the statistical significance of the difference in index values for microbial communities with different traits, within the same experiment. The results of the evaluation indicate that Shannon diversity is the most effective measure among the commonly used diversity indices. The results also indicate that, within the present framework, the Gini coefficient as a diversity index is comparable to Shannon diversity, despite the fact that the Gini coefficient, as a diversity estimator, is far less popular in microbiology than several other measures. Copyright © 2018 Institut Pasteur. Published by Elsevier Masson SAS. All rights reserved.
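The two leading measures from this comparison can be sketched in a few lines of Python. The formulas below are the standard Shannon index (natural log) and the Lorenz-curve form of the Gini coefficient applied to abundance counts; the function names are our own choices, not code from the study.

```python
import numpy as np

def shannon_index(counts):
    # Shannon diversity H = -sum(p * ln p) over non-zero abundances
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def gini_coefficient(counts):
    # Gini coefficient of the abundance distribution (Lorenz-curve form);
    # 0 = perfectly even community, values near 1 = dominated by one taxon
    x = np.sort(np.asarray(counts, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n
```

A perfectly even community maximizes Shannon diversity and has Gini 0, while a community dominated by a single taxon has Shannon diversity 0 and Gini close to 1, which is why the two indices can play comparable roles as diversity estimators.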

  5. Surface structure determines dynamic wetting.

    PubMed

    Wang, Jiayu; Do-Quang, Minh; Cannon, James J; Yue, Feng; Suzuki, Yuji; Amberg, Gustav; Shiomi, Junichiro

    2015-02-16

    Liquid wetting of a surface is omnipresent in nature and the advance of micro-fabrication and assembly techniques in recent years offers increasing ability to control this phenomenon. Here, we identify how surface roughness influences the initial dynamic spreading of a partially wetting droplet by studying the spreading on a solid substrate patterned with microstructures just a few micrometers in size. We reveal that the roughness influence can be quantified in terms of a line friction coefficient for the energy dissipation rate at the contact line, and that this can be described in a simple formula in terms of the geometrical parameters of the roughness and the line-friction coefficient of the planar surface. We further identify a criterion to predict if the spreading will be controlled by this surface roughness or by liquid inertia. Our results point to the possibility of selectively controlling the wetting behavior by engineering the surface structure.

  6. FEM study of recrystallized tungsten under ELM-like heat loads

    NASA Astrophysics Data System (ADS)

    Du, J.; Yuan, Y.; Wirtz, M.; Linke, J.; Liu, W.; Greuner, H.

    2015-08-01

    FEM thermal analysis has been performed on a rolled tungsten plate loaded with a heat flux of 23 MW/m2 for 1.5 s. A temperature gradient field is generated due to the Gaussian shape of the beam profile. Recrystallization and grain growth of various scales were found in different areas of the sample depending on the localized thermal field. FEM thermal-mechanical analyses have been performed on the recrystallized tungsten exposed to ELM-like heat loads. The analyzed load conditions were 0.38 and 1.14 GW/m2 with different base temperatures. Material deterioration due to recrystallization was implemented by adopting decreased yield stress, tangent modulus, strength coefficient and ductility coefficients. The lifetime predicted using a strain-life criterion indicates that grain growth from 5 μm to 100 μm causes a lifetime decrease of 80%. This result is obtained by purely mathematical calculation based on empirical assumptions about the material properties.

  7. System level analysis and control of manufacturing process variation

    DOEpatents

    Hamada, Michael S.; Martz, Harry F.; Eleswarpu, Jay K.; Preissler, Michael J.

    2005-05-31

    A computer-implemented method is implemented for determining the variability of a manufacturing system having a plurality of subsystems. Each subsystem of the plurality of subsystems is characterized by signal factors, noise factors, control factors, and an output response, all having mean and variance values. Response models are then fitted to each subsystem to determine unknown coefficients for use in the response models that characterize the relationship between the signal factors, noise factors, control factors, and the corresponding output response having mean and variance values that are related to the signal factors, noise factors, and control factors. The response models for each subsystem are coupled to model the output of the manufacturing system as a whole. The coefficients of the fitted response models are randomly varied to propagate variances through the plurality of subsystems and values of signal factors and control factors are found to optimize the output of the manufacturing system to meet a specified criterion.

  8. Thermophysical properties of liquid Ni around the melting temperature from molecular dynamics simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozas, R. E.; Department of Physics, University of Bío-Bío, Av. Collao 1202, P.O. Box 5C, Concepción; Demiraǧ, A. D.

    Thermophysical properties of liquid nickel (Ni) around the melting temperature are investigated by means of classical molecular dynamics (MD) simulation, using three different embedded atom method potentials to model the interactions between the Ni atoms. Melting temperature, enthalpy, static structure factor, self-diffusion coefficient, shear viscosity, and thermal diffusivity are compared to recent experimental results. Using ab initio MD simulation, we also determine the static structure factor and the mean-squared displacement at the experimental melting point. For most of the properties, excellent agreement is found between experiment and simulation, provided the comparison is made relative to the corresponding melting temperature. We discuss the validity of the Hansen-Verlet criterion for the static structure factor as well as the Stokes-Einstein relation between self-diffusion coefficient and shear viscosity. The thermal diffusivity is extracted from the autocorrelation function of a wavenumber-dependent temperature fluctuation variable.

  9. A new responder criterion (relative effect per patient (REPP) > 0.2) externally validated in a large total hip replacement multicenter cohort (EUROHIP).

    PubMed

    Huber, J; Hüsler, J; Dieppe, P; Günther, K P; Dreinhöfer, K; Judge, A

    2016-03-01

    To validate a new method to identify responders (relative effect per patient (REPP) > 0.2), using the OMERACT-OARSI criteria as gold standard, in a large multicentre sample. The REPP ([score before - score after treatment]/score before treatment) was calculated for 845 patients of a large multicenter European cohort study for THR. Patients with a REPP > 0.2 were defined as responders. The responder rate was compared to the gold standard (OMERACT-OARSI criteria) using receiver operating characteristic (ROC) curve analysis for sensitivity, specificity and the percentage of appropriately classified patients. With the criterion REPP > 0.2, 85.4% of the patients were classified as responders; applying the OMERACT-OARSI criteria, 85.7%. The new method had 98.8% sensitivity and 94.2% specificity, and 98.1% of the patients were correctly classified compared to the gold standard. The external validation showed high sensitivity and specificity of the new criterion for identifying responders compared to the gold standard method. It is simple and, being a single classification criterion, has no ambiguities. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
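The REPP is simple enough to state as code. A sketch, with the formula taken directly from the abstract; the function names are ours, and the sign convention assumes a score where lower values after treatment mean improvement:

```python
def repp(score_before, score_after):
    # relative effect per patient: (before - after) / before
    return (score_before - score_after) / score_before

def is_responder(score_before, score_after, threshold=0.2):
    # responder criterion proposed in the study: REPP > 0.2
    return repp(score_before, score_after) > threshold
```

For example, a patient whose score falls from 50 to 30 has a REPP of 0.4 and counts as a responder, while a fall from 50 to 45 (REPP 0.1) does not.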

  10. [Subspecies criterion in ectoparasites (based on the example of bird lice)].

    PubMed

    Eichler, W

    1977-01-01

    It has been proved on Mallophaga that the notion "hostal subspecies" can be applied at the level of intraspecies categories to constant parasites possessing distinct host specificity. The notion should be applied in those cases when there are only small differences (e.g. in size) between groups of parasites from different hosts, and when the hosts belong to different species of the same genus.

  11. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
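A sketch of this randomization scheme in Python. This is our own reconstruction from the description above (independently re-order k-1 of the k variables, take the largest eigenvalue of the correlation matrix as the test statistic); the function names and permutation count are illustrative.

```python
import numpy as np

def largest_eigenvalue(data):
    # test statistic: largest eigenvalue of the correlation matrix
    corr = np.corrcoef(data, rowvar=False)
    return float(np.linalg.eigvalsh(corr)[-1])

def randomization_test(data, n_perm=999, seed=0):
    """Randomization test of association among k variables.
    data: (n_observations, k) matrix."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    observed = largest_eigenvalue(data)
    exceed = 0
    for _ in range(n_perm):
        shuffled = data.copy()
        for j in range(1, data.shape[1]):  # leave variable 0 fixed
            rng.shuffle(shuffled[:, j])    # re-order each other variable
        if largest_eigenvalue(shuffled) >= observed:
            exceed += 1
    # add-one correction counts the observed arrangement itself
    return observed, (exceed + 1) / (n_perm + 1)
```

Under the null hypothesis of no association, shuffling destroys nothing, so the observed statistic looks typical; genuine association inflates the largest eigenvalue above almost every permuted value, yielding a small p-value without any distributional assumptions.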

  12. Psychometric properties of the Spanish version of the Mindful Attention Awareness Scale (MAAS) in patients with fibromyalgia.

    PubMed

    Cebolla, Ausias; Luciano, Juan V; DeMarzo, Marcelo Piva; Navarro-Gil, Mayte; Campayo, Javier Garcia

    2013-01-14

    Mindfulness-based interventions improve functioning and quality of life in fibromyalgia (FM) patients. The aim of this study was to perform a psychometric analysis of the Spanish version of the Mindful Attention Awareness Scale (MAAS) in a sample of patients diagnosed with FM. The following measures were administered to 251 Spanish patients with FM: the Spanish version of the MAAS, the Chronic Pain Acceptance Questionnaire, the Pain Catastrophising Scale, the Injustice Experience Questionnaire, the Psychological Inflexibility in Pain Scale, the Fibromyalgia Impact Questionnaire and the EuroQol. Factorial structure was analysed using confirmatory factor analysis (CFA). Cronbach's α coefficient was calculated to examine internal consistency, and the intraclass correlation coefficient (ICC) was calculated to assess the test-retest reliability of the measures. Pearson's correlation tests were run to evaluate univariate relationships between scores on the MAAS and criterion variables. The MAAS scores in our sample were low (M = 56.7; SD = 17.5). CFA confirmed a two-factor structure, with the following fit indices: S-Bχ² = 172.34 (p < 0.001), CFI = 0.95, GFI = 0.90, SRMR = 0.05, RMSEA = 0.06. The MAAS was found to have high internal consistency (Cronbach's α = 0.90) and adequate test-retest reliability at a 1-2 week interval (ICC = 0.90). It showed significant and expected correlations with the criterion measures, with the exception of the EuroQol (Pearson r = 0.15). The psychometric properties of the Spanish version of the MAAS in patients with FM are adequate. The dimensionality of the MAAS found in this sample and directions for future research are discussed.

  13. Transcultural adaptation and initial validation of Brazilian-Portuguese version of the Basel assessment of adherence to immunosuppressive medications scale (BAASIS) in kidney transplants

    PubMed Central

    2013-01-01

    Background Transplant recipients are expected to adhere to a lifelong immunosuppressant therapeutic regimen. However, nonadherence to treatment is an underestimated problem for which no properly validated measurement tool is available for Portuguese-speaking patients. We aimed to initially validate the Basel Assessment of Adherence to Immunosuppressive Medications Scale (BAASIS®) to accurately estimate immunosuppressant nonadherence in Brazilian transplant patients. Methods The BAASIS® (English version) was transculturally adapted and its psychometric properties were assessed. The transcultural adaptation was performed using the Guillemin protocol. Psychometric testing included reliability (intraobserver and interobserver reproducibility, agreement, kappa coefficient, and Cronbach's alpha) and validity (content, criterion, and construct validity). Results The final version of the transculturally adapted BAASIS® was pretested, and no difficulties in understanding its content were found. The intraobserver and interobserver reproducibility variances (0.007 and 0.003, respectively), Cronbach's alpha (0.7), kappa coefficient (0.88) and agreement (95.2%) suggest accuracy, precision and reliability. For construct validity, exploratory factorial analysis demonstrated unidimensionality of the first three questions (r = 0.76, r = 0.80, and r = 0.68). For criterion validity, the adapted BAASIS® was correlated with another self-report instrument, the Measure of Adherence to Treatment, and showed good congruence (r = 0.65). Conclusions The BAASIS® has adequate psychometric properties and may be employed to measure adherence to posttransplant immunosuppressant treatment. It is the first instrument validated for use in this specific transplant population and in the Portuguese language. PMID:23692889

  14. Assessment of radiation-induced xerostomia: validation of the Italian version of the xerostomia questionnaire in head and neck cancer patients.

    PubMed

    Pellegrino, Federica; Groff, Elena; Bastiani, Luca; Fattori, Bruno; Sotti, Guido

    2015-04-01

    Xerostomia is the most common acute and late side effect of radiation treatment for head and neck cancer. Affecting taste perception, chewing, swallowing and speech, xerostomia is also the major cause of decreased quality of life. The aims of this study were to validate the Italian translation of the self-reported eight-item xerostomia questionnaire (XQ) and determine its psychometric properties in patients treated with radiotherapy for head and neck cancer. An observational cross-sectional study was conducted in the Radiotherapy Unit of the Veneto Institute of Oncology - IOV in Padua. The XQ was translated according to international guidelines and filled out by 102 patients. Construct validity was assessed using principal component analysis, internal consistency using Cronbach's α coefficient and test-retest reliability at 1-month interval using the intraclass correlation coefficient (ICC). Criterion-related validity was evaluated to compare the Italian version of XQ with the European Organization for Research and Treatment of Cancer (EORTC) Core Quality-of-Life Questionnaire (QLQ-C30) and its Head and Neck Cancer Module (QLQ-H&N35). Cronbach's α for the Italian version of XQ was strong at α = 0.93, test-retest reliability was also strong (0.79) and factor analysis confirmed that the questionnaire was one-dimensional. Criterion-related validity was excellent with high association with the EORTC QLQ-H&N35 xerostomia and sticky saliva scales. The Italian version of XQ has excellent psychometric properties and can be used to evaluate the impact of emerging radiation delivery techniques aiming at preventing xerostomia.

  15. Comparison and verification of two models which predict minimum principal in situ stress from triaxial data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harikrishnan, R.; Hareland, G.; Warpinski, N.R.

    This paper evaluates the correlation between values of minimum principal in situ stress derived from two different models which use data obtained from triaxial core tests and coefficient-of-earth-at-rest correlations. Both models use triaxial laboratory tests with different confining pressures. The first method uses a verified fit to the Mohr failure envelope as a function of average rock grain size, which was obtained from detailed microscopic analyses. The second method uses the Mohr-Coulomb failure criterion. Both approaches give an angle of internal friction which is used to calculate the coefficient of earth at rest, which in turn gives the minimum principal in situ stress. The minimum principal in situ stress is then compared to actual field mini-frac test data, which accurately determine the minimum principal in situ stress and are used to verify the accuracy of the correlations. The cores and the mini-frac stress tests were obtained from two wells: the Gas Research Institute's (GRI's) Staged Field Experiment (SFE) no. 1 well through the Travis Peak Formation in the East Texas Basin, and the Department of Energy's (DOE's) Multiwell Experiment (MWX) wells located west-southwest of the town of Rifle, Colorado, near the Rulison gas field. Results from this study indicate that the calculated minimum principal in situ stress values obtained by utilizing the rock failure envelope as a function of average rock grain size are in better agreement with the measured stress values (from mini-frac tests) than those obtained utilizing the Mohr-Coulomb failure criterion.

  16. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    PubMed

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. 
However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
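The sub-clustering idea combined with an ANOVA point estimate and Smith's large-sample standard error can be sketched as below. The estimator is the standard one-way ANOVA form for equal cluster sizes, and the `smith_se` expression is an assumed equal-cluster-size variant of Smith's approximation; `subdivide` and the simulated data are illustrative, not the authors' implementation.

```python
import random

def anova_icc(clusters):
    """One-way ANOVA estimator of the intraclass correlation for
    equal-sized clusters of binary (0/1) outcomes."""
    k, m = len(clusters), len(clusters[0])
    grand = sum(sum(c) for c in clusters) / (k * m)
    msb = m * sum((sum(c) / m - grand) ** 2 for c in clusters) / (k - 1)
    msw = sum(sum((x - sum(c) / m) ** 2 for x in c) for c in clusters) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)

def smith_se(rho, k, m):
    """Smith's large-sample standard error of the ANOVA estimator
    (equal cluster sizes assumed; an approximation, see Smith 1956)."""
    return (2 * (1 - rho) ** 2 * (1 + (m - 1) * rho) ** 2
            / (m * (m - 1) * (k - 1))) ** 0.5

def subdivide(clusters, sub_size):
    """The ad hoc step: split each large cluster into sub-clusters
    (e.g. groups of 5), discarding any remainder."""
    out = []
    for c in clusters:
        out.extend(c[i:i + sub_size]
                   for i in range(0, len(c) - len(c) % sub_size, sub_size))
    return out

# Invented data: 6 large clusters of 100 binary outcomes each.
random.seed(1)
big = [[int(random.random() < 0.3) for _ in range(100)] for _ in range(6)]
subs = subdivide(big, 5)                     # 120 sub-clusters of size 5
rho = anova_icc(subs)
half = 1.96 * smith_se(rho, len(subs), 5)
ci = (rho - half, rho + half)                # approximate 95% interval
```

Splitting 6 clusters of 100 into 120 sub-clusters of 5 is what lets the large-k approximation behind Smith's standard error apply.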

  17. Psychometric properties of the modified RESIDE physical activity questionnaire among low-income overweight women.

    PubMed

    Jones, Sydney A; Evenson, Kelly R; Johnston, Larry F; Trost, Stewart G; Samuel-Hodge, Carmen; Jewell, David A; Kraschnewski, Jennifer L; Keyserling, Thomas C

    2015-01-01

    This study explored the criterion-related validity and test-retest reliability of the modified RESIDential Environment physical activity questionnaire and whether the instrument's validity varied by body mass index, education, race/ethnicity, or employment status. Validation study using baseline data collected for randomized trial of a weight loss intervention. Participants recruited from health departments wore an ActiGraph accelerometer and self-reported non-occupational walking, moderate and vigorous physical activity on the modified RESIDential Environment questionnaire. We assessed validity (n=152) using Spearman correlation coefficients, and reliability (n=57) using intraclass correlation coefficients. When compared to steps, moderate physical activity, and bouts of moderate/vigorous physical activity measured by accelerometer, these questionnaire measures showed fair evidence for validity: recreational walking (Spearman correlation coefficients 0.23-0.36), total walking (Spearman correlation coefficients 0.24-0.37), and total moderate physical activity (Spearman correlation coefficients 0.18-0.36). Correlations for self-reported walking and moderate physical activity were higher among unemployed participants and women with lower body mass indices. Generally no other variability in the validity of the instrument was found. Evidence for reliability of RESIDential Environment measures of recreational walking, total walking, and total moderate physical activity was substantial (intraclass correlation coefficients 0.56-0.68). Evidence for questionnaire validity and reliability varied by activity domain and was strongest for walking measures. The questionnaire may capture physical activity less accurately among women with higher body mass indices and employed participants. Capturing occupational activity, specifically walking at work, may improve questionnaire validity. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  18. Identifying dyspepsia in the Greek population: translation and validation of a questionnaire

    PubMed Central

    Anastasiou, Foteini; Antonakis, Nikos; Chaireti, Georgia; Theodorakis, Pavlos N; Lionis, Christos

    2006-01-01

    Background Studies on clinical issues, including diagnostic strategies, are considered to be the core content of general practice research. The use of standardised instruments is regarded as an important component in the development of Primary Health Care research capacity. Demand for epidemiological cross-cultural comparisons in the international setting, using common instruments and definitions valid in each culture, is greater than ever. Dyspepsia is a common complaint in primary practice, but little is known about its incidence in Greece. There are some reports on Helicobacter pylori infection in patients with functional dyspepsia or gastric ulcer in Greece, but there is no specific instrument for the identification of dyspepsia. This paper reports on the translation into Greek and validation of an English questionnaire for the identification of dyspepsia in the general population, and discusses several possibilities for its use in Greek primary care. Methods The selected English postal questionnaire for the identification of people with dyspepsia in the general population consists of 30 items and was developed in 1995. The translation and cultural adaptation of the questionnaire were performed according to international standards. For the validation of the instrument, the internal consistency of the items was established using Cronbach's alpha coefficient, reproducibility (test-retest reliability) was measured by the kappa correlation coefficient, and criterion validity was calculated against the diagnoses in the patients' records, also using the kappa correlation coefficient. Results The final Greek version of the postal questionnaire for the identification of dyspepsia in the general population was reliably translated. The internal consistency of the questionnaire was good; Cronbach's alpha was 0.88 (95% CI: 0.81–0.93), suggesting that all items contributed appropriately to the measure. 
The kappa coefficient for reproducibility (test-retest reliability) was 0.66 (95% CI: 0.62–0.71), whereas the kappa analysis for criterion validity gave 0.63 (95% CI: 0.36–0.89). Conclusion This study indicates that the Greek translation is comparable with the English-language version in terms of validity and reliability, and is suitable for epidemiological research within the Greek primary health care setting. PMID:16515708
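The internal-consistency statistic reported above can be computed directly from an item-response matrix. This is a generic sketch of Cronbach's alpha (the standard formula, not the authors' code), with `survey` an invented respondents-by-items matrix:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for an item-response matrix (standard formula):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores).
    `items` is a list of rows, one per respondent; columns are items."""
    k = len(items[0])
    def pvar(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [pvar([row[j] for row in items]) for j in range(k)]
    total_var = pvar([sum(row) for row in items])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Invented 4-respondent, 3-item toy data on a 0-4 scale:
survey = [[3, 4, 3], [2, 2, 1], [4, 4, 4], [1, 2, 1]]
alpha = cronbach_alpha(survey)   # high internal consistency for this toy set
```

A value such as the paper's 0.88 indicates that the 30 items rank respondents consistently, which is the property the translation needed to preserve.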

  19. Cyclic plasticity models and application in fatigue analysis

    NASA Technical Reports Server (NTRS)

    Kalev, I.

    1981-01-01

    An analytical procedure for prediction of the cyclic plasticity effects on both the structural fatigue life to crack initiation and the rate of crack growth is presented. The crack initiation criterion is based on the Coffin-Manson formulae, extended to the multiaxial stress state and to include the mean stress effect. This criterion is also applied to the accumulated damage ahead of the existing crack tip, which is assumed to be related to the crack growth rate. Three cyclic plasticity models, based on the concept of combining several yield surfaces, are employed for computing the crack growth rate of a cracked plane-stress panel under several cyclic loading conditions.

  20. [A new method of investigation of "child's" behavior (infant-mother attachment) of newborn rats].

    PubMed

    Stovolosov, I S; Dubynin, V A; Kamenskiĭ, A A

    2010-01-01

    A new method of studying the "child's" (maternal bonding) behavior of newborn rats was developed. The efficiency of the method was demonstrated in an assessment of dopaminergic control of infant-mother attachment. The selective D2-antagonist clebopride, applied at doses subthreshold for motor activity, decreased the pups' striving to remain in contact with the dam. On the basis of the features analyzed (latent periods and the expression of various behavioral components), an integrated criterion for the evaluation of "child's" reactions was suggested. Application of this criterion made it possible to neutralize the high individual variability of behavior typical of newborns.

  1. Failure criterion for materials with spatially correlated mechanical properties

    NASA Astrophysics Data System (ADS)

    Faillettaz, J.; Or, D.

    2015-03-01

    The role of spatially correlated mechanical elements in the failure behavior of heterogeneous materials represented by fiber bundle models (FBMs) was evaluated systematically for different load redistribution rules. Increasing the range of spatial correlation for FBMs with local load sharing is marked by a transition from ductilelike failure characteristics into brittlelike failure. The study identified a global failure criterion based on macroscopic properties (external load and cumulative damage) that is independent of spatial correlation or load redistribution rules. This general metric could be applied to assess the mechanical stability of complex and heterogeneous systems and thus provide an important component for early warning of a class of geophysical ruptures.
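In the equal (global) load-sharing limit that the spatially correlated models approach, bundle strength has a simple closed form that is easy to check numerically. A minimal sketch of that textbook FBM result (not the paper's spatially correlated model; the thresholds are invented):

```python
def fbm_els_strength(thresholds):
    """Strength (load per original fiber) of an equal-load-sharing fiber
    bundle: with thresholds sorted ascending, breaking the k weakest fibers
    leaves n-k survivors sharing the load, so the bundle fails at
    sigma_c = max_k t_(k) * (n - k) / n  (0-indexed k)."""
    t = sorted(thresholds)
    n = len(t)
    return max(t[k] * (n - k) / n for k in range(n))

# Invented thresholds: the bundle is stronger than its weakest fiber,
# because surviving fibers absorb the redistributed load.
sigma_c = fbm_els_strength([0.5, 2.0, 2.0])   # -> 4/3
```

Local load sharing and spatial correlation modify where and how abruptly this maximum is reached, which is the ductile-to-brittle transition the paper maps out.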

  2. Calculation of strained BaTiO3 with different exchange correlation functionals examined with criterion by Ginzburg-Landau theory, uncovering expressions by crystallographic parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Yukio

    2018-05-01

    In the calculations of tetragonal BaTiO3, some exchange-correlation (XC) energy functionals such as the local density approximation (LDA) have shown good agreement with experiments at room temperature (RT), e.g., for the spontaneous polarization (PS), and superiority over other XC functionals. This is due to error compensation of the RT effect and, hence, will be ineffective in heavily strained cases such as domain boundaries. Here, ferroelectrics under large strain at RT are approximated as those at 0 K because the strain effect surpasses the RT effects. To find effective XC energy functionals for strained BaTiO3, we propose a new comparison, i.e., a criterion. This criterion is the properties at 0 K given by Ginzburg-Landau (GL) theory, because GL theory is a thermodynamic description of experiments working under the same symmetry constraints as ab initio calculations. With this criterion, we examine LDA, generalized gradient approximations (GGA), meta-GGA, meta-GGA + local correlation potential (U), and hybrid functionals, which reveals the high accuracy of some XC functionals superior to those that have been regarded as accurate. This result is examined directly by calculations of homogeneously strained tetragonal BaTiO3, confirming the validity of the new criterion. In addition, the data points of theoretical PS vs. certain crystallographic parameters calculated with different XC functionals are found to lie on a single curve, despite their wide variations. Regarding these theoretical data points as corresponding to the experimental results, analytical expressions for the local PS in terms of crystallographic parameters are uncovered. These expressions show that the primary origin of BaTiO3 ferroelectricity is oxygen displacements. Elastic compliance and electrostrictive coefficients are estimated. 
For comparison of the strained results, we show that the effective critical temperature TC under strain < -0.01 is above 1000 K, from an approximate method combining ab initio results with GL theory. In addition, the present results definitively show much more enhanced ferroelectricity at large strain than previous reports.

  3. Application of seasonal auto-regressive integrated moving average model in forecasting the incidence of hand-foot-mouth disease in Wuhan, China.

    PubMed

    Peng, Ying; Yu, Bin; Wang, Peng; Kong, De-Guang; Chen, Bang-Hua; Yang, Xiao-Bing

    2017-12-01

    Outbreaks of hand-foot-mouth disease (HFMD) have occurred many times and caused a serious health burden in China since 2008. Application of modern information technology to prediction and early response can be helpful for efficient HFMD prevention and control. A seasonal auto-regressive integrated moving average (ARIMA) model for time series analysis was designed in this study. Eighty-four months (from January 2009 to December 2015) of retrospective data obtained from the Chinese Information System for Disease Prevention and Control were subjected to ARIMA modeling. The coefficient of determination (R²), the normalized Bayesian Information Criterion (BIC) and the Ljung-Box Q-test P value were used to evaluate the goodness-of-fit of the constructed models. Subsequently, the best-fitted ARIMA model was applied to predict the expected incidence of HFMD from January 2016 to December 2016. The best-fitted seasonal ARIMA model was identified as (1,0,1)(0,1,1)12, with the largest coefficient of determination (R² = 0.743) and lowest normalized BIC (3.645). The residuals of the model also showed non-significant autocorrelations (Ljung-Box Q test, P = 0.299). The predictions by the optimum ARIMA model adequately captured the pattern in the data and exhibited two peaks of activity over the forecast interval: a major peak from April to June, and a lighter peak from September to November. The ARIMA model proposed in this study can forecast the HFMD incidence trend effectively, which could provide useful support for future HFMD prevention and control in the study area. Further observations should be added continually to the modeling data set, and the parameters of the models should be adjusted accordingly.
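The model-selection step above ranks candidate fits by normalized BIC. A minimal sketch of that ranking, assuming the SPSS-style definition normalized BIC = ln(MSE) + k·ln(n)/n (smaller is better); the residual series and parameter counts are invented, not the study's fits:

```python
import math

def normalized_bic(residuals, n_params):
    """Normalized BIC as used in some time-series packages (an assumed,
    SPSS-style definition rather than the raw -2 ln L form):
    ln(MSE) + k * ln(n) / n, smaller is better."""
    n = len(residuals)
    mse = sum(r * r for r in residuals) / n
    return math.log(mse) + n_params * math.log(n) / n

# Hypothetical residual series from two candidate seasonal ARIMA fits:
resid_a = [0.9, -1.1, 0.8, -0.7, 1.0, -0.9]   # e.g. a (1,0,1)(0,1,1)12 fit
resid_b = [1.5, -1.6, 1.4, -1.3, 1.7, -1.5]   # a worse-fitting candidate
best = min([("A", resid_a, 3), ("B", resid_b, 3)],
           key=lambda t: normalized_bic(t[1], t[2]))[0]
print(best)  # A
```

In practice a library fit (e.g. statsmodels SARIMAX) would supply the residuals and parameter counts; the selection logic is the same.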

  4. The hyperbolic chemical bond: Fourier analysis of ground and first excited state potential energy curves of HX (X = H-Ne).

    PubMed

    Harrison, John A

    2008-09-04

    RHF/aug-cc-pVnZ, UHF/aug-cc-pVnZ, and QCISD/aug-cc-pVnZ, n = 2-5, potential energy curves of the H2 X¹Σg⁺ state are analyzed by Fourier transform methods after transformation to a new coordinate system via an inverse hyperbolic cosine coordinate mapping. The Fourier frequency domain spectra are interpreted in terms of underlying mathematical behavior giving rise to distinctive features. There is a clear difference between the underlying mathematical nature of the potential energy curves calculated at the HF and full-CI levels. The method is particularly suited to the analysis of potential energy curves obtained at the highest levels of theory because the Fourier spectra are observed to be of a compact nature, with the envelope of the Fourier frequency coefficients decaying in magnitude in an exponential manner. The finite number of Fourier coefficients required to describe the CI curves allows for an optimum sampling strategy to be developed, corresponding to that required for exponential and geometric convergence. The underlying random numerical noise due to the finite convergence criterion is also a clearly identifiable feature in the Fourier spectrum. The methodology is applied to the analysis of MRCI potential energy curves for the ground and first excited states of HX (X = H-Ne). All potential energy curves exhibit structure in the Fourier spectrum consistent with the existence of resonances. The compact nature of the Fourier spectra following the inverse hyperbolic cosine coordinate mapping is highly suggestive that there is some advantage in viewing the chemical bond as having an underlying hyperbolic nature.

  5. Application of the L-curve in geophysical inverse problems: methodologies for the extraction of the optimal parameter

    NASA Astrophysics Data System (ADS)

    Bassrei, A.; Terra, F. A.; Santos, E. T.

    2007-12-01

    Inverse problems in Applied Geophysics are usually ill-posed. One way to reduce this deficiency is through derivative matrices, which are a particular case of a more general family of methods known as regularization. Regularization by derivative matrices has an input parameter called the regularization parameter, whose choice is itself a problem. A heuristic approach, later called the L-curve, was suggested in the 1970s to provide the optimum regularization parameter. The L-curve is a parametric curve, where each point is associated with a parameter λ. The horizontal axis represents the error between the observed and calculated data, and the vertical axis represents the product of the regularization matrix and the estimated model. The ideal point is the knee of the L-curve, where there is a balance between the quantities represented on the Cartesian axes. The L-curve has been applied to a variety of inverse problems, including in Geophysics. However, visualizing the knee is not always easy, especially when the L-curve does not have the L shape. In this work, three methodologies are employed to search for and extract the optimal regularization parameter from the L-curve. The first criterion is the use of Hansen's toolbox, which extracts λ automatically. The second criterion consists of visually extracting the optimal parameter. The third criterion constructs the first derivative of the L-curve and then automatically extracts the inflexion point. The L-curve with the three criteria above was applied and validated in traveltime tomography and 2-D gravity inversion. After many simulations with synthetic data, noise-free as well as corrupted with noise, with regularization orders 0, 1, and 2, we verified that the three criteria are valid and provide satisfactory results. 
The third criterion presented the best performance, especially in cases where the L-curve has an irregular shape.
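An automatic corner search of the kind described can be sketched as follows. This variant picks the point of maximum discrete (Menger) curvature in log-log space, which is an assumption standing in for both Hansen's toolbox and the paper's derivative/inflexion-based third criterion:

```python
import math

def lcurve_corner(residual_norms, solution_norms):
    """Locate the L-curve 'knee' as the interior point of maximum discrete
    curvature in log-log space.  Inputs are the data-misfit norms and the
    regularization-term norms for a sweep of lambda values (one common
    automatic variant, not the paper's exact derivative-based method)."""
    x = [math.log(r) for r in residual_norms]
    y = [math.log(s) for s in solution_norms]
    best_i, best_k = 1, float("-inf")
    for i in range(1, len(x) - 1):
        dx1, dy1 = x[i] - x[i - 1], y[i] - y[i - 1]
        dx2, dy2 = x[i + 1] - x[i], y[i + 1] - y[i]
        cross = dx1 * dy2 - dy1 * dx2   # turning of the curve at point i
        denom = (math.hypot(dx1, dy1) * math.hypot(dx2, dy2)
                 * math.hypot(x[i + 1] - x[i - 1], y[i + 1] - y[i - 1]))
        k = abs(cross) / denom if denom else 0.0   # Menger curvature (up to 2x)
        if k > best_k:
            best_i, best_k = i, k
    return best_i

# Synthetic L shape: a vertical arm, a sharp corner at index 2, then a
# horizontal arm (invented norms, one per lambda in the sweep):
knee = lcurve_corner([1.0, 1.0, 1.0, 10.0, 100.0], [100.0, 10.0, 1.0, 1.0, 1.0])
```

The index returned picks the λ that balances data misfit against model roughness, which is exactly the balance the knee represents.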

  6. Correlation of solar wind parameters with Pc5 activity at all local times.

    NASA Astrophysics Data System (ADS)

    Baker, G. J.; Donovan, E. F.; Jackel, B. J.

    2001-12-01

    Using ten years of data from the CANOPUS Churchill line of magnetometers, we investigate the statistical properties of Pc5 pulsations, and the band-limited spectral power in the Pc5 frequency range (i.e., 1.7-6.7 mHz). In order to determine the band-limited Pc5 power, we apodize with a 45 minute Hanning window, and detrend the data with the best-fit second order polynomial. For each station, we slide the window along in one minute increments, producing time series of absolute power measurements at one minute intervals. In addition, Pc5 pulsations were identified by eye for six of the seven Churchill line stations. Our criterion for classifying a magnetic perturbation as a Pc5 pulsation was that it was nearly monochromatic, and its amplitude did not decrease over at least four periods. Applying this criterion guarantees that the relative power in the Pc5 band is high. We then have a complete data set of Pc5 powers, and a subset corresponding to times when there were Pc5 pulsations present according to our classification. Initial results show the well known correlation between solar wind speed (IMP 8 one hour averages obtained via OMNIWEB) and Pc5 power. For example, for magnetic local times between 0600 and 1000, we obtain correlation coefficients between the logarithm of the band-limited power and the solar wind speed of 0.72 and 0.77, for the case of the entire data set, and the subset, respectively. In this paper, we present results of multivariate analysis of the Pc5 data base and solar wind data, designed to elucidate correlations at all local times. We discuss our results within the context of earlier studies by Engebretson et al. [JGR, volume 103, 26721-26283, 1998] and Vennerstrom [JGR, volume 104, 10145-10157, 1999].

  7. F167. ACCESS, UNDERSTAND, APPRAISE AND APPLY TO / OF HEALTH INFORMATION AND HEALTH LITERACY IN INDIVIDUALS AT-RISK FOR PSYCHOSIS: A SYSTEMATIC REVIEW

    PubMed Central

    Seves, Mauro; Haidl, Theresa; Eggers, Susanne; Rostamzadeh, Ayda; Genske, Anna; Jünger, Saskia; Woopen, Christiane; Jessen, Frank; Ruhrmann, Stephan

    2018-01-01

    Abstract Background Numerous studies suggest that health literacy (HL) plays a crucial role in maintaining and improving individual health. Furthermore, empirical findings highlight the relation between a person’s level of HL and clinical outcomes. So far, there are no reviews which investigate HL in individuals at-risk for psychosis. The aim of the current review is to assess how individuals at risk of developing a first episode of psychosis gain access to, understand, evaluate and apply risk-related health information. Methods A mixed-methods approach was used to analyze and synthesize a variety of study types, including qualitative and quantitative studies. Search strategy, screening and data selection were carried out according to the PRISMA criteria. The systematic search was applied to peer-reviewed literature in PUBMED, Cochrane Library, PsycINFO and Web of Science. Studies were included if participants met clinical high risk criteria (CHR), including the basic symptom criterion (BS) and the ultra-high risk (UHR) criteria. The UHR criteria comprise the attenuated psychotic symptom criterion (APS), the brief limited psychotic symptom criterion (BLIPS) and the genetic risk and functional decline criterion (GRDP). Furthermore, studies must have used validated HL measures or any operationalization of HL's subdimensions (access, understanding, appraisal, decision-making or action) as a primary outcome. A third inclusion criterion was that the concept of HL or one of the four dimensions was mentioned in the title or abstract. Data extraction and synthesis were implemented according to existing recommendations for appraising evidence from different study types. The quality of the included studies was evaluated and related to the study results. Results The search string returned 10587 papers. After data extraction, 15 quantitative studies, 4 qualitative studies and 3 reviews were included. 
The quality assessment rated 12 publications as “good”, 9 as “fair” and one paper as “poor”. Only one of the studies assessed HL as a primary outcome. In the other studies, the five subdimensions of HL were investigated as secondary outcomes or merely mentioned in the paper. “Gaining Access” was examined in 18 of the 22 studies. “Understanding” was assessed in 7 publications. “Appraise” was examined in 9 studies. “Apply decision making” and “Apply health behavior” were investigated in 1 of 8 studies. Since none of the included publications operationalized HL or its subdimensions with a validated measure, no explicit influencing factors could be identified. Discussion Quantitative and qualitative evidence indicates that subjects at-risk for psychosis describe a lack of understanding about their state and fear stigmatization, which might lead to dysfunctional coping strategies such as ignoring and hiding symptoms. Affected subjects are eager to be informed about their condition and describe favoured channels for obtaining information. The internet, family members, school personnel and GPs play a crucial role in gaining access to, understanding, evaluating and applying risk-related health information. The results clearly highlight that more research should be dedicated to HL in individuals at risk of developing a psychosis. Further studies should explore the relation between HL and clinical outcomes in this target population by assessing the underlying constructs with validated tools.

  8. On the analysis of studies of choice

    PubMed Central

    Mullins, Eamonn; Agunwamba, Christian C.; Donohoe, Anthony J.

    1982-01-01

    In a review of 103 sets of data from 23 different studies of choice, Baum (1979) concluded that whereas undermatching was most commonly observed for responses, the time measure generally conformed to the matching relation. A reexamination of the evidence presented by Baum concludes that undermatching is the most commonly observed finding for both measures. Use of the coefficient of determination by both Baum (1979) and de Villiers (1977) for assessing when matching occurs is criticized on statistical grounds. An alternative to the loss-in-predictability criterion used by Baum (1979) is proposed. This alternative statistic has a simple operational meaning and is related to the usual F-ratio test. It can therefore be used as a formal test of the hypothesis that matching occurs. Baum (1979) also suggests that slope values of between .90 and 1.11 can be considered good approximations to matching. It is argued that the establishment of a fixed interval as a criterion for determining when matching occurs is inappropriate. A confidence interval based on the data from any given experiment is suggested as a more useful method of assessment. PMID:16812271
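The suggested data-based alternative to a fixed 0.90-1.11 band can be sketched as an OLS fit of the generalized matching relation with a confidence interval on the slope. The log ratios below are invented for illustration, and `t_crit` must be chosen for the actual n - 2 degrees of freedom:

```python
import math

def slope_ci(xs, ys, t_crit=2.776):
    """OLS slope and confidence interval for the generalized matching
    relation log(B1/B2) = a*log(r1/r2) + log b.  t_crit is the two-sided
    Student t value for n - 2 degrees of freedom (2.776 assumes n = 6)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b = my - a * mx
    sse = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    se = math.sqrt(sse / (n - 2) / sxx)           # standard error of the slope
    return a, (a - t_crit * se, a + t_crit * se)

# Invented log ratios exhibiting undermatching (slope < 1):
log_r = [-1.0, -0.6, -0.2, 0.2, 0.6, 1.0]        # log reinforcement ratios
log_b = [-0.82, -0.45, -0.19, 0.14, 0.51, 0.78]  # log response ratios
a, (lo, hi) = slope_ci(log_r, log_b)
matching = lo <= 1.0 <= hi   # data-based test instead of a fixed 0.90-1.11 band
```

Here the interval excludes 1.0, so the (invented) data would be classified as undermatching, regardless of whether the point estimate happened to fall inside a fixed band.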

  9. Fatigue Life Prediction of Fiber-Reinforced Ceramic-Matrix Composites with Different Fiber Preforms at Room and Elevated Temperatures

    PubMed Central

    Li, Longbiao

    2016-01-01

    In this paper, the fatigue life of fiber-reinforced ceramic-matrix composites (CMCs) with different fiber preforms, i.e., unidirectional, cross-ply, 2D (two dimensional), 2.5D and 3D CMCs at room and elevated temperatures in air and oxidative environments, has been predicted using the micromechanics approach. An effective coefficient of the fiber volume fraction along the loading direction (ECFL) was introduced to describe the fiber architecture of preforms. The statistical matrix multicracking model and fracture mechanics interface debonding criterion were used to determine the matrix crack spacing and interface debonded length. Under cyclic fatigue loading, the fiber broken fraction was determined by combining the interface wear model and fiber statistical failure model at room temperature, and the interface/fiber oxidation model, interface wear model and fiber statistical failure model at elevated temperatures, based on the assumption that the fiber strength is subject to a two-parameter Weibull distribution and the load carried by broken and intact fibers satisfies the Global Load Sharing (GLS) criterion. When the broken fiber fraction approaches the critical value, the composite fractures in fatigue. PMID:28773332

  10. Turkish Version of Kolcaba's Immobilization Comfort Questionnaire: A Validity and Reliability Study.

    PubMed

    Tosun, Betül; Aslan, Özlem; Tunay, Servet; Akyüz, Aygül; Özkan, Hüseyin; Bek, Doğan; Açıksöz, Semra

    2015-12-01

    The purpose of this study was to determine the validity and reliability of the Turkish version of the Immobilization Comfort Questionnaire (ICQ). The sample used in this methodological study consisted of 121 patients undergoing lower extremity arthroscopy in a training and research hospital. The validity study of the questionnaire assessed language validity, structural validity and criterion validity. Structural validity was evaluated via exploratory factor analysis. Criterion validity was evaluated by assessing the correlation between the visual analog scale (VAS) scores (i.e., the comfort and pain VAS scores) and the ICQ scores using Spearman's correlation test. The Kaiser-Meyer-Olkin coefficient and Bartlett's test of sphericity were used to determine the suitability of the data for factor analysis. Internal consistency was evaluated to determine reliability. The data were analyzed with SPSS version 15.00 for Windows. Descriptive statistics were presented as frequencies, percentages, means and standard deviations. A p value ≤ .05 was considered statistically significant. A moderate positive correlation was found between the ICQ scores and the VAS comfort scores; a moderate negative correlation was found between the ICQ and the VAS pain measures in the criterion validity analysis. Cronbach α values of .75 and .82 were found for the first and second measurements, respectively. The findings of this study reveal that the ICQ is a valid and reliable tool for assessing the comfort of patients in Turkey who are immobilized because of lower extremity orthopedic problems. Copyright © 2015. Published by Elsevier B.V.

  11. Validity, responsiveness, minimal detectable change, and minimal clinically important change of the Pediatric Motor Activity Log in children with cerebral palsy.

    PubMed

    Lin, Keh-chung; Chen, Hui-fang; Chen, Chia-ling; Wang, Tien-ni; Wu, Ching-yi; Hsieh, Yu-wei; Wu, Li-ling

    2012-01-01

    This study examined criterion-related validity and clinimetric properties of the Pediatric Motor Activity Log (PMAL) in children with cerebral palsy. Study participants were 41 children (age range: 28-113 months) and their parents. Criterion-related validity was evaluated by the associations between the PMAL and criterion measures at baseline and posttreatment, including the self-care, mobility, and cognition subscale, the total performance of the Functional Independence Measure in children (WeeFIM), and the grasping and visual-motor integration of the Peabody Developmental Motor Scales. Pearson correlation coefficients were calculated. Responsiveness was examined using the paired t test and the standardized response mean, the minimal detectable change was captured at the 90% confidence level, and the minimal clinically important change was estimated using anchor-based and distribution-based approaches. The PMAL-QOM showed fair concurrent validity at pretreatment and posttreatment and predictive validity, whereas the PMAL-AOU had fair concurrent validity at posttreatment only. The PMAL-AOU and PMAL-QOM were both markedly responsive to change after treatment. Improvement of at least 0.67 points on the PMAL-AOU and 0.66 points on the PMAL-QOM can be considered as a true change, not measurement error. A mean change has to exceed the range of 0.39-0.94 on the PMAL-AOU and the range of 0.38-0.74 on the PMAL-QOM to be regarded as clinically important change. Copyright © 2011 Elsevier Ltd. All rights reserved.
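The minimal-detectable-change figures quoted above follow the standard distribution-based formula, sketched here with generic inputs (the formula is the conventional one; the numbers are illustrative, not the study's data):

```python
import math

def sem(sd_baseline, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd_baseline * math.sqrt(1.0 - icc)

def mdc90(sd_baseline, icc):
    """Minimal detectable change at the 90% confidence level:
    MDC90 = 1.645 * SEM * sqrt(2).  The sqrt(2) reflects the two
    measurements involved in a change score."""
    return 1.645 * sem(sd_baseline, icc) * math.sqrt(2.0)

# Illustrative inputs only (not the PMAL sample statistics):
change_needed = mdc90(sd_baseline=1.0, icc=0.91)   # ~0.70 scale points
```

An observed change smaller than this value cannot be distinguished from measurement error, which is how thresholds like the 0.67 (AOU) and 0.66 (QOM) points above are interpreted.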

  12. Fertigation uniformity under sprinkler irrigation: evaluation and analysis

    USDA-ARS?s Scientific Manuscript database

    In modern farming systems, fertigation is widely practiced as a cost-effective and convenient method for applying soluble fertilizers to crops. Along with efficiency and adequacy, uniformity is an important fertigation performance evaluation criterion. Fertigation uniformity is defined here as a comp...

  13. An Approach to the Evaluation of Hypermedia.

    ERIC Educational Resources Information Center

    Knussen, Christina; And Others

    1991-01-01

    Discusses methods that may be applied to the evaluation of hypermedia, based on six models described by Lawton. Techniques described include observation, self-report measures, interviews, automated measures, psychometric tests, checklists and criterion-based techniques, process models, Experimentally Measuring Usability (EMU), and a naturalistic…

  14. Advanced training of specialists in area of fiber-optic communication lines maintenance

    NASA Astrophysics Data System (ADS)

    Andreev, Vladimir A.; Voronkov, Andrey A.; Bukashkin, Sergey A.; Buzova, Maria A.

    2017-04-01

    The paper considers the concept of fiber-optic communication lines (FOCL) maintenance. Performance criterion of FOCL technical maintenance was proposed. For the first time the algorithm for evaluation of the FOCL maintenance efficiency at telecommunication specialists training was applied.

  15. 34 CFR 658.33 - What additional criterion does the Secretary apply to applications from organizations and...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... education community. (2) [Reserved] (Authority: 20 U.S.C. 1124(b)) [47 FR 14122, Apr. 1, 1982, as amended at... language at the undergraduate level. (b) The Secretary reviews each application for information that shows...

  16. 34 CFR 658.33 - What additional criterion does the Secretary apply to applications from organizations and...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... education community. (2) [Reserved] (Authority: 20 U.S.C. 1124(b)) [47 FR 14122, Apr. 1, 1982, as amended at... language at the undergraduate level. (b) The Secretary reviews each application for information that shows...

  17. 34 CFR 658.33 - What additional criterion does the Secretary apply to applications from organizations and...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... education community. (2) [Reserved] (Authority: 20 U.S.C. 1124(b)) [47 FR 14122, Apr. 1, 1982, as amended at... language at the undergraduate level. (b) The Secretary reviews each application for information that shows...

  18. 34 CFR 658.33 - What additional criterion does the Secretary apply to applications from organizations and...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... education community. (2) [Reserved] (Authority: 20 U.S.C. 1124(b)) [47 FR 14122, Apr. 1, 1982, as amended at... language at the undergraduate level. (b) The Secretary reviews each application for information that shows...

  19. Correlations for Boundary-Layer Transition on Mars Science Laboratory Entry Vehicle Due to Heat-Shield Cavities

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Liechty, Derek S.

    2008-01-01

    The influence of cavities (for attachment bolts) on the heat-shield of the proposed Mars Science Laboratory entry vehicle has been investigated experimentally and computationally in order to develop a criterion for assessing whether the boundary layer becomes turbulent downstream of the cavity. Wind tunnel tests were conducted on the 70-deg sphere-cone vehicle geometry with various cavity sizes and locations in order to assess their influence on convective heating and boundary layer transition. Heat-transfer coefficients and boundary-layer states (laminar, transitional, or turbulent) were determined using global phosphor thermography.

  20. Stability analysis of a run-of-river diversion hydropower plant with surge tank and spillway in the head pond.

    PubMed

    Sarasúa, José Ignacio; Elías, Paz; Martínez-Lucas, Guillermo; Pérez-Díaz, Juan Ignacio; Wilhelmi, José Román; Sánchez, José Ángel

    2014-01-01

    Run-of-river hydropower plants usually lack significant storage capacity; therefore, the most adequate control strategy consists of keeping a constant water level at the intake pond in order to harness the maximum amount of energy from the river flow or to reduce the surface flooded in the head pond. In this paper, a standard PI control system of a run-of-river diversion hydropower plant with a surge tank and a spillway in the head pond that evacuates part of the river flow is studied. A stability analysis based on the Routh-Hurwitz criterion is carried out and a practical criterion for tuning the gains of the PI controller is proposed. Conclusions about the head pond and surge tank areas are drawn from the stability analysis. Finally, this criterion is applied to a real hydropower plant at the design stage; the importance of considering the spillway dimensions and turbine characteristic curves for adequate tuning of the controller gains is highlighted.
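    The Routh-Hurwitz test mentioned in the abstract above can be sketched in a few lines. This is a generic implementation of the tabular criterion applied to illustrative characteristic polynomials, not the paper's actual plant model (which the abstract does not give):

```python
def routh_hurwitz_stable(coeffs):
    """Return True iff all roots of coeffs[0]*s^n + ... + coeffs[n]
    lie in the open left half-plane (tabular Routh-Hurwitz test).
    Degenerate rows (a leading zero) are treated as unstable here;
    a full implementation would use the epsilon method."""
    if coeffs[0] < 0:                       # normalise the leading sign
        coeffs = [-c for c in coeffs]
    if any(c <= 0 for c in coeffs):         # necessary: all coefficients positive
        return False
    prev = list(coeffs[0::2])               # s^n row
    curr = list(coeffs[1::2])               # s^(n-1) row, zero-padded
    curr += [0.0] * (len(prev) - len(curr))
    first_col = [prev[0], curr[0]]
    for _ in range(len(coeffs) - 2):
        if curr[0] == 0:
            return False                    # degenerate row, not handled here
        nxt = [(curr[0] * prev[i + 1] - prev[0] * curr[i + 1]) / curr[0]
               for i in range(len(prev) - 1)] + [0.0]
        prev, curr = curr, nxt
        first_col.append(curr[0])
    return all(v > 0 for v in first_col)    # no sign changes => stable
```

    For example, `routh_hurwitz_stable([1, 6, 11, 6])` (roots -1, -2, -3) reports stability, while `routh_hurwitz_stable([1, 1, 1, 10])` fails the first-column sign test. In PI tuning, one would evaluate this test over a grid of candidate gains and keep the stable region.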

  1. Stability Analysis of a Run-of-River Diversion Hydropower Plant with Surge Tank and Spillway in the Head Pond

    PubMed Central

    Sarasúa, José Ignacio; Elías, Paz; Wilhelmi, José Román; Sánchez, José Ángel

    2014-01-01

    Run-of-river hydropower plants usually lack significant storage capacity; therefore, the most adequate control strategy consists of keeping a constant water level at the intake pond in order to harness the maximum amount of energy from the river flow or to reduce the surface flooded in the head pond. In this paper, a standard PI control system of a run-of-river diversion hydropower plant with a surge tank and a spillway in the head pond that evacuates part of the river flow is studied. A stability analysis based on the Routh-Hurwitz criterion is carried out and a practical criterion for tuning the gains of the PI controller is proposed. Conclusions about the head pond and surge tank areas are drawn from the stability analysis. Finally, this criterion is applied to a real hydropower plant at the design stage; the importance of considering the spillway dimensions and turbine characteristic curves for adequate tuning of the controller gains is highlighted. PMID:25405237

  2. Acquisition of control skill with delayed and compensated displays.

    PubMed

    Ricard, G L

    1995-09-01

    The difficulty of mastering a two-axis, compensatory, manual control task was manipulated by introducing transport delays into the feedback loop of the controlled element. Realistic aircraft dynamics were used. Subjects' display was a simulation of an "inside-out" artificial horizon instrument perturbed by atmospheric turbulence. The task was to maintain straight and level flight, and delays tested were representative of those found in current training simulators. Delay compensations in the form of first-order lead and first-order lead/lag transfer functions, along with an uncompensated condition, were factorially combined with added delays. Subjects were required to meet a relatively strict criterion for performance. Control activity showed no differences during criterion performance, but the trials needed to achieve the criterion were linearly related to the magnitude of the delay and the compensation condition. These data were collected in the context of aircraft attitude control, but the results can be applied to the simulation of other vehicles, to remote manipulation, and to maneuvering in graphical environments.

  3. Transition Theory and Experimental Comparisons on (I) Amplification into Streets and (II) a Strongly Nonlinear Break-up Criterion

    NASA Astrophysics Data System (ADS)

    Smith, F. T.; Bowles, R. I.

    1992-10-01

    The two stages I, II are studied by using recent nonlinear theory and then compared with the experiments of Nishioka et al. (1979) on the transition of plane Poiseuille flow. The first stage I starts at low amplitude from warped input, which is deformed through weakly nonlinear interaction into a blow-up in amplitude and phase accompanied by spanwise focusing into streets. This leads into the strongly nonlinear stage II. It holds for a broad range of interactive boundary layers and related flows, to all of which the nonlinear break-up criterion applies. The experimental comparisons on I, II for channel flow overall show encouraging quantitative agreement, supporting recent comparisons (in the boundary-layer setting) of the description of stage I in Stewart & Smith (1992) with the experiments of Klebanoff & Tidstrom (1959) and of the break-up criterion of Smith (1988a) with the computations of Peridier et al. (1991 a, b).

  4. Development and Psychometric Validation of HIPER-Q to Assess Knowledge of Hypertensive Patients in Cardiac Rehabilitation.

    PubMed

    Santos, Rafaella Zulianello Dos; Bonin, Christiani Decker Batista; Martins, Eliara Ten Caten; Pereira Junior, Moacir; Ghisi, Gabriela Lima de Melo; Macedo, Kassia Rosangela Paz de; Benetti, Magnus

    2018-01-01

    The absence of instruments capable of measuring the level of knowledge of hypertensive patients in cardiac rehabilitation programs about their disease reflects the lack of specific recommendations for these patients. To develop and validate a questionnaire to evaluate the knowledge of hypertensive patients in cardiac rehabilitation programs about their disease. A total of 184 hypertensive patients (mean age 60.5 ± 10 years, 66.8% men) were evaluated. Reproducibility was assessed by calculation of the intraclass correlation coefficient using the test-retest method. Internal consistency was assessed by Cronbach's alpha and construct validity by exploratory factor analysis. The final version of the instrument had 17 questions organized in areas considered important for patient education. The proposed instrument showed a clarity index of 8.7 (0.25). The intraclass correlation coefficient was 0.804 and the Cronbach's alpha coefficient was 0.648. Factor analysis revealed five factors associated with knowledge areas. Regarding criterion validity, patients with higher education level and higher family income showed greater knowledge about hypertension. The instrument has a satisfactory clarity index and adequate validity, and can be used to evaluate the knowledge of hypertensive participants in cardiac rehabilitation programs.
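    Cronbach's alpha, the internal-consistency statistic reported in the abstract above, is straightforward to compute from item-level scores. A minimal sketch with made-up data, not the study's dataset:

```python
def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per questionnaire item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(item_scores)                      # number of items
    n = len(item_scores[0])                   # number of respondents

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    total_item_var = sum(sample_var(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - total_item_var / sample_var(totals))
```

    Perfectly redundant items yield alpha = 1.0; weakly related items push it toward 0, which is why a value like 0.648 signals only moderate internal consistency.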

  5. Friction on a granular-continuum interface: Effects of granular media

    NASA Astrophysics Data System (ADS)

    Ecke, Robert; Geller, Drew

    We consider the frictional interactions of two soft plates with interposed granular material subject to normal and shear forces. The plates are soft photo-elastic material, have length 50 cm, and are separated by a gap of variable width from 0 to 20 granular particle diameters. The granular materials are two-dimensional rods that are bi-dispersed in size to prevent crystallization. Different rod materials with frictional coefficients between 0.04 < μ < 0.5 are used to explore the effects of inter-granular friction on the effective friction of a granular medium. The gap is varied to test the dependence of the friction coefficient on the thickness of the granular layer. Because the soft plates absorb most of the displacement associated with the compressional normal force, the granular packing fractions are close to a jamming threshold, probably a shear jamming criterion. The overall shear and normal forces are measured using force sensors and the local strain tensor over a central portion of the gap is obtained using relative displacements of fiducial markers on the soft elastic material. These measurements provide a good characterization of the global and local forces giving rise to an effective friction coefficient. Funded by US DOE LDRD Program.

  6. Validation of the Hebrew version of the Burn Specific Health Scale-Brief questionnaire.

    PubMed

    Stavrou, Demetris; Haik, Josef; Wiser, Itay; Winkler, Eyal; Liran, Alon; Holloway, Samantha; Boyd, Julie; Zilinsky, Isaac; Weissman, Oren

    2015-02-01

    The Burns Specific Health Scale-Brief (BSHS-B) questionnaire is a suitable measurement tool for the assessment of general, physical, mental, and social health aspects of the burn survivor. To translate, culturally adapt and validate the BSHS-B to Hebrew (BSHS-H), and to investigate its psychometric properties. Eighty-six Hebrew speaking burn survivors filled out the BSHS-B and SF-36 questionnaires. Ten of them (11.63%) completed a retest. The psychometric properties of the scale were evaluated. Internal consistency, criterion validity, and construct validity were assessed using interclass correlation coefficient, Cronbach's alpha statistic, Spearman rank test, and Mann-Whitney U test respectively. BSHS-H Cronbach's alpha coefficient was 0.97. Test-retest interclass coefficients were between 0.81 and 0.98. BSHS-H was able to discriminate between facial burns, hand burns and burns >10% body surface area (p<0.05). BSHS-H and SF-36 were positively correlated (r(2)=0.667, p<0.01). BSHS-H is a reliable and valid instrument for use in the Israeli burn survivor population. The translation and cross-cultural adaptation of this disease specific scale allows future comparative international studies. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.

  7. A new approach to formulating and appraising drug policy: A multi-criterion decision analysis applied to alcohol and cannabis regulation.

    PubMed

    Rogeberg, Ole; Bergsvik, Daniel; Phillips, Lawrence D; van Amsterdam, Jan; Eastwood, Niamh; Henderson, Graeme; Lynskey, Micheal; Measham, Fiona; Ponton, Rhys; Rolles, Steve; Schlag, Anne Katrin; Taylor, Polly; Nutt, David

    2018-02-16

    Drug policy, whether for legal or illegal substances, is a controversial field that encompasses many complex issues. Policies can have effects on a myriad of outcomes and stakeholders differ in the outcomes they consider and value, while relevant knowledge on policy effects is dispersed across multiple research disciplines making integrated judgements difficult. Experts on drug harms, addiction, criminology and drug policy were invited to a decision conference to develop a multi-criterion decision analysis (MCDA) model for appraising alternative regulatory regimes. Participants collectively defined regulatory regimes and identified outcome criteria reflecting ethical and normative concerns. For cannabis and alcohol separately, participants evaluated each regulatory regime on each criterion and weighted the criteria to provide summary scores for comparing different regimes. Four generic regulatory regimes were defined: absolute prohibition, decriminalisation, state control and free market. Participants also identified 27 relevant criteria which were organised into seven thematically related clusters. State control was the preferred regime for both alcohol and cannabis. The ranking of the regimes was robust to variations in the criterion-specific weights. The MCDA process allowed the participants to deconstruct complex drug policy issues into a set of simpler judgements that led to consensus about the results. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Improved methods for dewarping images in convex mirrors in fine art: applications to van Eyck and Parmigianino

    NASA Astrophysics Data System (ADS)

    Usami, Yumi; Stork, David G.; Fujiki, Jun; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru

    2011-03-01

    We derive and demonstrate new methods for dewarping images depicted in convex mirrors in artwork and for estimating the three-dimensional shapes of the mirrors themselves. Previous methods were based on the assumption that mirrors were spherical or paraboloidal, an assumption unlikely to hold for hand-blown glass spheres used in early Renaissance art, such as Johannes van Eyck's Portrait of Giovanni (?) Arnolfini and his wife (1434) and Robert Campin's Portrait of St. John the Baptist and Heinrich von Werl (1438). Our methods are more general than such previous methods in that we assume merely that the mirror is radially symmetric and that there are straight lines (or colinear points) in the actual source scene. We express the mirror's shape as a mathematical series and pose the image dewarping task as that of estimating the coefficients in the series expansion. Central to our method is the plumbline principle: that the optimal coefficients are those that dewarp the mirror image so as to straighten lines that correspond to straight lines in the source scene. We solve for these coefficients algebraically through principal component analysis, PCA. Our method relies on a global figure of merit to balance warping errors throughout the image and it thereby reduces a reliance on the somewhat subjective criterion used in earlier methods. Our estimation can be applied to separate image annuli, which is appropriate if the mirror shape is irregular. Once we have found the optimal image dewarping, we compute the mirror shape by solving a differential equation based on the estimated dewarping function. We demonstrate our methods on the Arnolfini mirror and reveal a dewarped image superior to those found in prior work: an image noticeably more rectilinear throughout and having a more coherent geometrical perspective and vanishing points.
Moreover, we find the mirror deviated from spherical and paraboloidal shape; this implies that it would have been useless as a concave projection mirror, as has been claimed. Our dewarped image can be compared to the geometry in the full Arnolfini painting; the geometrical agreement strongly suggests that van Eyck worked from an actual room, not, as has been suggested by some art historians, a "fictive" room of his imagination. We apply our method to other mirrors depicted in art, such as Parmigianino's Self-portrait in a convex mirror and compare our results to those from earlier computer graphics simulations.

  9. 34 CFR 489.5 - What definitions apply?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., DEPARTMENT OF EDUCATION FUNCTIONAL LITERACY FOR STATE AND LOCAL PRISONERS PROGRAM General § 489.5 What...— Functional literacy means at least an eighth grade equivalence, or a functional criterion score, on a nationally recognized literacy assessment. Local correctional agency means any agency of local government...

  10. Envelopes of Sets of Measures, Tightness, and Markov Control Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez-Hernandez, J.; Hernandez-Lerma, O.

    1999-11-15

    We introduce upper and lower envelopes for sets of measures on an arbitrary topological space, which are then used to give a tightness criterion. These concepts are applied to show the existence of optimal policies for a class of Markov control processes.

  11. The brief multidimensional students' life satisfaction scale-college version.

    PubMed

    Zullig, Keith J; Huebner, E Scott; Patton, Jon M; Murray, Karen A

    2009-01-01

    To investigate the psychometric properties of the BMSLSS-College among 723 college students. Internal consistency estimates explored scale reliability, factor analysis explored construct validity, and known-groups validity was assessed using the National College Youth Risk Behavior Survey and Harvard School of Public Health College Alcohol Study. Criterion-related validity was explored through analyses with the CDC's health-related quality of life scale and a social isolation scale. Acceptable internal consistency reliability, construct, known-groups, and criterion-related validity were established. Findings offer preliminary support for the BMSLSS-C; it could be useful in large-scale research studies, applied screening contexts, and for program evaluation purposes toward achieving Healthy People 2010 objectives.

  12. The role of public and private transfers in the cost-benefit analysis of mental health programs.

    PubMed

    Brent, Robert J

    2004-11-01

    This paper revisits the issue of whether to include maintenance costs in an economic evaluation in mental health. The source of these maintenance costs may be public or private transfers. The issue is discussed in terms of a formal cost-benefit criterion. It is shown that, when transfers have productivity effects, income distribution is important, and one recognizes that public transfers have tax implications, transfers can have real resource effects and cannot be ignored. The criterion is then applied to an evaluation of three case management programs in California that sought to reduce the intensive hospitalization of the severely mentally ill. 2004 John Wiley & Sons, Ltd.

  13. Psychometric properties of the brief version of the Fear of Negative Evaluation Scale in a Turkish sample.

    PubMed

    Koydemir, Selda; Demir, Ayhan

    2007-06-01

    The purpose of the study was to report initial data on the psychometric properties of the Brief Fear of Negative Evaluation Scale. The scale was applied to a nonclinical sample of 250 (137 women, 113 men) Turkish undergraduate students selected randomly from Middle East Technical University. Their mean age was 20.4 yr. (SD = 1.9). The factor structure of the Turkish version, its criterion validity, and internal reliability coefficients were assessed. Although maximum likelihood factor analysis initially indicated that the scale had only one factor, a forced two-factor solution accounted for more variance (61%) in scale scores than a single factor. The straightforward items loaded on the first factor, and the reverse-coded items loaded on the second factor. The total score was significantly positively correlated with scores on the Revised Cheek and Buss Shyness Scale and significantly negatively correlated with scores on the Rosenberg Self-Esteem Scale. Factor 1 (straightforward items) correlated more highly with both Shyness and Self-esteem than Factor 2 (reverse-coded items). The internal consistency estimate was .94 for the Total scores, .91 for Factor 1 (straightforward items), and .87 for Factor 2 (reverse-coded items). No sex differences were evident for Fear of Negative Evaluation.

  14. An approach to the development of numerical algorithms for first order linear hyperbolic systems in multiple space dimensions: The constant coefficient case

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1995-01-01

    Two methods for developing high order single step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth-order accuracy in both space and time in one space dimension, and up to sixth-order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowaleskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high order accuracy, and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.

  15. Road traffic accidents prediction modelling: An analysis of Anambra State, Nigeria.

    PubMed

    Ihueze, Chukwutoo C; Onwurah, Uchendu O

    2018-03-01

    One of the major problems in the world today is the rate of road traffic crashes and deaths on our roads. The majority of these deaths occur in low- and middle-income countries, including Nigeria. This study analyzed road traffic crashes in Anambra State, Nigeria, with the intention of developing accurate predictive models for forecasting crash frequency in the State using autoregressive integrated moving average (ARIMA) and autoregressive integrated moving average with explanatory variables (ARIMAX) modelling techniques. The results showed that the ARIMAX model outperformed the ARIMA(1,1,1) model when their performances were compared using the lower Bayesian information criterion, mean absolute percentage error, and root mean square error, and the higher coefficient of determination (R-squared), as accuracy measures. The findings of this study reveal that incorporating human, vehicle, and environment-related factors in time series analysis of a crash dataset produces a more robust predictive model than solely using aggregated crash counts. This study contributes to the body of knowledge on road traffic safety and provides an approach to forecasting using many human, vehicle and environmental factors. The recommendations made in this study, if applied, will help in reducing the number of road traffic crashes in Nigeria. Copyright © 2017 Elsevier Ltd. All rights reserved.
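    The model comparison logic in the abstract above (prefer the candidate with the lower Bayesian information criterion) can be sketched generically. The Gaussian-error form of BIC below and the sample numbers are illustrative placeholders, not the study's fitted values:

```python
import math

def gaussian_bic(n, rss, k):
    """BIC for a least-squares fit with n observations, residual sum of
    squares rss, and k estimated parameters: n*ln(rss/n) + k*ln(n).
    Lower is better: extra parameters must buy a big enough drop in rss."""
    return n * math.log(rss / n) + k * math.log(n)

# Hypothetical comparison: the model with explanatory variables (more
# parameters, ARIMAX-like) wins only if it cuts the residuals enough.
bic_arima = gaussian_bic(n=120, rss=80.0, k=3)    # 3 parameters
bic_arimax = gaussian_bic(n=120, rss=55.0, k=6)   # 6 parameters
better = "ARIMAX" if bic_arimax < bic_arima else "ARIMA"
```

    The `k * ln(n)` term is the complexity penalty; with these illustrative numbers the richer model's lower residual sum of squares outweighs its penalty.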

  16. Examining spectral properties of Landsat 8 OLI for predicting above-ground carbon of Labanan Forest, Berau

    NASA Astrophysics Data System (ADS)

    Suhardiman, A.; Tampubolon, B. A.; Sumaryono, M.

    2018-04-01

    Many studies have revealed significant correlations between satellite image properties and forest attributes such as stand volume, biomass, or carbon stock. However, further study is still relevant due to advancement of remote sensing technology as well as improvement of data analysis methods. In this study, the properties of three vegetation indices derived from Landsat 8 OLI were tested against above-ground carbon stock data from 50 circular sample plots (30-meter radius) from a ground survey in the PT. Inhutani I forest concession in Labanan, Berau, East Kalimantan. Correlation analysis using the Pearson method exhibited promising results: the coefficient of correlation (r-value) was higher than 0.5. Further regression analysis was carried out to develop mathematical models describing the correlation between sample plot data and vegetation index images. Power and exponential models demonstrated good results for all vegetation indices. In order to choose the most adequate mathematical model for predicting Above-ground Carbon (AGC), the Bayesian Information Criterion (BIC) was applied. The lowest BIC value (i.e. -376.41), shown by the Transformed Vegetation Index (TVI), indicates that its formula, AGC = 9.608*TVI^21.54, is the best predictor of AGC for the study area.

  17. Volatility modeling for IDR exchange rate through APARCH model with student-t distribution

    NASA Astrophysics Data System (ADS)

    Nugroho, Didit Budi; Susanto, Bambang

    2017-08-01

    The aim of this study is to empirically investigate the performance of the APARCH(1,1) volatility model with the Student-t error distribution on five foreign currency selling rates to the Indonesian rupiah (IDR): the Swiss franc (CHF), the euro (EUR), the British pound (GBP), the Japanese yen (JPY), and the US dollar (USD). Six years of daily closing rates over the period January 2010 to December 2016, a total of 1722 observations, have been analysed. Bayesian inference using the efficient independence chain Metropolis-Hastings and adaptive random walk Metropolis methods in a Markov chain Monte Carlo (MCMC) scheme has been applied to estimate the parameters of the model. According to the DIC criterion, this study found that the APARCH(1,1) model under the Student-t distribution is a better fit than the model under the normal distribution for every observed rate return series. The 95% highest posterior density intervals suggested the APARCH model for modelling the IDR/JPY and IDR/USD volatilities. In particular, the IDR/JPY and IDR/USD data have significant negative and positive leverage effects in the rate returns, respectively. Meanwhile, the optimal power coefficient of volatility was found to be statistically different from 2 for all rate return series except the IDR/EUR series.

  18. An image adaptive, wavelet-based watermarking of digital images

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital management, multimedia content and data can easily be used in illegal ways: copied, modified, and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers, and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermark techniques are emerging as a valid solution. In this paper, we describe an algorithm, called WM2.0, for an invisible watermark: private, strong, wavelet-based, and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system characteristics. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high frequency DWT components of a specific sub-image and is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the method to be resistant against geometric, filtering, and StirMark attacks with a low rate of false alarm.
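    The correlation-based detection step described above can be sketched as a generic normalized-correlation detector. This is not the WM2.0 implementation: in a true Neyman-Pearson setting the threshold would be derived from a target false-alarm probability, whereas here it is a fixed illustrative value, and the "coefficients" are a toy signal rather than DWT output:

```python
import math

def correlation_detect(received_coeffs, watermark, threshold=0.1):
    """Decide watermark presence from the normalized correlation between
    received transform coefficients and the candidate watermark sequence."""
    dot = sum(c * w for c, w in zip(received_coeffs, watermark))
    norm = (math.sqrt(sum(c * c for c in received_coeffs))
            * math.sqrt(sum(w * w for w in watermark)))
    rho = dot / norm
    return rho > threshold, rho

# Illustrative additive embedding on a flat host signal.
n = 64
watermark = [1.0 if i % 2 == 0 else -1.0 for i in range(n)]
host = [1.0] * n                                 # unmarked coefficients
marked = [h + 0.3 * w for h, w in zip(host, watermark)]
```

    With this construction the unmarked host is exactly orthogonal to the watermark (correlation 0), while the marked signal correlates at roughly 0.29, well above the illustrative threshold.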

  19. Lock-in in forced vibration of a circular cylinder

    NASA Astrophysics Data System (ADS)

    Kumar, Samvit; Navrose; Mittal, Sanjay

    2016-11-01

    The phenomenon of lock-in/synchronization in uniform flow past an oscillating cylinder is investigated via a stabilized finite element method at Re = 100. Computations are carried out for various amplitudes and frequencies of cylinder oscillation to accurately obtain the boundary of the lock-in regime. Results from earlier studies show a significant scatter in the lock-in boundary. The scatter might be an outcome of the difference in data collection or the use of a different criterion for identifying lock-in. A new criterion for lock-in is proposed, wherein the following two conditions are to be satisfied. (i) The most dominant frequency in the power spectrum of lift coefficient matches the frequency of cylinder oscillation (fy) and (ii) other peaks in the power spectrum, if any, are present only at super-harmonics of fy. Utilizing this criterion, three flow regimes are identified on the frequency-amplitude plane: lock-in, transition, and no lock-in. The behaviour of the wake is also investigated by examining the power spectra of the velocity traces at various locations downstream of the cylinder. Wake-lock-in is observed during lock-in. A wake-transition regime is identified wherein the near wake, up to a certain streamwise location, is in a lock-in state while the downstream region is in a desynchronized state. For a fixed fy, the location beyond which the wake is desynchronized moves downstream as the amplitude of oscillation is increased. The proposed criterion for lock-in addresses the wide scatter in the boundary of the lock-in regime among earlier studies. Energy transfer from the fluid to the structure, per cycle of cylinder oscillation, is computed from the data for forced vibration. The data is utilized to generate iso-energy transfer contours in the frequency-amplitude plane. The free vibration response with zero structural damping is found to be in very good agreement with the contour corresponding to zero energy transfer.
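    Condition (i) of the proposed lock-in criterion (the dominant frequency of the lift signal matches the cylinder oscillation frequency fy) can be sketched with a brute-force DFT. The signal below is synthetic, and the super-harmonic check of condition (ii) is omitted for brevity:

```python
import cmath
import math

def dominant_frequency(signal, dt):
    """Frequency (Hz) of the largest DFT magnitude, DC excluded.
    Brute-force O(n^2) DFT; fine for a short illustrative signal."""
    n = len(signal)
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2):
        s = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k / (n * dt)

def first_lock_in_condition(lift_signal, dt, f_y, tol):
    """Condition (i): the dominant lift frequency matches the forcing
    frequency f_y within a frequency-resolution tolerance."""
    return abs(dominant_frequency(lift_signal, dt) - f_y) <= tol

# Synthetic lift trace oscillating at 5 Hz, sampled at 100 Hz for 1 s.
dt, f_y = 0.01, 5.0
lift = [math.sin(2 * math.pi * f_y * t * dt) for t in range(100)]
```

    In practice the full criterion also inspects the remaining spectral peaks, accepting only those at super-harmonics of fy; that check would scan the same DFT magnitudes for peaks at non-multiple frequencies.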

  20. Validity and reliability of a simple, low cost measure to quantify children’s dietary intake in afterschool settings

    PubMed Central

    Davison, Kirsten K.; Austin, S. Bryn; Giles, Catherine; Cradock, Angie L.; Lee, Rebekka M.; Gortmaker, Steven L.

    2017-01-01

    Interest in evaluating and improving children’s diets in afterschool settings has grown, necessitating the development of feasible yet valid measures for capturing children’s intake in such settings. This study’s purpose was to test the criterion validity and cost of three unobtrusive visual estimation methods compared to a plate-weighing method: direct on-site observation using a 4-category rating scale and off-site rating of digital photographs taken on-site using 4- and 10-category scales. Participants were 111 children in grades 1–6 attending four afterschool programs in Boston, MA in December 2011. Researchers observed and photographed 174 total snack meals consumed across two days at each program. Visual estimates of consumption were compared to weighed estimates (the criterion measure) using intra-class correlations. All three methods were highly correlated with the criterion measure, ranging from 0.92–0.94 for total calories consumed, 0.86–0.94 for consumption of pre-packaged beverages, 0.90–0.93 for consumption of fruits/vegetables, and 0.92–0.96 for consumption of grains. For water, which was not pre-portioned, coefficients ranged from 0.47–0.52. The photographic methods also demonstrated excellent inter-rater reliability: 0.84–0.92 for the 4-point and 0.92–0.95 for the 10-point scale. The costs of the methods for estimating intake ranged from $0.62 per observation for the on-site direct visual method to $0.95 per observation for the criterion measure. This study demonstrates that feasible, inexpensive methods can validly and reliably measure children’s dietary intake in afterschool settings. Improving precision in measures of children’s dietary intake can reduce the likelihood of spurious or null findings in future studies. PMID:25596895

  1. Psychometric properties of the Spanish version of the mindful attention awareness scale (MAAS) in patients with fibromyalgia

    PubMed Central

    2013-01-01

    Background: Mindfulness-based interventions improve functioning and quality of life in fibromyalgia (FM) patients. The aim of the study is to perform a psychometric analysis of the Spanish version of the Mindful Attention Awareness Scale (MAAS) in a sample of patients diagnosed with FM. Methods: The following measures were administered to 251 Spanish patients with FM: the Spanish version of the MAAS, the Chronic Pain Acceptance Questionnaire, the Pain Catastrophising Scale, the Injustice Experience Questionnaire, the Psychological Inflexibility in Pain Scale, the Fibromyalgia Impact Questionnaire and the Euroqol. Factorial structure was analysed using Confirmatory Factor Analyses (CFA). Cronbach's α coefficient was calculated to examine internal consistency, and the intraclass correlation coefficient (ICC) was calculated to assess the test-retest reliability of the measures. Pearson’s correlation tests were run to evaluate univariate relationships between scores on the MAAS and criterion variables. Results: The MAAS scores in our sample were low (M = 56.7; SD = 17.5). CFA confirmed a two-factor structure, with the following fit indices: Satorra-Bentler χ² = 172.34 (p < 0.001), CFI = 0.95, GFI = 0.90, SRMR = 0.05, RMSEA = 0.06. The MAAS was found to have high internal consistency (Cronbach’s α = 0.90) and adequate test-retest reliability at a 1–2 week interval (ICC = 0.90). It showed significant and expected correlations with the criterion measures, with the exception of the Euroqol (Pearson = 0.15). Conclusion: The psychometric properties of the Spanish version of the MAAS in patients with FM are adequate. The dimensionality of the MAAS found in this sample and directions for future research are discussed. PMID:23317306
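
    Cronbach's α, as reported above, is computed from the item variances and the variance of the total score. A minimal sketch with hypothetical item scores (not the MAAS data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

base = np.linspace(1, 5, 5)                     # 5 hypothetical respondents
parallel_items = np.column_stack([base] * 4)    # 4 perfectly consistent items
alpha_perfect = cronbach_alpha(parallel_items)

rng = np.random.default_rng(0)
independent_items = rng.normal(size=(500, 4))   # unrelated items
alpha_low = cronbach_alpha(independent_items)
```

    Perfectly consistent items give α = 1, while unrelated items give α near 0; values such as the 0.90 reported above fall between these extremes.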

  2. On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.

    PubMed

    López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J

    2015-04-01

    Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
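
    Model comparison by the Akaike information criterion, as used above, can be sketched with SciPy's nonlinear least squares. The Gompertz and logistic forms below are standard, but all parameter values and the synthetic "cumulative yield" data are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

def logistic(t, a, b, c):
    return a / (1 + b * np.exp(-c * t))

def aic(y, yhat, n_params):
    """Least-squares AIC, up to an additive constant."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * n_params

rng = np.random.default_rng(42)
t = np.arange(1, 306, dtype=float)  # a 305-d lactation
y = gompertz(t, 9000.0, 4.0, 0.03) + rng.normal(0, 50.0, t.size)

p_gom, _ = curve_fit(gompertz, t, y, p0=[9000, 4, 0.03], bounds=(0, np.inf))
p_log, _ = curve_fit(logistic, t, y, p0=[9000, 10, 0.03], bounds=(0, np.inf))
aic_gom = aic(y, gompertz(t, *p_gom), 3)
aic_log = aic(y, logistic(t, *p_log), 3)
```

    Because the synthetic data follow a Gompertz curve, the Gompertz fit attains the smaller AIC, illustrating how the ranking reported above is obtained.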

  3. Performance of the libraries in Isfahan University of Medical Sciences based on the EFQM model.

    PubMed

    Karimi, Saeid; Atashpour, Bahareh; Papi, Ahmad; Nouri, Rasul; Hasanzade, Akbar

    2014-01-01

    Performance measurement is essential for university libraries. Hence, planning and establishing a constant and up-to-date measurement system is required for libraries, especially university libraries. Preliminary studies and analyses reveal that the EFQM Excellence Model has been efficient, and the administrative reform program has focused on the implementation of this model. Therefore, on the basis of these facts as well as the need for a measurement system, the researchers measured the performance of libraries in schools and hospitals supported by Isfahan University of Medical Sciences, using the EFQM Organizational Excellence Model. This descriptive research study was carried out by a cross-sectional survey method in 2011. The study included the librarians and library directors of Isfahan University of Medical Sciences (70 people). The validity of the instrument was assessed by specialists in the field of Management and Library Science. To measure the reliability of the questionnaire, Cronbach's alpha coefficient was calculated (0.93). The t-test, ANOVA, and Spearman's rank correlation coefficient were used for the analyses, and the data were analyzed with SPSS. Among the nine dimensions, the highest mean performance score for the libraries under study was 65.3%, for the leadership dimension, and the lowest scores were 55.1% for the people results and 55.1% for the society results. Overall, under the nine-criterion EFQM model, all dimensions were assessed at an average level, in good agreement with normal values. However, compared with the other results, the people results and society results criteria were poor. It is recommended that an expert committee on the people results and society results criteria be formed and that relevant conferences and training courses be held to improve these aspects.

  4. Verification of the reliability and validity of a Japanese version of the Quality of Life in Childhood Epilepsy Questionnaire (QOLCE-J).

    PubMed

    Moriguchi, Eri; Ito, Mikiko; Nagai, Toshisaburo

    2015-11-01

    A Japanese version of the Quality of Life in Childhood Epilepsy Questionnaire (QOLCE-J) was developed, using international guidelines, as a QOL scale for childhood epilepsy; its reliability and validity were examined, focusing on its applicability to Japanese pediatric epilepsy patients. A pilot questionnaire survey was conducted involving parents of pediatric epilepsy patients aged 4-15 undergoing outpatient treatment; 278 responses were obtained and analyzed. Internal consistency was sufficient for the 16 QOLCE-J subscales (with one exception), and a high overall coefficient α was obtained. The intraclass correlation coefficient was also high, supporting the test-retest reliability of this version. Among the subscales, high correlations (r > 0.7) were observed among those representing cognitive and behavioral aspects, whereas correlations among the others were moderate or weaker. Furthermore, correlations of r > 0.35 were observed between the subscales of the SDQ (Strengths and Difficulties Questionnaire), used as an external criterion, and the QOLCE-J, confirming the criterion validity of the study version. Analysis of associations between the total QOLCE-J score and the pathology of epilepsy found significant correlations with age of onset, frequency of seizures, ADL, and symptoms of antiepileptic side effects. Although the QOLCE has mostly been used in treatment-resistant pediatric patients, the interictal influences observed here, such as symptoms of antiepileptic side effects, suggest its usefulness for pediatric patients whose seizures are under control. The QOLCE-J, with sufficient reliability and validity, may be applicable as a QOL scale for Japanese children with epilepsy. Copyright © 2015 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  5. The Multimedia Activity Recall for Children and Adolescents (MARCA): development and evaluation.

    PubMed

    Ridley, Kate; Olds, Tim S; Hill, Alison

    2006-05-26

    Self-report recall questionnaires are commonly used to measure physical activity, energy expenditure and time use in children and adolescents. However, self-report questionnaires show low to moderate validity, mainly due to inaccuracies in recalling activity in terms of duration and intensity. Aside from recall errors, inaccuracies in estimating energy expenditure from self-report questionnaires are compounded by a lack of data on the energy cost of everyday activities in children and adolescents. This article describes the development of the Multimedia Activity Recall for Children and Adolescents (MARCA), a computer-delivered use-of-time instrument designed to address both the limitations of self-report recall questionnaires in children, and the lack of energy cost data in children. The test-retest reliability of the MARCA was assessed using a sample of 32 children (aged 11.8 +/- 0.7 y) who undertook the MARCA twice within 24-h. Criterion validity was assessed by comparing self-reports with accelerometer counts collected on a sample of 66 children (aged 11.6 +/- 0.8 y). Content and construct validity were assessed by establishing whether data collected using the MARCA on 1429 children (aged 11.9 +/- 0.8 y) exhibited relationships and trends in children's physical activity consistent with established findings from a number of previous research studies. Test-retest reliability was high with intra-class coefficients ranging from 0.88 to 0.94. The MARCA demonstrated criterion validity comparable to other self-report instruments with Spearman coefficients ranging from rho = 0.36 to 0.45, and provided evidence of good content and construct validity. The MARCA is a valid and reliable self-report questionnaire, capable of a wide variety of flexible use-of-time analyses related to both physical activity and sedentary behaviour, and offers advantages over existing pen-and-paper questionnaires.

  6. Estimating Dbh of Trees Employing Multiple Linear Regression of the best Lidar-Derived Parameter Combination Automated in Python in a Natural Broadleaf Forest in the Philippines

    NASA Astrophysics Data System (ADS)

    Ibanez, C. A. G.; Carcellar, B. G., III; Paringit, E. C.; Argamosa, R. J. L.; Faelga, R. A. G.; Posilero, M. A. V.; Zaragosa, G. P.; Dimayacyac, N. A.

    2016-06-01

    Diameter-at-breast-height (DBH) estimation is a prerequisite in various allometric equations estimating important forestry indices like stem volume, basal area, biomass and carbon stock. LiDAR technology provides a means of directly obtaining different forest parameters, except DBH, from the behavior and characteristics of the point cloud, which are unique to different forest classes. An extensive tree inventory was done on a two-hectare established sample plot in a natural growth forest on Mt. Makiling, Laguna. Coordinates, height, and canopy cover were measured, and species were identified, for comparison with LiDAR derivatives. Multiple linear regression was used to obtain LiDAR-derived DBH by integrating field-derived DBH and 27 LiDAR-derived parameters at 20 m, 10 m, and 5 m grid resolutions. To determine the best combination of parameters for DBH estimation, all possible combinations of parameters were generated, with the search automated using Python scripts and regression-related libraries such as NumPy, SciPy, and scikit-learn. The combination that yields the highest r-squared (coefficient of determination) and the lowest AIC (Akaike Information Criterion) and BIC (Bayesian Information Criterion) was determined to be the best equation. The best equation used 11 parameters at the 10 m grid size, with an r-squared of 0.604, an AIC of 154.04, and a BIC of 175.08. Combinations of parameters may differ among forest classes, warranting further studies. Additional statistical tests, such as the Kaiser-Meyer-Olkin (KMO) coefficient and Bartlett's Test for Sphericity (BTS), can be supplemented to help determine the correlation among parameters.
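
    The exhaustive parameter-combination search the authors automated in Python can be sketched with plain NumPy and itertools. The predictor names and synthetic data below are hypothetical stand-ins for the 27 LiDAR-derived parameters, not the study's variables:

```python
import itertools
import numpy as np

def ols_rss(X, y):
    """RSS and parameter count of an OLS fit with intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return float(np.sum((y - Xd @ beta) ** 2)), Xd.shape[1]

def best_subset(X, y, names):
    """Score every predictor combination; keep the one with the lowest AIC."""
    n = len(y)
    best = None
    for r in range(1, X.shape[1] + 1):
        for combo in itertools.combinations(range(X.shape[1]), r):
            rss, p = ols_rss(X[:, combo], y)
            aic = n * np.log(rss / n) + 2 * p
            bic = n * np.log(rss / n) + np.log(n) * p
            if best is None or aic < best[0]:
                best = (aic, bic, [names[i] for i in combo])
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(0, 0.5, 200)  # two true predictors
aic, bic, chosen = best_subset(X, y, ["h_mean", "h_max", "cover", "p90", "density"])
```

    The two informative predictors always appear in the AIC-best combination; BIC's larger penalty (log n vs. 2) makes it the stricter of the two criteria.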

  7. Performance of the libraries in Isfahan University of Medical Sciences based on the EFQM model

    PubMed Central

    Karimi, Saeid; Atashpour, Bahareh; Papi, Ahmad; Nouri, Rasul; Hasanzade, Akbar

    2014-01-01

    Introduction: Performance measurement is essential for university libraries. Hence, planning and establishing a constant and up-to-date measurement system is required for libraries, especially university libraries. Preliminary studies and analyses reveal that the EFQM Excellence Model has been efficient, and the administrative reform program has focused on the implementation of this model. Therefore, on the basis of these facts as well as the need for a measurement system, the researchers measured the performance of libraries in schools and hospitals supported by Isfahan University of Medical Sciences, using the EFQM Organizational Excellence Model. Materials and Methods: This descriptive research study was carried out by a cross-sectional survey method in 2011. The study included the librarians and library directors of Isfahan University of Medical Sciences (70 people). The validity of the instrument was assessed by specialists in the field of Management and Library Science. To measure the reliability of the questionnaire, Cronbach's alpha coefficient was calculated (0.93). The t-test, ANOVA, and Spearman's rank correlation coefficient were used for the analyses, and the data were analyzed with SPSS. Results: Among the nine dimensions, the highest mean performance score for the libraries under study was 65.3%, for the leadership dimension, and the lowest scores were 55.1% for the people results and 55.1% for the society results. Conclusion: Overall, under the nine-criterion EFQM model, all dimensions were assessed at an average level, in good agreement with normal values. However, compared with the other results, the people results and society results criteria were poor. It is recommended that an expert committee on the people results and society results criteria be formed and that relevant conferences and training courses be held to improve these aspects. PMID:25540795

  8. Validity of the occupational sitting and physical activity questionnaire.

    PubMed

    Chau, Josephine Y; Van Der Ploeg, Hidde P; Dunn, Scott; Kurko, John; Bauman, Adrian E

    2012-01-01

    Sitting at work is an emerging occupational health risk. Few instruments designed for use in population-based research measure occupational sitting and standing as distinct behaviors. This study aimed to develop and validate a brief measure of occupational sitting and physical activity. A convenience sample (n = 99, 61% female) was recruited from two medium-sized workplaces and by word-of-mouth in Sydney, Australia. Participants completed the newly developed Occupational Sitting and Physical Activity Questionnaire (OSPAQ) and a modified version of the MONICA Optional Study on Physical Activity Questionnaire (modified MOSPA-Q) twice, 1 wk apart. Participants also wore an ActiGraph accelerometer for the 7 d in between the test and retest. Analyses determined test-retest reliability with intraclass correlation coefficients and assessed criterion validity against accelerometers using Spearman's ρ. The test-retest intraclass correlation coefficients for occupational sitting, standing, and walking for the OSPAQ ranged from 0.73 to 0.90, while those for the modified MOSPA-Q ranged from 0.54 to 0.89. Comparison of sitting measures with accelerometers showed higher Spearman correlations for the OSPAQ (r = 0.65) than for the modified MOSPA-Q (r = 0.52). Criterion validity correlations against accelerometers for occupational standing and walking measures were comparable for both instruments (standing: r = 0.49; walking: r = 0.27-0.29). The OSPAQ has excellent test-retest reliability and moderate validity for estimating time spent sitting and standing at work, and is comparable to existing occupational physical activity measures for assessing time spent walking at work. The OSPAQ is a brief instrument that measures sitting and standing at work as distinct behaviors and would be especially suitable in national health surveys, prospective cohort studies, and other studies that are limited by space constraints for questionnaire items.

  9. German validation of the Conners Adult ADHD Rating Scales (CAARS) II: reliability, validity, diagnostic sensitivity and specificity.

    PubMed

    Christiansen, H; Kis, B; Hirsch, O; Matthies, S; Hebebrand, J; Uekermann, J; Abdel-Hamid, M; Kraemer, M; Wiltfang, J; Graf, E; Colla, M; Sobanski, E; Alm, B; Rösler, M; Jacob, C; Jans, T; Huss, M; Schimmelmann, B G; Philipsen, A

    2012-07-01

    The German version of the Conners Adult ADHD Rating Scales (CAARS) has proven to show very high model fit in confirmatory factor analyses, with the established factors inattention/memory problems, hyperactivity/restlessness, impulsivity/emotional lability, and problems with self-concept, in both large healthy control and ADHD patient samples. This study now presents data on the psychometric properties of the German CAARS self-report (CAARS-S) and observer-report (CAARS-O) questionnaires. The CAARS-S/O and questions on sociodemographic variables were filled out by 466 patients with ADHD and 847 healthy control subjects that already participated in two prior studies, and a total of 896 observer data sets were available. Cronbach's alpha was calculated to obtain internal reliability coefficients. Pearson correlations were performed to assess test-retest reliability, and concurrent, criterion, and discriminant validity. Receiver operating characteristic (ROC) analyses were used to establish sensitivity and specificity for all subscales. Coefficient alphas ranged from .74 to .95, and test-retest reliability from .85 to .92 for the CAARS-S and from .65 to .85 for the CAARS-O. All CAARS subscales, except problems with self-concept, correlated significantly with the Barratt Impulsiveness Scale (BIS), but not with the Wender Utah Rating Scale (WURS). Criterion validity was established with ADHD subtype and diagnosis based on DSM-IV criteria. Sensitivity and specificity were high for all four subscales. The reported results confirm our previous study and show that the German CAARS-S/O do indeed represent a reliable and cross-culturally valid measure of current ADHD symptoms in adults. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  10. Is self-reporting workplace activity worthwhile? Validity and reliability of occupational sitting and physical activity questionnaire in desk-based workers.

    PubMed

    Pedersen, Scott J; Kitic, Cecilia M; Bird, Marie-Louise; Mainsbridge, Casey P; Cooley, P Dean

    2016-08-19

    With the advent of workplace health and wellbeing programs designed to address prolonged occupational sitting, tools to measure behaviour change within this environment should derive from empirical evidence. In this study we measured aspects of validity and reliability for the Occupational Sitting and Physical Activity Questionnaire that asks employees to recount the percentage of work time they spend in the seated, standing, and walking postures during a typical workday. Three separate cohort samples (N = 236) were drawn from a population of government desk-based employees across several departmental agencies. These volunteers were part of a larger state-wide intervention study. Workplace sitting and physical activity behaviour was measured both subjectively against the International Physical Activity Questionnaire, and objectively against ActivPal accelerometers before the intervention began. Criterion validity and concurrent validity for each of the three posture categories were assessed using Spearman's rank correlation coefficients, and a bias comparison with 95 % limits of agreement. Test-retest reliability of the survey was reported with intraclass correlation coefficients. Criterion validity for this survey was strong for sitting and standing estimates, but weak for walking. Participants significantly overestimated the amount of walking they did at work. Concurrent validity was moderate for sitting and standing, but low for walking. Test-retest reliability of this survey proved to be questionable for our sample. Based on our findings we must caution occupational health and safety professionals about the use of employee self-report data to estimate workplace physical activity. While the survey produced accurate measurements for time spent sitting at work it was more difficult for employees to estimate their workplace physical activity.

  11. Gas Chromatography Data Classification Based on Complex Coefficients of an Autoregressive Model

    DOE PAGES

    Zhao, Weixiang; Morgan, Joshua T.; Davis, Cristina E.

    2008-01-01

    This paper introduces autoregressive (AR) modeling as a novel method to classify outputs from gas chromatography (GC). The inverse Fourier transformation was applied to the original sensor data, and then an AR model was applied to transform data to generate AR model complex coefficients. This series of coefficients effectively contains a compressed version of all of the information in the original GC signal output. We applied this method to chromatograms resulting from proliferating bacteria species grown in culture. Three types of neural networks were used to classify the AR coefficients: backward propagating neural network (BPNN), radial basis function-principal component analysis (RBF-PCA) approach, and radial basis function-partial least squares regression (RBF-PLSR) approach. This exploratory study demonstrates the feasibility of using complex root coefficient patterns to distinguish various classes of experimental data, such as those from the different bacteria species. This cognition approach also proved to be robust and potentially useful for freeing us from time alignment of GC signals.
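
    Fitting an AR model by least squares and extracting the complex roots of its characteristic polynomial (the "complex root coefficient patterns" above) can be sketched as follows. The synthetic AR(2) signal is an illustrative stand-in for an inverse-Fourier-transformed GC trace:

```python
import numpy as np

def ar_fit(x, p):
    """Least-squares AR(p) fit: x[t] ~ a1*x[t-1] + ... + ap*x[t-p]."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([x[p - j : len(x) - j] for j in range(1, p + 1)])
    coefs, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coefs

def ar_roots(coefs):
    """Complex roots of the characteristic polynomial z^p - a1*z^(p-1) - ... - ap."""
    return np.roots(np.concatenate([[1.0], -np.asarray(coefs)]))

rng = np.random.default_rng(1)
n = 2000
x = np.zeros(n)
for t in range(2, n):  # simulate a stationary AR(2) signal
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + rng.normal()

coefs = ar_fit(x, 2)
roots = ar_roots(coefs)  # a complex-conjugate pair inside the unit circle
```

    The root pattern (here a single conjugate pair) is the kind of compact feature vector that can then be fed to a classifier.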

  12. Development of an audit instrument for nursing care plans in the patient record

    PubMed Central

    Bjorvell, C; Thorell-Ekstrand, I; Wredling, R

    2000-01-01

    Objectives—To develop, validate, and test the reliability of an audit instrument that measures the extent to which patient records describe important aspects of nursing care. Material—Twenty records from each of three hospital wards were collected and audited. The auditors were registered nurses with a knowledge of nursing documentation in accordance with the VIPS model—a model designed to structure nursing documentation. (VIPS is an acronym formed from the Swedish words for wellbeing, integrity, prevention, and security.) Methods—An audit instrument was developed by determining specific criteria to be met. The audit questions were aimed at revealing the content of the patient record regarding nursing assessment, nursing diagnosis, planned interventions, and outcome. Each of the 60 records was reviewed by the three auditors independently, and the reliability of the instrument was tested by calculating the inter-rater reliability coefficient. Content validity was tested by using an expert panel and calculating the content validity ratio. The criterion-related validity was estimated from the correlation between the score of the Cat-ch-Ing instrument and the score of an earlier developed and used audit instrument; the results were tested using Pearson's correlation coefficient. Results—The new audit instrument, named Cat-ch-Ing, consists of 17 questions designed to judge the nursing documentation. Both quantity and quality variables are judged on a rating scale from zero to three, with a maximum score of 80. The inter-rater reliability coefficients were 0.98, 0.98, and 0.92, respectively, for each group of 20 records; the content validity ratio ranged between 0.20 and 1.0; and the criterion-related validity showed a significant correlation of r = 0.68 (p < 0.0001, 95% CI 0.57 to 0.76) between the two audit instruments.
Conclusion—The Cat-ch-Ing instrument has proved to be a valid and reliable audit instrument for nursing records when the VIPS model is used as the basis of the documentation. (Quality in Health Care 2000;9:6–13) Key Words: audit instrument; nursing care plans; quality assurance PMID:10848373

  13. Effects of threshold on the topology of gene co-expression networks.

    PubMed

    Couto, Cynthia Martins Villar; Comin, César Henrique; Costa, Luciano da Fontoura

    2017-09-26

    Several developments regarding the analysis of gene co-expression profiles using complex network theory have been reported recently. Such approaches usually start with the construction of an unweighted gene co-expression network, therefore requiring the selection of a suitable threshold defining which pairs of vertices will be connected. We aimed at addressing such an important problem by suggesting and comparing five different approaches for threshold selection. Each of the methods considers a respective biologically-motivated criterion for electing a potentially suitable threshold. A set of 21 microarray experiments from different biological groups was used to investigate the effect of applying the five proposed criteria to several biological situations. For each experiment, we used the Pearson correlation coefficient to measure the relationship between each gene pair, and the resulting weight matrices were thresholded considering several values, generating respective adjacency matrices (co-expression networks). Each of the five proposed criteria was then applied in order to select the respective threshold value. The effects of these thresholding approaches on the topology of the resulting networks were compared by using several measurements, and we verified that, depending on the database, the impact on the topological properties can be large. However, a group of databases was verified to be similarly affected by most of the considered criteria. Based on such results, it can be suggested that when the generated networks present similar measurements, the thresholding method can be chosen with greater freedom. If the generated networks are markedly different, the thresholding method that better suits the interests of each specific research study represents a reasonable choice.
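
    The construction step described above, thresholding a Pearson weight matrix into an unweighted adjacency matrix, can be sketched in a few lines. The toy expression matrix and threshold values are illustrative assumptions:

```python
import numpy as np

def threshold_network(expression, tau):
    """Binarize a gene-gene Pearson correlation matrix at threshold tau."""
    w = np.corrcoef(expression)           # (n_genes, n_genes) weight matrix
    adj = (np.abs(w) >= tau).astype(int)  # unweighted co-expression network
    np.fill_diagonal(adj, 0)              # no self-loops
    return adj

rng = np.random.default_rng(7)
base = rng.normal(size=200)
expression = np.vstack([
    base,                            # gene 0
    base + rng.normal(0, 0.1, 200),  # gene 1: strongly co-expressed with gene 0
    rng.normal(size=200),            # gene 2: independent
])
adj_strict = threshold_network(expression, 0.9)
adj_loose = threshold_network(expression, 0.3)
```

    Raising the threshold can only remove edges, which is why the choice of criterion directly shapes the topology of the resulting network.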

  14. Empirical Validation of Reading Proficiency Guidelines

    ERIC Educational Resources Information Center

    Clifford, Ray; Cox, Troy L.

    2013-01-01

    The validation of ability scales describing multidimensional skills is always challenging, but not impossible. This study applies a multistage, criterion-referenced approach that uses a framework of aligned texts and reading tasks to explore the validity of the ACTFL and related reading proficiency guidelines. Rasch measurement and statistical…

  15. EVALUATION OF BACTERIOLOGICAL INDICATORS OF DISINFECTION FOR ALKALINE TREATED BIOSOLIDS

    EPA Science Inventory

    In the United States, treated municipal sludge, also known as biosolids, may be land applied with certain site restrictions. According to U.S. regulations a Class B biosolid is any biosolid that following appropriate treatment, meets the criterion of 2 million or less fecal coli...

  16. The Effect of a Variable Disc Pad Friction Coefficient for the Mechanical Brake System of a Railway Vehicle

    PubMed Central

    Lee, Nam-Jin; Kang, Chul-Goo

    2015-01-01

    A brake hardware-in-the-loop simulation (HILS) system for a railway vehicle is widely applied to estimate and validate braking performance in research studies and field tests. When we develop a simulation model for a full vehicle system, the characteristics of all components are generally properly simplified based on the understanding of each component’s purpose and interaction with other components. The friction coefficient between the brake disc and the pad used in simulations has been conventionally considered constant, and the effect of a variable friction coefficient is ignored with the assumption that the variability affects the performance of the vehicle braking very little. However, the friction coefficient of a disc pad changes significantly within a range due to environmental conditions, and thus, the friction coefficient can affect the performance of the brakes considerably, especially on the wheel slide. In this paper, we apply a variable friction coefficient and analyze its effects on a mechanical brake system of a railway vehicle. We introduce a mathematical formula for the variable friction coefficient in which the variable friction is represented by two variables and five parameters. The proposed formula is applied to real-time simulations using a brake HILS system, and the effectiveness of the formula is verified experimentally by testing the mechanical braking performance of the brake HILS system. PMID:26267883

  17. The Effect of a Variable Disc Pad Friction Coefficient for the Mechanical Brake System of a Railway Vehicle.

    PubMed

    Lee, Nam-Jin; Kang, Chul-Goo

    2015-01-01

    A brake hardware-in-the-loop simulation (HILS) system for a railway vehicle is widely applied to estimate and validate braking performance in research studies and field tests. When we develop a simulation model for a full vehicle system, the characteristics of all components are generally properly simplified based on the understanding of each component's purpose and interaction with other components. The friction coefficient between the brake disc and the pad used in simulations has been conventionally considered constant, and the effect of a variable friction coefficient is ignored with the assumption that the variability affects the performance of the vehicle braking very little. However, the friction coefficient of a disc pad changes significantly within a range due to environmental conditions, and thus, the friction coefficient can affect the performance of the brakes considerably, especially on the wheel slide. In this paper, we apply a variable friction coefficient and analyze its effects on a mechanical brake system of a railway vehicle. We introduce a mathematical formula for the variable friction coefficient in which the variable friction is represented by two variables and five parameters. The proposed formula is applied to real-time simulations using a brake HILS system, and the effectiveness of the formula is verified experimentally by testing the mechanical braking performance of the brake HILS system.

  18. Scaling study of the combustion performance of gas—gas rocket injectors

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Wei; Cai, Guo-Biao; Jin, Ping

    2011-10-01

    To obtain the key subelements that may influence the scaling of gas—gas injector combustor performance, the combustion performance subelements in a liquid propellant rocket engine combustor are initially analysed based on the results of a previous study on the scaling of a gas—gas combustion flowfield. Analysis indicates that inner wall friction loss and heat-flux loss are two key issues in gaining the scaling criterion of the combustion performance. The similarity conditions of the inner wall friction loss and heat-flux loss in a gas—gas combustion chamber are obtained by theoretical analyses. Then the theoretical scaling criterion was obtained for the combustion performance, but it proved to be impractical. The criterion conditions, the wall friction and the heat flux are further analysed in detail to obtain the specific engineering scaling criterion of the combustion performance. The results indicate that when the inner flowfields in the combustors are similar, the combustor wall shear stress will have similar distributions qualitatively and will be directly proportional to pc^0.8·dt^-0.2 quantitatively. In addition, the combustion performance will remain unchanged. Furthermore, multi-element injector chambers with different geometric sizes and at different pressures are numerically simulated and the wall shear stress and combustion efficiencies are solved and compared with each other. A multi-element injector chamber is designed and hot-fire tested at several chamber pressures and the combustion performances are measured in a total of nine hot-fire tests. The numerical and experimental results verified the similarities among combustor wall shear stress and combustion performances at different chamber pressures and geometries, with the criterion applied.

  19. Intelligibility for Binaural Speech with Discarded Low-SNR Speech Components.

    PubMed

    Schoenmaker, Esther; van de Par, Steven

    2016-01-01

    Speech intelligibility in multitalker settings improves when the target speaker is spatially separated from the interfering speakers. A factor that may contribute to this improvement is the improved detectability of target-speech components due to binaural interaction, in analogy to the Binaural Masking Level Difference (BMLD). This would allow listeners to hear target speech components within specific time-frequency intervals that have a negative SNR, similar to the improvement in the detectability of a tone in noise when tone and noise carry disparate interaural difference cues. To investigate whether these negative-SNR target-speech components indeed contribute to speech intelligibility, a stimulus manipulation was performed in which all target components were removed whenever the local SNR was smaller than a criterion value. It can be expected that, for sufficiently high criterion values, target speech components that do contribute to speech intelligibility will be removed. For spatially separated speakers, assuming that a BMLD-like detection advantage contributes to intelligibility, degradation in intelligibility is expected already at criterion values below 0 dB SNR. For collocated speakers, in contrast, higher criterion values should be applicable without impairing speech intelligibility. Results show that degradation of intelligibility for separated speakers occurs only at criterion values of 0 dB and above, indicating a negligible contribution of a BMLD-like detection advantage in multitalker settings. These results show that the spatial benefit is related to the spatial separation of speech components at positive local SNRs rather than to a BMLD-like detection improvement for speech components at negative local SNRs.
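    The stimulus manipulation can be sketched as a gate on target time-frequency components: any component whose local SNR falls below the criterion is discarded. The magnitude lists and the small guard constant below are illustrative assumptions, not the authors' implementation:

```python
import math

def discard_low_snr(target, masker, criterion_db):
    """Zero out target time-frequency components whose local SNR (in dB,
    target power over masker power) falls below the criterion value.
    A sketch of the described manipulation, not the authors' code."""
    out = []
    for t, m in zip(target, masker):
        snr_db = 10.0 * math.log10((t * t + 1e-12) / (m * m + 1e-12))
        out.append(t if snr_db >= criterion_db else 0.0)
    return out

# With a 0 dB criterion, only components at or above 0 dB local SNR survive
print(discard_low_snr([1.0, 0.5, 2.0], [1.0, 1.0, 1.0], 0.0))  # -> [1.0, 0.0, 2.0]
```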

  20. A generic bio-economic farm model for environmental and economic assessment of agricultural systems.

    PubMed

    Janssen, Sander; Louhichi, Kamel; Kanellopoulos, Argyris; Zander, Peter; Flichman, Guillermo; Hengsdijk, Huib; Meuter, Eelco; Andersen, Erling; Belhouchette, Hatem; Blanco, Maria; Borkowski, Nina; Heckelei, Thomas; Hecker, Martin; Li, Hongtao; Oude Lansink, Alfons; Stokstad, Grete; Thorne, Peter; van Keulen, Herman; van Ittersum, Martin K

    2010-12-01

    Bio-economic farm models (BEFMs) are tools to evaluate ex-post or to assess ex-ante the impact of policy and technology change on agriculture, economics and the environment. Recently, various BEFMs have been developed, often for one purpose or location, but hardly any of these models are later re-used for other purposes or locations. The Farm System Simulator (FSSIM) provides a generic framework enabling the application of BEFMs under various situations and for different purposes (generating supply response functions and detailed regional or farm type assessments). FSSIM is set up as a component-based framework with components representing farmer objectives, risk, calibration, policies, current activities, alternative activities and different types of activities (e.g., annual and perennial cropping and livestock). The generic nature of FSSIM is evaluated using five criteria by examining its applications. FSSIM has been applied for different climate zones and soil types (criterion 1) and to a range of different farm types (criterion 2) with different specializations, intensities and sizes. In most applications FSSIM has been used to assess the effects of policy changes and in two applications to assess the impact of technological innovations (criterion 3). In the various applications, different data sources, levels of detail (criterion 4) and model configurations have been used. FSSIM has been linked to an economic and several biophysical models (criterion 5). The model is available for applications to other conditions and research issues, and it is open to be further tested and extended with new components, indicators or linkages to other models.

  1. A Generic Bio-Economic Farm Model for Environmental and Economic Assessment of Agricultural Systems

    PubMed Central

    Louhichi, Kamel; Kanellopoulos, Argyris; Zander, Peter; Flichman, Guillermo; Hengsdijk, Huib; Meuter, Eelco; Andersen, Erling; Belhouchette, Hatem; Blanco, Maria; Borkowski, Nina; Heckelei, Thomas; Hecker, Martin; Li, Hongtao; Oude Lansink, Alfons; Stokstad, Grete; Thorne, Peter; van Keulen, Herman; van Ittersum, Martin K.

    2010-01-01

    Bio-economic farm models (BEFMs) are tools to evaluate ex-post or to assess ex-ante the impact of policy and technology change on agriculture, economics and the environment. Recently, various BEFMs have been developed, often for one purpose or location, but hardly any of these models are later re-used for other purposes or locations. The Farm System Simulator (FSSIM) provides a generic framework enabling the application of BEFMs under various situations and for different purposes (generating supply response functions and detailed regional or farm type assessments). FSSIM is set up as a component-based framework with components representing farmer objectives, risk, calibration, policies, current activities, alternative activities and different types of activities (e.g., annual and perennial cropping and livestock). The generic nature of FSSIM is evaluated using five criteria by examining its applications. FSSIM has been applied for different climate zones and soil types (criterion 1) and to a range of different farm types (criterion 2) with different specializations, intensities and sizes. In most applications FSSIM has been used to assess the effects of policy changes and in two applications to assess the impact of technological innovations (criterion 3). In the various applications, different data sources, levels of detail (criterion 4) and model configurations have been used. FSSIM has been linked to an economic and several biophysical models (criterion 5). The model is available for applications to other conditions and research issues, and it is open to be further tested and extended with new components, indicators or linkages to other models. PMID:21113782

  2. Coefficient Alpha: A Reliability Coefficient for the 21st Century?

    ERIC Educational Resources Information Center

    Yang, Yanyun; Green, Samuel B.

    2011-01-01

    Coefficient alpha is almost universally applied to assess reliability of scales in psychology. We argue that researchers should consider alternatives to coefficient alpha. Our preference is for structural equation modeling (SEM) estimates of reliability because they are informative and allow for an empirical evaluation of the assumptions…

  3. Striking Distance Determined From High-Speed Videos and Measured Currents in Negative Cloud-to-Ground Lightning

    NASA Astrophysics Data System (ADS)

    Visacro, Silverio; Guimaraes, Miguel; Murta Vale, Maria Helena

    2017-12-01

    First and subsequent return strokes' striking distances (SDs) were determined for negative cloud-to-ground flashes from high-speed videos showing the development of positive and negative leaders, together with the pre-return-stroke phase of currents measured along a short tower. To improve the results, a new criterion, consisting of a 4 A continuous current threshold, was used for the initiation and propagation of the sustained upward connecting leader. An advanced approach developed from the combined use of this criterion and a reverse propagation procedure, which considers the calculated propagation speeds of the leaders, was applied and revealed that SDs determined solely from the first video frame showing the upward leader can be significantly underestimated. An original approach was proposed for roughly estimating first strokes' SDs using current records alone. This approach combines the 4 A criterion with a representative composite three-dimensional propagation speed of 0.34 × 10^6 m/s for the leaders over the last 300 m of propagated distance. SDs determined with this approach proved consistent with those of the advanced procedure. The approach was applied to determine the SDs of 17 first return strokes of negative flashes measured at MCS, covering a wide peak-current range, from 18 to 153 kA. The estimated SDs exhibit very high dispersion and differ greatly from the SDs estimated for subsequent return strokes and for strokes in triggered lightning.

  4. Construct and Criterion Validity of the PedsQL™ 4.0 Instrument (Pediatric Quality of Life Inventory) in Colombia.

    PubMed

    Amaya-Arias, Ana Carolina; Alzate, Juan Pablo; Eslava-Schmalbach, Javier H

    2017-01-01

    This study aimed to determine the validity of the Pediatric Quality of Life Inventory 4.0 (PedsQL™ 4.0) for the measurement of health-related quality of life (HRQOL) in Colombian children. Validation study of measurement instruments. The PedsQL™ 4.0 was administered by convenience sampling to 375 pairs of children and adolescents between the ages of 5 and 17 and their parents-caregivers, as well as to 125 parents-caregivers of children between the ages of 2 and 4, in five cities of Colombia (Bogota, Medellin, Cali, Barranquilla and Bucaramanga). Construct validity was assessed through exploratory and confirmatory factor analysis, and criterion validity was assessed by correlations between the PedsQL™ 4.0 and the KIDSCREEN-27. The instrument was applied to 375 children (ages 5-18) and 125 parents of children between the ages of 2 and 4. Factor analysis revealed four factors considered suitable for the sample in both the child and parent reports, whereas Bartlett's test of sphericity showed inter-correlation between variables. The scale and subscales showed proper indicators of internal consistency. It is recommended that some items in the Colombian version of the scale be reviewed or excluded. The Colombian Spanish version of the PedsQL™ 4.0 displays suitable indicators of criterion and construct validity, making it a valuable tool for measuring HRQOL in children in our country.

  5. Some methodological aspects of tectonic stress reconstruction based on geological indicators

    NASA Astrophysics Data System (ADS)

    Sim, Lidiya A.

    2012-03-01

    The impact of initial heterogeneity in rocks on the distribution of slickensides (slip planes), which are used to reconstruct local stress states, is discussed. The stress state was reconstructed by a graphic variant of the kinematic method. A new type of stress state, the Variation of Stress-State Type (VSST), is introduced. The VSST is characterized by slip vectors generated by both uniaxial compression and uniaxial tension in the same rock volume, i.e. the coefficient μσ = 1-2R varies from +1 to -1. The reasons why observed slip planes deviate from those predicted by the model are discussed. The distribution of slip planes depends on the stress-state type (the μσ coefficient) and on the initial heterogeneity of the rocks. A criterion for identifying the rank of tectonic stresses is suggested, based on models of tectonic stress distribution in the vicinity of fractures. Examples of the results of tectonic stress studies are given. Studies of tectonic stresses based on geological indicators are of methodological and practical importance.

  6. Effects of physical parameters on the cell-to-dendrite transition in directional solidification

    NASA Astrophysics Data System (ADS)

    Wei, Lei; Lin, Xin; Wang, Meng; Huang, Wei-Dong

    2015-07-01

    A quantitative cellular automaton model is used to study the cell-to-dendrite transition (CDT) in directional solidification. We give a detailed description of the CDT by carefully examining the influence of the physical parameters, including: the Gibbs-Thomson coefficient Γ, the solute diffusivity Dl, the solute partition coefficient k0, and the liquidus slope ml. It is found that most of the parameters agree with the Kurz and Fisher (KF) criterion, except for k0. The intrinsic relations among the critical velocity Vcd, the cellular primary spacing λc,max, and the critical spacing λcd are investigated. Project supported by the National Natural Science Foundation of China (Grant Nos. 51271213 and 51323008), the National Basic Research Program of China (Grant No. 2011CB610402), the National High Technology Research and Development Program of China (Grant No. 2013AA031103), the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20116102110016), and the China Postdoctoral Science Foundation (Grant No. 2013M540771).
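    For context, a common statement of the KF criterion referenced above places the cell-to-dendrite transition at V_cd = G*Dl/(k0*dT0), with the alloy freezing range dT0 = |ml|*C0*(1 - k0)/k0. The sketch below uses hypothetical parameter values chosen only for illustration, not values from the paper:

```python
def kf_cdt_velocity(G, D, ml, C0, k0):
    """Kurz-Fisher estimate of the cell-to-dendrite transition velocity,
    V_cd = G*D / (k0*dT0), with freezing range dT0 = |ml|*C0*(1 - k0)/k0.
    G: thermal gradient (K/m), D: liquid solute diffusivity (m^2/s),
    ml: liquidus slope (K/wt%), C0: composition (wt%), k0: partition
    coefficient.  A textbook-style estimate; the paper's cellular
    automaton model is far more detailed."""
    dT0 = abs(ml) * C0 * (1.0 - k0) / k0
    return G * D / (k0 * dT0)

# Illustrative (hypothetical) parameter set
v_cd = kf_cdt_velocity(G=1.0e4, D=3.0e-9, ml=-2.6, C0=1.0, k0=0.14)
print(f"{v_cd:.3e} m/s")
```

    Note that k0 enters both the prefactor and the freezing range, which is consistent with the abstract's finding that the transition is most sensitive to the partition coefficient.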

  7. Molecular dynamics simulations of AP/HMX composite with a modified force field.

    PubMed

    Zhu, Wei; Wang, Xijun; Xiao, Jijun; Zhu, Weihua; Sun, Huai; Xiao, Heming

    2009-08-15

    An all-atom force field for ammonium perchlorate (AP) is developed within the framework of the pcff force field. The structural parameters of AP obtained with the modified force field are in good agreement with experimental values. Molecular dynamics (MD) simulations have been performed to investigate the AP/HMX (1,3,5,7-tetranitro-1,3,5,7-tetrazocane) composite at different temperatures. The binding energies, thermal expansion coefficient, and trigger bond lengths of HMX in the AP/HMX composite have been obtained. The binding energies of the system increase slightly with increasing temperature, peak at 245 K, and then gradually decrease. The volume thermal expansion coefficient of the AP/HMX composite has been derived from the variation of volume with temperature. As the temperature rises, the maximal length of the trigger bond N-NO(2) of HMX increases gradually. The simulated results indicate that the maximal length of the trigger bond can be used as a criterion for judging the sensitivity of an energetic composite.

  8. Quantitative analysis of rectal cancer by spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Zhang, Q. Q.; Wu, X. J.; Tang, T.; Zhu, S. W.; Yao, Q.; Gao, Bruce Z.; Yuan, X. C.

    2012-08-01

    To quantify OCT images of rectal tissue for clinical diagnosis, the scattering coefficient of the tissue is extracted by curve fitting the OCT signals to a confocal single-scattering model. A total of 1000 measurements (half normal and half malignant tissue) were obtained from 16 recta. Normal rectal tissue has a larger scattering coefficient, ranging from 1.09 to 5.41 mm-1 with a mean value of 2.29 mm-1 (std: ±0.32), while the malignant group shows lower scattering, with values ranging from 0.25 to 2.69 mm-1 and a mean value of 1.41 mm-1 (std: ±0.18). Peri-cancerous rectal tissue was also investigated to distinguish normal from malignant rectal tissue. The results demonstrate that quantitative analysis of rectal tissue can serve as a promising diagnostic criterion for early rectal cancer, which has great value for clinical applications.
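    The extraction step can be illustrated with a simplified fit. The paper fits a confocal single-scattering model; the sketch below omits the confocal point-spread-function term and fits the bare single-scattering decay I(z) = I0*exp(-2*mu_s*z) by log-linear least squares, on synthetic noiseless data:

```python
import math

def fit_scattering_coefficient(depths_mm, signal):
    """Estimate the scattering coefficient mu_s (mm^-1) by a log-linear
    least-squares fit of I(z) = I0 * exp(-2 * mu_s * z).  Simplified
    sketch: the paper's confocal model adds an axial point-spread-function
    term that is omitted here."""
    n = len(depths_mm)
    ys = [math.log(s) for s in signal]
    mean_z = sum(depths_mm) / n
    mean_y = sum(ys) / n
    slope = sum((z - mean_z) * (y - mean_y) for z, y in zip(depths_mm, ys)) \
            / sum((z - mean_z) ** 2 for z in depths_mm)
    return -slope / 2.0

# Synthetic data generated with mu_s = 2.29 mm^-1 (the reported mean
# normal-tissue value); the fit recovers it on noiseless input.
zs = [0.1 * i for i in range(1, 11)]
data = [math.exp(-2.0 * 2.29 * z) for z in zs]
print(round(fit_scattering_coefficient(zs, data), 2))  # -> 2.29
```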

  9. New Correlation Methods of Evaporation Heat Transfer in Horizontal Microfin Tubes

    NASA Astrophysics Data System (ADS)

    Makishi, Osamu; Honda, Hiroshi

    A stratified flow model and an annular flow model of evaporation heat transfer in horizontal microfin tubes have been proposed. In the stratified flow model, the contributions of thin film evaporation and nucleate boiling in the groove above a stratified liquid were predicted by a previously reported numerical analysis and a newly developed correlation, respectively. The contributions of nucleate boiling and forced convection in the stratified liquid region were predicted by the new correlation and the Carnavos equation, respectively. In the annular flow model, the contributions of nucleate boiling and forced convection were predicted by the new correlation and the Carnavos equation in which the equivalent Reynolds number was introduced, respectively. A flow pattern transition criterion proposed by Kattan et al. was incorporated to predict the circumferential average heat transfer coefficient in the intermediate region by use of the two models. The predictions of the heat transfer coefficient compared well with available experimental data for ten tubes and four refrigerants.

  10. Cross-cultural validity of a dietary questionnaire for studies of dental caries risk in Japanese.

    PubMed

    Shinga-Ishihara, Chikako; Nakai, Yukie; Milgrom, Peter; Murakami, Kaori; Matsumoto-Nakano, Michiyo

    2014-01-02

    Diet is a major modifiable contributing factor in the etiology of dental caries. The purpose of this paper is to examine the reliability and cross-cultural validity of the Japanese version of the Food Frequency Questionnaire for assessing dietary intake in relation to dental caries risk in Japanese. The 38-item Food Frequency Questionnaire, to which Japanese food items were added to increase content validity, was translated into Japanese and administered to two samples. The first sample comprised 355 pregnant women with a mean age of 29.2 ± 4.2 years and was used for the internal consistency and criterion validity analyses. Factor analysis (principal components with Varimax rotation) was used to determine dimensionality. The dietary cariogenicity score was calculated from the Food Frequency Questionnaire and used for the analyses. Salivary mutans streptococci level was used as a semi-quantitative assessment of dental caries risk and measured by Dentocult SM. Dentocult SM scores were compared with the dietary cariogenicity score computed from the Food Frequency Questionnaire to examine criterion validity, assessed by Spearman's correlation coefficient (rs) and the Kruskal-Wallis test. Test-retest reliability of the Food Frequency Questionnaire was assessed with a second sample of 25 adults with a mean age of 34.0 ± 3.0 years, using intraclass correlation coefficient analysis. The Japanese language version of the Food Frequency Questionnaire showed high test-retest reliability (ICC = 0.70) and good criterion validity as assessed by its relationship with salivary mutans streptococci levels (rs = 0.22; p < 0.001). Factor analysis revealed four subscales that constitute the questionnaire (solid sugars, solid and starchy sugars, liquid and semisolid sugars, sticky and slowly dissolving sugars). Internal consistency was low to acceptable (Cronbach's alpha = 0.67 for the total scale, 0.46-0.61 for each subscale).
Mean dietary cariogenicity scores were 50.8 ± 19.5 in the first sample, and 47.4 ± 14.1 and 40.6 ± 11.3 for the first and second administrations in the second sample, respectively. The distribution of Dentocult SM scores was 6.8% (score = 0), 34.4% (score = 1), 39.4% (score = 2), and 19.4% (score = 3). Participants with higher scores were more likely to have higher dietary cariogenicity scores (p < 0.001; Kruskal-Wallis test). These results provide preliminary evidence for the reliability and validity of the Japanese language Food Frequency Questionnaire.

  11. Evaluation of the Weather Research and Forecasting (WRF) Model over Portugal: Case study

    NASA Astrophysics Data System (ADS)

    Rodrigues, Mónica; Rocha, Alfredo; Monteiro, Ana

    2013-04-01

    Established in 1756, the Demarcated Douro Region became the first wine-growing region in the world to be delimited and regulated. The region has an area of 250000 hectares, of which 45000 are occupied by continuous vineyards (IVDP, 2010). It stretches along the valleys of the Douro river and its main tributaries, from the region of Mesão Frio, about 100 km east of Porto, where the river discharges, to the Spanish frontier at the eastern border. Because of its W-E extent along the Douro Valley, the region is not homogeneous and comprises three sub-regions: Baixo Corgo, Cima Corgo and Douro Superior. Baixo Corgo, the westernmost sub-region, is the "birthplace" of the wine-growing region. The main purpose of this work is to evaluate and test the quality of a criterion developed to determine the occurrence of frost. This criterion is to be used later with numerical weather forecasts (WRF-ARW) and put into practice at 16 meteorological stations in the Demarcated Douro Region. First, the criterion for the occurrence of frost was developed based on the meteorological data observed at those 16 stations. Time series of temperature and precipitation covering a period of approximately 20 years were used. It was verified that the meteorological conditions associated with days with frost (SG) and without frost (CG) differ at each station. Afterwards, the model was validated, particularly with respect to the simulation of the daily minimum temperature. Correcting functions were applied to the model output, considerably reducing the simulation errors. The frost criterion was then applied to the model output for a period of two frost seasons. The results show that WRF successfully simulates the occurrence of frost episodes and can therefore be used for frost forecasting.

  12. When is hub gene selection better than standard meta-analysis?

    PubMed

    Langfelder, Peter; Mischel, Paul S; Horvath, Steve

    2013-01-01

    Since hub nodes have been found to play important roles in many networks, highly connected hub genes are expected to play an important role in biology as well. However, the empirical evidence remains ambiguous. An open question is whether (or when) hub gene selection leads to more meaningful gene lists than a standard statistical analysis based on significance testing when analyzing genomic data sets (e.g., gene expression or DNA methylation data). Here we address this question for the special case when multiple genomic data sets are available. This is of great practical importance, since for many research questions multiple data sets are publicly available. In this case, the data analyst can decide between a standard statistical approach (e.g., based on meta-analysis) and a co-expression network analysis approach that selects intramodular hubs in consensus modules. We assess the performance of these two types of approaches according to two criteria. The first criterion evaluates the biological insights gained and is relevant in basic research. The second criterion evaluates the validation success (reproducibility) in independent data sets and often applies in clinical diagnostic or prognostic applications. We compare meta-analysis with consensus network analysis based on weighted correlation network analysis (WGCNA) in three comprehensive and unbiased empirical studies: (1) finding genes predictive of lung cancer survival, (2) finding methylation markers related to age, and (3) finding mouse genes related to total cholesterol. The results demonstrate that intramodular hub gene status with respect to consensus modules is more useful than a meta-analysis p-value when identifying biologically meaningful gene lists (reflecting criterion 1). However, standard meta-analysis methods perform as well as (if not better than) a consensus network approach in terms of validation success (criterion 2).
The article also reports a comparison of meta-analysis techniques applied to gene expression data and presents novel R functions for carrying out consensus network analysis, network based screening, and meta analysis.

  13. Five-level emergency triage systems: variation in assessment of validity.

    PubMed

    Kuriyama, Akira; Urushidani, Seigo; Nakayama, Takeo

    2017-11-01

    Triage systems are scales developed to rate the degree of urgency among patients who arrive at EDs. A number of different scales are in use; however, the way in which they have been validated is inconsistent. Also, it is difficult to define a surrogate that accurately predicts urgency. This systematic review described reference standards and measures used in previous validation studies of five-level triage systems. We searched PubMed, EMBASE and CINAHL to identify studies that had assessed the validity of five-level triage systems and described the reference standards and measures applied in these studies. Studies were divided into those using criterion validity (reference standards developed by expert panels or triage systems already in use) and those using construct validity (prognosis, costs and resource use). A total of 57 studies examined criterion and construct validity of 14 five-level triage systems. Criterion validity was examined by evaluating (1) agreement between the assigned degree of urgency with objective standard criteria (12 studies), (2) overtriage and undertriage (9 studies) and (3) sensitivity and specificity of triage systems (7 studies). Construct validity was examined by looking at (4) the associations between the assigned degree of urgency and measures gauged in EDs (48 studies) and (5) the associations between the assigned degree of urgency and measures gauged after hospitalisation (13 studies). Particularly, among 46 validation studies of the most commonly used triages (Canadian Triage and Acuity Scale, Emergency Severity Index and Manchester Triage System), 13 and 39 studies examined criterion and construct validity, respectively. Previous studies applied various reference standards and measures to validate five-level triage systems. They either created their own reference standard or used a combination of severity/resource measures. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. 
All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  14. The 10/66 Dementia Research Group's fully operationalised DSM-IV dementia computerized diagnostic algorithm, compared with the 10/66 dementia algorithm and a clinician diagnosis: a population validation study

    PubMed Central

    Prince, Martin J; de Rodriguez, Juan Llibre; Noriega, L; Lopez, A; Acosta, Daisy; Albanese, Emiliano; Arizaga, Raul; Copeland, John RM; Dewey, Michael; Ferri, Cleusa P; Guerra, Mariella; Huang, Yueqin; Jacob, KS; Krishnamoorthy, ES; McKeigue, Paul; Sousa, Renata; Stewart, Robert J; Salas, Aquiles; Sosa, Ana Luisa; Uwakwa, Richard

    2008-01-01

    Background The criterion for dementia implicit in DSM-IV is widely used in research but not fully operationalised. The 10/66 Dementia Research Group sought to do this using assessments from their one-phase dementia diagnostic research interview, and to validate the resulting algorithm in a population-based study in Cuba. Methods The criterion was operationalised as a computerised algorithm, applying clinical principles, based upon the 10/66 cognitive tests, clinical interview and informant reports; the Community Screening Instrument for Dementia, the CERAD 10-word list learning and animal naming tests, the Geriatric Mental State, and the History and Aetiology Schedule – Dementia Diagnosis and Subtype. This was validated in Cuba against a local clinician DSM-IV diagnosis and the 10/66 dementia diagnosis (originally calibrated probabilistically against clinician DSM-IV diagnoses in the 10/66 pilot study). Results The DSM-IV sub-criteria were plausibly distributed among clinically diagnosed dementia cases and controls. The clinician diagnoses agreed better with the 10/66 dementia diagnosis than with the more conservative computerised DSM-IV algorithm. The DSM-IV algorithm was particularly likely to miss less severe dementia cases. Those with a 10/66 dementia diagnosis who did not meet the DSM-IV criterion were less cognitively and functionally impaired than the DSM-IV confirmed cases, but still grossly impaired compared with those free of dementia. Conclusion The DSM-IV criterion, strictly applied, defines a narrow category of unambiguous dementia characterized by marked impairment. It may be specific but incompletely sensitive to clinically relevant cases. The 10/66 dementia diagnosis defines a broader category that may be more sensitive, identifying genuine cases beyond those defined by our DSM-IV algorithm, with relevance to the estimation of the population burden of this disorder. PMID:18577205

  15. Partial F-tests with multiply imputed data in the linear regression framework via coefficient of determination.

    PubMed

    Chaurasia, Ashok; Harel, Ofer

    2015-02-10

    Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.
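    For reference, the complete-data form of a partial F-test expressed through coefficients of determination is standard; the sketch below shows only that scalar form (the paper's multiple-imputation pooling rules are not reproduced here), with illustrative numbers:

```python
def partial_f_from_r2(r2_full, r2_reduced, n, p_full, q):
    """Complete-data partial F statistic from coefficients of determination:
    F = ((R2_full - R2_reduced)/q) / ((1 - R2_full)/(n - p_full - 1)),
    where n is the sample size, p_full the number of predictors in the
    full model, and q the number of coefficients being tested."""
    numerator = (r2_full - r2_reduced) / q
    denominator = (1.0 - r2_full) / (n - p_full - 1)
    return numerator / denominator

# Testing 2 of 5 predictors with n = 100 observations
print(partial_f_from_r2(0.60, 0.55, 100, 5, 2))
```

    Setting r2_reduced = 0 and q = p_full recovers the global F-test, which is why a single scalar formulation covers global, local, and partial tests.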

  16. 77 FR 74392 - Direct Grant Programs and Definitions That Apply to Department Regulations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-14

    ... criterion ``Quality of the Project Design''; 6. Authorize program offices to consider the effectiveness of... service providers is often an important factor in designing a project and submitting a high-quality... assisted in designing the applicant's evaluation plan. Formal competition requirements also inhibit the...

  17. Optimal Admission to Higher Education

    ERIC Educational Resources Information Center

    Albaek, Karsten

    2017-01-01

    This paper analyses admission decisions when students from different high school tracks apply for admission to university programmes. I derive a criterion that is optimal in the sense that it maximizes the graduation rates of the university programmes. The paper contains an empirical analysis that documents the relevance of theory and illustrates…

  18. Constrained Subjective Assessment of Student Learning

    ERIC Educational Resources Information Center

    Saliu, Sokol

    2005-01-01

    Student learning is a complex incremental cognitive process; assessment needs to parallel this, reporting the results in similar terms. Application of fuzzy sets and logic to the criterion-referenced assessment of student learning is considered here. The constrained qualitative assessment (CQA) system was designed, and then applied in assessing a…

  19. Competency Tests and Graduation Requirements. Second Edition.

    ERIC Educational Resources Information Center

    Keefe, James W.

    Interest in applied performance testing and concern about the quality of the high school diploma are finding a common ground: graduation requirements. A competency is a complex capability applicable in real life situations, and can be used as program objectives in a competency-based, criterion-referenced program. In such a program, applied…

  20. New explicit global asymptotic stability criteria for higher order difference equations

    NASA Astrophysics Data System (ADS)

    El-Morshedy, Hassan A.

    2007-12-01

    New explicit sufficient conditions for the asymptotic stability of the zero solution of higher order difference equations are obtained. These criteria can be applied to autonomous and nonautonomous equations. The celebrated Clark asymptotic stability criterion is improved. Also, applications to models from mathematical biology and macroeconomics are given.

  1. The Influence of Level of Discrepancy on the Identification of Students with Learning Disabilities.

    ERIC Educational Resources Information Center

    McLeskey, James

    1989-01-01

    Investigation of the relationship between a statistically determined severe discrepancy between expected and actual achievement levels and subsequent labeling of 733 students as learning disabled found only a slight majority of labeled students manifesting a severe discrepancy suggesting this criterion is inconsistently applied in making…

  2. The Schrodinger Eigenvalue March

    ERIC Educational Resources Information Center

    Tannous, C.; Langlois, J.

    2011-01-01

    A simple numerical method for the determination of Schrodinger equation eigenvalues is introduced. It is based on a marching process that starts from an arbitrary point, proceeds in two opposite directions simultaneously and stops after a tolerance criterion is met. The method is applied to solving several 1D potential problems including symmetric…

  3. Occupation-specific screening for future sickness absence: criterion validity of the trucker strain monitor (TSM).

    PubMed

    De Croon, Einar M; Blonk, Roland W B; Sluiter, Judith K; Frings-Dresen, Monique H W

    2005-02-01

    Monitoring psychological job strain may help occupational physicians to take preventive action at the appropriate time. For this purpose, the 10-item trucker strain monitor (TSM), assessing work-related fatigue and sleeping problems in truck drivers, was developed. This study examined (1) test-retest reliability, (2) criterion validity of the TSM with respect to future sickness absence due to psychological health complaints and (3) usefulness of the TSM two-scale structure. The TSM and self-administered questionnaires, providing information about stressful working conditions (job control and job demands) and sickness absence, were sent to a random sample of 2000 drivers in 1998. Of the 1123 responders, 820 returned a completed questionnaire 2 years later (response: 72%). The TSM work-related fatigue scale, the TSM sleeping problems scale and the TSM composite scale showed satisfactory 2-year test-retest reliability (coefficient r=0.62, 0.66 and 0.67, respectively). The work-related fatigue scale, the sleeping problems scale and the composite scale had sensitivities of 61%, 65% and 61%, respectively, in identifying drivers with future sickness absence due to psychological health complaints. The specificity and positive predictive value of the TSM composite scale were 77% and 11%, respectively. The work-related fatigue scale and the sleeping problems scale were moderately strongly correlated (r=0.62). However, stressful working conditions were differentially associated with the two scales. The results support the test-retest reliability, criterion validity and two-factor structure of the TSM. In general, the results suggest that the use of occupation-specific psychological job strain questionnaires is fruitful.
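
    The interplay between the reported sensitivity, specificity and positive predictive value can be reproduced in miniature. The sketch below uses hypothetical 2x2 counts (a 1000-driver cohort with 5% prevalence, chosen only to approximate the reported rates; the actual cell counts are not given in the abstract):

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening-table metrics."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of truly at-risk drivers flagged
        "specificity": tn / (tn + fp),  # fraction of healthy drivers not flagged
        "ppv": tp / (tp + fp),          # fraction of flagged drivers truly at risk
    }

# hypothetical cohort: 1000 drivers, 50 with future absence (5% prevalence)
m = screening_metrics(tp=30, fp=219, fn=20, tn=731)
for name, value in m.items():
    print(f"{name}: {value:.2f}")
```

    With a rare outcome, even a reasonable specificity leaves the positive predictive value low, which matches the pattern reported for the TSM composite scale.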

  4. The Arthroscopic Surgical Skill Evaluation Tool (ASSET).

    PubMed

    Koehler, Ryan J; Amsdell, Simon; Arendt, Elizabeth A; Bisson, Leslie J; Braman, Jonathan P; Bramen, Jonathan P; Butler, Aaron; Cosgarea, Andrew J; Harner, Christopher D; Garrett, William E; Olson, Tyson; Warme, Winston J; Nicandri, Gregg T

    2013-06-01

    Surgeries employing arthroscopic techniques are among the most commonly performed in orthopaedic clinical practice; however, valid and reliable methods of assessing the arthroscopic skill of orthopaedic surgeons are lacking. The Arthroscopic Surgery Skill Evaluation Tool (ASSET) will demonstrate content validity, concurrent criterion-oriented validity, and reliability when used to assess the technical ability of surgeons performing diagnostic knee arthroscopic surgery on cadaveric specimens. Cross-sectional study; Level of evidence, 3. Content validity was determined by a group of 7 experts using the Delphi method. Intra-articular performance of a right and left diagnostic knee arthroscopic procedure was recorded for 28 residents and 2 sports medicine fellowship-trained attending surgeons. Surgeon performance was assessed by 2 blinded raters using the ASSET. Concurrent criterion-oriented validity, interrater reliability, and test-retest reliability were evaluated. Content validity: The content development group identified 8 arthroscopic skill domains to evaluate using the ASSET. Concurrent criterion-oriented validity: Significant differences in the total ASSET score (P < .05) between novice, intermediate, and advanced experience groups were identified. Interrater reliability: The ASSET scores assigned by each rater were strongly correlated (r = 0.91, P < .01), and the intraclass correlation coefficient between raters for the total ASSET score was 0.90. Test-retest reliability: There was a significant correlation between ASSET scores for both procedures attempted by each surgeon (r = 0.79, P < .01). The ASSET appears to be a useful, valid, and reliable method for assessing surgeon performance of diagnostic knee arthroscopic surgery in cadaveric specimens. Studies are ongoing to determine its generalizability to other procedures as well as to the live operating room and other simulated environments.

  5. Validity and Reliability of the Upper Extremity Work Demands Scale.

    PubMed

    Jacobs, Nora W; Berduszek, Redmar J; Dijkstra, Pieter U; van der Sluis, Corry K

    2017-12-01

    Purpose To evaluate validity and reliability of the upper extremity work demands (UEWD) scale. Methods Participants from different levels of physical work demands, based on the Dictionary of Occupational Titles categories, were included. A historical database of 74 workers was added for factor analysis. Criterion validity was evaluated by comparing observed and self-reported UEWD scores. To assess structural validity, a factor analysis was executed. For reliability, the difference between two self-reported UEWD scores, the smallest detectable change (SDC), test-retest reliability and internal consistency were determined. Results Fifty-four participants were observed at work and 51 of them filled in the UEWD twice with a mean interval of 16.6 days (SD 3.3, range = 10-25 days). Criterion validity of the UEWD scale was moderate (r = .44, p = .001). Factor analysis revealed that 'force and posture' and 'repetition' subscales could be distinguished, with Cronbach's alphas of .79 and .84, respectively. Reliability was good; there was no significant difference between repeated measurements. An SDC of 5.0 was found. Test-retest reliability was good (intraclass correlation coefficient for agreement = .84) and all item-total correlations were >.30. There were two pairs of highly related items. Conclusion Reliability of the UEWD scale was good, but criterion validity was moderate. Based on current results, a modified UEWD scale (2 items removed, 1 item reworded, divided into 2 subscales) was proposed. Since observation appeared to be an inappropriate gold standard, we advise investigating other types of validity, such as construct validity, in further research.
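
    Cronbach's alpha, used above for the two subscales, can be computed directly from raw item scores. A minimal sketch with made-up ratings (not the UEWD data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from raw scores.
    items: one list of scores per item, all over the same respondents."""
    k = len(items)
    n = len(items[0])

    def var(v):  # sample variance
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)

    item_var = sum(var(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]
    return k / (k - 1) * (1 - item_var / var(totals))

# made-up ratings: 3 items scored by 5 respondents, items tracking each other
items = [
    [2, 4, 3, 5, 1],
    [3, 4, 3, 5, 2],
    [2, 5, 4, 5, 1],
]
print(round(cronbach_alpha(items), 2))  # high alpha: items measure one construct
```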

  6. Cross-cultural adaptation and validation of the Ankle Osteoarthritis Scale for use in French-speaking populations.

    PubMed

    Angers, Magalie; Svotelis, Amy; Balg, Frederic; Allard, Jean-Pascal

    2016-04-01

    The Ankle Osteoarthritis Scale (AOS) is a self-administered score specific to ankle osteoarthritis (OA) with excellent reliability and strong construct and criterion validity. Many recent randomized multicentre trials have used the AOS, but the involvement of French-speaking populations is limited by the absence of a French version. Our goal was to develop a French version and validate its psychometric properties to assure equivalence to the original English version. Translation was performed according to the American Association of Orthopaedic Surgeons (AAOS) 2000 guidelines for cross-cultural adaptation. Similar to the validation process of the English AOS, we evaluated the psychometric properties of the French version (AOS-Fr): criterion validity (AOS-Fr v. Western Ontario and McMaster Universities Arthritis Index [WOMAC] and SF-36 scores), construct validity (AOS-Fr correlation to the single heel-lift test), and reliability (AOS-Fr test-retest). Sixty healthy individuals tested a prefinal version of the AOS-Fr for comprehension, leading to modifications and a final version that was approved by C. Saltzman, author of the AOS. We then recruited patients with ankle OA for evaluation of the AOS-Fr psychometric properties. Twenty-eight patients with ankle OA participated in the evaluation. The AOS-Fr showed strong criterion validity (AOS:WOMAC r = 0.709 and AOS:SF-36 r = -0.654) and construct validity (r = 0.664) and proved to be reliable (test-retest intraclass correlation coefficient = 0.922). The AOS-Fr is a reliable and valid score, equivalent to the English version in terms of psychometric properties, and is thus available for use in multicentre trials.

  7. Testing the Hip Abductor Muscle Strength of Older Persons Using a Handheld Dynamometer.

    PubMed

    Awwad, Daniel H; Buckley, Jonathan D; Thomson, Rebecca L; O'Connor, Matthew; Carbone, Tania A; Chehade, Mellick J

    2017-09-01

    To investigate the reliability of a clinically applicable method of dynamometry to assess and monitor hip abductor muscle strength in older persons. Bilateral isometric hip abductor muscle strength measured with a handheld dynamometer, patients supine with the contralateral hip positioned directly against a wall for stabilization. Reliability determined by comparing intra-assessor and inter-assessor results and comparison to a criterion standard (stabilized dynamometer with patients in the standing position). UniSA Nutritional Physiology Research Centre. Twenty-one patients older than 65 years were recruited from the Royal Adelaide Hospital. Intraclass correlation coefficients (ICCs), bias, and limits of agreement calculated to determine reliability. Intra-assessor and inter-assessor ICCs were high (0.94 and 0.92-0.94, respectively). There was no intra-assessor bias and narrow limits of agreement (±2.4%). There was a small inter-assessor bias but narrow limits of agreement (0.6%-0.9% and ± 2.3%, respectively). There was a wide variation comparing results to the criterion standard (±5.0%-5.2% limits of agreement), highlighting problems attributed to difficulties that the test population had with the standing position used in the criterion standard test. Testing older persons' hip abductor muscle strength while in the supine position with optimal pelvic stabilization using a handheld dynamometer is highly reliable. While further studies must be done to assess patients with specific pathologies, this test has potential application to monitor and evaluate the effects of surgical interventions and/or rehabilitation protocols for a variety of conditions affecting hip abductor function such as hip fractures and arthritis.
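
    The intraclass correlation coefficients reported above can be illustrated with the simplest ICC form. The study likely used a two-way form such as ICC(2,1); the sketch below uses the one-way ICC(1,1) on made-up test-retest strength data, only to show how between-subject and within-subject variability combine:

```python
def icc_oneway(ratings):
    """ICC(1,1): one-way random-effects, single-measure intraclass correlation.
    ratings[i][j] = measurement j on subject i."""
    n = len(ratings)     # subjects
    k = len(ratings[0])  # repeated measurements per subject
    grand = sum(sum(r) for r in ratings) / (n * k)
    means = [sum(r) / k for r in ratings]
    # between-subject and within-subject mean squares from one-way ANOVA
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for r, m in zip(ratings, means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# made-up test-retest data: 4 subjects measured twice, repeats close together
ratings = [[10, 11], [20, 19], [30, 31], [40, 41]]
print(round(icc_oneway(ratings), 3))  # near 1: subject differences dominate noise
```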

  8. Geometrical comparison of two protein structures using Wigner-D functions.

    PubMed

    Saberi Fathi, S M; White, Diana T; Tuszynski, Jack A

    2014-10-01

    In this article, we develop a quantitative comparison method for two arbitrary protein structures. This method uses a root-mean-square deviation characterization and employs a series expansion of the protein's shape function in terms of the Wigner-D functions to define a new criterion, which is called a "similarity value." We further demonstrate that the expansion coefficients for the shape function obtained with the help of the Wigner-D functions correspond to structure factors. Our method addresses the common problem of comparing two proteins with different numbers of atoms. We illustrate it with a worked example. © 2014 Wiley Periodicals, Inc.

  9. On the Boltzmann Equation with Stochastic Kinetic Transport: Global Existence of Renormalized Martingale Solutions

    NASA Astrophysics Data System (ADS)

    Punshon-Smith, Samuel; Smith, Scott

    2018-02-01

    This article studies the Cauchy problem for the Boltzmann equation with stochastic kinetic transport. Under a cut-off assumption on the collision kernel and a coloring hypothesis for the noise coefficients, we prove the global existence of renormalized (in the sense of DiPerna/Lions) martingale solutions to the Boltzmann equation for large initial data with finite mass, energy, and entropy. Our analysis includes a detailed study of weak martingale solutions to a class of linear stochastic kinetic equations. This study includes a criterion for renormalization, the weak closedness of the solution set, and tightness of velocity averages in L^1.

  10. Effect of plasma spraying modes on material properties of internal combustion engine cylinder liners

    NASA Astrophysics Data System (ADS)

    Timokhova, O. M.; Burmistrova, O. N.; Sirina, E. A.; Timokhov, R. S.

    2018-03-01

    The paper analyses different methods of remanufacturing worn-out machine parts in order to get the best performance characteristics. One of the most promising of them is the plasma spraying method. The mathematical models presented in the paper are intended to anticipate the results of plasma spraying and its effect on the properties of the material of internal combustion engine cylinder liners under repair. The experimental data and research results have been computer processed with the Statistica 10.0 software package. The pair correlation coefficient values (R) and the F-statistic criterion are given to confirm the statistical properties and adequacy of the obtained regression equations.

  11. Measuring monotony in two-dimensional samples

    NASA Astrophysics Data System (ADS)

    Kachapova, Farida; Kachapov, Ilias

    2010-04-01

    This note introduces a monotony coefficient as a new measure of the monotone dependence in a two-dimensional sample. Some properties of this measure are derived. In particular, it is shown that the absolute value of the monotony coefficient for a two-dimensional sample is between |r| and 1, where r is Pearson's correlation coefficient for the sample; that the monotony coefficient equals 1 for any monotone increasing sample and equals -1 for any monotone decreasing sample. This article contains a few examples demonstrating that the monotony coefficient is a more accurate measure of the degree of monotone dependence for a non-linear relationship than Pearson's, Spearman's and Kendall's correlation coefficients. The monotony coefficient is a tool that can be applied to samples in order to find dependencies between random variables; it is especially useful in finding pairs of dependent variables in a large dataset with many variables. Undergraduate students in mathematics and science would benefit from learning and applying this measure of monotone dependence.
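
    The note's central point, that rank-based measures capture monotone but nonlinear dependence better than Pearson's r, can be checked with a small sketch (the monotony coefficient itself is not reproduced here; Spearman's coefficient stands in as the rank-based comparison):

```python
import math

def pearson(xs, ys):
    """Pearson's correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman(xs, ys):
    """Spearman's coefficient: Pearson's r on the ranks (distinct values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pearson(ranks(xs), ranks(ys))

xs = list(range(1, 11))
ys = [x ** 3 for x in xs]          # monotone increasing but strongly nonlinear
print(round(pearson(xs, ys), 3))   # below 1: Pearson penalizes the nonlinearity
print(round(spearman(xs, ys), 3))  # 1.0: perfect monotone dependence
```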

  12. Determination of the acoustoelastic coefficient for surface acoustic waves using dynamic acoustoelastography: an alternative to static strain.

    PubMed

    Ellwood, R; Stratoudaki, T; Sharples, S D; Clark, M; Somekh, M G

    2014-03-01

    The third-order elastic constants of a material are believed to be sensitive to residual stress, fatigue, and creep damage. The acoustoelastic coefficient is directly related to these third-order elastic constants. Several techniques have been developed to monitor the acoustoelastic coefficient using ultrasound. In this article, two techniques to impose stress on a sample are compared, one using the classical method of applying a static strain using a bending jig and the other applying a dynamic stress due to the presence of an acoustic wave. Results on aluminum samples are compared. Both techniques are found to produce similar values for the acoustoelastic coefficient. The dynamic strain technique however has the advantages that it can be applied to large, real world components, in situ, while ensuring the measurement takes place in the nondestructive, elastic regime.

  13. Volcano plots in analyzing differential expressions with mRNA microarrays.

    PubMed

    Li, Wentian

    2012-12-01

    A volcano plot displays unstandardized signal (e.g. log-fold-change) against noise-adjusted/standardized signal (e.g. t-statistic or -log(10)(p-value) from the t-test). We review the basic and interactive use of the volcano plot and its crucial role in understanding the regularized t-statistic. The joint filtering gene selection criterion based on regularized statistics has a curved discriminant line in the volcano plot, as compared to the two perpendicular lines for the "double filtering" criterion. This review attempts to provide a unifying framework for discussions on alternative measures of differential expression, improved methods for estimating variance, and visual display of a microarray analysis result. We also discuss the possibility of applying volcano plots to other fields beyond microarray.
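
    The two volcano-plot axes and the "double filtering" criterion mentioned above can be sketched with toy numbers (hypothetical genes, not data from the article; the regularized-statistic criterion with its curved discriminant line is not reproduced here):

```python
import math

def tstat(a, b):
    """Welch two-sample t-statistic (the standardized, noise-adjusted signal)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# toy expression values (log2 scale) for three genes in two conditions
genes = {
    "flat":   ([5.0, 5.1, 4.9], [5.0, 5.1, 5.0]),  # no change
    "noisy":  ([2.0, 8.0, 5.0], [1.0, 9.0, 4.0]),  # big spread, weak evidence
    "strong": ([7.0, 7.1, 6.9], [5.0, 5.1, 4.9]),  # clear, consistent shift
}

for name, (a, b) in genes.items():
    lfc = sum(a) / len(a) - sum(b) / len(b)  # x-axis: unstandardized signal
    t = tstat(a, b)                          # y-axis: standardized signal
    # "double filtering": two perpendicular cut-off lines in the volcano plot
    selected = abs(lfc) > 1 and abs(t) > 4
    print(f"{name}: lfc={lfc:+.2f} t={t:+.2f} selected={selected}")
```

    Only the gene that clears both thresholds survives, which is exactly the rectangular selection region that the curved discriminant line of a regularized statistic relaxes.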

  14. Deciding Termination for Ancestor Match- Bounded String Rewriting Systems

    NASA Technical Reports Server (NTRS)

    Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes

    2005-01-01

    Termination of a string rewriting system can be characterized by termination on suitable recursively defined languages. This kind of termination criterion has been criticized for its lack of automation. In an earlier paper we have shown how to construct an automated termination criterion if the recursion is aligned with the rewrite relation. We have demonstrated the technique with Dershowitz's forward closure criterion. In this paper we show that a different approach is suitable when the recursion is aligned with the inverse of the rewrite relation. We apply this idea to Kurth's ancestor graphs and obtain ancestor match-bounded string rewriting systems. Termination is shown to be decidable for this class. The resulting method improves upon those based on match-boundedness or inverse match-boundedness.

  15. Matrix cracking in laminated composites under monotonic and cyclic loadings

    NASA Technical Reports Server (NTRS)

    Allen, David H.; Lee, Jong-Won

    1991-01-01

    An analytical model based on the internal state variable (ISV) concept and the strain energy method is proposed for characterizing the monotonic and cyclic response of laminated composites containing matrix cracks. A modified constitutive relation is formulated for angle-ply laminates under general in-plane mechanical loading and constant temperature change. A monotonic matrix cracking criterion is developed for predicting the crack density in cross-ply laminates as a function of the applied laminate axial stress. An initial formulation for a cyclic matrix cracking criterion for cross-ply laminates is also discussed. For the monotonic loading case, a number of experimental data and well-known models are compared with the present study for validating the practical applicability of the ISV approach.

  16. Delamination micromechanics analysis

    NASA Technical Reports Server (NTRS)

    Adams, D. F.; Mahishi, J. M.

    1985-01-01

    A three-dimensional finite element analysis was developed which includes elastoplastic, orthotropic material response, and fracture initiation and propagation. Energy absorption due to physical failure processes characteristic of the heterogeneous and anisotropic nature of composite materials is modeled. A local energy release rate in the presence of plasticity was defined and used as a criterion to predict the onset and growth of cracks in both micromechanics and macromechanics analyses. This crack growth simulation technique is based upon a virtual crack extension method. A three-dimensional finite element micromechanics model is used to study the effects of broken fibers, cracked matrix and fiber-matrix debond on the fracture toughness of the unidirectional composite. The energy release rates at the onset of unstable crack growth in the micromechanics analyses are used as critical energy release rates in the macromechanics analysis. This integrated micromechanical and macromechanical fracture criterion is shown to be very effective in predicting the onset and growth of cracks in general multilayered composite laminates by applying the criterion to a single-edge notched graphite/epoxy laminate subjected to in-plane tension normal to the notch.

  17. Hierarchical semi-numeric method for pairwise fuzzy group decision making.

    PubMed

    Marimin, M; Umano, M; Hatono, I; Tamura, H

    2002-01-01

    Gradual improvements to a single-level semi-numeric method, i.e., linguistic-label preference representation by fuzzy-set computation for pairwise fuzzy group decision making, are summarized. The method is extended to solve multiple-criteria hierarchical-structure pairwise fuzzy group decision-making problems. The problems are hierarchically structured into focus, criteria, and alternatives. Decision makers express their evaluations of the criteria, and of the alternatives under each criterion, by using linguistic labels. The labels are converted into triangular fuzzy numbers (TFNs) and processed in that form. Evaluations of criteria yield relative criteria weights. Evaluations of the alternatives, based on each criterion, yield a degree of preference for each alternative or a degree of satisfaction for each preference value. By using a neat ordered weighted average (OWA) or a fuzzy weighted average operator, solutions obtained based on each criterion are aggregated into final solutions. The hierarchical semi-numeric method is suitable for solving larger and more complex pairwise fuzzy group decision-making problems. The proposed method has been verified and applied to solve some real cases and is compared to Saaty's (1996) analytic hierarchy process (AHP) method.
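
    Triangular fuzzy numbers and their aggregation can be sketched with simple component-wise arithmetic. This is a common simplification, not the paper's exact operators; the label-to-TFN values below are illustrative:

```python
def tfn_add(a, b):
    # component-wise addition of triangular fuzzy numbers (lower, mode, upper)
    return tuple(x + y for x, y in zip(a, b))

def tfn_scale(a, s):
    # scaling a TFN by a crisp non-negative weight
    return tuple(x * s for x in a)

def tfn_weighted_avg(tfns, weights):
    """Crisp-weighted average of TFNs: a simple stand-in for the OWA /
    fuzzy weighted average aggregation used in the paper."""
    total = (0.0, 0.0, 0.0)
    for t, w in zip(tfns, weights):
        total = tfn_add(total, tfn_scale(t, w))
    return total

def centroid(t):
    # a standard defuzzification: centroid of the triangle
    return sum(t) / 3

# linguistic labels as TFNs on a 0-10 scale (illustrative values)
labels = {"low": (0, 2, 4), "medium": (3, 5, 7), "high": (6, 8, 10)}

# one alternative rated "high", "medium", "high" on three weighted criteria
evals = [labels["high"], labels["medium"], labels["high"]]
agg = tfn_weighted_avg(evals, [0.5, 0.3, 0.2])
print(agg, round(centroid(agg), 2))  # aggregated TFN and its crisp score
```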

  18. Likelihood of Alfvénic instability bifurcation in experiments

    NASA Astrophysics Data System (ADS)

    Duarte, Vinicius; Gorelenkov, Nikolai; Schneller, Mirjam; Fredrickson, Eric; Berk, Herbert; Canal, Gustavo; Heidbrink, William; Kaye, Stanley; Podesta, Mario; van Zeeland, Michael; Wang, Weixing

    2017-10-01

    We apply a criterion for whether fast ion redistribution in tokamaks is likely to fall in the convective or the diffusive nonlinear regime. The criterion, which is shown to be rather sensitive to the relative strength of collisional or micro-turbulent scattering and drag processes, ultimately translates into a condition for the applicability of reduced quasilinear modeling for realistic tokamak eigenmode scenarios. The criterion is tested and validated against different machines, where the chirping mode behavior is shown to be in accord with the model. It has been found that anomalous fast ion transport is a likely mediator of the bifurcation between fixed-frequency mode behavior and rapid chirping in tokamaks. In addition, micro-turbulence appears to resolve the disparity between the ubiquitous chirping observations in spherical tokamaks and their rarer occurrence in conventional tokamaks. In NSTX, the tendency for chirping is further studied in terms of the beam beta and the plasma rotation shear. For a more accurate quantitative assessment, numerical simulations of the effects of electrostatic ion temperature gradient turbulence on chirping are presently being pursued using the GTS code.

  19. Assessment of selenium effects in lotic ecosystems

    USGS Publications Warehouse

    Hamilton, Steven J.; Palace, Vince

    2001-01-01

    The selenium literature has grown substantially in recent years to encompass new information in a variety of areas. Correspondingly, several different approaches to establishing a new water quality criterion for selenium have been proposed since establishment of the national water quality criterion in 1987. Diverging viewpoints and interpretations of the selenium literature have led to opposing perspectives on issues such as establishing a national criterion based on a sediment-based model, using hydrologic units to set criteria for stream reaches, and applying lentic-derived effects to lotic environments. This Commentary presents information on the lotic versus lentic controversy. Recently, an article was published that concluded that no adverse effects were occurring in a cutthroat trout population in a coldwater river with elevated selenium concentrations (C. J. Kennedy, L. E. McDonald, R. Loveridge, and M. M. Strosher, 2000, Arch. Environ. Contam. Toxicol. 39, 46–52). This article has added to the controversy rather than provided further insight into selenium toxicology. Information, or rather missing information, in the article has been critically reviewed and problems in the interpretations are discussed.

  20. Methods of evaluating the effects of coding on SAR data

    NASA Technical Reports Server (NTRS)

    Dutkiewicz, Melanie; Cumming, Ian

    1993-01-01

    It is recognized that mean square error (MSE) is not a sufficient criterion for determining the acceptability of an image reconstructed from data that has been compressed and decompressed using an encoding algorithm. In the case of Synthetic Aperture Radar (SAR) data, it is also deemed insufficient to display the reconstructed image (and perhaps the error image) alongside the original and make a (subjective) judgment as to the quality of the reconstructed data. In this paper we suggest a number of additional evaluation criteria which we feel should be included as evaluation metrics in SAR data encoding experiments. These criteria have been specifically chosen to provide a means of ensuring that the important information in the SAR data is preserved. The paper also presents the results of an investigation into the effects of coding on SAR data fidelity when the coding is applied in (1) the signal data domain, and (2) the image domain. An analysis of the results highlights the shortcomings of the MSE criterion, and shows which of the suggested additional criteria have been found to be most important.
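
    The shortcoming of MSE noted above is easy to demonstrate: two very different distortions can have identical MSE. A toy sketch on an 8x8 "image" (not SAR data), contrasting a uniform radiometric offset with a single large localized error of the kind that would wipe out a point target:

```python
def mse(a, b):
    """Mean square error between two equal-sized 2D arrays (lists of lists)."""
    n = len(a) * len(a[0])
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb)) / n

orig = [[10.0] * 8 for _ in range(8)]

# distortion 1: a small offset spread evenly over every pixel
offset = [[v + 1.0 for v in row] for row in orig]

# distortion 2: one pixel corrupted by a single large error
local = [row[:] for row in orig]
local[3][3] += 8.0

print(mse(orig, offset), mse(orig, local))  # identical MSE, very different images
```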

  1. Building a maintenance policy through a multi-criterion decision-making model

    NASA Astrophysics Data System (ADS)

    Faghihinia, Elahe; Mollaverdi, Naser

    2012-08-01

    A major competitive advantage of production and service systems is establishing a proper maintenance policy. Therefore, maintenance managers should make maintenance decisions that best fit their systems. Multi-criterion decision-making methods can take into account a number of aspects associated with the competitiveness factors of a system. This paper presents a multi-criterion decision-aided maintenance model with the three criteria that most influence decision making: reliability, maintenance cost, and maintenance downtime. The Bayesian approach has been applied to cope with the shortage of maintenance failure data. The model therefore seeks the best compromise between these three criteria and establishes replacement intervals using the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE II), integrating the Bayesian approach with regard to the decision maker's preferences. Finally, the model has been illustrated with a numerical application, and for visual realization and an illustrative sensitivity analysis, PROMETHEE GAIA (the visual interactive module) has been used. PROMETHEE II and PROMETHEE GAIA were run with the Decision Lab software. A sensitivity analysis has been made to verify the robustness of certain parameters of the model.

  2. Clinical validity of prototype personality disorder ratings in adolescents.

    PubMed

    Defife, Jared A; Haggerty, Greg; Smith, Scott W; Betancourt, Luis; Ahmed, Zain; Ditkowsky, Keith

    2015-01-01

    A growing body of research shows that personality pathology in adolescents is clinically distinctive and frequently stable into adulthood. A reliable and useful method for rating personality pathology in adolescent patients has the potential to enhance conceptualization, dissemination, and treatment effectiveness. The aim of this study is to examine the clinical validity of a prototype matching approach (derived from the Shedler Westen Assessment Procedure-Adolescent Version) for quantifying personality pathology in an adolescent inpatient sample. Sixty-six adolescent inpatients and their parents or legal guardians completed forms of the Child Behavior Checklist (CBCL) assessing emotional and behavioral problems. Clinical criterion variables including suicide history, substance use, and fights with peers were also assessed. Patients' individual and group therapists on the inpatient unit completed personality prototype ratings. Prototype diagnoses demonstrated substantial reliability (median intraclass correlation coefficient =.75) across independent ratings from individual and group therapists. Personality prototype ratings correlated with the CBCL scales and clinical criterion variables in anticipated and meaningful ways. As seen in prior research with adult samples, prototype personality ratings show clinical validity across independent clinician raters previously unfamiliar with the approach, and they are meaningfully related to clinical symptoms, behavioral problems, and adaptive functioning.

  3. Forecasting typhoid fever incidence in the Cordillera administrative region in the Philippines using seasonal ARIMA models

    NASA Astrophysics Data System (ADS)

    Cawiding, Olive R.; Natividad, Gina May R.; Bato, Crisostomo V.; Addawe, Rizavel C.

    2017-11-01

    The prevalence of typhoid fever in developing countries such as the Philippines calls for accurate forecasting of the disease, which would be of great assistance in strategic disease prevention. This paper presents the development of models that predict the behavior of typhoid fever incidence based on the monthly incidence in the provinces of the Cordillera Administrative Region from 2010 to 2015, using univariate time series analysis. The data used were obtained from the Cordillera Office of the Department of Health (DOH-CAR). Seasonal autoregressive integrated moving average (SARIMA) models were used to incorporate the seasonality of the data. A comparison of the obtained models revealed that the SARIMA (1,1,7)(0,0,1)_12 model with a fixed coefficient at the seventh lag produces the smallest root mean square error (RMSE), mean absolute error (MAE), Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC). The model suggested that for the year 2016 the number of cases would increase from July to September and drop in December. This was then validated using the data collected from January 2016 to December 2016.
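
    Model selection by information criteria, as used above, can be sketched with a Gaussian-likelihood AIC comparison between a constant-mean model and a monthly-seasonal-mean model on synthetic data (not a SARIMA fit, and not the DOH-CAR series; the seasonal pattern and noise level are made up):

```python
import math
import random

def aic_ls(rss, n, k):
    # AIC for a least-squares fit under Gaussian errors: n*ln(RSS/n) + 2k
    return n * math.log(rss / n) + 2 * k

random.seed(0)
pattern = [10, 11, 12, 14, 18, 25, 30, 32, 28, 18, 12, 9]  # mid-year peak
series = [x + random.gauss(0, 1) for _ in range(3) for x in pattern]  # 3 years
n = len(series)

# model A: one overall mean (k = 1 parameter)
mean = sum(series) / n
rss_a = sum((y - mean) ** 2 for y in series)

# model B: one mean per calendar month (k = 12), capturing the seasonality
rss_b = 0.0
for m in range(12):
    vals = series[m::12]
    mu = sum(vals) / len(vals)
    rss_b += sum((y - mu) ** 2 for y in vals)

# lower AIC wins: the seasonal model earns its extra 11 parameters here
print(round(aic_ls(rss_a, n, 1), 1), round(aic_ls(rss_b, n, 12), 1))
```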

  4. Prediction of Central Burst Defects in Copper Wire Drawing Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, G.; Haddi, A.

    2011-01-17

    In this study, the prediction of chevron cracks (central bursts) in the copper wire drawing process is investigated using experimental and numerical approaches. The conditions for chevron crack formation along the wire axis depend on (i) the die angle, (ii) the friction coefficient between the die and the wire, (iii) the reduction in cross-sectional area of the wire, (iv) the material properties and (v) the drawing velocity or strain rate. Under various drawing conditions, a numerical simulation for the prediction of central burst defects is presented using an axisymmetric finite element model. This model is based on the application of the Cockcroft and Latham fracture criterion, which was used as the damage value to estimate if and where defects will occur during copper wire drawing. The critical damage value of the material is obtained from a uniaxial tensile test. The results show that the die angle and the reduction ratio have a significant effect on the stress distribution and the maximum damage value. Central bursts are expected to occur when the die angle and reduction ratio reach critical values. Numerical predictions are compared with experimental observations.
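
    The Cockcroft and Latham criterion accumulates the largest principal tensile stress over equivalent plastic strain and flags failure when the integral reaches a critical value calibrated from a tensile test. A minimal sketch with made-up stress-strain data and a hypothetical critical value (not the paper's finite-element implementation):

```python
def cockcroft_latham(path):
    """Integrate max(sigma_1, 0) d(eps_bar) along a loading path (trapezoids).
    path: list of (equivalent_plastic_strain, max_principal_stress) points."""
    damage = 0.0
    for (e0, s0), (e1, s1) in zip(path, path[1:]):
        damage += 0.5 * (max(s0, 0.0) + max(s1, 0.0)) * (e1 - e0)
    return damage

# hypothetical critical damage value from a uniaxial tensile test (MPa units)
C_CRIT = 20.0

# made-up centerline path for one drawing pass: tensile stress rising with strain
path = [(0.00, 0.0), (0.05, 150.0), (0.10, 220.0), (0.15, 260.0)]
D = cockcroft_latham(path)
print(D, D >= C_CRIT)  # damage reaching C_CRIT flags likely central bursting
```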

  5. Calibration of DEM parameters on shear test experiments using Kriging method

    NASA Astrophysics Data System (ADS)

    Bednarek, Xavier; Martin, Sylvain; Ndiaye, Abibatou; Peres, Véronique; Bonnefoy, Olivier

    2017-06-01

    Calibration of powder mixing simulations using the Discrete Element Method (DEM) is still an open issue. Achieving good agreement with experimental results is difficult because time-efficient use of DEM involves strong assumptions. This work presents a methodology for calibrating DEM parameters using the Efficient Global Optimization (EGO) algorithm, which is based on the Kriging interpolation method. Classical shear test experiments are used as calibration experiments. The calibration is performed on two parameters: Young's modulus and the friction coefficient. Determining the minimal number of grains that has to be used is a critical step: simulating too few grains would not represent the realistic behavior of the powder, while simulating a huge number of grains would be strongly time-consuming. The optimization goal is the minimization of the objective function, defined as the distance between simulated and measured behaviors. The EGO algorithm maximizes the Expected Improvement criterion to find the next point to be simulated. This stochastic criterion draws on the two quantities provided by the Kriging method, the prediction of the objective function and the estimate of the error made, and is thus able to quantify the improvement that new simulations at specified DEM parameters would bring to the minimization.
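
    The Expected Improvement criterion driving EGO has a closed form given the Kriging mean and standard deviation at a candidate point. A sketch for minimization (the numbers are illustrative, not from the shear-test calibration):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: expected amount by which a new evaluation at a
    point with Kriging prediction (mu, sigma) improves on the best value."""
    if sigma == 0.0:
        return 0.0
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    return (f_best - mu) * cdf + sigma * pdf

f_best = 1.0
# candidate A: predicted slightly worse than f_best, but very uncertain
a = expected_improvement(mu=1.2, sigma=0.5, f_best=f_best)
# candidate B: predicted slightly better, but the model is nearly sure about it
b = expected_improvement(mu=0.95, sigma=0.01, f_best=f_best)
print(a, b)  # EI favors the uncertain point: exploration balances exploitation
```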

  6. [Internal consistency and criterion validity and reliability of the Mexican Version of the Child Behavior Checklist 1.5-5 (CBCL/1.5-5)].

    PubMed

    Albores-Gallo, Lilia; Hernández-Guzmán, Laura; Hasfura-Buenaga, Cecilia; Navarro-Luna, Enrique

    To investigate the validity and internal consistency of the Mexican version of the CBCL/1.5-5, which assesses the most common psychopathology in pre-school children in clinical and epidemiological settings. A total of 438 parents from two groups, clinical-psychiatric (N = 62) and community (N = 376), completed the CBCL/1.5-5 Mexican version. The internal consistency was high for the total problems scale (α = 0.95) and for the internalized (α = 0.89) and externalized (α = 0.91) subscales. The test-retest reliability (one week), assessed using the intraclass correlation coefficient (ICC), was ≥ 0.95 for the internalized, externalized, and total problems subscales. The ROC curve for the criterion status of clinically-referred vs. non-referred, using a total problems scale cutoff ≥ 24, yielded an AUC (area under the curve) of 0.77, a specificity of 0.73, and a sensitivity of 0.70. The CBCL/1.5-5 Mexican version is a reliable and valid tool. Copyright © 2016 Sociedad Chilena de Pediatría. Publicado por Elsevier España, S.L.U. All rights reserved.
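    The internal consistency figures quoted in records like this one follow the standard Cronbach's alpha formula, alpha = k/(k−1) · (1 − sum of item variances / variance of the total score), for k items. A minimal sketch on hypothetical checklist responses (not the study's data):

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Hypothetical responses: 5 respondents x 4 items, strongly correlated items
    data = [
        [1, 1, 2, 1],
        [2, 2, 2, 2],
        [3, 3, 2, 3],
        [3, 3, 3, 3],
        [0, 1, 1, 0],
    ]
    print(round(cronbach_alpha(data), 3))
    ```

    Highly correlated items inflate the total-score variance relative to the sum of item variances, which is what pushes alpha towards one.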

  7. The elastic stability, bifurcation and ideal strength of gold under hydrostatic stress: an ab initio calculation.

    PubMed

    Wang, Hao; Li, Mo

    2009-11-11

    In this paper, we employ an ab initio density functional theory calculation to investigate the elastic stability of face-centered cubic Au under hydrostatic deformation. We identify the elastic stiffness constant B(ijkl) as the coefficient in the stress-strain relation for an arbitrary deformed state, and use it to test the stability condition. We show that this criterion bears the same physics as that proposed earlier by Frenkel and Orowan and agrees with the Born-Hill criterion. The results from those two approaches agree well with each other. We show that the stability limit, or instability, of the perfect Au crystal under hydrostatic expansion is not associated with the bulk stiffness modulus as predicted in the previous work; rather it is caused by a shear instability associated with the vanishing rhombohedral shear stiffness modulus. The deviation of the deformation mode from the primary hydrostatic loading path signals a bifurcation or symmetry breaking in the ideal crystal. The corresponding ideal hydrostatic strength for Au is 19.2 GPa at the Lagrangian expansion strain of ∼0.06. In the case of compression, Au remains stable over the entire pressure range in our calculation.
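    For an unstressed cubic crystal, the Born-type stability conditions reduce to positivity of three elastic-constant combinations: the bulk combination C11 + 2C12, the tetragonal shear C11 − C12, and C44. Under finite hydrostatic load these must be generalized, as the abstract discusses, so the sketch below is only the zero-stress check; the Au elastic constants used are approximate literature values, not taken from this paper:

    ```python
    def cubic_born_stable(c11, c12, c44):
        """Born stability conditions for an unstressed cubic crystal
        (elastic constants in GPa): bulk, C44 shear, and tetragonal shear."""
        bulk = c11 + 2.0 * c12 > 0.0
        shear = c44 > 0.0
        tetragonal = c11 - c12 > 0.0
        return bulk and shear and tetragonal

    # Approximate ambient elastic constants of Au (assumed values, GPa)
    print(cubic_born_stable(192.0, 163.0, 42.0))
    ```

    The paper's point is that along a hydrostatic loading path the first of these combinations is not the one that fails first; a shear combination vanishes before the bulk modulus does.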

  8. Numerical investigation of galloping instabilities in Z-shaped profiles.

    PubMed

    Gomez, Ignacio; Chavez, Miguel; Alonso, Gustavo; Valero, Eusebio

    2014-01-01

    Aeroelastic effects are relatively common in the design of modern civil constructions such as office blocks, airport terminal buildings, and factories. Typical flexible structures exposed to the action of wind are shading devices, normally slats or louvers. A typical cross-section for such elements is a Z-shaped profile, made up of a central web and two side wings. Galloping instabilities are often assessed in practice using the Glauert-Den Hartog criterion, which relies on accurate predictions of the dependence of the aerodynamic force coefficients on the angle of attack. This paper presents the results of a numerical parametric analysis performed on different Z-shaped louvers to determine their translational galloping instability regions. These numerical results have been validated against a parametric analysis of Z-shaped profiles based on static wind tunnel tests. The computations were carried out with the DLR TAU code, a standard code within the European aeronautical industry. The study focuses on the numerical prediction of galloping, presented visually through stability maps. Comparisons between numerical and experimental data are given for various meshes and turbulence models.
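    The Glauert-Den Hartog criterion predicts transverse galloping when H = dC_L/dα + C_D becomes negative at the equilibrium angle of attack, which is why accurate force-coefficient curves matter. A minimal sketch that flags unstable angles from tabulated coefficients (the coefficient values below are hypothetical, not from this study):

    ```python
    import numpy as np

    def den_hartog_unstable(alpha_deg, c_lift, c_drag):
        """Flag angles of attack where the Glauert-Den Hartog coefficient
        H = dC_L/d(alpha) + C_D is negative, i.e. where transverse
        galloping may occur."""
        alpha = np.radians(np.asarray(alpha_deg, dtype=float))
        dcl_dalpha = np.gradient(np.asarray(c_lift, dtype=float), alpha)
        return dcl_dalpha + np.asarray(c_drag, dtype=float) < 0.0

    # Hypothetical coefficients for a bluff profile: lift drops sharply
    # past stall while drag stays moderate.
    alpha_deg = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
    c_lift = np.array([0.0, 0.4, 0.5, 0.1, -0.1])
    c_drag = np.array([0.05, 0.06, 0.10, 0.20, 0.30])
    print(den_hartog_unstable(alpha_deg, c_lift, c_drag))
    ```

    A stability map of the kind the paper describes is essentially this check repeated over the geometric parameters of the profile.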

  9. Psychometric properties and differential explanation of a short measure of effort-reward imbalance at work: a study of industrial workers in Germany.

    PubMed

    Li, Jian; Loerbroks, Adrian; Jarczok, Marc N; Schöllgen, Ina; Bosch, Jos A; Mauss, Daniel; Siegrist, Johannes; Fischer, Joachim E

    2012-09-01

    We test the psychometric properties of a short version of the Effort-Reward Imbalance (ERI) questionnaire, in addition to testing an interaction term of this model's main components on health functioning. A self-administered survey was conducted in a sample of 2,738 industrial workers (77% men, mean age 41.6 years) from a large manufacturing company in Southern Germany. The internal consistency reliability, structural validity, and criterion validity were analyzed. Satisfactory internal consistencies of the three scales, "effort", "reward", and "overcommitment", were obtained (Cronbach's alpha coefficients 0.77, 0.82, and 0.83, respectively). Confirmatory factor analysis showed a good fit of the data to the theoretical structure (AGFI = 0.94, RMSEA = 0.060). Evidence of criterion validity was demonstrated. Importantly, a significant synergistic interaction effect of ERI and overcommitment on poor mental health functioning was observed (odds ratio 6.74 (95% CI 5.32-8.52); synergy index 1.78 (95% CI 1.25-2.55)). This short version of the ERI questionnaire is a reliable and valid tool for epidemiological research on occupational health. Copyright © 2012 Wiley Periodicals, Inc.
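    The synergy index reported above is Rothman's S = (OR_both − 1) / ((OR_A − 1) + (OR_B − 1)), where S > 1 indicates a super-additive interaction between the two exposures. A minimal sketch with hypothetical odds ratios (not the study's estimates):

    ```python
    def synergy_index(or_both, or_a_only, or_b_only):
        """Rothman's synergy index for two exposures on an odds-ratio scale:
        S > 1 means the joint effect exceeds the sum of the separate effects."""
        return (or_both - 1.0) / ((or_a_only - 1.0) + (or_b_only - 1.0))

    # Hypothetical: each exposure alone doubles the odds; together they
    # raise the odds fourfold, giving a super-additive S of 1.5.
    print(synergy_index(4.0, 2.0, 2.0))
    ```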

  10. Multispectral image fusion for illumination-invariant palmprint recognition

    PubMed Central

    Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied. PMID:28558064

  11. Multispectral image fusion for illumination-invariant palmprint recognition.

    PubMed

    Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied.

  12. Assessment scale of risk for surgical positioning injuries 1

    PubMed Central

    Lopes, Camila Mendonça de Moraes; Haas, Vanderlei José; Dantas, Rosana Aparecida Spadoti; de Oliveira, Cheila Gonçalves; Galvão, Cristina Maria

    2016-01-01

    ABSTRACT Objective: to build and validate a scale to assess the risk of surgical positioning injuries in adult patients. Method: methodological research conducted in two phases: construction, face and content validation of the scale, followed by field research involving 115 patients. Results: the Risk Assessment Scale for the Development of Injuries due to Surgical Positioning contains seven items, each of which presents five subitems. The scale score ranges from seven to 35 points; the higher the score, the higher the patient's risk. The Content Validity Index of the scale was 0.88. The application of Student's t-test for equality of means revealed concurrent criterion validity between the scores on the Braden scale and the constructed scale. To assess predictive criterion validity, the association between the presence of pain deriving from surgical positioning and the development of pressure ulcer was tested using the score on the Risk Assessment Scale for the Development of Injuries due to Surgical Positioning (p<0.001). The interrater reliability was verified using the intraclass correlation coefficient, equal to 0.99 (p<0.001). Conclusion: the scale is a valid and reliable tool, but further research is needed to assess its use in clinical practice. PMID:27579925

  13. Assessment of a condition-specific quality-of-life measure for patients with developmentally absent teeth: validity and reliability testing.

    PubMed

    Akram, A J; Ireland, A J; Postlethwaite, K C; Sandy, J R; Jerreat, A S

    2013-11-01

    This article describes the process of validity and reliability testing of a condition-specific quality-of-life measure for patients with hypodontia presenting for orthodontic treatment. The development of the instrument is described in a previous article. Royal Devon and Exeter NHS Foundation Trust & Musgrove Park Hospital, Taunton. The child perception questionnaire was used as a standard against which to test criterion validity. The Bland and Altman method was used to check agreement between the two questionnaires. Construct validity was tested using principal component analysis on the four sections of the questionnaire. Test-retest reliability was tested using intraclass correlation coefficient and Bland and Altman method. Cronbach's alpha was used to test internal consistency reliability. Overall the questionnaire showed good reliability, criterion and construct validity. This together with previous evidence of good face and content validity suggests that the instrument may prove useful in clinical practice and further research. This study has demonstrated that the newly developed condition-specific quality-of-life questionnaire is both valid and reliable for use in young patients with hypodontia. © 2013 John Wiley & Sons A/S. Published by Blackwell Publishing Ltd.

  14. Music therapy career aptitude test.

    PubMed

    Lim, Hayoung A

    2011-01-01

    The purpose of the Music Therapy Career Aptitude Test (MTCAT) was to measure the affective domain of music therapy students, including their self-awareness as it relates to the music therapy career, value in human development, interest in general therapy, and aptitude for being a professional music therapist. The MTCAT was administered to 113 music therapy students who are currently freshmen or sophomores in an undergraduate music therapy program or in the first year of a music therapy master's equivalency program. The results of the analysis indicated that the MTCAT is normally distributed and that all 20 questions are significantly correlated with the total test score of the MTCAT. The reliability of the MTCAT was considerably high (Cronbach's coefficient alpha = 0.8). The criterion-related validity was examined by comparing the MTCAT scores of music therapy students with the scores of 43 professional music therapists. The correlation between the scores of students and professionals was found to be statistically significant. The results suggest that normal distribution, internal consistency, homogeneity of construct, item discrimination, correlation analysis, content validity, and criterion-related validity in the MTCAT may be helpful in predicting music therapy career aptitude and may aid in the career decision making process of college music therapy students.

  15. Effect of ultrasound pre-treatment on the drying kinetics of brown seaweed Ascophyllum nodosum.

    PubMed

    Kadam, Shekhar U; Tiwari, Brijesh K; O'Donnell, Colm P

    2015-03-01

    The effect of ultrasound pre-treatment on the drying kinetics of the brown seaweed Ascophyllum nodosum under hot-air convective drying was investigated. Pre-treatments were carried out at ultrasound intensity levels ranging from 7.00 to 75.78 W cm⁻² for 10 min using an ultrasonic probe system. It was observed that ultrasound pre-treatments reduced the drying time required, with the shortest drying times obtained from samples pre-treated at 75.78 W cm⁻². The fit quality of six thin-layer drying models was evaluated using the coefficient of determination (R²), root mean square error (RMSE), AIC (Akaike information criterion) and BIC (Bayesian information criterion). Drying kinetics were modelled using the Newton, Henderson and Pabis, Page, Wang and Singh, Midilli et al. and Weibull models. The Newton, Wang and Singh, and Midilli et al. models showed the best fit to the experimental drying data. The color of ultrasound pre-treated dried seaweed samples was lighter compared to control samples. It was concluded that ultrasound pre-treatment can be effectively used to reduce the energy cost and drying time for drying of A. nodosum. Copyright © 2014 Elsevier B.V. All rights reserved.
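    The model-selection step described above can be sketched for the simplest of the six models, the Newton model MR = exp(−k·t): fit k, then score the fit with R², RMSE, AIC = n·ln(RSS/n) + 2p and BIC = n·ln(RSS/n) + p·ln(n). The drying data below are synthetic, not the paper's measurements:

    ```python
    import numpy as np

    def fit_newton_model(t, mr):
        """Fit the Newton thin-layer model MR = exp(-k t) by linear least
        squares on ln(MR) (a line through the origin), then score the fit."""
        t = np.asarray(t, dtype=float)
        mr = np.asarray(mr, dtype=float)
        k = -np.sum(t * np.log(mr)) / np.sum(t * t)   # least-squares slope
        pred = np.exp(-k * t)
        n, p = len(t), 1                               # one fitted parameter
        rss = np.sum((mr - pred) ** 2)
        r2 = 1.0 - rss / np.sum((mr - mr.mean()) ** 2)
        rmse = np.sqrt(rss / n)
        aic = n * np.log(rss / n) + 2 * p
        bic = n * np.log(rss / n) + p * np.log(n)
        return k, r2, rmse, aic, bic

    # Synthetic drying curve: moisture ratio vs time (minutes)
    t = np.array([10.0, 30.0, 60.0, 120.0, 240.0, 360.0])
    mr = np.array([0.92, 0.78, 0.60, 0.37, 0.13, 0.05])

    k, r2, rmse, aic, bic = fit_newton_model(t, mr)
    print(k, r2, rmse, aic, bic)
    ```

    Comparing several candidate models then reduces to fitting each one and ranking them by AIC/BIC, as the paper does for its six thin-layer models.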

  16. 'Sportmotorische Bestandesaufnahme': criterion- vs. norm-based reference values of fitness tests for Swiss first grade children.

    PubMed

    Tomatis, Laura; Krebs, Andreas; Siegenthaler, Jessica; Murer, Kurt; de Bruin, Eling D

    2015-01-01

    Health is closely linked to physical activity and fitness. It is therefore important to monitor fitness in children. Although many reports on physical tests have been published, data comparison between studies is an issue. This study reports Swiss first grade norm values of fitness tests and compares these with criterion reference data. A total of 10,565 boys (7.18 ± 0.42 years) and 10,204 girls (7.14 ± 0.41 years) were tested for standing long jump, plate tapping, 20-m shuttle run, lateral jump and 20-m sprint. Average values for six-, seven- and eight-year-olds were analysed and reference curves for age were constructed. Z-values were generated for comparisons with criterion references reported in the literature. Results were better for all disciplines in seven-year-old first grade children compared to six-year-old children (p < 0.01). Eight-year-old children did not perform better compared to seven-year-old children in the sprint run (p = 0.11), standing long jump (p > 0.99) and shuttle run (p = 0.43), whereas they were better in all other disciplines compared to their younger peers. The average performance of boys was better than girls except for tapping at the age of 8 (p = 0.06). Differences in performance due to testing protocol and setting must be considered when test values from a first grade setting are compared to criterion-based benchmarks. In a classroom setting, younger children tended to have better results and older children tended to have worse outcomes when compared to their age group criterion reference values. Norm reference data are valid allowing comparison with other data generated by similar test protocols applied in a classroom setting.

  17. Performance of PZT stacks under high-field electric cycling at various temperatures in heavy-duty diesel engine fuel injectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hong; Lin, Hua-Tay; Stafford, Mr Randy

    2016-01-01

    Testing and characterization of large prototype lead zirconate titanate (PZT) stacks present substantial technical challenges to electronic systems. The work in this study shows that an alternative approach can be pursued by using subunits extracted from prototype stacks. Piezoelectric and dielectric integrity was maintained even though the PZT plate specimens experienced an additional loading process involved with the extraction after factory poling. Extracted 10-layer plate specimens were studied by an electric cycle test under an electric field of 3.0/0.0 kV/mm at 100 Hz to 10^8 cycles, both at room temperature (22°C) and at 50°C. The elevated temperature had a defined impact on the fatigue of PZT stacks. About 48 and 28% reductions were observed in the piezoelectric and dielectric coefficients, respectively, after 10^8 cycles at 50°C, compared with reductions of 25 and 15% in the respective coefficients at 22°C. At the same time, the loss tangent varied to a limited extent. The evolution of PZT-electrode interfacial layers or nearby dielectric layers should account for the difference in the fatigue rates of piezoelectric and dielectric coefficients. But the basic contribution to the observed fatigue may result from the buildup of a bias field that finally suppressed the motion of the domain walls. Finally, monitoring of dielectric coefficients can be an effective tool for on-line lifetime prediction of PZT stacks in service if a failure criterion is defined properly.

  18. Caractérisation aérodynamique d'un rotor éolien en site naturel

    NASA Astrophysics Data System (ADS)

    Fabre, B.; Coudeville, H.

    1991-03-01

    The C_p/V_s curve (aerodynamic power coefficient versus tip-speed ratio) may be obtained in the field, hence at different wind speeds and varying rotor speed, without selecting measurements by a steady-state criterion. The experiment was carried out on a small wind turbine combining a straight-bladed Darrieus rotor with an eddy-current converter, and it allowed the conditions for a satisfactory characterisation to be identified. In particular, the histogram of the instantaneous tip-speed ratio values must be as flat as possible over a wide range on both sides of the tip-speed ratio at which the power coefficient is maximum.

  19. Performance of PZT stacks under high-field electric cycling at various temperatures in heavy-duty diesel engine fuel injectors

    NASA Astrophysics Data System (ADS)

    Wang, Hong; Lee, Sung-Min; Lin, Hua-Tay; Stafford, Randy

    2016-04-01

    Testing and characterization of large prototype lead zirconate titanate (PZT) stacks present substantial technical challenges to electronic systems. The work in this study shows that an alternative approach can be pursued by using subunits extracted from prototype stacks. Piezoelectric and dielectric integrity was maintained even though the PZT plate specimens experienced an additional loading process involved with the extraction after factory poling. Extracted 10-layer plate specimens were studied by an electric cycle test under an electric field of 3.0/0.0 kV/mm at 100 Hz to 10^8 cycles, both at room temperature (22°C) and at 50°C. The elevated temperature had a defined impact on the fatigue of PZT stacks. About 48 and 28% reductions were observed in the piezoelectric and dielectric coefficients, respectively, after 10^8 cycles at 50°C, compared with reductions of 25 and 15% in the respective coefficients at 22°C. At the same time, the loss tangent varied to a limited extent. The evolution of PZT-electrode interfacial layers or nearby dielectric layers should account for the difference in the fatigue rates of piezoelectric and dielectric coefficients. But the basic contribution to the observed fatigue may result from the buildup of a bias field that finally suppressed the motion of the domain walls. Finally, monitoring of dielectric coefficients can be an effective tool for on-line lifetime prediction of PZT stacks in service if a failure criterion is defined properly.

  20. Optical band gap studies on lithium aluminum silicate glasses doped with Cr3+ ions

    NASA Astrophysics Data System (ADS)

    El-Diasty, Fouad; Abdel Wahab, Fathy A.; Abdel-Baki, Manal

    2006-11-01

    A lithium aluminum silicate (LAS) glass system doped with chromium ions is prepared. Reflectance and transmittance measurements are used to determine the dispersion of the absorption coefficient. The optical data are explained in terms of the different oxidation states adopted by the chromium ions in the glass network; the oxidation state of the chromium is found to depend on its concentration. Across a wide spectral range, 0.2-1.6 μm, analysis of the fundamental absorption edge provides values for the average energy band gaps for allowed direct and indirect transitions. The optical absorption coefficient just below the absorption edge varies exponentially with photon energy, indicating the presence of an Urbach tail. This tail decreases as the chromium doping increases. From the analysis of the optical absorption data, the absorption peak at the ground-state exciton energy, the absorption at the band gap, and the free-exciton binding energy are determined. The extinction coefficient data are used to determine the Fermi energy level of the studied glasses. The metallization criterion is obtained and discussed to explore the nature of the glasses. The measured IR spectra of the different glasses are used to shed light on their optical properties, correlating them with structure and composition.
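    The exponential Urbach tail mentioned above is conventionally written alpha(E) = alpha0 · exp(E/E_U), so the Urbach energy E_U is the inverse slope of ln(alpha) against photon energy in the tail region. A minimal sketch on synthetic tail data (the numbers are illustrative, not the paper's measurements):

    ```python
    import numpy as np

    def urbach_energy(photon_energy_eV, absorption_coeff):
        """Estimate the Urbach energy E_U (eV) from the exponential tail:
        ln(alpha) = ln(alpha0) + E/E_U, so E_U is the inverse slope of a
        linear fit of ln(alpha) versus photon energy."""
        slope, _ = np.polyfit(photon_energy_eV, np.log(absorption_coeff), 1)
        return 1.0 / slope

    # Synthetic tail generated with E_U = 0.25 eV
    E = np.linspace(2.0, 3.0, 20)
    alpha = 50.0 * np.exp(E / 0.25)
    print(round(urbach_energy(E, alpha), 3))
    ```

    A steeper tail (smaller E_U) corresponds to less band-edge disorder, which is consistent with the reported decrease of the tail at higher chromium doping.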

  1. Background, College Experiences, and the ACT-COMP Exam: Using Construct Validity to Evaluate Assessment Instruments.

    ERIC Educational Resources Information Center

    Pike, Gary R.

    1989-01-01

    A study investigated the appropriateness of the American College Testing Program's College Outcome Measures Program, conducted at the University of Tennessee, Knoxville, by applying the criterion of construct validity. Results indicated that while the test primarily measures individual differences, it is also sensitive to the effects of higher…

  2. Development and Evaluation of Pretraining as an Adjunct to a Pilot Training Study.

    ERIC Educational Resources Information Center

    McFadden, Robert W.; And Others

    The utility of the pretraining of task-relevant cognitive skills within the context of experimental research methodology was investigated in this study. A criterion referenced pretraining multi-media product was developed and applied to support the initial phase of an experimental research effort in which several instructional methods for training…

  3. 40 CFR 93.118 - Criteria and procedures: Motor vehicle emissions budget.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Criteria and procedures: Motor vehicle... Projects Developed, Funded or Approved Under Title 23 U.S.C. or the Federal Transit Laws § 93.118 Criteria...(s) in the applicable implementation plan (or implementation plan submission). This criterion applies...

  4. 40 CFR 85.2224 - Exhaust analysis system-EPA 81.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... are solely in effect. The following exceptions apply: In a state where the Administrator has approved... earlier model year vehicles or engines; in a state where the Administrator has approved a SIP revision... dual sample probe must provide equal flow in each leg. The equal flow criterion is considered to be met...

  5. Homology and the optimization of DNA sequence data

    NASA Technical Reports Server (NTRS)

    Wheeler, W.

    2001-01-01

    Three methods of nucleotide character analysis are discussed and their implications for molecular sequence homology and phylogenetic analysis are compared. The criterion of inter-data-set congruence, both character-based and topological, is applied to two data sets to elucidate and potentially discriminate among these parsimony-based ideas. ©2001 The Willi Hennig Society.

  6. Communicative Competence and Commercial Speakers: Applying Habermas in a First Amendment Issue.

    ERIC Educational Resources Information Center

    Griswold, Bill

    Noting that the work of Jurgen Habermas has had an important influence on philosophy and the social sciences recently, this paper examines the implications of using Habermas's "ideal speech situation" as a criterion for deciding issues relating to the First Amendment. The paper first briefly reviews the distinctive features of critical…

  7. Appliance Services. Basic Course. Career Education.

    ERIC Educational Resources Information Center

    Killough, Joseph

    Several intermediate performance objectives and corresponding criterion measures are listed for each of 25 terminal objectives for a basic appliance repair course. The materials were developed for a 36-week course (2 hours daily) designed to enable the student to be well-grounded in the fundamentals of electricity as well as applied electricity.…

  8. The Use of the CPI to Ascertain Differences between More and Less Effective Student Paraprofessional Helpers.

    ERIC Educational Resources Information Center

    German, Steven C.; Cottle, William C.

    1981-01-01

    A study of student paraprofessional peer counselors showed the California Psychological Inventory (CPI) to be useful in rating the effectiveness of Freshman Assistants. Results from individual criterion measures can also be applied to results from combinations of these measures. Future research should control for demographic influences. (JAC)

  9. Adaptive fuzzy controller for thermal comfort inside the air-conditioned automobile chamber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tong, L.; Yu, B.; Chen, Z.

    1999-07-01

    In order to meet passengers' demand for thermal comfort, an adaptive fuzzy logic control design methodology is applied to the automobile air-conditioner system. In accordance with the theory of air flow and heat transfer, the air temperature field inside the air-conditioned automobile chamber is simulated by a set of simplified semi-empirical formulas. Then, instead of the PMV (Predicted Mean Vote) criterion, the RIV (Real Individual Vote) criterion is adopted as the basis of the control of passengers' thermal comfort. The proposed controller is applied to air temperature regulation at the individual passenger position. The control procedure is based on partitioning the state space of the system into cell groups and fuzzily quantizing the state space into these cells. When the system model has some parameter perturbation, the controller can adjust its control parameters to compensate for the perturbation and maintain good performance. The learning procedure shows its effect in both computer simulation and experiments. The final results demonstrate the good performance of this adaptive fuzzy controller.

  10. Forming limit prediction by an evolving non-quadratic yield criterion considering the anisotropic hardening and r-value evolution

    NASA Astrophysics Data System (ADS)

    Lian, Junhe; Shen, Fuhui; Liu, Wenqi; Münstermann, Sebastian

    2018-05-01

    Constitutive model development has been driven towards a very accurate, fine-resolution description of material behaviour in response to changes in various environmental variables. The evolving features of anisotropic behaviour during deformation have therefore drawn particular attention because of their possible impact on the sheet metal forming industry. An evolving non-associated Hill48 (enHill48) model was recently proposed and applied to forming limit prediction by coupling it with the modified maximum force criterion. On the one hand, that study showed the importance of including the anisotropic evolution for accurate forming limit prediction. On the other hand, it also illustrated that the enHill48 model introduces an instability region that suddenly decreases the formability. Therefore, in this study, an alternative model that is based on the associated flow rule and provides similar anisotropic predictive capability is extended to capture the evolving effects and further applied to forming limit prediction. The final results are compared with experimental data as well as with the results of the enHill48 model.

  11. Hair analysis for cocaine: factors in laboratory contamination studies and their relevance to proficiency sample preparation and hair testing practices.

    PubMed

    Hill, Virginia; Cairns, Thomas; Schaffer, Michael

    2008-03-21

    Hair samples were contaminated by rubbing with cocaine (COC), followed by sweat application, multiple shampoo treatments and storage. The samples were then washed with isopropanol for 15 min, followed by sequential aqueous washes totaling 3.5 h. The amount of drug in the last wash was used to calculate a wash criterion to determine whether samples were positive due to use or contamination. Analyses of cocaine and metabolites were done by LC/MS/MS. These procedures were applied to samples produced by a U.S. government-sponsored cooperative study, in which this laboratory participated, and to samples in a parallel in-house study. All contaminated samples in both studies were correctly identified as contaminated by cutoff, benzoylecgonine (BE) presence, BE ratio, and/or the wash criterion. A method for determining hair porosity was applied to samples in both studies, and porosity characteristics of hair are discussed as they relate to experimental and real-world contamination of hair, preparation of proficiency survey samples, and analysis of unknown hair samples.

  12. Fast wavelet based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when the approach is applied to hyperbolic equations in one space dimension and parabolic equations in multiple dimensions.

  13. Measuring dynamic oil film coefficients of sliding bearing

    NASA Technical Reports Server (NTRS)

    Feng, G.; Tang, X.

    1985-01-01

    A method is presented for determining the dynamic coefficients of bearing oil film. By varying the support stiffness and damping, eight dynamic coefficients of the bearing were determined. Simple and easy to apply, the method can be used in solving practical machine problems.

  14. Construct and Criterion Validity of the PedsQL™ 4.0 Instrument (Pediatric Quality of Life Inventory) in Colombia

    PubMed Central

    Amaya-Arias, Ana Carolina; Alzate, Juan Pablo; Eslava-Schmalbach, Javier H

    2017-01-01

Background: This study aimed at determining the validity of the Pediatric Quality of Life Inventory 4.0 (PedsQL™ 4.0) for the measurement of health-related quality of life (HRQOL) in Colombian children. Methods: Validation study of measurement instruments. The PedsQL™ 4.0 was applied by convenience sampling to 375 pairs of children and adolescents between the ages of 5 and 17 and to their parents-caregivers, as well as to 125 parents-caregivers of children between the ages of 2 and 4 in five cities of Colombia (Bogota, Medellin, Cali, Barranquilla and Bucaramanga). Construct validity was assessed through the use of exploratory and confirmatory factor analysis, and criterion validity was assessed by correlations between the PedsQL™ 4.0 and the KIDSCREEN-27. Results: The instrument was applied to 375 children (ages 5–18) and 125 parents of children between the ages of 2 and 4. Factor analysis revealed four factors considered suitable for the sample in both the child and parent reports, whereas Bartlett's test of sphericity showed inter-correlation between variables. Scale and subscales showed proper indicators of internal consistency. It is recommended that some items in the Colombian version of the scale be revised or excluded. Conclusions: The Spanish version for Colombia of the PedsQL™ 4.0 displays suitable indicators of criterion and construct validity, therefore becoming a valuable tool for measuring HRQOL in children in our country. Some modifications are recommended for the Colombian version of the scale. PMID:28900536

  15. Comparison of different criteria for periodontitis case definition in head and neck cancer individuals.

    PubMed

    Bueno, Audrey Cristina; Ferreira, Raquel Conceição; Cota, Luis Otávio Miranda; Silva, Guilherme Carvalho; Magalhães, Cláudia Silami; Moreira, Allyson Nogueira

    2015-09-01

Different periodontitis case definitions have been used in clinical research and epidemiology. The aim of this study was to determine the most accurate criterion for defining mild and moderate periodontitis cases in head and neck cancer individuals before radiotherapy. The frequency of periodontitis in a sample of 84 individuals was determined according to different diagnostic criteria: (1) Lopez et al. (2002); (2) Hujoel et al. (2006); (3) Beck et al. (1990); (4) Machtei et al. (1992); (5) Tonetti and Claffey (2005); and (6) Page and Eke (2007). All diagnoses were based on clinical parameters obtained by a single calibrated examiner (Kw = 0.71). The individuals were evaluated before radiotherapy. They received oral hygiene instructions, and the cases diagnosed with periodontitis (Page and Eke 2007) were treated. Definition 6 was taken as the gold standard, and the others were compared with it by means of agreement, sensitivity (SS), specificity (SP), and the area under the ROC curve. The kappa test evaluated the agreement between definitions. The frequency of periodontitis at baseline was 53.6 % (definition 1), 81.0 % (definition 2), 40.5 % (definition 3), 26.2 % (definition 4), 13.1 % (definition 5), and 70.2 % (definition 6). The kappa test showed moderate agreement between definitions 6 and 2 (59.0 %) and definitions 6 and 1 (56.0 %). The criterion with the highest SS (0.92) and SP (0.73) was definition 1, making it the most accurate periodontitis case definition for head and neck cancer individuals.
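The comparison against a gold-standard definition can be sketched as follows: each candidate definition yields a binary diagnosis per individual, from which sensitivity, specificity and Cohen's kappa are computed. The 0/1 vectors below are invented for illustration, not the study's data.

```python
# Sketch: comparing a candidate case definition against a gold-standard
# definition. Input vectors are per-individual binary diagnoses (1 = case).

def diagnostic_agreement(candidate, gold):
    tp = sum(1 for c, g in zip(candidate, gold) if c and g)
    tn = sum(1 for c, g in zip(candidate, gold) if not c and not g)
    fp = sum(1 for c, g in zip(candidate, gold) if c and not g)
    fn = sum(1 for c, g in zip(candidate, gold) if not c and g)
    n = len(gold)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)                                  # Cohen's kappa
    return sens, spec, kappa

gold      = [1, 1, 1, 0, 0, 0, 0, 1]   # illustrative only
candidate = [1, 1, 0, 0, 0, 1, 0, 1]
sens, spec, kappa = diagnostic_agreement(candidate, gold)
print(sens, spec, kappa)  # 0.75 0.75 0.5
```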

  16. Specific Learning Disorder: Prevalence and Gender Differences

    PubMed Central

    Moll, Kristina; Kunze, Sarah; Neuhoff, Nina; Bruder, Jennifer; Schulte-Körne, Gerd

    2014-01-01

Comprehensive models of learning disorders have to consider both isolated learning disorders, which affect only one learning domain, and comorbidity between learning disorders. However, empirical evidence on comorbidity rates including all three learning disorders as defined by DSM-5 (deficits in reading, writing, and mathematics) is scarce. The current study assessed prevalence rates and gender ratios for isolated as well as comorbid learning disorders in a representative sample of 1633 German-speaking children in 3rd and 4th Grade. Prevalence rates were analysed for isolated as well as combined learning disorders and for different deficit criteria, including a criterion for normal performance. Comorbid learning disorders occurred as frequently as isolated learning disorders, even when stricter cutoff criteria were applied. The relative proportion of isolated and combined disorders did not change when including a criterion for normal performance. Reading and spelling deficits differed with respect to their association with arithmetic problems: Deficits in arithmetic co-occurred more often with deficits in spelling than with deficits in reading. In addition, comorbidity rates for arithmetic and reading decreased when applying stricter deficit criteria, but stayed high for arithmetic and spelling irrespective of the chosen deficit criterion. These findings suggest that the processes underlying the relationship between arithmetic and reading might differ from those underlying the relationship between arithmetic and spelling. With respect to gender ratios, more boys than girls showed spelling deficits, while more girls were impaired in arithmetic. No gender differences were observed for isolated reading problems and for the combination of all three learning disorders. Implications of these findings for assessment and intervention of learning disorders are discussed. PMID:25072465
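Classifying children as having isolated versus comorbid disorders under a deficit cutoff can be sketched as a simple counting exercise. The scores and the cutoff below are invented for illustration; the study uses its own standardized measures and criteria.

```python
# Illustrative sketch of applying a deficit cutoff (a standard score below
# a chosen threshold) to classify isolated vs. comorbid learning disorders.

CUTOFF = 85  # hypothetical deficit criterion on a standard-score scale

children = [  # (reading, spelling, arithmetic) standard scores, invented
    (80, 99, 102),   # isolated reading deficit
    (78, 82, 104),   # comorbid reading + spelling
    (101, 84, 83),   # comorbid spelling + arithmetic
    (100, 98, 105),  # no deficit
]

def deficits(scores):
    """Return the set of domains falling below the cutoff."""
    return {d for d, s in zip(("reading", "spelling", "arithmetic"), scores)
            if s < CUTOFF}

isolated = sum(1 for c in children if len(deficits(c)) == 1)
comorbid = sum(1 for c in children if len(deficits(c)) >= 2)
print(isolated, comorbid)  # 1 2
```

Tightening `CUTOFF` re-runs the same classification under a stricter deficit criterion, which is how the abstract's comparison across criteria proceeds.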

  17. Specific learning disorder: prevalence and gender differences.

    PubMed

    Moll, Kristina; Kunze, Sarah; Neuhoff, Nina; Bruder, Jennifer; Schulte-Körne, Gerd

    2014-01-01

Comprehensive models of learning disorders have to consider both isolated learning disorders, which affect only one learning domain, and comorbidity between learning disorders. However, empirical evidence on comorbidity rates including all three learning disorders as defined by DSM-5 (deficits in reading, writing, and mathematics) is scarce. The current study assessed prevalence rates and gender ratios for isolated as well as comorbid learning disorders in a representative sample of 1633 German-speaking children in 3rd and 4th Grade. Prevalence rates were analysed for isolated as well as combined learning disorders and for different deficit criteria, including a criterion for normal performance. Comorbid learning disorders occurred as frequently as isolated learning disorders, even when stricter cutoff criteria were applied. The relative proportion of isolated and combined disorders did not change when including a criterion for normal performance. Reading and spelling deficits differed with respect to their association with arithmetic problems: Deficits in arithmetic co-occurred more often with deficits in spelling than with deficits in reading. In addition, comorbidity rates for arithmetic and reading decreased when applying stricter deficit criteria, but stayed high for arithmetic and spelling irrespective of the chosen deficit criterion. These findings suggest that the processes underlying the relationship between arithmetic and reading might differ from those underlying the relationship between arithmetic and spelling. With respect to gender ratios, more boys than girls showed spelling deficits, while more girls were impaired in arithmetic. No gender differences were observed for isolated reading problems and for the combination of all three learning disorders. Implications of these findings for assessment and intervention of learning disorders are discussed.

  18. A Tactile Sensor Using Piezoresistive Beams for Detection of the Coefficient of Static Friction

    PubMed Central

    Okatani, Taiyu; Takahashi, Hidetoshi; Noda, Kentaro; Takahata, Tomoyuki; Matsumoto, Kiyoshi; Shimoyama, Isao

    2016-01-01

This paper reports on a tactile sensor using piezoresistive beams for detection of the coefficient of static friction merely by pressing the sensor against an object. The sensor chip is composed of three pairs of piezoresistive beams arranged in parallel and embedded in an elastomer, so that the sensor can measure the vertical and lateral strains of the elastomer. The coefficient of static friction is estimated from the ratio of the fractional resistance changes corresponding to the sensing elements of vertical and lateral strains when the sensor is in contact with an object surface. We applied a normal force on the sensor surface through objects with coefficients of static friction ranging from 0.2 to 1.1. The fractional resistance changes corresponding to vertical and lateral strains were proportional to the applied force. Furthermore, the relationship between these responses changed according to the coefficients of static friction. The experimental results indicated that the proposed sensor could determine the coefficient of static friction before a global slip occurs. PMID:27213374
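The estimation idea can be sketched as a ratio-plus-calibration computation: the lateral-to-vertical ratio of fractional resistance changes is mapped to a friction coefficient. The linear mapping and its gain below are assumptions for illustration; the actual mapping would come from the sensor's calibration experiments.

```python
# Sketch of estimating the coefficient of static friction from the ratio
# of fractional resistance changes of the lateral- and vertical-strain
# sensing elements. The linear calibration (gain=2.0) is a hypothetical
# placeholder, not the paper's measured calibration curve.

def friction_from_resistance(dR_lateral, dR_vertical, gain=2.0):
    """Estimate mu from the ratio of fractional resistance changes."""
    if dR_vertical == 0:
        raise ValueError("no normal load detected")
    return gain * dR_lateral / dR_vertical

mu = friction_from_resistance(dR_lateral=0.015, dR_vertical=0.05)
print(mu)  # 0.6
```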

  19. Ion radial diffusion in an electrostatic impulse model for stormtime ring current formation

    NASA Technical Reports Server (NTRS)

    Chen, Margaret W.; Schulz, Michael; Lyons, Larry R.; Gorney, David J.

    1992-01-01

    Two refinements to the quasi-linear theory of ion radial diffusion are proposed and examined analytically with simulations of particle trajectories. The resonance-broadening correction by Dungey (1965) is applied to the quasi-linear diffusion theory by Faelthammar (1965) for an individual model storm. Quasi-linear theory is then applied to the mean diffusion coefficients resulting from simulations of particle trajectories in 20 model storms. The correction for drift-resonance broadening results in quasi-linear diffusion coefficients with discrepancies from the corresponding simulated values that are reduced by a factor of about 3. Further reductions in the discrepancies are noted following the averaging of the quasi-linear diffusion coefficients, the simulated coefficients, and the resonance-broadened coefficients for the 20 storms. Quasi-linear theory provides good descriptions of particle transport for a single storm but performs even better in conjunction with the present ensemble-averaging.

  20. Spatial extremes modeling applied to extreme precipitation data in the state of Paraná

    NASA Astrophysics Data System (ADS)

    Olinda, R. A.; Blanchet, J.; dos Santos, C. A. C.; Ozaki, V. A.; Ribeiro, P. J., Jr.

    2014-11-01

Most of the mathematical models developed for rare events are based on probabilistic models for extremes. Although the tools for statistical modeling of univariate and multivariate extremes are well developed, the extension of these tools to spatial extremes is an area of very active research. A natural approach to such modeling is the theory of spatial extremes and max-stable processes, the infinite-dimensional extension of multivariate extreme value theory, which makes it possible to incorporate the correlation functions of geostatistics and to assess extremal dependence by means of the extremal coefficient and the madogram. This work describes the application of such processes in modeling the spatial dependence of maximum monthly rainfall in the state of Paraná, based on historical series observed at weather stations. The proposed models consider the Euclidean space and a transformation referred to as space weather, which may explain the presence of directional effects resulting from synoptic weather patterns. This method is based on a theorem of de Haan and on the models of Smith and Schlather. The isotropic and anisotropic behavior of these models is also verified via Monte Carlo simulation. Estimates are obtained by maximum pairwise likelihood, and the models are compared using the Takeuchi Information Criterion. By modeling the dependence of spatial maxima in maximum monthly rainfall data from the state of Paraná, it was possible to identify directional effects resulting from meteorological phenomena, which, in turn, are important for proper management of risks and environmental disasters in a country whose economy is heavily dependent on agribusiness.
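The madogram-based dependence measure mentioned above can be sketched with the standard empirical F-madogram: rank-transform the maxima at two stations to uniform margins, average the absolute differences, and convert to the extremal coefficient via theta = (1 + 2*nu) / (1 - 2*nu), which runs from 1 (full dependence) to 2 (independence). The synthetic data below are for illustration only.

```python
import numpy as np

def f_madogram_theta(z1, z2):
    """Empirical F-madogram and pairwise extremal coefficient for two
    stations' block maxima z1, z2 (equal-length arrays)."""
    n = len(z1)
    # Empirical probability-integral transform to uniform margins via ranks.
    f1 = (np.argsort(np.argsort(z1)) + 1) / (n + 1)
    f2 = (np.argsort(np.argsort(z2)) + 1) / (n + 1)
    nu = 0.5 * np.mean(np.abs(f1 - f2))
    theta = (1 + 2 * nu) / (1 - 2 * nu)  # 1 = full dependence, 2 = independence
    return nu, theta

rng = np.random.default_rng(0)
x = rng.gumbel(size=500)
# Strongly dependent pair vs. an independent pair of synthetic maxima:
nu_dep, theta_dep = f_madogram_theta(x, x + 0.1 * rng.gumbel(size=500))
nu_ind, theta_ind = f_madogram_theta(rng.gumbel(size=500), rng.gumbel(size=500))
print(theta_dep, theta_ind)  # theta_dep near 1, theta_ind near 2
```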
