Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A
1980-12-01
1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficients were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and even more so variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.
Wang, Anxin; Li, Zhifang; Yang, Yuling; Chen, Guojuan; Wang, Chunxue; Wu, Yuntao; Ruan, Chunyu; Liu, Yan; Wang, Yilong; Wu, Shouling
2016-01-01
To investigate the relationship between baseline systolic blood pressure (SBP) and visit-to-visit blood pressure variability in a general population. This is a prospective longitudinal cohort study on cardiovascular risk factors and cardiovascular or cerebrovascular events. Study participants attended a face-to-face interview every 2 years. Blood pressure variability was defined using the standard deviation and coefficient of variation of all SBP values at baseline and follow-up visits. The coefficient of variation is the ratio of the standard deviation to the mean SBP. We used multivariate linear regression models to test the relationships between SBP and standard deviation, and between SBP and coefficient of variation. Approximately 43,360 participants (mean age: 48.2±11.5 years) were selected. In multivariate analysis, after adjustment for potential confounders, baseline SBPs <120 mmHg were inversely related to standard deviation (P<0.001) and coefficient of variation (P<0.001). In contrast, baseline SBPs ≥140 mmHg were significantly positively associated with standard deviation (P<0.001) and coefficient of variation (P<0.001). Baseline SBPs of 120-140 mmHg were associated with the lowest standard deviation and coefficient of variation. The associations between baseline SBP and standard deviation, and between SBP and coefficient of variation during follow-ups showed a U curve. Both lower and higher baseline SBPs were associated with increased blood pressure variability. To control blood pressure variability, a good target SBP range for a general population might be 120-139 mmHg.
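A minimal sketch of the variability definition used above, assuming a handful of hypothetical SBP readings for one participant; the SD and CV are computed across baseline and follow-up visits exactly as defined in the abstract.

```python
import numpy as np

def visit_to_visit_variability(sbp_values):
    """Visit-to-visit variability of systolic blood pressure (SBP).

    sbp_values: sequence of SBP readings (mmHg), one per visit (baseline
    plus follow-ups).  Returns the standard deviation and the coefficient
    of variation (SD divided by the mean SBP).
    """
    sbp = np.asarray(sbp_values, dtype=float)
    mean_sbp = sbp.mean()
    sd = sbp.std(ddof=1)          # sample standard deviation across visits
    cv = sd / mean_sbp            # coefficient of variation
    return sd, cv

# Hypothetical participant measured at baseline and four biennial visits
sd, cv = visit_to_visit_variability([138, 142, 131, 145, 136])
print(f"SD = {sd:.1f} mmHg, CV = {cv:.3f}")
```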
On Teaching about the Coefficient of Variation in Introductory Statistics Courses
ERIC Educational Resources Information Center
Trafimow, David
2014-01-01
The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.
Rhiel, G Steven
2007-02-01
This research study proves that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. In this proof it is shown that the d(n) and a(n) values are applicable to the specific skewed distributions when the mean and standard deviation can take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
Influence of eye micromotions on spatially resolved refractometry
NASA Astrophysics Data System (ADS)
Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Osipova, Irina Y.
2001-01-01
The influence of eye micromotions on the accuracy of estimation of Zernike coefficients from eye transverse aberration measurements was investigated. By computer modeling, the following eye aberrations were examined: defocusing, primary astigmatism, spherical aberration of the 3rd and the 5th orders, as well as their combinations. It was determined that the standard deviation of the estimated Zernike coefficients is proportional to the standard deviation of angular eye movements. Eye micromotions cause estimation errors in the Zernike coefficients of the aberrations that are present and produce the appearance of Zernike coefficients for aberrations absent from the eye. When solely defocusing is present, the biggest errors caused by eye micromotions are obtained for aberrations such as coma and astigmatism. In comparison with other aberrations, spherical aberration of the 3rd and the 5th orders evokes the greatest increase in the standard deviation of other Zernike coefficients.
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
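The closed-form construction can be sketched for the upper Bland-Altman limit of agreement, mu + 1.96*sigma. This is a hedged illustration of a MOVER-style recovery of variance estimates from separate confidence limits for the mean and the standard deviation; the data are synthetic and the choice of the upper limit is only an example.

```python
import numpy as np
from scipy import stats

def loa_upper_ci(x, alpha=0.05, z=1.96):
    """Closed-form CI for the upper limit of agreement mu + z*sigma,
    built by combining separate CIs for the mean and the standard
    deviation (MOVER-style recovery of variance estimates)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    m, s = x.mean(), x.std(ddof=1)

    # CI for the mean (t-based)
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    l1, u1 = m - t * s / np.sqrt(n), m + t * s / np.sqrt(n)

    # CI for the standard deviation (chi-square based)
    l2 = s * np.sqrt((n - 1) / stats.chi2.ppf(1 - alpha / 2, n - 1))
    u2 = s * np.sqrt((n - 1) / stats.chi2.ppf(alpha / 2, n - 1))

    theta = m + z * s
    lower = theta - np.sqrt((m - l1) ** 2 + (z * s - z * l2) ** 2)
    upper = theta + np.sqrt((u1 - m) ** 2 + (z * u2 - z * s) ** 2)
    return theta, (lower, upper)

# Synthetic paired-difference data for illustration
print(loa_upper_ci(np.random.default_rng(0).normal(0.2, 1.0, 40)))
```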
A Note on the Estimator of the Alpha Coefficient for Standardized Variables Under Normality
ERIC Educational Resources Information Center
Hayashi, Kentaro; Kamata, Akihito
2005-01-01
The asymptotic standard deviation (SD) of the alpha coefficient with standardized variables is derived under normality. The research shows that the SD of the standardized alpha coefficient becomes smaller as the number of examinees and/or items increase. Furthermore, this research shows that the degree of the dependence of the SD on the number of…
Quantifying relative importance: Computing standardized effects in models with binary outcomes
Grace, James B.; Johnson, Darren; Lefcheck, Jonathan S.; Byrnes, Jarrett E.K.
2018-01-01
Results from simulation studies show that both the LT and OE methods of standardization support a similarly-broad range of coefficient comparisons. The LT method estimates effects that reflect underlying latent-linear propensities, while the OE method computes a linear approximation for the effects of predictors on binary responses. The contrast between assumptions for the two methods is reflected in persistently weaker standardized effects associated with OE standardization. Reliance on standard deviations for standardization (the traditional approach) is critically examined and shown to introduce substantial biases when predictors are non-Gaussian. The use of relevant ranges in place of standard deviations has the capacity to place LT and OE standardized coefficients on a more comparable scale. As ecologists address increasingly complex hypotheses, especially those that involve comparing the influences of different controlling factors (e.g., top-down versus bottom-up or biotic versus abiotic controls), comparable coefficients become a necessary component for evaluations.
NASA Technical Reports Server (NTRS)
Spera, David A.
2008-01-01
Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally-stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. Mean deviation was found to be -0.4 percent and standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, mean deviation from 54 data points was -1.3 percent and standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.
Collinearity in Least-Squares Analysis
ERIC Educational Resources Information Center
de Levie, Robert
2012-01-01
How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…
NASA Astrophysics Data System (ADS)
Osipova, Irina Y.; Chyzh, Igor H.
2001-06-01
The influence of eye jumps on the accuracy of estimation of Zernike coefficients from eye transverse aberration measurements was investigated. By computer modeling the ametropy and astigmatism have been examined. The standard deviation of the wave aberration function was calculated. It was determined that the standard deviation of the wave aberration function achieves the minimum value if the number of scanning points is equal to the number of eye jumps in scanning period. The recommendations for duration of measurement were worked out.
NASA Technical Reports Server (NTRS)
Cohen, S. C.
1980-01-01
A technique for fitting a straight line to a collection of data points is given. The relationships between the slopes and correlation coefficients, and between the corresponding standard deviations and correlation coefficient are given.
Tests of local Lorentz invariance violation of gravity in the standard model extension with pulsars.
Shao, Lijing
2014-03-21
The standard model extension is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model and general relativity (GR). In the pure-gravity sector of the minimal standard model extension, nine coefficients describe dominant observable deviations from GR. We systematically implemented 27 tests from 13 pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. It constitutes the first detailed and systematic test of the pure-gravity sector of the minimal standard model extension with state-of-the-art pulsar observations. No deviation from GR was detected. The limits on the LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for the convenience of further studies. They are all improved by significant factors of tens to hundreds over existing ones. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity.
Changes in deviation of absorbed dose to water among users by chamber calibration shift.
Katayose, Tetsurou; Saitoh, Hidetoshi; Igari, Mitsunobu; Chang, Weishan; Hashimoto, Shimpei; Morioka, Mie
2017-07-01
The JSMP01 dosimetry protocol had adopted the provisional 60Co calibration coefficient [Formula: see text], namely, the product of the exposure calibration coefficient N_C and the conversion coefficient k_D,X. After that, the absorbed dose to water D_w standard was established, and the JSMP12 protocol adopted the [Formula: see text] calibration. In this study, the influence of the calibration shift on the measurement of D_w among users was analyzed. An intercomparison of D_w using an ionization chamber was performed annually by visiting related hospitals. Intercomparison results before and after the calibration shift were analyzed, the deviation of D_w among users was re-evaluated, and the cause of deviation was estimated. As a result, the stability of the LINAC, the calibration of the thermometer and barometer, and the collection method of ion recombination were confirmed. The statistical significance of the standard deviation of D_w was not observed, but that of the difference of D_w among users was observed between the N_C and [Formula: see text] calibrations. Uncertainty due to chamber-to-chamber variation was reduced by the calibration shift, consequently reducing the uncertainty among users regarding D_w. The results also pointed out that uncertainty might be reduced by accurate and detailed instructions on the setup of an ionization chamber.
Methods of editing cloud and atmospheric layer affected pixels from satellite data
NASA Technical Reports Server (NTRS)
Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)
1982-01-01
Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.
NASA Astrophysics Data System (ADS)
Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin
2018-04-01
Multiresolution-based methods, such as wavelet and Contourlet transforms, are usually used for image fusion. This work presents a new image fusion framework that utilizes area-based standard deviation in the dual-tree Contourlet transform domain. Firstly, the pre-registered source images are decomposed with the dual-tree Contourlet transform, yielding low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on area standard deviation rather than the simple “averaging” rule, while the high-pass bands are merged with the “max-absolute” fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
Liang, Xue; Ji, Hai-yan; Wang, Peng-xin; Rao, Zhen-hong; Shen, Bing-hui
2010-01-01
The preprocessing method of multiplicative scatter correction (MSC) was used to effectively reject noise in the original spectra produced by environmental physical factors; the principal components of the near-infrared spectra were then calculated by nonlinear iterative partial least squares (NIPALS) before building the back-propagation artificial neural network (BP-ANN) model, and the number of principal components was determined by cross-validation. The calculated principal components were used as the inputs of the artificial neural network model, which was used to find the relation between chlorophyll in winter wheat and the reflectance spectrum and thereby predict the chlorophyll content of winter wheat. The correlation coefficient (r) of the calibration set was 0.9604, while the standard deviation (SD) and relative standard deviation (RSD) were 0.187 and 5.18%, respectively. The correlation coefficient (r) of the prediction set was 0.9600, and the standard deviation (SD) and relative standard deviation (RSD) were 0.145 and 4.21%, respectively. This means that the MSC-ANN algorithm can effectively reject noise in the original spectra produced by environmental physical factors and can establish an accurate model to predict the chlorophyll content of living leaves, replacing the classical method and meeting the need for fast analysis of agricultural products.
Perturbed effects at radiation physics
NASA Astrophysics Data System (ADS)
Külahcı, Fatih; Şen, Zekâi
2013-09-01
Perturbation methodology is applied in order to assess the linear attenuation coefficient, mass attenuation coefficient and cross-section behavior with random components in the basic variables, such as the radiation amounts frequently used in radiation physics and chemistry. Additionally, a layer attenuation coefficient (LAC) and perturbed LAC (PLAC) are proposed for different contact materials. Perturbation methodology provides an opportunity to obtain results with random deviations from the average behavior of each variable that enters the whole mathematical expression. The basic photon intensity variation expression, the inverse exponential power law (Beer-Lambert's law), is adopted for the exposition of the perturbation method. Perturbed results are presented not only in terms of the mean but also in terms of the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in the basic variables.
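A minimal numerical sketch of the idea, assuming a hypothetical attenuation coefficient with a small random component in the Beer-Lambert law; it reports the mean, standard deviation and a correlation coefficient of the perturbed intensity, in the spirit of the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear attenuation coefficient (cm^-1) with a small random
# component around its average behaviour.
mu = 0.15 + 0.01 * rng.standard_normal(10_000)   # cm^-1
x = 2.0                                          # absorber thickness, cm
I0 = 1.0
I = I0 * np.exp(-mu * x)                         # Beer-Lambert law

print("mean I:", I.mean())
print("std  I:", I.std(ddof=1))
print("corr(mu, I):", np.corrcoef(mu, I)[0, 1])
```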
Naff, R.L.
1998-01-01
The late-time macrodispersion coefficients are obtained for the case of flow in the presence of a small-scale deterministic transient in a three-dimensional anisotropic, heterogeneous medium. The transient is assumed to affect only the velocity component transverse to the mean flow direction and to take the form of a periodic function. For the case of a highly stratified medium, these late-time macrodispersion coefficients behave largely as the standard coefficients used in the transport equation. Only in the event that the medium is isotropic is it probable that significant deviations from the standard coefficients would occur.
Quantifying expert diagnosis variability when grading tumor-infiltrating lymphocytes
NASA Astrophysics Data System (ADS)
Toro, Paula; Corredor, Germán.; Wang, Xiangxue; Arias, Viviana; Velcheti, Vamsidhar; Madabhushi, Anant; Romero, Eduardo
2017-11-01
Tumor-infiltrating lymphocytes (TILs) have proved to play an important role in predicting prognosis, survival, and response to treatment in patients with a variety of solid tumors. Unfortunately, there is currently no standardized methodology to quantify the infiltration grade. The aim of this work is to evaluate variability among the TILs reports given by a group of pathologists who examined a set of digitized non-small cell lung cancer samples (n=60). Twenty-eight pathologists evaluated differing numbers of histopathological images. The agreement among pathologists was evaluated by computing the Kappa coefficient and the standard deviation of their estimations. Furthermore, TILs reports were correlated with patients' prognosis and survival using Pearson's correlation coefficient. Overall, the results show that agreement among experts grading TILs in the dataset is low, since Kappa values remain below 0.4 and the standard deviation values demonstrate that in none of the images was there full consensus. Finally, the correlation coefficient for each pathologist also reveals a low association between the pathologists' predictions and the prognosis/survival data. The results suggest the need to define standardized, objective, and effective strategies to evaluate TILs so that they could be used as a biomarker in the daily routine.
NASA Technical Reports Server (NTRS)
Clark, P. E.; Andre, C. G.; Adler, I.; Weidner, J.; Podwysocki, M.
1976-01-01
The positive correlation between Al/Si X-ray fluorescence intensity ratios determined during the Apollo 15 lunar mission and a broad-spectrum visible albedo of the moon is quantitatively established. Linear regression analysis performed on 246 1 degree geographic cells of X-ray fluorescence intensity and visible albedo data points produced a statistically significant correlation coefficient of .78. Three distinct distributions of data were identified as (1) within one standard deviation of the regression line, (2) greater than one standard deviation below the line, and (3) greater than one standard deviation above the line. The latter two distributions of data were found to occupy distinct geographic areas in the Palus Somni region.
NASA Technical Reports Server (NTRS)
Anspaugh, B. E.; Miyahira, T. F.; Weiss, R. S.
1979-01-01
Computed statistical averages and standard deviations over the measured cells for each intensity/temperature measurement condition are presented. Averages and standard deviations of the cell characteristics are displayed in a two-dimensional array format: one dimension representing incoming light intensity and the other the cell temperature. Programs for calculating the temperature coefficients of the pertinent cell electrical parameters are presented, and postirradiation data are summarized.
Singer, Adam D; Pattany, Pradip M; Fayad, Laura M; Tresley, Jonathan; Subhawong, Ty K
2016-01-01
Determine interobserver concordance of semiautomated three-dimensional volumetric and two-dimensional manual measurements of apparent diffusion coefficient (ADC) values in soft tissue masses (STMs) and explore standard deviation (SD) as a measure of tumor ADC heterogeneity. Concordance correlation coefficients for mean ADC increased with more extensive sampling. Agreement on the SD of tumor ADC values was better for large regions of interest and multislice methods. Correlation between mean and SD ADC was low, suggesting that these parameters are relatively independent. Mean ADC of STMs can be determined by volumetric quantification with high interobserver agreement. STM heterogeneity merits further investigation as a potential imaging biomarker that complements other functional magnetic resonance imaging parameters. Copyright © 2016 Elsevier Inc. All rights reserved.
Anthropometry of airline stewardesses.
DOT National Transportation Integrated Search
1975-03-01
The report presents the body measurements of 423 stewardess trainees enrolled in the American Airlines Stewardess Training Academy in Fort Worth, Texas, between February and June 1971. It includes the means, standard deviations, coefficients of varia...
A method for predicting the noise levels of coannular jets with inverted velocity profiles
NASA Technical Reports Server (NTRS)
Russell, J. W.
1979-01-01
A coannular jet was equated with a single stream equivalent jet with the same mass flow, energy, and thrust. The acoustic characteristics of the coannular jet were then related to the acoustic characteristics of the single jet. Forward flight effects were included by incorporating a forward exponent, a Doppler amplification factor, and a Strouhal frequency shift. Model test data, including 48 static cases and 22 wind tunnel cases, were used to evaluate the prediction method. For the static cases and the low forward velocity wind tunnel cases, the spectral mean square pressure correlation coefficients were generally greater than 90 percent, and the spectral sound pressure level standard deviations were generally less than 3 decibels. The correlation coefficient and the standard deviation were not affected by changes in equivalent jet velocity. Limitations of the prediction method are also presented.
Laser transit anemometer software development program
NASA Technical Reports Server (NTRS)
Abbiss, John B.
1989-01-01
Algorithms were developed for the extraction of two components of mean velocity, standard deviation, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least squares fitting of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values are also given.
Cape Canaveral, Florida range reference atmosphere 0-70 km altitude
NASA Technical Reports Server (NTRS)
Tingle, A. (Editor)
1983-01-01
The RRA contains tabulations for monthly and annual means, standard deviations, and skewness coefficients for wind speed, pressure, temperature, density, water vapor pressure, virtual temperature, and dew-point temperature, as well as the means and standard deviations for the zonal and meridional wind components and the linear (product moment) correlation coefficient between the wind components. These statistical parameters are tabulated at the station elevation and at 1 km intervals from sea level to 30 km and at 2 km intervals from 30 to 90 km altitude. The wind statistics are given at approximately 10 m above the station elevations and at altitudes with respect to mean sea level thereafter. For those range sites without rocketsonde measurements, the RRAs terminate at 30 km altitude, or they are extended, if required, when rocketsonde data from a nearby launch site are available. There are four sets of tables for each of the 12 monthly reference periods and the annual reference period.
Estimation of octanol/water partition coefficients using LSER parameters
Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.
1998-01-01
The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals, which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, but only moderate accuracy should be expected.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finley, C; Dave, J
Purpose: To characterize noise for image receptors of digital radiography systems based on pixel variance. Methods: Nine calibrated digital image receptors associated with nine new portable digital radiography systems (Carestream Health, Inc., Rochester, NY) were used in this study. For each image receptor, thirteen images were acquired with RQA5 beam conditions for input detector air kerma ranging from 0 to 110 µGy, and linearized ‘For Processing’ images were extracted. Mean pixel value (MPV), standard deviation (SD) and relative noise (SD/MPV) were obtained from each image using ROI sizes varying from 2.5×2.5 to 20×20 mm². Variance (SD²) was plotted as a function of input detector air kerma, and the coefficients of the quadratic fit were used to derive structured, quantum and electronic noise coefficients. Relative noise was also fitted as a function of input detector air kerma to identify noise sources. The fitting functions used a least-squares approach. Results: The coefficient of variation values obtained using different ROI sizes were less than 1% for all the images. The structured, quantum and electronic coefficients obtained from the quadratic fit of variance (r>0.97) were 0.43±0.10, 3.95±0.27 and 2.89±0.74 (mean ± standard deviation), respectively, indicating that overall the quantum noise was the dominant noise source. However, for one system the electronic noise coefficient (3.91) was greater than the quantum noise coefficient (3.56), indicating electronic noise to be dominant. Using relative noise values, the power parameter of the fitting equation (|r|>0.93) showed a mean and standard deviation of 0.46±0.02. A 0.50 value for this power parameter indicates quantum noise to be the dominant noise source, whereas values deviating from 0.50 indicate the presence of other noise sources. Conclusion: Characterizing noise from pixel variance assists in identifying contributions from various noise sources that, eventually, may affect image quality. This approach may be integrated during periodic quality assessments of digital image receptors.
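A sketch of the variance fit described above, using synthetic variance data in place of the measured ROI statistics; the quadratic coefficients are read off as structured, quantum and electronic noise terms under the assumed model SD² = s·K² + q·K + e.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input detector air kerma values (uGy) and synthetic pixel
# variances standing in for the measured SD^2 of a uniform ROI.
K = np.array([1, 2, 5, 10, 20, 40, 60, 80, 110], dtype=float)
var = 0.4 * K**2 + 4.0 * K + 3.0 + rng.normal(0, 2, K.size)

# Assumed variance model: SD^2 = s*K^2 + q*K + e, with s, q, e interpreted
# as structured, quantum and electronic noise coefficients respectively.
s, q, e = np.polyfit(K, var, 2)
print(f"structured={s:.2f}  quantum={q:.2f}  electronic={e:.2f}")
```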
Modeling of nutation-precession: Very long baseline interferometry results
NASA Astrophysics Data System (ADS)
Herring, T. A.; Mathews, P. M.; Buffett, B. A.
2002-04-01
Analysis of over 20 years of very long baseline interferometry (VLBI) data yields estimates of the coefficients of the nutation series with standard deviations ranging from 5 microseconds of arc (μas) for the terms with periods <400 days to 38 μas for the longest-period terms. The largest deviations between the VLBI estimates of the amplitudes of terms in the nutation series and the theoretical values from the Mathews-Herring-Buffett (MHB2000) nutation series are 56 +/- 38 μas (associated with two of the 18.6 year nutations). The amplitudes of nutational terms with periods <400 days deviate from the MHB2000 nutation series values at the level of the standard deviation. The estimated correction to the IAU-1976 precession constant is -2.997 +/- 0.008 mas yr-1 when the coefficients of the MHB2000 nutation series are held fixed and is consistent with that inferred from the MHB2000 nutation theory. The secular change in the obliquity of the ecliptic is estimated to be -0.252 +/- 0.003 mas yr-1. When the coefficients of the largest-amplitude terms in the nutation series are estimated, the precession constant correction and obliquity rate are estimated to be -2.960 +/- 0.030 and -0.237 +/- 0.012 mas yr-1. Significant variations in the freely excited retrograde free core nutation mode are observed over the 20 years. During this time the amplitude has decreased from ~300 +/- 50 μas in the mid-1980s to nearly zero by the year 2000. There is evidence that the amplitude of the mode is now increasing again.
Mirkhani, Seyyed Alireza; Gharagheizi, Farhad; Sattari, Mehdi
2012-03-01
Evaluation of diffusion coefficients of pure compounds in air is of great interest for many diverse industrial and air quality control applications. In this communication, a QSPR method is applied to predict the molecular diffusivity of chemical compounds in air at 298.15 K and atmospheric pressure. Four thousand five hundred and seventy-nine organic compounds from a broad spectrum of chemical families have been investigated to propose a comprehensive and predictive model. The final model is derived by Genetic Function Approximation (GFA) and contains five descriptors. Using this dedicated model, we obtain satisfactory results quantified by the following statistics: squared correlation coefficient = 0.9723, standard deviation error = 0.003 and average absolute relative deviation = 0.3% for the predicted properties relative to existing experimental values. Copyright © 2011 Elsevier Ltd. All rights reserved.
Two Computer Programs for the Statistical Evaluation of a Weighted Linear Composite.
ERIC Educational Resources Information Center
Sands, William A.
1978-01-01
Two computer programs (one batch, one interactive) are designed to provide statistics for a weighted linear combination of several component variables. Both programs provide mean, variance, standard deviation, and a validity coefficient. (Author/JKS)
[Research on rapid and quantitative detection method for organophosphorus pesticide residue].
Sun, Yuan-Xin; Chen, Bing-Tai; Yi, Sen; Sun, Ming
2014-05-01
The methods of physical-chemical inspection adopted in traditional pesticide residue detection require many pretreatment processes and are time-consuming and complicated. In the present study, the authors take chlorpyrifos, applied widely in the present agricultural field, as the research object and propose a rapid and quantitative detection method for organophosphorus pesticide residues. At first, according to the chemical characteristics of chlorpyrifos, the comprehensive chromogenic effect of several colorimetric reagents, and secondary pollution, a pretreatment scheme based on the chromogenic reaction of chlorpyrifos with resorcin in a weakly alkaline environment was determined. Secondly, by analyzing the UV-Vis spectrum data of chlorpyrifos samples whose contents were between 0.5 and 400 mg kg-1, it was confirmed that the characteristic information after the color reaction was mainly concentrated between 360 and 400 nm. Thirdly, a full-spectrum forecasting model was established based on partial least squares, whose correlation coefficient of calibration was 0.9996, correlation coefficient of prediction reached 0.9956, standard deviation of calibration (RMSEC) was 2.8147 mg kg-1, and standard deviation of verification (RMSEP) was 8.0124 mg kg-1. Fourthly, the wavelengths whose center wavelength is 400 nm were extracted as the characteristic region to build a forecasting model, whose correlation coefficient of calibration was 0.9996, correlation coefficient of prediction reached 0.9993, standard deviation of calibration (RMSEC) was 2.5667 mg kg-1, and standard deviation of verification (RMSEP) was 4.8866 mg kg-1, respectively. At last, by analyzing the near-infrared spectrum data of chlorpyrifos samples with contents between 0.5 and 16 mg kg-1, the authors found that although the characteristics of the chromogenic functional group are not obvious, the absorption peaks of resorcin itself change in the neighborhood of 5200 cm-1. The above-mentioned experimental results show that the proposed method is effective and feasible for rapid and quantitative detection of organophosphorus pesticide residues. In the method, the information in the full spectrum, especially the UV-Vis spectrum, is strengthened by the chromogenic reaction of a colorimetric reagent, which provides a new way of rapid detection of pesticide residues for agricultural products in the future.
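A hedged sketch of the full-spectrum PLS calibration step, with synthetic spectra standing in for the measured UV-Vis data; the number of latent variables and the simulated absorbance profile are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for the calibration data: X holds absorbance spectra
# after the chromogenic reaction (rows = samples, columns = wavelengths),
# y holds the chlorpyrifos contents (mg/kg).
rng = np.random.default_rng(0)
y = rng.uniform(0.5, 400.0, 60)
wavelengths = np.linspace(360, 400, 50)
X = np.outer(y, np.exp(-((wavelengths - 385) / 12) ** 2)) + rng.normal(0, 1.0, (60, 50))

pls = PLSRegression(n_components=3).fit(X, y)
y_hat = pls.predict(X).ravel()

r = np.corrcoef(y, y_hat)[0, 1]                   # calibration correlation
rmsec = np.sqrt(mean_squared_error(y, y_hat))     # calibration error
print(f"r = {r:.4f}, RMSEC = {rmsec:.2f} mg/kg")
```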
Nuclear isospin effect on α-decay half-lives
NASA Astrophysics Data System (ADS)
Akrawy, Dashty T.; Hassanabadi, H.; Hosseini, S. S.; Santhosh, K. P.
2018-07-01
The α-decay half-lives of 356 even-even, even-odd, odd-even and odd-odd nuclei in the range 52 ≤ Zp ≤ 118 have been studied within the analytical formula of Royer and also within the modified analytical formula of Royer. We calculated new coefficients for the Royer formula by fitting the 356 isotopes. We also considered the Denisov and Khudenko formula and obtained new coefficients for the modified Denisov and Khudenko formula. We calculated the standard deviation and the average deviation. The analytical results are compared with the experimental data. The results are in better agreement with the experimental data when the isospin effect of the parent nuclei is considered.
On the linear relation between the mean and the standard deviation of a response time distribution.
Wagenmakers, Eric-Jan; Brown, Scott
2007-07-01
Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different experimental paradigms support a linear relation between RT mean and RT standard deviation. Both R. Ratcliff's (1978) diffusion model and G. D. Logan's (1988) instance theory of automatization provide explanations for this linear relation. The authors identify and discuss 3 specific boundary conditions for the linear law to hold. The law constrains RT models and supports the use of the coefficient of variation to (a) compare variability while controlling for differences in baseline speed of processing and (b) assess whether changes in performance with practice are due to quantitative speedup or qualitative reorganization. Copyright 2007 APA.
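A small illustration of the linear law, using synthetic gamma-distributed response times for two hypothetical conditions: the mean and standard deviation grow together while the coefficient of variation stays roughly constant.

```python
import numpy as np

def rt_summary(rts):
    """Mean, standard deviation and coefficient of variation of a set of
    response times (ms) from one condition."""
    rts = np.asarray(rts, dtype=float)
    m, s = rts.mean(), rts.std(ddof=1)
    return m, s, s / m

# Hypothetical RT samples: a gamma with fixed shape keeps CV constant
# while the mean scales, mimicking the linear mean-SD relation.
rng = np.random.default_rng(2)
easy = rng.gamma(shape=25, scale=20, size=500)    # mean around 500 ms
hard = rng.gamma(shape=25, scale=28, size=500)    # mean around 700 ms
for label, x in [("easy", easy), ("hard", hard)]:
    m, s, cv = rt_summary(x)
    print(f"{label}: mean={m:.0f} ms  sd={s:.0f} ms  cv={cv:.2f}")
```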
Determination of the optimal level for combining area and yield estimates
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Hixson, M. M.; Jobusch, C. D.
1981-01-01
Several levels of obtaining both area and yield estimates of corn and soybeans in Iowa were considered: county, refined strata, refined/split strata, crop reporting district, and state. Using the CCEA model form and smoothed weather data, regression coefficients at each level were derived to compute yield and its variance. Variances were also computed at the stratum level. The variance of the yield estimates was largest at the state and smallest at the county level for both crops. The refined strata had somewhat larger variances than those associated with the refined/split strata and CRD. For production estimates, the difference in standard deviations among levels was not large for corn, but for soybeans the standard deviation at the state level was more than 50% greater than for the other levels. The refined strata had the smallest standard deviations. The county level was not considered in the evaluation of production estimates due to lack of county area variances.
Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars
NASA Astrophysics Data System (ADS)
Ruml, Mirjana; Vuković, Ana; Milatović, Dragan
2010-07-01
The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and to select the optimal thresholds for a greater number of apricot ( Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series was used to conduct the study. Several commonly used methods for determining the threshold temperatures from field observations were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) the regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD and two methods for calculating daily mean air temperatures were tested to emphasize the differences that can arise from different interpretations of the basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the “Null” method (lower threshold set to 0°C) and the “Fixed Value” method (lower threshold set to -2°C for full bloom and to 3°C for harvest) gave very good results. The limitations of the widely used method (1) and of methods (5) and (6), which generally performed worst, are discussed in the paper.
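A sketch of method (7), assuming a simple mean-GDD-requirement predictor and synthetic temperature series; the candidate thresholds, the predictor and the data are illustrative, not the paper's implementation.

```python
import numpy as np

def gdd(tmean, base):
    """Accumulated growing degree-days with lower threshold `base`:
    sum of max(Tmean - base, 0) over the given days."""
    return np.maximum(np.asarray(tmean, dtype=float) - base, 0.0).sum()

def best_threshold(years_tmean, observed_doy, candidates):
    """Choose the base temperature minimising the RMSE between observed and
    predicted days to the stage, using a mean-GDD-requirement predictor."""
    best_base, best_rmse = None, np.inf
    for base in candidates:
        # GDD requirement = average accumulated GDD at the observed stage date
        req = np.mean([gdd(t[:d], base) for t, d in zip(years_tmean, observed_doy)])
        predicted = []
        for t in years_tmean:
            cum = np.cumsum(np.maximum(np.asarray(t, dtype=float) - base, 0.0))
            predicted.append(int(np.searchsorted(cum, req)) + 1)
        rmse = np.sqrt(np.mean((np.array(predicted) - np.array(observed_doy)) ** 2))
        if rmse < best_rmse:
            best_base, best_rmse = base, rmse
    return best_base, best_rmse

# Hypothetical: three years of daily mean temperatures and observed stage days
rng = np.random.default_rng(3)
years = [15 + 10 * np.sin(np.linspace(0, np.pi, 200)) + rng.normal(0, 2, 200)
         for _ in range(3)]
observed = [150, 155, 148]
print(best_threshold(years, observed, candidates=np.arange(-2.0, 8.0, 0.5)))
```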
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
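A minimal Monte Carlo sketch of this kind of propagation for a hypothetical two-factor design: noise is injected into both the input settings and the responses, the nominal design is refit each time, and the spread of the fitted coefficients is summarized.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical two-factor DOE (intercept column plus two coded factors)
X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1],
              [1, 0, 0], [1, 0, 0]], dtype=float)
b_true = np.array([10.0, 2.0, -1.5])

coefs = []
for _ in range(5000):
    X_noisy = X.copy()
    X_noisy[:, 1:] += rng.normal(0, 0.05, X[:, 1:].shape)   # input variable variation
    y = X_noisy @ b_true + rng.normal(0, 0.2, X.shape[0])   # response measurement variation
    coefs.append(np.linalg.lstsq(X, y, rcond=None)[0])      # refit against the nominal design

coefs = np.array(coefs)
print("Monte Carlo coefficient SDs:", coefs.std(axis=0, ddof=1))
```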
Seismic zoning (first approximation) using data of the main geomagnetic field
NASA Astrophysics Data System (ADS)
Khachikyan, Galina; Zhumabayev, Beibit; Toyshiev, Nursultan; Kairatkyzy, Dina; Seraliyev, Alibek; Khassanov, Eldar
2017-04-01
Seismic zoning is among the most complicated and extremely important problems of modern seismology. In solving this problem, a very important parameter is the maximal possible earthquake magnitude (Mmax), which is at present believed to depend on the horizontal size of geoblocks. At the same time, it was found by Khachikyan et al. [2012, IJG, doi: 10.4236/ijg.2012.35109] that the Mmax value in any seismic region may be determined using the Z_GSM value, that is, the geomagnetic Z-component in this region estimated in the geocentric solar-magnetospheric (GSM) coordinate system. On the basis of the global seismological catalog NEIC with M≥4.5 for the years 1973-2010, and the International Geomagnetic Reference Field (IGRF) model, an empirical relation was obtained as follows: Mmax = a + b log[abs(Z_GSM)]. For the case of the whole planet, the obtained empirical coefficients are a = (5.22 ± 0.17) and b = (0.78 ± 0.06), with correlation coefficient R = 0.91, standard deviation SD = 0.56, and probability 95%. Further investigation showed that the coefficients of the regression equation are different for different seismically active regions of the planet. For example, for the territory of the San Andreas Fault, defined by the coordinates 30-45N, 105-135W, the obtained values are a = (4.04 ± 0.38) and b = (0.70 ± 0.13), with correlation coefficient R = 0.91, standard deviation SD = 0.34, and probability of 95%. For the territory of inland seismicity in Eurasia, defined by the coordinates 30-45N, 0-110E, a = (12.44 ± 0.48) and b = (1.15 ± 0.2), with correlation coefficient R = 0.87, standard deviation SD = 0.98, and probability of 95%; and for the territory of the strongest seismicity in the world, defined by the coordinates 20S-20N, 90-150E, the obtained values are a = (-17.5 ± 1.5) and b = (5.7 ± 0.4), with correlation coefficient R = 0.97, standard deviation SD = 0.4, and probability of 95%. The relationship between the intensity of the main geomagnetic field and released seismic energy is expectable, because both the main geomagnetic field and the tectonic activity of the planet originate from the same source - the convection in the Earth's liquid core. The relationship between earthquake magnitude and the geomagnetic Z-component expressed namely in the geocentric solar-magnetospheric coordinate system (GSM), in which the interaction of the solar wind magnetic field with the geomagnetic field is better ordered, points to an external (triggering) influence on earthquake occurrence in extremely stressed tectonic areas. The above empirical relationships may be used (in a first approximation) for global seismic zoning and for prediction of the possible Mmax, once the place and time of earthquake occurrence are predicted. In this report we present global maps of Z_GSM and Mmax estimated for different seasons and different times.
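The global empirical relation can be evaluated directly; the sketch below assumes the logarithm is base-10 and uses an illustrative Z_GSM magnitude (in nanotesla), with the globally fitted coefficients quoted in the abstract.

```python
import numpy as np

def mmax(z_gsm_nT, a, b):
    """Empirical relation from the abstract: Mmax = a + b*log(|Z_GSM|),
    here assuming a base-10 logarithm and Z_GSM in nanotesla."""
    return a + b * np.log10(abs(z_gsm_nT))

# Global coefficients quoted in the abstract (a = 5.22, b = 0.78);
# the Z_GSM value below is purely illustrative.
print(f"Mmax estimate: {mmax(30_000.0, a=5.22, b=0.78):.1f}")
```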
Determinants of ocular deviation in esotropic subjects under general anesthesia.
Daien, Vincent; Turpin, Chloé; Lignereux, François; Belghobsi, Riadh; Le Meur, Guylene; Lebranchu, Pierre; Pechereau, Alain
2013-01-01
The authors attempted to identify the determinants of ocular deviation in a population of patients with esotropia under general anesthesia. Forty-one patients with esotropia were included. Horizontal ocular deviation was evaluated by the photographic Hirschberg test both in the awakened state and under general anesthesia before surgery. Changes in ocular deviation were measured and a multivariate analysis was used to assess its clinical determinants. The mean age (± standard deviation [SD]) of study subjects was 13 ± 11 years and 51% were females. The mean spherical equivalent refraction of the right eye was 2.44 ± 2.50 diopters (D), with no significant difference between eyes (P = .26). The mean ocular deviation changed significantly, from 33.5 ± 12.5 prism diopters (PD) at preoperative examination to 8.8 ± 11.4 PD under general anesthesia (P = .0001). The changes in ocular deviation positively correlated with the pre-operative ocular deviation (correlation coefficient r = 0.59, P = .0001) and negatively correlated with patient age (correlation coefficient r = -0.53, P = .0001). These two determinants remained significant after multivariate adjustment of the following variables: preoperative ocular deviation; age; gender; spherical equivalent refraction; and number of previous strabismus surgeries (model r(2) = 0.49, P = .0001). The ocular position under general anesthesia was reported as a key factor in the surgical treatment of subjects with esotropia; therefore, its clinical determinants were assessed. The authors observed that preoperative ocular deviation and patient age were the main factors that influenced the ocular position under general anesthesia. Copyright 2013, SLACK Incorporated.
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
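A hedged sketch of fitting the lognormal reference model to an area-equivalent diameter sample; the diameters are simulated around the nominal 27.6 nm value rather than taken from the interlaboratory data.

```python
import numpy as np
from scipy import stats

# Simulated area-equivalent diameters (nm) standing in for the automated
# TEM particle analysis output.
rng = np.random.default_rng(5)
diam = rng.lognormal(mean=np.log(27.6), sigma=0.09, size=400)

# Fit the lognormal reference model with the location fixed at zero
shape, loc, scale = stats.lognorm.fit(diam, floc=0)
fitted_mean = np.log(scale)      # mean of the log-diameter
fitted_sd = shape                # standard deviation of the log-diameter

print(f"fitted mean (log nm): {fitted_mean:.3f}, fitted SD: {fitted_sd:.3f}")
print(f"median diameter: {scale:.1f} nm")
```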
Coherent scattering of a spherical wave from an irregular surface. [antenna pattern effects
NASA Technical Reports Server (NTRS)
Fung, A. K.
1983-01-01
The scattering of a spherical wave from a rough surface using the Kirchhoff approximation is considered. An expression representing the measured coherent scattering coefficient is derived. It is shown that the sphericity of the wavefront and the antenna pattern can become an important factor in the interpretation of ground-based measurements. The condition under which the coherent scattering-coefficient expression reduces to that corresponding to a plane wave incidence is given. The condition under which the result reduces to the standard image solution is also derived. In general, the consideration of antenna pattern and sphericity is unimportant unless the surface-height standard deviation is small, i.e., unless the coherent scattering component is significant. An application of the derived coherent backscattering coefficient together with the existing incoherent scattering coefficient to interpret measurements from concrete and asphalt surfaces is shown.
Min and Max Exponential Extreme Interval Values and Statistics
ERIC Educational Resources Information Center
Jance, Marsha; Thomopoulos, Nick
2009-01-01
The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g[subscript a] is defined as a…
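A quick simulation of the same quantities, assuming unit-rate exponential variables and a hypothetical sample size; the min and max statistics (mean, SD, CV, median) are estimated empirically.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10            # observation (sample) size, chosen for illustration
reps = 200_000

samples = rng.exponential(scale=1.0, size=(reps, n))   # lambda = 1
mins, maxs = samples.min(axis=1), samples.max(axis=1)

for name, x in [("min", mins), ("max", maxs)]:
    m, s = x.mean(), x.std(ddof=1)
    print(f"{name}: mean={m:.3f}  sd={s:.3f}  cv={s/m:.3f}  median={np.median(x):.3f}")
```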
Using Derivative Estimates to Describe Intraindividual Variability at Multiple Time Scales
ERIC Educational Resources Information Center
Deboeck, Pascal R.; Montpetit, Mignon A.; Bergeman, C. S.; Boker, Steven M.
2009-01-01
The study of intraindividual variability is central to the study of individuals in psychology. Previous research has related the variance observed in repeated measurements (time series) of individuals to traitlike measures that are logically related. Intraindividual measures, such as intraindividual standard deviation or the coefficient of…
NASA Astrophysics Data System (ADS)
Wang, Zian; Li, Shiguang; Yu, Ting
2015-12-01
This paper proposes an online identification method for the regional frequency deviation coefficient, based on analysis of the AGC adjustment response mechanism of the regional frequency deviation coefficient in an interconnected grid and on the generators' online real-time operation state obtained from PMU measurements. The optimization of the regional frequency deviation coefficient is analyzed for the actual operation state of the power system, achieving more accurate and efficient automatic generation control. The validity of the online identification method is verified by establishing a long-term frequency control simulation model of a two-region interconnected power system.
NASA Astrophysics Data System (ADS)
Zhu, Xiaowei; Iungo, G. Valerio; Leonardi, Stefano; Anderson, William
2017-02-01
For a horizontally homogeneous, neutrally stratified atmospheric boundary layer (ABL), aerodynamic roughness length, z_0, is the effective elevation at which the streamwise component of mean velocity is zero. A priori prediction of z_0 based on topographic attributes remains an open line of inquiry in planetary boundary-layer research. Urban topographies - the topic of this study - exhibit spatial heterogeneities associated with variability of building height, width, and proximity with adjacent buildings; such variability renders a priori, prognostic z_0 models appealing. Here, large-eddy simulation (LES) has been used in an extensive parametric study to characterize the ABL response (and z_0) to a range of synthetic, urban-like topographies wherein statistical moments of the topography have been systematically varied. Using LES results, we determined the hierarchical influence of topographic moments relevant to setting z_0. We demonstrate that standard deviation and skewness are important, while kurtosis is negligible. This finding is reconciled with a model recently proposed by Flack and Schultz (J Fluids Eng 132:041203-1-041203-10, 2010), who demonstrate that z_0 can be modelled with standard deviation and skewness, and two empirical coefficients (one for each moment). We find that the empirical coefficient related to skewness is not constant, but exhibits a dependence on standard deviation over certain ranges. For idealized, quasi-uniform cubic topographies and for complex, fully random urban-like topographies, we demonstrate strong performance of the generalized Flack and Schultz model against contemporary roughness correlations.
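A sketch of a Flack and Schultz-type closure built only from the elevation standard deviation and skewness; the functional form and the two coefficients c1 and c2 are placeholders, not the published fit, and the elevation field is synthetic.

```python
import numpy as np

def roughness_length(elevation, c1, c2):
    """Roughness length from the standard deviation (k_rms) and skewness (Sk)
    of the surface elevation, in the spirit of the Flack and Schultz closure
    discussed above.  The form c1 * k_rms * (1 + Sk)**c2 and the coefficient
    values passed in are placeholders, not the published fit."""
    h = np.asarray(elevation, dtype=float)
    h = h - h.mean()
    k_rms = h.std(ddof=1)
    sk = np.mean(h ** 3) / k_rms ** 3
    return c1 * k_rms * (1.0 + sk) ** c2

# Hypothetical, positively skewed urban-like height field and placeholder coefficients
rng = np.random.default_rng(7)
heights = rng.gamma(shape=2.0, scale=5.0, size=10_000)
print(f"z0 estimate: {roughness_length(heights, c1=0.1, c2=1.0):.2f} m")
```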
The gait standard deviation, a single measure of kinematic variability.
Sangeux, Morgan; Passmore, Elyse; Graham, H Kerr; Tirosh, Oren
2016-05-01
Measurement of gait kinematic variability provides relevant clinical information in certain conditions affecting the neuromotor control of movement. In this article, we present a measure of overall gait kinematic variability, GaitSD, based on the combination of waveform standard deviations. The waveform standard deviation is the common numerator in established indices of variability such as Kadaba's coefficient of multiple correlation or Winter's waveform coefficient of variation. Gait data were collected on typically developing children aged 6-17 years. A large number of strides was captured for each child, on average 45 (SD: 11) for kinematics and 19 (SD: 5) for kinetics. We used a bootstrap procedure to determine the precision of GaitSD as a function of the number of strides processed. We compared the within-subject, stride-to-stride variability with the between-subject variability of the normative pattern. Finally, we investigated the correlation between age and gait kinematic, kinetic and spatio-temporal variability. In typically developing children, the relative precision of GaitSD was 10% as soon as 6 strides were captured. As a comparison, spatio-temporal parameters required 30 strides to reach the same relative precision. The ratio of stride-to-stride to normative pattern variability was smaller in kinematic variables (smallest for pelvic tilt, 28%) than in kinetic and spatio-temporal variables (largest for normalised stride length, 95%). GaitSD had a strong, negative correlation with age. We show that gait consistency may stabilise only at, or after, skeletal maturity. Copyright © 2016 Elsevier B.V. All rights reserved.
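A minimal sketch of the GaitSD idea for a single kinematic waveform, with a bootstrap estimate of its relative precision; the stride data are synthetic and the published measure combines several variables rather than one.

```python
import numpy as np

def gait_sd(strides):
    """Overall variability of one time-normalised kinematic waveform:
    the pointwise standard deviation across strides combined into a single
    root-mean-square value (a one-variable sketch of the GaitSD idea)."""
    strides = np.asarray(strides, dtype=float)
    pointwise_sd = strides.std(axis=0, ddof=1)
    return np.sqrt(np.mean(pointwise_sd ** 2))

def bootstrap_relative_precision(strides, n_boot=1000, seed=8):
    """Relative precision of GaitSD estimated by resampling strides."""
    rng = np.random.default_rng(seed)
    n = strides.shape[0]
    estimates = [gait_sd(strides[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.std(estimates, ddof=1) / np.mean(estimates)

# Synthetic data: 45 strides of a sagittal-plane angle sampled at 101 points
rng = np.random.default_rng(8)
strides = 20 * np.sin(np.linspace(0, 2 * np.pi, 101)) + rng.normal(0, 2, (45, 101))
print(f"GaitSD = {gait_sd(strides):.2f} deg, "
      f"relative precision = {bootstrap_relative_precision(strides):.1%}")
```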
Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H.; Lewis, Marc S.; Brautigam, Chad A.; Schuck, Peter; Zhao, Huaying
2013-01-01
Sedimentation velocity (SV) is a method based on first-principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton® temperature logger to directly measure the temperature of a spinning rotor, and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration, which were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., doi 10.1016/j.ab.2013.02.011) and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from eleven instruments displayed a significantly reduced standard deviation of ∼ 0.7 %. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. PMID:23711724
Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying
2013-09-01
Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. Published by Elsevier Inc.
Transport Coefficients from Large Deviation Functions
NASA Astrophysics Data System (ADS)
Gao, Chloe; Limmer, David
2017-10-01
We describe a method for computing transport coefficients from the direct evaluation of large deviation function. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which is a scaled cumulant generating function analogous to the free energy. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.
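For comparison, the traditional Green-Kubo estimate mentioned above integrates an equilibrium current autocorrelation function; a minimal Python sketch for the self-diffusion coefficient, using synthetic velocities purely for illustration:

```python
import numpy as np

def green_kubo_diffusion(velocities, dt):
    """velocities: (n_steps, n_particles, 3) equilibrium velocity trajectory.
    Returns D = (1/3) * time integral of the velocity autocorrelation function,
    i.e., the standard Green-Kubo expression the text compares against."""
    n_steps = velocities.shape[0]
    max_lag = n_steps // 2
    vacf = np.empty(max_lag)
    for lag in range(max_lag):
        dots = np.sum(velocities[:n_steps - lag] * velocities[lag:], axis=2)
        vacf[lag] = dots.mean()          # average over particles and time origins
    return vacf.sum() * dt / 3.0         # simple rectangle-rule time integral

# Illustrative use with synthetic (uncorrelated) velocities
rng = np.random.default_rng(2)
v = rng.normal(size=(1000, 20, 3))
print(green_kubo_diffusion(v, dt=0.001))
```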
Communication: Non-Hadwiger terms in morphological thermodynamics of fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen-Goos, Hendrik, E-mail: hendrik.hansen-goos@uni-tuebingen.de
We demonstrate that the Hadwiger form of the free energy of a fluid in contact with a wall is insufficient to describe the low-density behavior of a hard-sphere fluid. This implies that morphological thermodynamics of the hard-sphere fluid is an approximate theory if only four geometric measures are included. In order to quantify deviations from the Hadwiger form we extend standard fundamental measure theory of the bulk fluid by introducing additional scaled-particle variables which allow for the description of non-Hadwiger coefficients. The theory is in excellent agreement with recent computer simulations. The fact that the leading non-Hadwiger coefficient is one order of magnitude smaller than the smallest Hadwiger coefficient lends confidence to the numerous results that have been previously obtained within standard morphological thermodynamics.
Cavusoglu, M; Ciloglu, T; Serinagaoglu, Y; Kamasak, M; Erogul, O; Akcam, T
2008-08-01
In this paper, 'snore regularity' is studied in terms of the variations of snoring sound episode durations, separations and average powers in simple snorers and in obstructive sleep apnoea (OSA) patients. The goal was to explore the possibility of distinguishing among simple snorers and OSA patients using only sleep sound recordings of individuals and to ultimately eliminate the need for spending a whole night in the clinic for polysomnographic recording. Sequences that contain snoring episode durations (SED), snoring episode separations (SES) and average snoring episode powers (SEP) were constructed from snoring sound recordings of 30 individuals (18 simple snorers and 12 OSA patients) who were also under polysomnographic recording in Gülhane Military Medical Academy Sleep Studies Laboratory (GMMA-SSL), Ankara, Turkey. Snore regularity is quantified in terms of mean, standard deviation and coefficient of variation values for the SED, SES and SEP sequences. In all three of these sequences, OSA patients' data displayed a higher variation than those of simple snorers. To exclude the effects of slow variations in the base-line of these sequences, new sequences that contain the coefficient of variation of the sample values in a 'short' signal frame, i.e., short time coefficient of variation (STCV) sequences, were defined. The mean, the standard deviation and the coefficient of variation values calculated from the STCV sequences displayed a stronger potential to distinguish among simple snorers and OSA patients than those obtained from the SED, SES and SEP sequences themselves. Spider charts were used to jointly visualize the three parameters, i.e., the mean, the standard deviation and the coefficient of variation values of the SED, SES and SEP sequences, and the corresponding STCV sequences as two-dimensional plots. Our observations showed that the statistical parameters obtained from the SED and SES sequences, and the corresponding STCV sequences, possessed a strong potential to distinguish among simple snorers and OSA patients, both marginally, i.e., when the parameters are examined individually, and jointly. The parameters obtained from the SEP sequences and the corresponding STCV sequences, on the other hand, did not have a strong discrimination capability. However, the joint behaviour of these parameters showed some potential to distinguish among simple snorers and OSA patients.
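A minimal Python sketch of the regularity statistics described above (mean, standard deviation, and coefficient of variation of an episode sequence, plus a short-time coefficient of variation computed over a sliding frame); the frame length and the toy duration sequence are assumptions:

```python
import numpy as np

def cv(x):
    """Coefficient of variation: standard deviation divided by the mean."""
    x = np.asarray(x, dtype=float)
    return x.std() / x.mean()

def stcv(x, frame=10):
    """Short-time coefficient of variation sequence: CV computed inside a sliding
    frame of `frame` samples. The frame length is an assumption for illustration."""
    x = np.asarray(x, dtype=float)
    return np.array([cv(x[i:i + frame]) for i in range(len(x) - frame + 1)])

# Toy snoring-episode-duration (SED-like) sequence, in seconds
rng = np.random.default_rng(3)
sed = rng.gamma(shape=4.0, scale=0.5, size=200)
print(sed.mean(), sed.std(), cv(sed))   # regularity of the raw sequence
print(cv(stcv(sed)))                    # statistics of the derived STCV sequence
```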
Impact of heterozygosity and heterogeneity on cotton lint yield stability: II. Lint yield components
USDA-ARS's Scientific Manuscript database
In order to determine which yield components may contribute to yield stability, an 18-environment field study was undertaken to observe the mean, standard deviation (SD), and coefficient of variation (CV) for cotton lint yield components in population types that differed for lint yield stability. Th...
Zhu, Y Q; Long, Q; Xiao, Q F; Zhang, M; Wei, Y L; Jiang, H; Tang, B
2018-03-13
Objective: To investigate the association between blood pressure variability and sleep stability, assessed by cardiopulmonary coupling, in essential hypertensive patients with sleep disorders. Methods: According to strict inclusion and exclusion criteria, 88 patients with newly diagnosed essential hypertension from the international department and the cardiology department of China-Japan Friendship Hospital were enrolled. Sleep stability and 24 h ambulatory blood pressure data were collected with a portable sleep monitor based on the cardiopulmonary coupling technique and a 24 h ambulatory blood pressure monitor, and the correlation between blood pressure variability and sleep stability was analyzed. Results: In the nighttime, the systolic blood pressure standard deviation, systolic blood pressure variation coefficient, the ratio of the minimum to the maximum systolic blood pressure, the diastolic blood pressure standard deviation, and the diastolic blood pressure variation coefficient were positively correlated with unstable sleep duration (r = 0.185, 0.24, 0.237, 0.43, 0.276, P < 0.05). Conclusions: Blood pressure variability is associated with sleep stability, especially at night: the longer the unstable sleep duration, the greater the nighttime blood pressure variability.
Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina
2016-05-01
Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
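The Dice coefficient used above to compare automatic and manual cone detections is 2|A∩B|/(|A| + |B|); a minimal sketch on binary masks (real cone matching also involves a spatial tolerance, which is omitted here):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice's coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(a, b))  # 2*2 / (3 + 3) = 0.667
```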
Hardie, Andrew D; Egbert, Robert E; Rissing, Michael S
2015-01-01
Diffusion-weighted magnetic resonance imaging (DW-MR) can be useful in the differentiation of hemangiomata from liver metastasis, but improved methods other than by mean apparent diffusion coefficient (mADC) are needed. A retrospective review identified 109 metastatic liver lesions and 86 hemangiomata in 128 patients who had undergone DW-MR. For each lesion, mADC and the standard deviation of the mean ADC (sdADC) were recorded and compared by receiver operating characteristic analysis. Mean mADC was higher in benign hemangiomata (1.52±0.12 mm²/s) than in liver metastases (1.33±0.18 mm²/s), but there was significant overlap in values. The mean sdADC was lower in hemangiomata (101±17 mm²/s) than metastases (245±25 mm²/s) and demonstrated no overlap in values, which was significantly different (P<.0001). Hemangiomata may be better able to be differentiated from liver metastases on the basis of sdADC than by mADC, although further studies are needed. Copyright © 2015 Elsevier Inc. All rights reserved.
System statistical reliability model and analysis
NASA Technical Reports Server (NTRS)
Lekach, V. S.; Rood, H.
1973-01-01
A digital computer code was developed to simulate the time-dependent behavior of the 5-kwe reactor thermoelectric system. The code was used to determine lifetime sensitivity coefficients for a number of system design parameters, such as thermoelectric module efficiency and degradation rate, radiator absorptivity and emissivity, fuel element barrier defect constant, beginning-of-life reactivity, etc. A probability distribution (mean and standard deviation) was estimated for each of these design parameters. Then, error analysis was used to obtain a probability distribution for the system lifetime (mean = 7.7 years, standard deviation = 1.1 years). From this, the probability that the system will achieve the design goal of 5 years lifetime is 0.993. This value represents an estimate of the degradation reliability of the system.
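The reported 0.993 probability follows from treating system lifetime as approximately normal with mean 7.7 years and standard deviation 1.1 years; a quick check:

```python
from scipy.stats import norm

mean_life, sd_life, goal = 7.7, 1.1, 5.0          # years, from the abstract
p_success = 1.0 - norm.cdf(goal, loc=mean_life, scale=sd_life)
print(round(p_success, 3))                        # ≈ 0.993, matching the reported value
```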
A Spatio-Temporal Approach for Global Validation and Analysis of MODIS Aerosol Products
NASA Technical Reports Server (NTRS)
Ichoku, Charles; Chu, D. Allen; Mattoo, Shana; Kaufman, Yoram J.; Remer, Lorraine A.; Tanre, Didier; Slutsker, Ilya; Holben, Brent N.; Lau, William K. M. (Technical Monitor)
2001-01-01
With the launch of the MODIS sensor on the Terra spacecraft, new data sets of the global distribution and properties of aerosol are being retrieved, and need to be validated and analyzed. A system has been put in place to generate spatial statistics (mean, standard deviation, direction and rate of spatial variation, and spatial correlation coefficient) of the MODIS aerosol parameters over more than 100 validation sites spread around the globe. Corresponding statistics are also computed from temporal subsets of AERONET-derived aerosol data. The means and standard deviations of identical parameters from MODIS and AERONET are compared. Although their means compare favorably, their standard deviations reveal some influence of surface effects on the MODIS aerosol retrievals over land, especially at low aerosol loading. The direction and rate of spatial variation from MODIS are used to study the spatial distribution of aerosols at various locations either individually or comparatively. This paper introduces the methodology for generating and analyzing the data sets used by the two MODIS aerosol validation papers in this issue.
Gehlen, Heidrun; Bradaric, Zrinkja
2013-01-01
The evaluation of plasma ACTH and the dexamethasone suppression test are considered the methods of choice to evaluate the course of therapy of pituitary pars intermedia dysfunction (PPID). Sampling protocols as well as vacutainers for analysis differ between laboratories. To evaluate the reproducibility of plasma ACTH measurement between four different laboratories (A, B, C, D) in Germany, as well as within the laboratories themselves, ten horses with previously diagnosed PPID and four healthy horses were sampled and analyzed. Each laboratory received two differently labeled samples of each horse which had been drawn at the same time (blinded samples). Sampling was performed in the morning at the same time. The sampling vacutainers (with and without addition of coagulation and proteinase inhibitors) and postage of the samples were handled according to laboratory standards. In one laboratory the influence of the time of centrifugation (immediately after taking blood versus after one hour) was determined. The samples were processed and analyzed according to laboratory protocols. Determination of ACTH levels was performed using a chemiluminescence immunoassay. In total, 132 blood samples were analyzed. The results of duplicate blood samples of the same horse showed a standard deviation ranging from +/- 6 to +/- 27 pg/ml within the laboratories (mean 19.29 pg/ml). The coefficient of variation for repeatability was 13.48%. Blood samples of the same horse resulted in ACTH levels of 121 pg/ml in the first sample and < 5 pg/ml in the second sample. The standard deviation of measured ACTH values between the laboratories was +/- 26.4 pg/ml (mean 27.44 pg/ml). The coefficient of variation for reproducibility was 18.36%. In a 20-year-old gelding the lowest ACTH value was 60.9 pg/ml whereas the highest measured value was 108 pg/ml. Immediate centrifugation of blood samples resulted in significantly higher ACTH values, on average by 11.6 pg/ml. The additional use of proteinase inhibitors (aprotinin) showed no influence on ACTH levels in this study.
NASA Astrophysics Data System (ADS)
Huang, Dong; Campos, Edwin; Liu, Yangang
2014-09-01
Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the lognormal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Dong; Campos, Edwin; Liu, Yangang
2014-09-17
Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy’s Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the lognormal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.
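For the lognormal case mentioned above, the link between skewness and relative dispersion is exact: skewness = 3d + d^3, where d is the relative dispersion (the Weibull and gamma families obey different relations). A short numerical check:

```python
import numpy as np
from scipy.stats import lognorm

for sigma in (0.3, 0.6, 1.0):
    d = np.sqrt(np.exp(sigma ** 2) - 1.0)   # relative dispersion (CV) of a lognormal
    skew_formula = 3.0 * d + d ** 3         # exact lognormal skewness in terms of d
    skew_scipy = float(lognorm(s=sigma).stats(moments='s'))
    print(sigma, round(skew_formula, 4), round(skew_scipy, 4))  # the two columns agree
```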
Poussaint, Tina Young; Vajapeyam, Sridhar; Ricci, Kelsey I.; Panigrahy, Ashok; Kocak, Mehmet; Kun, Larry E.; Boyett, James M.; Pollack, Ian F.; Fouladi, Maryam
2016-01-01
Background Diffuse intrinsic pontine glioma (DIPG) is associated with poor survival regardless of therapy. We used volumetric apparent diffusion coefficient (ADC) histogram metrics to determine associations with progression-free survival (PFS) and overall survival (OS) at baseline and after radiation therapy (RT). Methods Baseline and post-RT quantitative ADC histograms were generated from fluid-attenuated inversion recovery (FLAIR) images and enhancement regions of interest. Metrics assessed included number of peaks (ie, unimodal or bimodal), mean and median ADC, standard deviation, mode, skewness, and kurtosis. Results Based on FLAIR images, the majority of tumors had unimodal peaks with significantly shorter average survival. Pre-RT FLAIR mean, mode, and median values were significantly associated with decreased risk of progression; higher pre-RT ADC values had longer PFS on average. Pre-RT FLAIR skewness and standard deviation were significantly associated with increased risk of progression; higher pre-RT FLAIR skewness and standard deviation had shorter PFS. Nonenhancing tumors at baseline showed higher ADC FLAIR mean values, lower kurtosis, and higher PFS. For enhancing tumors at baseline, bimodal enhancement histograms had much worse PFS and OS than unimodal cases and significantly lower mean peak values. Enhancement in tumors only after RT led to significantly shorter PFS and OS than in patients with baseline or no baseline enhancement. Conclusions ADC histogram metrics in DIPG demonstrate significant correlations between diffusion metrics and survival, with lower diffusion values (increased cellularity), increased skewness, and enhancement associated with shorter survival, requiring future investigations in large DIPG clinical trials. PMID:26487690
Application of snapshot imaging spectrometer in environmental detection
NASA Astrophysics Data System (ADS)
Sun, Kai; Qin, Xiaolei; Zhang, Yu; Wang, Jinqiang
2017-10-01
This study addresses the application of a snapshot imaging spectrometer in environmental detection. Simulated sewage and dyeing wastewater were prepared and the optimal experimental conditions were determined. A white LED array was used as the detection light source, and images of the samples were collected by the imaging spectrometer developed in the laboratory to obtain the spectral information of the samples in the range of 400-800 nm. Standard curves relating absorbance to sample concentration were established. The linear range for a single component of Rhodamine B was 1-50 mg/L, the linear correlation coefficient was more than 0.99, the recovery was 93%-113%, and the relative standard deviation (RSD) was 7.5%. The linear range for the chemical oxygen demand (COD) standard solution was 50-900 mg/L, the linear correlation coefficient was 0.981, the recovery was 91%-106%, and the relative standard deviation (RSD) was 6.7%. This rapid, accurate, and precise method for detecting dyes shows excellent promise for on-site and emergency detection in the environment. At the request of the proceedings editor, an updated version of this article was published on 17 October 2017. The original version of this article was replaced due to an accidental inversion of Figure 2 and Figure 3. The Figures have been corrected in the updated and republished version.
Jaafar, W M N Wan; Snyder, J E; Min, Gao
2013-05-01
An apparatus for measuring the Seebeck coefficient (α) and electrical resistivity (ρ) was designed to operate under an infrared microscope. A unique feature of this apparatus is its capability of measuring α and ρ of small-dimension (sub-millimeter) samples without the need for microfabrication. An essential part of this apparatus is a four-probe assembly that has one heated probe, which combines the hot probe technique with the Van der Pauw method for "simultaneous" measurements of the Seebeck coefficient and electrical resistivity. The repeatability of the apparatus was investigated over a temperature range of 40 °C-100 °C using a nickel plate as a standard reference. The results show that the apparatus has an uncertainty of ±4.9% for the Seebeck coefficient and ±5.0% for the electrical resistivity. The deviation of the apparatus relative to the nickel reference sample is -2.43 μV/K (-12.5%) for the Seebeck coefficient and -0.4 μΩ cm (-4.6%) for the electrical resistivity, respectively.
Use of the Budyko Framework to Estimate the Virtual Water Content in Shijiazhuang Plain, North China
NASA Astrophysics Data System (ADS)
Zhang, E.; Yin, X.
2017-12-01
One of the most challenging steps in the analysis of the virtual water content (VWC) of agricultural crops is properly assessing the volume of consumptive water use (CWU) for crop production. In practice, CWU is considered equivalent to the crop evapotranspiration (ETc). Following the crop coefficient method, ETc can be calculated under standard or non-standard conditions by multiplying the reference evapotranspiration (ET0) by one or a few coefficients. However, when actual crop growing conditions deviate from standard conditions, accurately determining the coefficients under non-standard conditions remains a complicated process and requires a large amount of field experimental data. Based on the regional surface water-energy balance, this research integrates the Budyko framework into the traditional crop coefficient approach to simplify the determination of the coefficients. This new method makes it possible to assess the volume of agricultural VWC at the regional scale using only hydrometeorological data and agricultural statistics. To demonstrate the new method, we apply it to the Shijiazhuang Plain, an agricultural irrigation area in the North China Plain. The VWC of winter wheat and summer maize is calculated, and we further subdivide VWC into blue and green water components. Compared with previous studies in this study area, the VWC calculated by the Budyko-based crop coefficient approach uses less data and agrees well with some of the previous research. This new method may therefore serve as a more convenient tool for assessing VWC.
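The crop-coefficient relation underlying the method is ETc = Kc × ET0, with additional coefficients (for example a stress coefficient Ks ≤ 1) under non-standard conditions; a minimal sketch, where the mid-season Kc value is only an illustrative, FAO-56-style number:

```python
def crop_et(et0_mm_day, kc, ks=1.0):
    """Crop evapotranspiration from the crop-coefficient method:
    ETc = Ks * Kc * ET0, where Ks (<= 1) accounts for non-standard (e.g. water-stress)
    conditions; Ks = 1 recovers the standard-condition form ETc = Kc * ET0."""
    return ks * kc * et0_mm_day

# Illustrative: ET0 = 5 mm/day, mid-season Kc for winter wheat taken as ~1.15
print(crop_et(5.0, kc=1.15))   # 5.75 mm/day under standard conditions
print(crop_et(5.0, kc=1.15, ks=0.8))  # reduced ETc under an assumed water stress
```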
In-depth analysis and discussions of water absorption-typed high power laser calorimeter
NASA Astrophysics Data System (ADS)
Wei, Ji Feng
2017-02-01
In high-power and high-energy laser measurement, absorber materials can easily be destroyed under long-term direct laser irradiation. To improve the calorimeter's measuring capacity, a measuring system that uses flowing water directly as the absorbing medium was built. The system's basic principles and the design parameters of its major parts are described. The system's measuring capacity, the laser working modes, and the effects of the major parameters were analyzed in depth. The factors that may affect measurement accuracy were also analyzed and discussed, and the specific control measures and methods are described. Self-calibration and normal calibration experiments show that this calorimeter has very high accuracy: in electrical calibration, the average correction coefficient is only 1.015, with a standard deviation of only 0.5%, and in calibration experiments the standard deviation relative to a middle-power standard calorimeter is only 1.9%.
Associations between heterozygosity and growth rate variables in three western forest trees
Jeffry B. Milton; Peggy Knowles; Kareen B. Sturgeon; Yan B. Linhart; Martha Davis
1981-01-01
For each of three species, quaking aspen, ponderosa pine, and lodgepole pine, we determined the relationships between a ranking of heterozygosity of individuals and measures of growth rate. Genetic variation was assayed by starch gel electrophoresis of enzymes. Growth rates were characterized by the mean, standard deviation, logarithm of the variance, and coefficient...
ERIC Educational Resources Information Center
Barchard, Kimberly A.
2012-01-01
This article introduces new statistics for evaluating score consistency. Psychologists usually use correlations to measure the degree of linear relationship between 2 sets of scores, ignoring differences in means and standard deviations. In medicine, biology, chemistry, and physics, a more stringent criterion is often used: the extent to which…
Revert Ventura, A J; Sanz Requena, R; Martí-Bonmatí, L; Pallardó, Y; Jornet, J; Gaspar, C
2014-01-01
To study whether the histograms of quantitative parameters of perfusion in MRI obtained from tumor volume and peritumor volume make it possible to grade astrocytomas in vivo. We included 61 patients with histological diagnoses of grade II, III, or IV astrocytomas who underwent T2*-weighted perfusion MRI after intravenous contrast agent injection. We manually selected the tumor volume and peritumor volume and quantified the following perfusion parameters on a voxel-by-voxel basis: blood volume (BV), blood flow (BF), mean transit time (TTM), transfer constant (K(trans)), washout coefficient, interstitial volume, and vascular volume. For each volume, we obtained the corresponding histogram with its mean, standard deviation, and kurtosis (using the standard deviation and kurtosis as measures of heterogeneity) and we compared the differences in each parameter between different grades of tumor. We also calculated the mean and standard deviation of the highest 10% of values. Finally, we performed a multiparametric discriminant analysis to improve the classification. For tumor volume, we found statistically significant differences among the three grades of tumor for the means and standard deviations of BV, BF, and K(trans), both for the entire distribution and for the highest 10% of values. For the peritumor volume, we found no significant differences for any parameters. The discriminant analysis improved the classification slightly. The quantification of the volume parameters of the entire region of the tumor with BV, BF, and K(trans) is useful for grading astrocytomas. The heterogeneity represented by the standard deviation of BF is the most reliable diagnostic parameter for distinguishing between low grade and high grade lesions. Copyright © 2011 SERAM. Published by Elsevier Espana. All rights reserved.
Do physiotherapy staff record treatment time accurately? An observational study.
Bagley, Pam; Hudson, Mary; Green, John; Forster, Anne; Young, John
2009-09-01
To assess the reliability of duration of treatment time measured by physiotherapy staff in early-stage stroke patients. Comparison of physiotherapy staff's recording of treatment sessions and video recording. Rehabilitation stroke unit in a general hospital. Thirty-nine stroke patients without trunk control or who were unable to stand with an erect trunk without the support of two therapists recruited to a randomized trial evaluating the Oswestry Standing Frame. Twenty-six physiotherapy staff who were involved in patient treatment. Contemporaneous recording by physiotherapy staff of treatment time (in minutes) compared with video recording. Intraclass correlation with 95% confidence interval and the Bland and Altman method for assessing agreement by calculating the mean difference (standard deviation; 95% confidence interval), reliability coefficient and 95% limits of agreement for the differences between the measurements. The mean duration (standard deviation, SD) of treatment time recorded by physiotherapy staff was 32 (11) minutes compared with 25 (9) minutes as evidenced in the video recording. The mean difference (SD) was -6 (9) minutes (95% confidence interval (CI) -9 to -3). The reliability coefficient was 18 minutes and the 95% limits of agreement were -24 to 12 minutes. Intraclass correlation coefficient for agreement between the two methods was 0.50 (95% CI 0.12 to 0.73). Physiotherapy staff's recording of duration of treatment time was not reliable and was systematically greater than the video recording.
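A minimal sketch of the Bland-Altman computation used above (mean difference, standard deviation of the differences, and 95% limits of agreement); the numbers below are illustrative, not the study data:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Return mean difference, SD of differences, and 95% limits of agreement
    between two measurement methods (here, staff-recorded vs video-recorded minutes)."""
    diff = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    mean_diff = diff.mean()
    sd_diff = diff.std(ddof=1)
    limits = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
    return mean_diff, sd_diff, limits

# Illustrative data in minutes (not from the study)
video = np.array([20, 25, 30, 22, 28, 26, 24])
staff = np.array([28, 30, 33, 30, 35, 31, 30])
print(bland_altman(video, staff))
```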
van den Besselaar, A M H P; Chantarangkul, V; Angeloni, F; Binder, N B; Byrne, M; Dauer, R; Gudmundsdottir, B R; Jespersen, J; Kitchen, S; Legnani, C; Lindahl, T L; Manning, R A; Martinuzzo, M; Panes, O; Pengo, V; Riddell, A; Subramanian, S; Szederjesi, A; Tantanate, C; Herbel, P; Tripodi, A
2018-01-01
Essentials Two candidate International Standards for thromboplastin (coded RBT/16 and rTF/16) are proposed. International Sensitivity Index (ISI) of proposed standards was assessed in a 20-centre study. The mean ISI for RBT/16 was 1.21 with a between-centre coefficient of variation of 4.6%. The mean ISI for rTF/16 was 1.11 with a between-centre coefficient of variation of 5.7%. Background The availability of International Standards for thromboplastin is essential for the calibration of routine reagents and hence the calculation of the International Normalized Ratio (INR). Stocks of the current Fourth International Standards are running low. Candidate replacement materials have been prepared. This article describes the calibration of the proposed Fifth International Standards for thromboplastin, rabbit, plain (coded RBT/16) and for thromboplastin, recombinant, human, plain (coded rTF/16). Methods An international collaborative study was carried out for the assignment of International Sensitivity Indexes (ISIs) to the candidate materials, according to the World Health Organization (WHO) guidelines for thromboplastins and plasma used to control oral anticoagulant therapy with vitamin K antagonists. Results Results were obtained from 20 laboratories. In several cases, deviations from the ISI calibration model were observed, but the average INR deviation attributable to the model was not greater than 10%. Only valid ISI assessments were used to calculate the mean ISI for each candidate. The mean ISI for RBT/16 was 1.21 (between-laboratory coefficient of variation [CV]: 4.6%), and the mean ISI for rTF/16 was 1.11 (between-laboratory CV: 5.7%). Conclusions The between-laboratory variation of the ISI for candidate material RBT/16 was similar to that of the Fourth International Standard (RBT/05), and the between-laboratory variation of the ISI for candidate material rTF/16 was slightly higher than that of the Fourth International Standard (rTF/09). The candidate materials have been accepted by WHO as the Fifth International Standards for thromboplastin, rabbit plain, and thromboplastin, recombinant, human, plain. © 2017 International Society on Thrombosis and Haemostasis.
NASA Astrophysics Data System (ADS)
Petrishcheva, E.; Abart, R.
2012-04-01
We address mathematical modeling and computer simulation of phase decomposition in a multicomponent system. As opposed to binary alloys with one common diffusion parameter, our main concern is phase decomposition in real geological systems under the influence of strongly different interdiffusion coefficients, as is frequently encountered in mineral solid solutions with coupled diffusion on different sub-lattices. Our goal is to explain deviations from equilibrium element partitioning which are often observed in nature, e.g., in a cooled ternary feldspar. To this end we first adapt the standard Cahn-Hilliard model to the multicomponent diffusion problem and account for arbitrary diffusion coefficients. This is done by using Onsager's approach, such that the flux of each component results from the combined action of the chemical potentials of all components. In a second step the generalized Cahn-Hilliard equation is solved numerically using a finite-element approach. We introduce and investigate several decomposition scenarios that may produce systematic deviations from equilibrium element partitioning. Both ideal solutions and ternary feldspar are considered. Typically, the slowest component is initially "frozen" and the decomposition effectively takes place only for the two "fast" components. At this stage deviations from equilibrium element partitioning are indeed observed. These deviations may become "frozen in" under conditions of cooling. The final equilibration of the system occurs on a considerably slower time scale; therefore the system may indeed remain incompletely equilibrated at the time of observation. Our approach reveals the intrinsic reasons for the specific phase separation path and rigorously describes it by direct numerical solution of the generalized Cahn-Hilliard equation.
Choi, Young Jun; Lee, Jeong Hyun; Kim, Hye Ok; Kim, Dae Yoon; Yoon, Ra Gyoung; Cho, So Hyun; Koh, Myeong Ju; Kim, Namkug; Kim, Sang Yoon; Baek, Jung Hwan
2016-01-01
To explore the added value of histogram analysis of apparent diffusion coefficient (ADC) values over magnetic resonance (MR) imaging and fluorine 18 ((18)F) fluorodeoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) for the detection of occult palatine tonsil squamous cell carcinoma (SCC) in patients with cervical nodal metastasis from a cancer of an unknown primary site. The institutional review board approved this retrospective study, and the requirement for informed consent was waived. Differences in the bimodal histogram parameters of the ADC values were assessed among occult palatine tonsil SCC (n = 19), overt palatine tonsil SCC (n = 20), and normal palatine tonsils (n = 20). One-way analysis of variance was used to analyze differences among the three groups. Receiver operating characteristic curve analysis was used to determine the best differentiating parameters. The increased sensitivity of histogram analysis over MR imaging and (18)F-FDG PET/CT for the detection of occult palatine tonsil SCC was evaluated as added value. Histogram analysis showed statistically significant differences in the mean, standard deviation, and 50th and 90th percentile ADC values among the three groups (P < .0045). Occult palatine tonsil SCC had a significantly higher standard deviation for the overall curves, mean and standard deviation of the higher curves, and 90th percentile ADC value, compared with normal palatine tonsils (P < .0167). Receiver operating characteristic curve analysis showed that the standard deviation of the overall curve best delineated occult palatine tonsil SCC from normal palatine tonsils, with a sensitivity of 78.9% (15 of 19 patients) and a specificity of 60% (12 of 20 patients). The added value of ADC histogram analysis was 52.6% over MR imaging alone and 15.8% over combined conventional MR imaging and (18)F-FDG PET/CT. Adding ADC histogram analysis to conventional MR imaging can improve the detection sensitivity for occult palatine tonsil SCC in patients with a cervical nodal metastasis originating from a cancer of an unknown primary site. © RSNA, 2015.
High-Throughput RNA Interference Screening: Tricks of the Trade
Nebane, N. Miranda; Coric, Tatjana; Whig, Kanupriya; McKellip, Sara; Woods, LaKeisha; Sosa, Melinda; Sheppard, Russell; Rasmussen, Lynn; Bjornsti, Mary-Ann; White, E. Lucile
2016-01-01
The process of validating an assay for high-throughput screening (HTS) involves identifying sources of variability and developing procedures that minimize the variability at each step in the protocol. The goal is to produce a robust and reproducible assay with good metrics. In all good cell-based assays, this means coefficient of variation (CV) values of less than 10% and a signal window of fivefold or greater. HTS assays are usually evaluated using Z′ factor, which incorporates both standard deviation and signal window. A Z′ factor value of 0.5 or higher is acceptable for HTS. We used a standard HTS validation procedure in developing small interfering RNA (siRNA) screening technology at the HTS center at Southern Research. Initially, our assay performance was similar to published screens, with CV values greater than 10% and Z′ factor values of 0.51 ± 0.16 (average ± standard deviation). After optimizing the siRNA assay, we got CV values averaging 7.2% and a robust Z′ factor value of 0.78 ± 0.06 (average ± standard deviation). We present an overview of the problems encountered in developing this whole-genome siRNA screening program at Southern Research and how equipment optimization led to improved data quality. PMID:23616418
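The plate-quality metrics quoted above follow the standard definitions, CV = SD/mean and Z′ = 1 − 3(SD_pos + SD_neg)/|mean_pos − mean_neg|; a minimal sketch with toy control wells:

```python
import numpy as np

def z_prime(pos, neg):
    """Z' factor = 1 - 3*(SD_pos + SD_neg) / |mean_pos - mean_neg|."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

def percent_cv(x):
    """Coefficient of variation of a control population, in percent."""
    x = np.asarray(x, float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Toy plate controls: positive and negative control wells (illustrative values)
rng = np.random.default_rng(4)
pos = rng.normal(100.0, 5.0, size=32)
neg = rng.normal(20.0, 2.0, size=32)
print(round(z_prime(pos, neg), 2), round(percent_cv(neg), 1))
```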
Uncertainty Analysis of Downscaled CMIP5 Precipitation Data for Louisiana, USA
NASA Astrophysics Data System (ADS)
Sumi, S. J.; Tamanna, M.; Chivoiu, B.; Habib, E. H.
2014-12-01
The downscaled CMIP3 and CMIP5 Climate and Hydrology Projections dataset contains fine spatial resolution translations of climate projections over the contiguous United States developed using two downscaling techniques (monthly Bias Correction Spatial Disaggregation (BCSD) and daily Bias Correction Constructed Analogs (BCCA)). The objective of this study is to assess the uncertainty of the CMIP5 downscaled general circulation models (GCM). We performed an analysis of the daily, monthly, seasonal and annual variability of precipitation downloaded from the Downscaled CMIP3 and CMIP5 Climate and Hydrology Projections website for the state of Louisiana, USA at 0.125° x 0.125° resolution. A data set of daily gridded observations of precipitation of a rectangular boundary covering Louisiana is used to assess the validity of 21 downscaled GCMs for the 1950-1999 period. The following statistics are computed using the CMIP5 observed dataset with respect to the 21 models: the correlation coefficient, the bias, the normalized bias, the mean absolute error (MAE), the mean absolute percentage error (MAPE), and the root mean square error (RMSE). A measure of variability simulated by each model is computed as the ratio of its standard deviation, in both space and time, to the corresponding standard deviation of the observation. The correlation and MAPE statistics are also computed for each of the nine climate divisions of Louisiana. Some of the patterns that we observed are: 1) Average annual precipitation rate shows similar spatial distribution for all the models within a range of 3.27 to 4.75 mm/day from Northwest to Southeast. 2) Standard deviation of summer (JJA) precipitation (mm/day) for the models maintains lower value than the observation whereas they have similar spatial patterns and range of values in winter (NDJ). 3) Correlation coefficients of annual precipitation of models against observation have a range of -0.48 to 0.36 with variable spatial distribution by model. 4) Most of the models show negative correlation coefficients in summer and positive in winter. 5) MAE shows similar spatial distribution for all the models within a range of 5.20 to 7.43 mm/day from Northwest to Southeast of Louisiana. 6) Highest values of correlation coefficients are found at seasonal scale within a range of 0.36 to 0.46.
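A minimal sketch of the skill statistics listed above (correlation coefficient, bias, MAE, MAPE, RMSE, and the ratio of model to observed standard deviation) for one model series against observations; the data are synthetic:

```python
import numpy as np

def validation_stats(model, obs):
    """Basic skill statistics of a model series against observations."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    err = model - obs
    return {
        "corr": np.corrcoef(model, obs)[0, 1],
        "bias": err.mean(),
        "mae": np.abs(err).mean(),
        "mape": 100.0 * np.abs(err / obs).mean(),
        "rmse": np.sqrt((err ** 2).mean()),
        "sd_ratio": model.std() / obs.std(),   # variability simulated vs observed
    }

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 2.0, size=365)                  # daily precipitation-like series
model = obs * 0.9 + rng.normal(0.0, 1.0, size=365)   # imperfect model
print(validation_stats(model, obs))
```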
Kim, Younggy; Walker, W Shane; Lawler, Desmond F
2012-05-01
In electrodialysis desalination, the boundary layer near ion-exchange membranes is the limiting region for the overall rate of ionic separation due to concentration polarization over tens of micrometers in that layer. Under high current conditions, this sharp concentration gradient, creating substantial ionic diffusion, can drive a preferential separation for certain ions depending on their concentration and diffusivity in the solution. Thus, this study tested a hypothesis that the boundary layer affects the competitive transport between di- and mono-valent cations, which is known to be governed primarily by the partitioning with cation-exchange membranes. A laboratory-scale electrodialyzer was operated at steady state with a mixture of 10mM KCl and 10mM CaCl(2) at various flow rates. Increased flows increased the relative calcium transport. A two-dimensional model was built with analytical solutions of the Nernst-Planck equation. In the model, the boundary layer thickness was considered as a random variable defined with three statistical parameters: mean, standard deviation, and correlation coefficient between the thicknesses of the two boundary layers facing across a spacer. Model simulations with the Monte Carlo method found that a greater calcium separation was achieved with a smaller mean, greater standard deviation, or more negative correlation coefficient. The model and experimental results were compared for the cationic transport number as well as the current and potential relationship. The mean boundary layer thickness was found to decrease from 40 to less than 10 μm as the superficial water velocity increased from 1.06 to 4.24 cm/s. The standard deviation was greater than the mean thickness at slower water velocities and smaller at faster water velocities. Copyright © 2012 Elsevier Ltd. All rights reserved.
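A minimal sketch of the stochastic boundary-layer description above, drawing paired thicknesses from a bivariate normal defined by a mean, a standard deviation, and a correlation coefficient; the parameter values and the clipping of negative draws are assumptions:

```python
import numpy as np

def sample_thicknesses(mean_um, sd_um, corr, n=10_000, seed=0):
    """Draw paired boundary-layer thicknesses (micrometers) for the two membranes
    facing across a spacer, from a bivariate normal with the given mean, SD, and
    correlation. Negative draws are clipped to 1 um (a modelling assumption)."""
    rng = np.random.default_rng(seed)
    cov = sd_um ** 2 * np.array([[1.0, corr], [corr, 1.0]])
    pairs = rng.multivariate_normal([mean_um, mean_um], cov, size=n)
    return np.clip(pairs, 1.0, None)

# Illustrative parameters in the range discussed above
pairs = sample_thicknesses(mean_um=40.0, sd_um=45.0, corr=-0.5)
print(pairs.mean(axis=0), np.corrcoef(pairs.T)[0, 1])
```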
Miyoshi, Toru; Suetsuna, Ryoji; Tokunaga, Naoto; Kusaka, Masayasu; Tsuzaki, Ryuichiro; Koten, Kazuya; Kunihisa, Kohno; Ito, Hiroshi
2017-07-01
Blood pressure variability (BPV), such as visit-to-visit, day-by-day, and ambulatory BPV, has been shown to be a risk factor for future cardiovascular events. However, the effects of antihypertensive therapy on BPV remain unclear. The purpose of this study was to evaluate the effect of azilsartan after switching from another angiotensin II receptor blocker (ARB) on day-to-day BPV in home BP monitoring. This prospective, multicenter, open-labeled, single-arm study included 28 patients undergoing treatment with an ARB, which was switched to azilsartan after enrollment. The primary outcome was the change in the mean of the standard deviation and the coefficient of variation of morning home BP for 5 consecutive days from baseline to the 24-week follow-up. The secondary outcome was the change in arterial stiffness measured by the cardio-ankle vascular index. The mean BPs in the morning and evening for 5 days did not statistically differ between baseline and 24 weeks. For the morning BP, the means of the standard deviations and coefficient of variation of the systolic BP were significantly decreased from 7.4 ± 3.6 mm Hg to 6.1 ± 3.2 mm Hg and from 5.4±2.7% to 4.6±2.3% (mean ± standard deviation, P = 0.04 and P = 0.04, respectively). For the evening BP, no significant change was observed in the systolic or diastolic BPV. The cardio-ankle vascular index significantly decreased from 8.3 ± 0.8 to 8.1 ± 0.8 (P = 0.03). Switching from another ARB to azilsartan reduced day-to-day BPV in the morning and improved arterial stiffness.
Miyoshi, Toru; Suetsuna, Ryoji; Tokunaga, Naoto; Kusaka, Masayasu; Tsuzaki, Ryuichiro; Koten, Kazuya; Kunihisa, Kohno; Ito, Hiroshi
2017-01-01
Background Blood pressure variability (BPV), such as visit-to-visit, day-by-day, and ambulatory BPV, has been shown to be a risk factor for future cardiovascular events. However, the effects of antihypertensive therapy on BPV remain unclear. The purpose of this study was to evaluate the effect of azilsartan after switching from another angiotensin II receptor blocker (ARB) on day-to-day BPV in home BP monitoring. Methods This prospective, multicenter, open-labeled, single-arm study included 28 patients undergoing treatment with an ARB, which was switched to azilsartan after enrollment. The primary outcome was the change in the mean of the standard deviation and the coefficient of variation of morning home BP for 5 consecutive days from baseline to the 24-week follow-up. The secondary outcome was the change in arterial stiffness measured by the cardio-ankle vascular index. Results The mean BPs in the morning and evening for 5 days did not statistically differ between baseline and 24 weeks. For the morning BP, the means of the standard deviations and coefficient of variation of the systolic BP were significantly decreased from 7.4 ± 3.6 mm Hg to 6.1 ± 3.2 mm Hg and from 5.4±2.7% to 4.6±2.3% (mean ± standard deviation, P = 0.04 and P = 0.04, respectively). For the evening BP, no significant change was observed in the systolic or diastolic BPV. The cardio-ankle vascular index significantly decreased from 8.3 ± 0.8 to 8.1 ± 0.8 (P = 0.03). Conclusions Switching from another ARB to azilsartan reduced day-to-day BPV in the morning and improved arterial stiffness. PMID:28611863
Empirical Model of Precipitating Ion Oval
NASA Astrophysics Data System (ADS)
Goldstein, Jerry
2017-10-01
In this brief technical report published maps of ion integral flux are used to constrain an empirical model of the precipitating ion oval. The ion oval is modeled as a Gaussian function of ionospheric latitude that depends on local time and the Kp geomagnetic index. The three parameters defining this function are the centroid latitude, width, and amplitude. The local time dependences of these three parameters are approximated by Fourier series expansions whose coefficients are constrained by the published ion maps. The Kp dependence of each coefficient is modeled by a linear fit. Optimization of the number of terms in the expansion is achieved via minimization of the global standard deviation between the model and the published ion map at each Kp. The empirical model is valid near the peak flux of the auroral oval; inside its centroid region the model reproduces the published ion maps with standard deviations of less than 5% of the peak integral flux. On the subglobal scale, average local errors (measured as a fraction of the point-to-point integral flux) are below 30% in the centroid region. Outside its centroid region the model deviates significantly from the H89 integral flux maps. The model's performance is assessed by comparing it with both local and global data from a 17 April 2002 substorm event. The model can reproduce important features of the macroscale auroral region but none of its subglobal structure, and not immediately following a substorm.
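A skeleton of the model form described above: a Gaussian in latitude whose amplitude, centroid, and width are truncated Fourier series in magnetic local time with a linear Kp dependence; every coefficient below is a placeholder, not a fitted value from the report.

```python
import numpy as np

def fourier_series(mlt_hours, coeffs):
    """Evaluate a truncated Fourier series in magnetic local time (0-24 h).
    coeffs = [a0, a1, b1, a2, b2, ...]; all values used here are placeholders."""
    phi = 2.0 * np.pi * np.asarray(mlt_hours) / 24.0
    out = np.full_like(phi, coeffs[0], dtype=float)
    for n in range(1, (len(coeffs) - 1) // 2 + 1):
        out += coeffs[2 * n - 1] * np.cos(n * phi) + coeffs[2 * n] * np.sin(n * phi)
    return out

def ion_integral_flux(lat_deg, mlt_hours, kp):
    """Gaussian-in-latitude oval: amplitude, centroid, and width each expand in a
    Fourier series in MLT whose leading terms vary linearly with Kp (placeholders)."""
    amp      = fourier_series(mlt_hours, [1.0 + 0.3 * kp, 0.2, 0.1])
    centroid = fourier_series(mlt_hours, [67.0 - 1.5 * kp, 1.0, -0.5])
    width    = fourier_series(mlt_hours, [3.0 + 0.2 * kp, 0.3, 0.1])
    return amp * np.exp(-((lat_deg - centroid) ** 2) / (2.0 * width ** 2))

print(ion_integral_flux(lat_deg=65.0, mlt_hours=22.0, kp=3))
```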
Feingold, Alan
2009-01-01
The use of growth-modeling analysis (GMA)--including Hierarchical Linear Models, Latent Growth Models, and General Estimating Equations--to evaluate interventions in psychology, psychiatry, and prevention science has grown rapidly over the last decade. However, an effect size associated with the difference between the trajectories of the intervention and control groups that captures the treatment effect is rarely reported. This article first reviews two classes of formulas for effect sizes associated with classical repeated-measures designs that use the standard deviation of either change scores or raw scores for the denominator. It then broadens the scope to subsume GMA, and demonstrates that the independent groups, within-subjects, pretest-posttest control-group, and GMA designs all estimate the same effect size when the standard deviation of raw scores is uniformly used. Finally, it is shown that the correct effect size for treatment efficacy in GMA--the difference between the estimated means of the two groups at end of study (determined from the coefficient for the slope difference and length of study) divided by the baseline standard deviation--is not reported in clinical trials. PMID:19271847
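The effect size described above reduces to a one-line computation once the fitted slope-difference coefficient, the study length, and the baseline standard deviation are known; a small worked sketch with illustrative numbers:

```python
def gma_effect_size(slope_difference, study_length, sd_baseline):
    """d = (slope-difference coefficient * length of study) / baseline SD,
    i.e., the model-estimated end-of-study group difference in baseline SD units."""
    return slope_difference * study_length / sd_baseline

# Illustrative values: groups diverge by 0.5 points per month over a 12-month study,
# with a baseline standard deviation of 10 points.
print(gma_effect_size(slope_difference=0.5, study_length=12, sd_baseline=10.0))  # 0.6
```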
Danel, J-F; Kazandjian, L; Zérah, G
2012-06-01
Computations of the self-diffusion coefficient and viscosity in warm dense matter are presented with an emphasis on obtaining numerical convergence and a careful evaluation of the standard deviation. The transport coefficients are computed with the Green-Kubo relation and orbital-free molecular dynamics at the Thomas-Fermi-Dirac level. The numerical parameters are varied until the Green-Kubo integral is equal to a constant in the t→+∞ limit; the transport coefficients are deduced from this constant and not by extrapolation of the Green-Kubo integral. The latter method, which gives rise to an unknown error, is tested for the computation of viscosity; it appears that it should be used with caution. In the large domain of coupling constant considered, both the self-diffusion coefficient and viscosity turn out to be well approximated by simple analytical laws using a single effective atomic number calculated in the average-atom model.
NASA Astrophysics Data System (ADS)
Danel, J.-F.; Kazandjian, L.; Zérah, G.
2012-06-01
Computations of the self-diffusion coefficient and viscosity in warm dense matter are presented with an emphasis on obtaining numerical convergence and a careful evaluation of the standard deviation. The transport coefficients are computed with the Green-Kubo relation and orbital-free molecular dynamics at the Thomas-Fermi-Dirac level. The numerical parameters are varied until the Green-Kubo integral is equal to a constant in the t→+∞ limit; the transport coefficients are deduced from this constant and not by extrapolation of the Green-Kubo integral. The latter method, which gives rise to an unknown error, is tested for the computation of viscosity; it appears that it should be used with caution. In the large domain of coupling constant considered, both the self-diffusion coefficient and viscosity turn out to be well approximated by simple analytical laws using a single effective atomic number calculated in the average-atom model.
Rahimy, Ehsan; Reddy, Sahitya; DeCroos, Francis Char; Khan, M Ali; Boyer, David S; Gupta, Omesh P; Regillo, Carl D; Haller, Julia A
2015-08-01
To evaluate the visual acuity agreement between a standard back-illuminated Early Treatment Diabetic Retinopathy Study (ETDRS) chart and a handheld internally illuminated ETDRS chart. Two-center prospective study. Seventy patients (134 eyes) with retinal pathology were enrolled between October 2012 and August 2013. Visual acuity was measured using both the ETDRS chart and the handheld device by masked independent examiners after best protocol refraction. Examination was performed in the same room under identical illumination and testing conditions. The mean number of letters seen was 63.0 (standard deviation: 19.8 letters) and 61.2 letters (standard deviation: 19.1 letters) for the ETDRS chart and handheld device, respectively. Mean difference per eye between the ETDRS and handheld device was 1.8 letters. A correlation coefficient (r) of 0.95 demonstrated a positive linear correlation between ETDRS chart and handheld device measured acuities. Intraclass correlation coefficient was performed to assess the reproducibility of the measurements made by different observers measuring the same quantity and was calculated to be 0.95 (95% confidence interval: 0.93-0.96). Agreement was independent of retinal disease. The strong correlation between measured visual acuity using the ETDRS and handheld equivalent suggests that they may be used interchangeably, with accurate measurements. Potential benefits of this device include convenience and portability, as well as the ability to assess ETDRS visual acuity without a dedicated testing lane.
Effect of stress on energy flux deviation of ultrasonic waves in GR/EP composites
NASA Technical Reports Server (NTRS)
Prosser, William H.; Kriz, R. D.; Fitting, Dale W.
1990-01-01
Ultrasonic waves suffer energy flux deviation in graphite/epoxy because of the large anisotropy. The angle of deviation is a function of the elastic coefficients. For nonlinear solids, these coefficients, and thus the angle of deviation, are functions of stress. Acoustoelastic theory was used to model the effect of stress on flux deviation for unidirectional T300/5208 using previously measured elastic coefficients. Computations were made for uniaxial stress along the x3 axis (fiber axis) and the x1 axis for waves propagating in the x1x3 plane. These results predict a shift as large as three degrees for the quasi-transverse wave. The shift in energy flux offers a new nondestructive technique of evaluating stress in composites.
On the variability of the Priestley-Taylor coefficient over water bodies
NASA Astrophysics Data System (ADS)
Assouline, Shmuel; Li, Dan; Tyler, Scott; Tanny, Josef; Cohen, Shabtai; Bou-Zeid, Elie; Parlange, Marc; Katul, Gabriel G.
2016-01-01
Deviations in the Priestley-Taylor (PT) coefficient αPT from its accepted 1.26 value are analyzed over large lakes, reservoirs, and wetlands where stomatal or soil controls are minimal or absent. The data sets feature wide variations in water body sizes and climatic conditions. Neither surface temperature nor sensible heat flux variations alone, which proved successful in characterizing αPT variations over some crops, explain measured deviations in αPT over water. It is shown that the relative transport efficiency of turbulent heat and water vapor is key to explaining variations in αPT over water surfaces, thereby offering a new perspective over the concept of minimal advection or entrainment introduced by PT. Methods that allow the determination of αPT based on low-frequency sampling (i.e., 0.1 Hz) are then developed and tested, which are usable with standard meteorological sensors that filter some but not all turbulent fluctuations. Using approximations to the Gram determinant inequality, the relative transport efficiency is derived as a function of the correlation coefficient between temperature and water vapor concentration fluctuations (RTq). The proposed approach reasonably explains the measured deviations from the conventional αPT = 1.26 value even when RTq is determined from air temperature and water vapor concentration time series that are Gaussian-filtered and subsampled to a cutoff frequency of 0.1 Hz. Because over water bodies, RTq deviations from unity are often associated with advection and/or entrainment, linkages between αPT and RTq offer both a diagnostic approach to assess their significance and a prognostic approach to correct the 1.26 value when using routine meteorological measurements of temperature and humidity.
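The Priestley-Taylor coefficient is defined through LE = αPT [Δ/(Δ + γ)](Rn − G), so over water it can be diagnosed from measured fluxes; a minimal sketch, using the usual Tetens-type approximation for the saturation slope Δ and purely illustrative flux values:

```python
import numpy as np

GAMMA = 0.066  # psychrometric constant, kPa/K (approximate, near sea level)

def sat_slope(t_celsius):
    """Slope of the saturation vapour pressure curve (kPa/K), Tetens-type approximation."""
    es = 0.6108 * np.exp(17.27 * t_celsius / (t_celsius + 237.3))
    return 4098.0 * es / (t_celsius + 237.3) ** 2

def alpha_pt(latent_heat_flux, available_energy, t_celsius):
    """Diagnose the Priestley-Taylor coefficient from measured latent heat flux LE
    and available energy (Rn - G), both in W/m^2: alpha = LE / [(D/(D+g)) (Rn - G)]."""
    d = sat_slope(t_celsius)
    return latent_heat_flux / ((d / (d + GAMMA)) * available_energy)

# Illustrative lake values: LE = 150 W/m^2, Rn - G = 160 W/m^2, surface at 20 degC
print(round(alpha_pt(150.0, 160.0, 20.0), 2))
```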
ERIC Educational Resources Information Center
Elmenfi, Fadil; Gaibani, Ahmed
2016-01-01
This study investigates the effect of social evaluation on Public Speaking Anxiety of English foreign language learners at Omar Al-Mukhtar University in Libya. A random sample of 111 students was used in the study. To analyse the collected data, means, standard deviations, a three-way ANOVA analysis, and the correlation coefficients were used with…
Oshorov, A V; Popugaev, K A; Savin, I A; Potapov, A A
2016-01-01
"Standard" assessment of ICP by measuring liquor ventricular pressure recently questioned. THE OBJECTIVE OF THE STUDY: Compare the values of ventricular and parenchymal ICP against the closure of open liquor drainage and during active CSF drainage. Examined 7 patients with TBI and intracranial hypertension syndrome, GCS 5.6 ± 1.2 points, 4.2 ± age 33 years. Compared parenchymal and ventricular ICP in three time periods: 1--during closure of ventricular drainage, 2--during of the open drains and drainage at the level of 14-15 mmHg, 3--during the period of active drainage. When comparing two methods of measurement used Bland-Altman method. 1. During time period of the closed drainage correlation coefficient was r = 0.83, p < 0.001. Bland-Altman method: the difference of the two measurements is equal to the minimum and 0.7 mm Hg, the standard deviation of 2.02 mm Hg 2. During time period of the open drainage was reduction of the correlation coefficient to r = 0.46, p < 0.01. Bland-Altman method: an increase in the difference of the two measurements to -0.84 mmHg, standard deviation 2.8 mm Hg 3. During time period of the active drainage of cerebrospinal fluid was marked difference between methods of measurement. Bland-Altman method: the difference was 8.64 mm Hg, and a standard deviation of 2.6 mm Hg. 1. During the closure of the ventricular drainage were good correlation between ventricular and parenchymal ICR 2. During open the liquor drainage correlation between the two methods of measuring the intracranial pressure is reduced. 3. During the active CSF drainage correlation between the two methods of measuring intracranial pressure can be completely lost. Under these conditions, CSF pressure is not correctly reflect the ICP 4. For an accurate and continuous measurement of intracranial pressure on the background of the active CSF drainage should be carried out simultaneous parenchymal ICP measurement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muir, B. R., E-mail: Bryan.Muir@nrc-cnrc.gc.ca
2015-04-15
Purpose: To analyze absorbed dose calibration coefficients, N_D,w, measured at accredited dosimetry calibration laboratories (ADCLs) for client ionization chambers to study (i) variability among N_D,w coefficients for chambers of the same type calibrated at each ADCL to investigate ion chamber volume fluctuations and chamber manufacturing tolerances; (ii) equivalency of ion chamber calibration coefficients measured at different ADCLs by intercomparing N_D,w coefficients for chambers of the same type; and (iii) the long-term stability of N_D,w coefficients for different chamber types by investigating repeated chamber calibrations. Methods: Large samples of N_D,w coefficients for several chamber types measured over the time period between 1998 and 2014 were obtained from the three ADCLs operating in the United States. These are analyzed using various graphical and numerical statistical tests for the four chamber types with the largest samples of calibration coefficients to investigate (i) and (ii) above. Ratios of calibration coefficients for the same chamber, typically obtained two years apart, are calculated to investigate (iii) above and chambers with standard deviations of old/new ratios less than 0.3% meet stability requirements for accurate reference dosimetry recommended in dosimetry protocols. Results: It is found that N_D,w coefficients for a given chamber type compared among different ADCLs may arise from differing probability distributions potentially due to slight differences in calibration procedures and/or the transfer of the primary standard. However, average N_D,w coefficients from different ADCLs for given chamber types are very close with percent differences generally less than 0.2% for Farmer-type chambers and are well within reported uncertainties. Conclusions: The close agreement among calibrations performed at different ADCLs reaffirms the Calibration Laboratory Accreditation Subcommittee process of ensuring ADCL conformance with National Institute of Standards and Technology standards. This study shows that N_D,w coefficients measured at different ADCLs are statistically equivalent, especially considering reasonable uncertainties. This analysis of N_D,w coefficients also allows identification of chamber types that can be considered stable enough for accurate reference dosimetry.
NASA Technical Reports Server (NTRS)
Prosser, William H.; Kriz, R. D.; Fitting, Dale W.
1990-01-01
Ultrasonic waves suffer energy flux deviation in graphite/epoxy because of the large anisotropy. The angle of deviation is a function of the elastic coefficients. For nonlinear solids, these coefficients, and thus the angle of deviation, are functions of stress. Acoustoelastic theory was used to model the effect of stress on flux deviation for unidirectional T300/5208 using previously measured elastic coefficients. Computations were made for uniaxial stress along the x3 axis (fiber axis) and the x1 axis for waves propagating in the x1-x3 plane. These results predict a shift as large as three degrees for the quasi-transverse wave. The shift in energy flux offers a new nondestructive technique for evaluating stress in composites.
Thomsen, Felix Sebastian Leo; Delrieux, Claudio Augusto; de Luis-García, Rodrigo
2017-03-01
Descriptors extracted from magnetic resonance imaging (MRI) of the brain can be employed to locate and characterize a wide range of pathologies. Scalar measures are typically derived within a single-voxel unit, but neighborhood-based texture measures can also be applied. In this work, we propose a new set of descriptors to compute local texture characteristics from scalar measures of diffusion tensor imaging (DTI), such as mean and radial diffusivity, and fractional anisotropy. We employ weighted rotational invariant local operators, namely standard deviation, inter-quartile range, coefficient of variation, quartile coefficient of variation and skewness. Sensitivity and specificity of those texture descriptors were analyzed with tract-based spatial statistics of the white matter on a diffusion MRI group study of elderly healthy controls, patients with mild cognitive impairment (MCI), and mild or moderate Alzheimer's disease (AD). In addition, robustness against noise has been assessed with a realistic diffusion-weighted imaging phantom and the contamination of the local neighborhood with gray matter has been measured. The new texture operators showed an increased ability for finding formerly undetected differences between groups compared to conventional DTI methods. In particular, the coefficient of variation, quartile coefficient of variation, standard deviation and inter-quartile range of the mean and radial diffusivity detected significant differences even between previously not significantly discernible groups, such as MCI versus moderate AD and mild versus moderate AD. The analysis provided evidence of low contamination of the local neighborhood with gray matter and high robustness against noise. The local operators applied here enhance the identification and localization of areas of the brain where cognitive impairment takes place and thus indicate them as promising extensions in diffusion MRI group studies.
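A rough sketch of the neighborhood statistics named above, evaluated on a single local patch of a scalar DTI map, is given below; the weighting and rotational-invariance scheme of the original operators is omitted, and the helper name and random test patch are illustrative only.

```python
import numpy as np
from scipy.stats import skew

def local_texture(patch):
    """Texture descriptors for one local neighborhood of a scalar map
    (e.g., mean-diffusivity values inside a small window)."""
    x = np.asarray(patch, float).ravel()
    q1, q3 = np.percentile(x, [25, 75])
    return {
        "std": x.std(ddof=1),
        "iqr": q3 - q1,
        "cv": x.std(ddof=1) / x.mean(),        # coefficient of variation
        "qcv": (q3 - q1) / (q3 + q1),          # quartile coefficient of variation
        "skewness": skew(x),
    }

# illustrative 3x3x3 patch of positive scalar values
patch = np.random.default_rng(1).gamma(2.0, 0.5, size=(3, 3, 3))
print(local_texture(patch))
```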
de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff
2016-09-01
The Pearson product–moment correlation coefficient (rp) and the Spearman rank correlation coefficient (rs) are widely used in psychological research. We compare rp and rs on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes we show that, for normally distributed variables, rp and rs have similar expected values but rs is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, rp is more variable than rs. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, rp had lower variability than rs in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, rs had lower variability than rp, and often corresponded more accurately to the population Pearson correlation coefficient (Rp) than rp did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing rs instead of rp. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of rs and rp. In conclusion, rp is suitable for light-tailed distributions, whereas rs is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research. PsycINFO Database Record (c) 2016 APA, all rights reserved
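The core simulation comparison can be sketched as follows: repeated samples are drawn from a bivariate normal distribution, optionally transformed to induce heavy tails, and the standard deviations of rp and rs across replicates are compared. The sample size, correlation, replicate count, and cubing transform are illustrative assumptions, not the authors' exact design.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(42)

def sampling_sds(n=50, rho=0.5, reps=2000, heavy_tails=False):
    """Standard deviations of r_p and r_s across repeated samples."""
    rp, rs = [], []
    for _ in range(reps):
        z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
        x, y = z[:, 0], z[:, 1]
        if heavy_tails:            # cubing induces high kurtosis (illustrative)
            x, y = x**3, y**3
        rp.append(pearsonr(x, y)[0])
        rs.append(spearmanr(x, y)[0])
    return np.std(rp, ddof=1), np.std(rs, ddof=1)

print("normal tails:", sampling_sds())
print("heavy tails: ", sampling_sds(heavy_tails=True))
```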
NASA Astrophysics Data System (ADS)
Liu, Songde; Smith, Zach; Xu, Ronald X.
2016-10-01
There is a pressing need for a phantom standard to calibrate medical optical devices. However, 3D printing of tissue-simulating phantom standards is challenged by the lack of appropriate methods to characterize and reproduce surface topography and optical properties accurately. We have developed a structured light imaging system to characterize the surface topography and optical properties (absorption coefficient and reduced scattering coefficient) of 3D tissue-simulating phantoms. The system consisted of a hyperspectral light source, a digital light projector (DLP), a CMOS camera, two polarizers, a rotational stage, a translation stage, a motion controller, and a personal computer. Tissue-simulating phantoms with different structural and optical properties were characterized by the proposed imaging system and validated by a standard integrating sphere system. The experimental results showed that the proposed system was able to achieve pixel-level optical properties with a percentage error of less than 11% for the absorption coefficient and less than 7% for the reduced scattering coefficient for phantoms without surface curvature. Meanwhile, the 3D topographic profile of the phantom could be effectively reconstructed with a deviation error of less than 1%. Our study demonstrated that the proposed structured light imaging system has the potential to characterize the structural profile and optical properties of 3D tissue-simulating phantoms.
Peterson, Leif E
2002-01-01
CLUSFAVOR (CLUSter and Factor Analysis with Varimax Orthogonal Rotation) 5.0 is a Windows-based computer program for hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles. CLUSFAVOR 5.0 standardizes input data; sorts data according to gene-specific coefficient of variation, standard deviation, average and total expression, and Shannon entropy; performs hierarchical cluster analysis using nearest-neighbor, unweighted pair-group method using arithmetic averages (UPGMA), or furthest-neighbor joining methods, and Euclidean, correlation, or jack-knife distances; and performs principal-component analysis. PMID:12184816
Kim, Jae-Hwan; Park, Saet-Byul; Roh, Hyo-Jeong; Shin, Min-Ki; Moon, Gui-Im; Hong, Jin-Hwan; Kim, Hae-Yeong
2017-07-01
One novel standard reference plasmid, namely pUC-RICE5, was constructed as a positive control and calibrator for event-specific qualitative and quantitative detection of genetically modified (GM) rice (Bt63, Kemingdao1, Kefeng6, Kefeng8, and LLRice62). pUC-RICE5 contained fragments of a rice-specific endogenous reference gene (sucrose phosphate synthase) as well as the five GM rice events. An existing qualitative PCR assay approach was modified using pUC-RICE5 to create a quantitative method with limits of detection corresponding to approximately 1-10 copies of rice haploid genomes. In this quantitative PCR assay, the squared regression coefficients (R²) ranged from 0.993 to 1.000. The standard deviation and relative standard deviation values for repeatability ranged from 0.02 to 0.22 and 0.10% to 0.67%, respectively. The Ministry of Food and Drug Safety (Korea) validated the method and the results suggest it could be used routinely to identify five GM rice events. Copyright © 2017 Elsevier Ltd. All rights reserved.
Accuracy of a pulse-coherent acoustic Doppler profiler in a wave-dominated flow
Lacy, J.R.; Sherwood, C.R.
2004-01-01
The accuracy of velocities measured by a pulse-coherent acoustic Doppler profiler (PCADP) in the bottom boundary layer of a wave-dominated inner-shelf environment is evaluated. The downward-looking PCADP measured velocities in eight 10-cm cells at 1 Hz. Velocities measured by the PCADP are compared to those measured by an acoustic Doppler velocimeter for wave orbital velocities up to 95 cm s-1 and currents up to 40 cm s-1. An algorithm for correcting ambiguity errors using the resolution velocities was developed. Instrument bias, measured as the average error in burst mean speed, is -0.4 cm s-1 (standard deviation = 0.8). The accuracy (root-mean-square error) of instantaneous velocities has a mean of 8.6 cm s-1 (standard deviation = 6.5) for eastward velocities (the predominant direction of waves), 6.5 cm s-1 (standard deviation = 4.4) for northward velocities, and 2.4 cm s-1 (standard deviation = 1.6) for vertical velocities. Both burst mean and root-mean-square errors are greater for bursts with ub ≥ 50 cm s-1. Profiles of burst mean speeds from the bottom five cells were fit to logarithmic curves: 92% of bursts with mean speed ≥ 5 cm s-1 have a correlation coefficient R2 > 0.96. In cells close to the transducer, instantaneous velocities are noisy, burst mean velocities are biased low, and bottom orbital velocities are biased high. With adequate blanking distances for both the profile and resolution velocities, the PCADP provides sufficient accuracy to measure velocities in the bottom boundary layer under moderately energetic inner-shelf conditions.
Rakszegi, Marianna; Löschenberger, Franziska; Hiltbrunner, Jürg; Vida, Gyula; Mikó, Péter
2016-06-01
An assessment was previously made of the effects of organic and low-input field management systems on the physical, grain compositional and processing quality of wheat and on the performance of varieties developed using different breeding methods ("Comparison of quality parameters of wheat varieties with different breeding origin under organic and low-input conventional conditions" [1]). Here, accompanying data are provided on the performance and stability analysis of the genotypes using the coefficient of variation and the 'ranking' and 'which-won-where' plots of GGE biplot analysis for the most important quality traits. Broad-sense heritability was also evaluated and is given for the most important physical and quality properties of the seed in organic and low-input management systems, while mean values and standard deviation of the studied properties are presented separately for organic and low-input fields.
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform-distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
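The textbook construction for such pairs combines two independent standard normal draws through the correlation coefficient and then scales and shifts them; the sketch below shows this transform, which matches the stated requirements but is not necessarily the report's exact routine.

```python
import numpy as np

def bivariate_normal_pair(mu1, mu2, sd1, sd2, rho, rng):
    """One (x, y) pair with the requested means, standard deviations,
    and correlation coefficient rho."""
    z1, z2 = rng.standard_normal(2)            # independent N(0, 1) draws
    x = mu1 + sd1 * z1
    y = mu2 + sd2 * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)
    return x, y

# quick check: sample many pairs and estimate the correlation
rng = np.random.default_rng(7)
pairs = np.array([bivariate_normal_pair(1.0, -2.0, 2.0, 0.5, 0.8, rng)
                  for _ in range(20000)])
print(np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1])   # close to 0.8
```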
Anthropometry for design for the elderly.
Kothiyal, K; Tettey, S
2001-01-01
This paper presents anthropometric data on elderly people in Australia. Data were collected in the metropolitan city of Sydney, NSW, Australia. In all 171 elderly people (males and females, aged 65 years and above) took part in the study. Mean values, standard deviations, medians, range, and coefficients of variation for the various body dimensions were estimated. Correlation coefficients were also calculated to determine the relationship between different body dimensions for the elderly population. The mean stature of elderly Australian males and females were compared with populations from other countries. The paper discusses design implications for elderly people and provides several examples of application of the anthropometric data.
On the variation of the Nimbus 7 total solar irradiance
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
1992-01-01
For the interval December 1978 to April 1991, the value of the mean total solar irradiance, as measured by the Nimbus-7 Earth Radiation Budget Experiment channel 10C, was 1,372.02 Wm(exp -2), having a standard deviation of 0.65 Wm(exp -2), a coefficient of variation (the standard deviation divided by the mean) of 0.047 percent, and a normal deviate z (a measure of the randomness of the data) of -8.019 (implying a highly significant non-random variation in the solar irradiance measurements, presumably related to the action of the solar cycle). Comparison of the 12-month moving average (also called the 13-month running mean) of solar irradiance to those of the usual descriptors of the solar cycle (i.e., sunspot number, 10.7-cm solar radio flux, and total corrected sunspot area) suggests possibly significant temporal differences. For example, solar irradiance is found to have been greatest on or before mid 1979 (leading solar maximum for cycle 21), lowest in early 1987 (lagging solar minimum for cycle 22), and was rising again through late 1990 (thus, lagging solar maximum for cycle 22), having last reported values below those that were seen in 1979 (even though cycles 21 and 22 were of comparable strength). Presuming a genuine correlation between solar irradiance and the solar cycle (in particular, sunspot number) one infers that the correlation is weak (having a coefficient of correlation r less than 0.84) and that major excursions (both as 'excesses' and 'deficits') have occurred (about every 2 to 3 years, perhaps suggesting a pulsating Sun).
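The reported coefficient of variation can be checked directly from the figures above (standard deviation over mean, expressed as a percentage):

```python
mean_irradiance = 1372.02    # W m^-2, Nimbus-7 ERB channel 10C mean
std_irradiance = 0.65        # W m^-2, reported standard deviation
cv_percent = 100.0 * std_irradiance / mean_irradiance
print(round(cv_percent, 3))  # ~0.047, matching the reported 0.047 percent
```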
Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard
2017-11-01
Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods, other than classical statistics that are suitable only for non-constrained data such as absolute values. However, the closed characteristics of waste composition data are often ignored when analysed. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of the biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing mean, standard deviation and correlation coefficients. Copyright © 2017 Elsevier Ltd. All rights reserved.
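One standard way to respect the closure constraint before computing means, standard deviations, or correlation coefficients is a log-ratio transformation; the sketch below applies the centered log-ratio (clr) transform to a hypothetical waste composition. Zero fractions would need replacement before taking logarithms, and the choice of transform is illustrative rather than necessarily the authors' exact procedure.

```python
import numpy as np

def clr(composition):
    """Centered log-ratio transform of a composition (parts of a whole)."""
    x = np.asarray(composition, float)
    x = x / x.sum()                        # close the composition to 1
    g = np.exp(np.mean(np.log(x)))         # geometric mean (requires no zeros)
    return np.log(x / g)

# hypothetical waste composition in % of total mass
waste = [22.0, 3.5, 18.0, 40.5, 16.0]
z = clr(waste)
print(z, z.sum())   # clr coordinates sum to zero
```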
Doblas, Sabrina; Almeida, Gilberto S; Blé, François-Xavier; Garteiser, Philippe; Hoff, Benjamin A; McIntyre, Dominick J O; Wachsmuth, Lydia; Chenevert, Thomas L; Faber, Cornelius; Griffiths, John R; Jacobs, Andreas H; Morris, David M; O'Connor, James P B; Robinson, Simon P; Van Beers, Bernard E; Waterton, John C
2015-12-01
To evaluate between-site agreement of apparent diffusion coefficient (ADC) measurements in preclinical magnetic resonance imaging (MRI) systems. A miniaturized thermally stable ice-water phantom was devised. ADC (mean and interquartile range) was measured over several days, on 4.7T, 7T, and 9.4T Bruker, Agilent, and Magnex small-animal MRI systems using a common protocol across seven sites. Day-to-day repeatability was expressed as percent variation of mean ADC between acquisitions. Cross-site reproducibility was expressed as 1.96 × standard deviation of percent deviation of ADC values. ADC measurements were equivalent across all seven sites with a cross-site ADC reproducibility of 6.3%. Mean day-to-day repeatability of ADC measurements was 2.3%, and no site was identified as presenting different measurements than others (analysis of variance [ANOVA] P = 0.02, post-hoc test n.s.). Between-slice ADC variability was negligible and similar between sites (P = 0.15). Mean within-region-of-interest ADC variability was 5.5%, with one site presenting a significantly greater variation than the others (P = 0.0013). Absolute ADC values in preclinical studies are comparable between sites and equipment, provided standardized protocols are employed. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Schlattl, H.; Zankl, M.; Petoussi-Henss, N.
2007-04-01
A new series of organ equivalent dose conversion coefficients for whole body external photon exposure is presented for a standardized couple of human voxel models, called Rex and Regina. Irradiations from broad parallel beams in antero-posterior, postero-anterior, left- and right-side lateral directions as well as from a 360° rotational source have been performed numerically by the Monte Carlo transport code EGSnrc. Dose conversion coefficients from an isotropically distributed source were computed, too. The voxel models Rex and Regina originating from real patient CT data comply in body and organ dimensions with the currently valid reference values given by the International Commission on Radiological Protection (ICRP) for the average Caucasian man and woman, respectively. While the equivalent dose conversion coefficients of many organs are in quite good agreement with the reference values of ICRP Publication 74, for some organs and certain geometries the discrepancies amount to 30% or more. Differences between the sexes are of the same order with mostly higher dose conversion coefficients in the smaller female model. However, much smaller deviations from the ICRP values are observed for the resulting effective dose conversion coefficients. With the still valid definition for the effective dose (ICRP Publication 60), the greatest change appears in lateral exposures with a decrease in the new models of at most 9%. However, when the modified definition of the effective dose as suggested by an ICRP draft is applied, the largest deviation from the current reference values is obtained in postero-anterior geometry with a reduction of the effective dose conversion coefficient by at most 12%.
NASA Astrophysics Data System (ADS)
Chatterjee, R. S.; Singh, Narendra; Thapa, Shailaja; Sharma, Dravneeta; Kumar, Dheeraj
2017-06-01
The present study proposes land surface temperature (LST) retrieval from satellite-based thermal IR data by a single-channel radiative transfer algorithm using atmospheric correction parameters derived from satellite-based and in-situ data and land surface emissivity (LSE) derived by a hybrid LSE model. For example, atmospheric transmittance (τ) was derived from Terra MODIS spectral radiance in atmospheric window and absorption bands, whereas the atmospheric path radiance and sky radiance were estimated using satellite- and ground-based in-situ solar radiation, geographic location and observation conditions. The hybrid LSE model, which is coupled with ground-based emissivity measurements, is more versatile than previous LSE models and yields improved emissivity values by a knowledge-based approach. It uses NDVI-based and NDVI Threshold method (NDVITHM) based algorithms and field-measured emissivity values. The model is applicable for dense vegetation cover, mixed vegetation cover, and bare earth, including coal-mining-related land surface classes. The study was conducted in a coalfield of India badly affected by coal fire for decades. In a coal-fire-affected coalfield, LST would provide the precise temperature difference between thermally anomalous coal fire pixels and background pixels to facilitate coal fire detection and monitoring. The derived LST products of the present study were compared with radiant temperature images across some of the prominent coal fire locations in the study area by graphical means and by standard mathematical dispersion coefficients such as the coefficient of variation, the coefficient of quartile deviation, the coefficient of quartile deviation for the 3rd quartile vs. maximum temperature, and the coefficient of mean deviation (about the median); these indicated a significant increase in the temperature difference among the pixels. The average temperature slope between adjacent pixels, which increases the potential of discriminating coal fire pixels from background pixels, is significantly larger in the derived LST products than in the corresponding radiant temperature images.
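For reference, the dispersion coefficients listed above can be computed as in the sketch below; the pixel values are hypothetical and the definitions shown are common textbook forms that may differ in detail from the paper's exact formulations.

```python
import numpy as np

def dispersion_coefficients(values):
    """Relative dispersion measures for a set of pixel values."""
    x = np.asarray(values, float)
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return {
        "coeff_of_variation": x.std(ddof=1) / x.mean(),
        "coeff_quartile_deviation": (q3 - q1) / (q3 + q1),
        "coeff_mean_deviation_median": np.mean(np.abs(x - med)) / med,
    }

# hypothetical LST values (K) in a window spanning a coal-fire anomaly
print(dispersion_coefficients([305.2, 309.8, 312.4, 330.1, 341.7, 318.6]))
```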
Standard electrode potential, Tafel equation, and the solvation thermodynamics.
Matyushov, Dmitry V
2009-06-21
Equilibrium in the electronic subsystem across the solution-metal interface is considered to connect the standard electrode potential to the statistics of localized electronic states in solution. We argue that a correct derivation of the Nernst equation for the electrode potential requires a careful separation of the relevant time scales. An equation for the standard metal potential is derived linking it to the thermodynamics of solvation. The Anderson-Newns model for electronic delocalization between the solution and the electrode is combined with a bilinear model of solute-solvent coupling introducing nonlinear solvation into the theory of heterogeneous electron transfer. We therefore are capable of addressing the question of how nonlinear solvation affects electrochemical observables. The transfer coefficient of electrode kinetics is shown to be equal to the derivative of the free energy, or generalized force, required to shift the unoccupied electronic level in the bulk. The transfer coefficient thus directly quantifies the extent of nonlinear solvation of the redox couple. The current model allows the transfer coefficient to deviate from the value of 0.5 of the linear solvation models at zero electrode overpotential. The electrode current curves become asymmetric in respect to the change in the sign of the electrode overpotential.
NASA Astrophysics Data System (ADS)
Wang, J.; Shi, M.; Zheng, P.; Xue, Sh.; Peng, R.
2018-03-01
Laser-induced breakdown spectroscopy has been applied for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens Maxim. f. biserrata Shan et Yuan used in traditional Chinese medicine. Ca II 317.993 nm, Mg I 517.268 nm, and K I 769.896 nm spectral lines have been chosen to set up calibration models for the analysis using the external standard and artificial neural network methods. The linear correlation coefficients of the predicted concentrations versus the standard concentrations of six samples determined by the artificial neural network method are 0.9896, 0.9945, and 0.9911 for Ca, Mg, and K, respectively, which are better than for the external standard method. The artificial neural network method also gives better performance compared with the external standard method for the average and maximum relative errors, average relative standard deviations, and most maximum relative standard deviations of the predicted concentrations of Ca, Mg, and K in the six samples. Finally, it is shown that the artificial neural network method gives better performance than the external standard method for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens.
The regionalization of national-scale SPARROW models for stream nutrients
Schwarz, Gregory E.; Alexander, Richard B.; Smith, Richard A.; Preston, Stephen D.
2011-01-01
This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models.
Daily magnesium intake and serum magnesium concentration among Japanese people.
Akizawa, Yoriko; Koizumi, Sadayuki; Itokawa, Yoshinori; Ojima, Toshiyuki; Nakamura, Yosikazu; Tamura, Tarou; Kusaka, Yukinori
2008-01-01
It remains unclear which vitamins and minerals are deficient in the daily diet of a normal adult. To address this question, we conducted a population survey focusing on the relationship between dietary magnesium intake and serum magnesium level. The subjects were 62 individuals from Fukui Prefecture who participated in the 1998 National Nutrition Survey. The survey investigated the physical status, nutritional status, and dietary data of the subjects. Holidays and special occasions were avoided, and a day when people are most likely to be on an ordinary diet was selected as the survey date. The mean (+/-standard deviation) daily magnesium intake was 322 (+/-132), 323 (+/-163), and 322 (+/-147) mg/day for men, women, and the entire group, respectively. The mean (+/-standard deviation) serum magnesium concentration was 20.69 (+/-2.83), 20.69 (+/-2.88), and 20.69 (+/-2.83) ppm for men, women, and the entire group, respectively. The distribution of serum magnesium concentration was normal. Dietary magnesium intake showed a log-normal distribution and was therefore log-transformed for examining the regression coefficients. The regression line between serum magnesium concentration (Y, ppm) and daily magnesium intake (X, mg) was Y = 4.93 (log(10)X) + 8.49, with a coefficient of correlation (r) of 0.29. A regression line (Y = 14.65X + 19.31) was also observed between daily magnesium intake (Y, mg) and serum magnesium concentration (X, ppm), with a coefficient of correlation of 0.28. Daily magnesium intake correlated with serum magnesium concentration, and a linear regression model between them was proposed.
Establishment of analysis method for methane detection by gas chromatography
NASA Astrophysics Data System (ADS)
Liu, Xinyuan; Yang, Jie; Ye, Tianyi; Han, Zeyu
2018-02-01
The study focused on the establishment of an analysis method for methane determination by gas chromatography. Methane was detected by a hydrogen flame ionization detector, and the quantitative relationship was determined by the working curve y = 2041.2x + 2187 with a correlation coefficient of 0.9979. A relative standard deviation of 2.60-6.33% and a recovery rate of 96.36%∼105.89% were obtained during the parallel determination of standard gas. This method was not quite suitable for biogas content analysis because the methane content in biogas would exceed the measurement range of the method.
Preventing Fatigue in Women with Breast Cancer Treated with Chemotherapy.
1997-10-01
Group, studied a sample of cancer patients for whom alprazolam (Xanax®) was administered over a ten-day period [42] to reduce depression (measured by the...an internal consistency coefficient of 0.78. Seventy-one patients receiving the drug alprazolam were originally reported to have a significant decrease...et al. [42] found a smaller change of approximately one-half a standard deviation in SCL-90 depression scores due to the alprazolam intervention. To have
Vasudevamurthy, G.; Byun, T. S.; Pappano, Pete; ...
2015-03-13
Here we present a comparison of the measured baseline mechanical and physical properties of with-grain (WG) and against-grain (AG) non-ASTM size NBG-18 graphite. The objectives of the experiments were twofold: (1) assess the variation in properties with grain orientation; (2) establish a correlation between specimen tensile strength and size. The tensile strength of the smallest sized (4 mm diameter) specimens was about 5% higher than that of the standard specimens (12 mm diameter) but still within one standard deviation of the ASTM specimen size, indicating no significant dependence of strength on specimen size. The thermal expansion coefficient and elastic constants did not show significant dependence on specimen size. Lastly, experimental data indicated that the variation of the thermal expansion coefficient and elastic constants was still within 5% between the different grain orientations, confirming the isotropic nature of NBG-18 graphite in physical properties.
Use of airborne and terrestrial lidar to detect ground displacement hazards to water systems
Stewart, J.P.; Hu, Jiawen; Kayen, R.E.; Lembo, A.J.; Collins, B.D.; Davis, C.A.; O'Rourke, T. D.
2009-01-01
We investigate the use of multiepoch airborne and terrestrial lidar to detect and measure ground displacements of sufficient magnitude to damage buried pipelines and other water system facilities that might result, for example, from earthquake or rainfall-induced landslides. Lidar scans are performed at three sites with coincident measurements by total station surveying. Relative horizontal accuracy is evaluated by measurements of lateral dimensions of well defined objects such as buildings and tanks; we find misfits ranging from approximately 5 to 12 cm, which is consistent with previous work. The bias and dispersion of lidar elevation measurements, relative to total station surveying, is assessed at two sites: (1) a power plant site (PP2) with vegetated steeply sloping terrain; and (2) a relatively flat and unvegetated site before and after trenching operations were performed. At PP2, airborne lidar showed minimal elevation bias and a standard deviation of approximately 70 cm, whereas terrestrial lidar did not produce useful results due to beam divergence issues and inadequate sampling of the study region. At the trench site, airborne lidar showed minimal elevation bias and reduced standard deviation relative to PP2 (6-20 cm), whereas terrestrial lidar was nearly unbiased with very low dispersion (4-6 cm). Pre- and posttrench bias-adjusted normalized residuals showed minimal to negligible correlation, but elevation change was affected by relative bias between epochs. The mean of elevation change bias essentially matches the difference in means of pre- and posttrench elevation bias, whereas elevation change standard deviation is sensitive to the dispersion of individual epoch elevations and their correlation coefficient. The observed lidar bias and standard deviations enable reliable detection of damaging ground displacements for some pipelines types (e.g., welded steel) but not all (e.g., concrete with unwelded, mortared joints). © ASCE 2009.
SU-F-BRA-08: An Investigation of Well-Chamber Responses for An Electronic Brachytherapy Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Culberson, W; Micka, J
Purpose: The aim of this study was to investigate the variation of well-type ionization chamber response between a Xoft Axxent™ electronic brachytherapy (EBT) source and a GE Oncoseed™ 6711 I-125 seed. Methods: A new EBT air-kerma standard has recently been introduced by the National Institute of Standards and Technology (NIST). Historically, the Axxent source strength has been based on a well chamber calibration from an I-125 brachytherapy source due to the lack of a primary standard. Xoft utilizes a calibration procedure that employs a GE 6711 seed calibration as a surrogate standard to represent the air-kerma strength of an Axxent source. This method is based on the premise that the energies of the two sources are similar and thus, a conversion factor would be a suitable interim solution until a NIST standard was established. For this investigation, a number of well chambers of the same model type and three different EBT sources were used to determine NIST-traceable calibration coefficients for both the GE 6711 seed and the Axxent source. The ratio of the two coefficients was analyzed for consistency and also to identify any possible correlations with chamber vintage or the sources themselves. Results: For all well chambers studied, the relative standard deviation of the ratio of calibration coefficients between the two standards is less than 1%. No specific trends were found with the well chamber vintage or between the three different EBT sources used. Conclusion: The variation of well chamber calibration coefficients between a Xoft Axxent™ EBT source versus a GE 6711 Oncoseed™ are consistent across well chamber vintage and between sources. The results of this investigation confirm the underlying assumptions and stability of the surrogate standard currently in use by Xoft, and establishes a migration path for future implementation of the new NIST air kerma standard. This research is supported in part by Xoft, a subsidiary of iCAD.
Statistics of biospeckles with application to diagnostics of periodontitis
NASA Astrophysics Data System (ADS)
Starukhin, Pavel Y.; Kharish, Natalia A.; Sedykh, Alexey V.; Ulyanov, Sergey S.; Lepilin, Alexander V.; Tuchin, Valery V.
1999-04-01
Results of Monte Carlo simulations of Doppler shift are presented for a model of a random medium containing moving particles. Single-layered and two-layered configurations of the medium are considered. The Doppler shift of the frequency of laser light is investigated as a function of parameters such as the absorption coefficient, the scattering coefficient, and the thickness of the medium. The possibility of applying speckle interferometry for diagnostics in dentistry has been analyzed, and the problem of standardizing the measuring procedure has been studied. The deviation of the output characteristics of a Doppler system for blood microcirculation measurements has been investigated, and the dependence of the form of the Doppler spectrum on the number of speckles integrated by the aperture has been studied in experiments in vivo.
Petsch, Harold E.
1979-01-01
Statistical summaries of daily streamflow data for 189 stations west of the Continental Divide in Colorado are presented in this report. Duration tables, high-flow sequence tables, and low-flow sequence tables provide information about daily mean discharge. The mean, variance, standard deviation, skewness, and coefficient of variation are provided for monthly and annual flows. Percentages of average flow are provided for monthly flows, and first-order serial-correlation coefficients are provided for annual flows. The text explains the nature and derivation of the data and illustrates applications of the tabulated information with examples. The data may be used by agencies and individuals engaged in water studies. (USGS)
Pressure Measurement Studies on a 1:1.5:7 Rectangular High Rise Building Model under Uniform Flow
NASA Astrophysics Data System (ADS)
Sarath Kumar, H.; Vijaya Bhaskar Reddy, P.
2017-08-01
This paper presents experimental results evaluating wind pressure distributions on all four faces of a rectangular tall building with a 1:1.5:7 ratio. The model is made of acrylic sheet at a geometric scale of 1:300, with plan dimensions of 10 cm x 15 cm and a height of 70 cm. The model is tested in a Boundary Layer Wind Tunnel (BLWT) at twelve angles (0°, 5°, 10°, 15°, 25°, 33.5°, 45°, 56.5°, 60°, 75°, 87.5° & 90°) of wind incidence under uniform flow conditions. The mean and standard deviation of the pressure coefficients, the drag and lift coefficients along and perpendicular to the wind direction, and the mean moment coefficient are calculated from the pressure measurements on the model.
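For reference, the sketch below shows how the mean and standard deviation of a pressure coefficient time series are commonly formed from a tap record and the free-stream dynamic pressure; the tap record, air density, and reference speed are hypothetical, not data from this experiment.

```python
import numpy as np

def pressure_coefficient_stats(p_tap, p_inf, rho, U):
    """Mean and standard deviation of the Cp time series at one pressure tap."""
    q = 0.5 * rho * U**2                      # free-stream dynamic pressure
    cp = (np.asarray(p_tap, float) - p_inf) / q
    return cp.mean(), cp.std(ddof=1)

# hypothetical tap record (Pa) under uniform flow
rng = np.random.default_rng(3)
p_tap = 40.0 + 12.0 * rng.standard_normal(500)
print(pressure_coefficient_stats(p_tap, p_inf=0.0, rho=1.2, U=12.0))
```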
NASA Astrophysics Data System (ADS)
Slaski, G.; Ohde, B.
2016-09-01
The article presents the results of a statistical dispersion analysis of the energy and power demand for traction of a battery electric vehicle. The authors compare the data distributions for different values of average speed in two approaches, namely a short and a long period of observation. The short period of observation (generally around several hundred meters) follows from a previously proposed macroscopic energy consumption model based on an average speed per road section. This approach yielded high values of the standard deviation and coefficient of variation (the ratio between the standard deviation and the mean), around 0.7-1.2. The long period of observation (several kilometers long) is similar in length to the standardized speed cycles used in testing vehicle energy consumption and available range. The data were analysed to determine the impact of observation length on the variation of the energy and power demand. The analysis was based on a simulation of electric power and energy consumption performed with speed profile data recorded in the Poznan agglomeration.
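The effect of observation length on dispersion can be illustrated as follows: a traction-power trace is cut into fixed-length segments, the energy of each segment is computed, and the coefficient of variation of the segment energies is compared for short and long windows. The synthetic 1 Hz power trace and the window lengths are assumptions for illustration only.

```python
import numpy as np

def segment_energy_cv(power_w, fs, segment_s):
    """Coefficient of variation of energy demand over fixed-length segments
    of a recorded traction-power trace."""
    p = np.asarray(power_w, float)
    n = int(segment_s * fs)                    # samples per segment
    m = len(p) // n
    energy_wh = p[:m * n].reshape(m, n).sum(axis=1) / fs / 3600.0
    return energy_wh.std(ddof=1) / energy_wh.mean()

# hypothetical 1 Hz power trace for a 2 h drive
rng = np.random.default_rng(11)
power = np.clip(8000.0 + 6000.0 * rng.standard_normal(7200), 0.0, None)
print(segment_energy_cv(power, fs=1.0, segment_s=60),    # short "road sections"
      segment_energy_cv(power, fs=1.0, segment_s=900))   # cycle-length windows
```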
NASA Astrophysics Data System (ADS)
Topan, Hüseyin; Cam, Ali; Özendi, Mustafa; Oruç, Murat; Jacobsen, Karsten; Taşkanat, Talha
2016-06-01
Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first satellites of Europe with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs the MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality are investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z have been reached by bias-corrected RPC orientation, in spite of the very narrow angle of convergence. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39 and 0.46 for the triplet panchromatic images, indicating a satisfying image quality. SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of the Pléiades images. The pansharpened images were generated by various methods and validated by the most common statistical metrics and also by visual interpretation. The generated DSM and DTM were achieved with a ±1.6 m standard deviation in Z (SZ) in relation to a reference DTM.
Analysis of Thermal Design of Heating Units with Meteorological Climate Peculiarities
NASA Astrophysics Data System (ADS)
Seminenko, A. S.; Elistratova, Y. V.; Pererva, M. I.; Moiseev, M. V.
2018-03-01
This article is devoted to the analysis of the thermal design of heating units, one of the compulsory calculations for heating systems that ensures their stable and efficient operation. The article analyses the option of a single-pipe heating system with shifted end-capping sections and an overhead supply main; the difference is shown between the calculation results from the heat balance equation of the heating unit and the calculation of the actual heat flux (heat transfer coefficient) taking into account deviation from the standardized (technical passport) operating conditions. The calculation of the thermal conditions of residential premises is given, and the deviation of the internal air temperature is shown taking into account the discrepancy between the calculation results for thermal energy.
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
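The partial-standard-deviation standardization described above can be sketched with a common variance-inflation-factor formulation (following Bring, on which Cade draws); the predictor matrix is hypothetical, and the standardized estimate for predictor j is its unstandardized coefficient multiplied by the corresponding partial standard deviation.

```python
import numpy as np

def partial_sds(X):
    """Partial standard deviations of the columns of predictor matrix X
    (n observations x p predictors), via a variance-inflation-factor adjustment.
    Assumes no perfect collinearity among predictors."""
    X = np.asarray(X, float)
    n, p = X.shape
    sds = X.std(axis=0, ddof=1)
    out = np.empty(p)
    for j in range(p):
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ beta
        r2 = 1.0 - resid.var() / X[:, j].var()      # R^2 of x_j on the others
        vif = 1.0 / (1.0 - r2)
        out[j] = sds[j] * np.sqrt(1.0 / vif) * np.sqrt((n - 1) / (n - p))
    return out

# standardized, scale-commensurate estimate for predictor j: b_j * partial_sds(X)[j]
```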
NASA Astrophysics Data System (ADS)
Lim, Jeong Sik; Park, Miyeon; Lee, Jinbok; Lee, Jeongsoon
2017-12-01
The effect of background gas composition on the measurement of CO2 levels was investigated by wavelength-scanned cavity ring-down spectrometry (WS-CRDS) employing a spectral line centered at the R(1) of the (3 00 1)III ← (0 0 0) band. For this purpose, eight cylinders with various gas compositions were gravimetrically and volumetrically prepared within 2σ = 0.1 %, and these gas mixtures were introduced into the WS-CRDS analyzer calibrated against standards of ambient air composition. Depending on the gas composition, deviations between CRDS-determined and gravimetrically (or volumetrically) assigned CO2 concentrations ranged from -9.77 to 5.36 µmol mol-1; e.g., excess N2 exhibited a negative deviation, whereas excess Ar showed a positive one. The total pressure broadening coefficients (TPBCs) obtained from the composition of N2, O2, and Ar corrected the deviations to within -0.5 to 0.6 µmol mol-1, while the corresponding values were -0.43 to 1.43 µmol mol-1 when only N2-induced PBCs were considered. The use of TPBCs thus allowed the deviations to be corrected to ~0.15 %. Furthermore, the above correction linearly shifted the CRDS responses over a wide range of TPBCs, from 0.065 to 0.081 cm-1 atm-1. Thus, accurate measurements using optical intensity-based techniques such as WS-CRDS require TPBC-based instrument calibration or the use of standards prepared in the same background composition as ambient air.
Wind speed statistics for Goldstone, California, anemometer sites
NASA Technical Reports Server (NTRS)
Berg, M.; Levy, R.; Mcginness, H.; Strain, D.
1981-01-01
An exploratory wind survey at an antenna complex was summarized statistically for application to future windmill designs. Data were collected at six locations from a total of 10 anemometers. Statistics include means, standard deviations, cubes, pattern factors, correlation coefficients, and exponents for power law profile of wind speed. Curves presented include: mean monthly wind speeds, moving averages, and diurnal variation patterns. It is concluded that three of the locations have sufficiently strong winds to justify consideration for windmill sites.
NASA Astrophysics Data System (ADS)
Lily; Laila, L.; Prasetyo, B. E.
2018-03-01
A selective, reproducible, effective, sensitive, simple and fast High-Performance Liquid Chromatography (HPLC) method was developed, optimized and validated to analyze 25-Desacetyl Rifampicin (25-DR) in human urine from tuberculosis patients. The separation was performed on an Agilent Technologies HPLC with an Agilent Eclipse XDB-C18 column and a mobile phase of 65:35 v/v methanol : 0.01 M sodium phosphate buffer pH 5.2, at 254 nm and a flow rate of 0.8 ml/min. The mean retention time was 3.016 minutes. The method was linear from 2–10 μg/ml 25-DR with a correlation coefficient of 0.9978. The standard deviation, relative standard deviation and coefficient of variation for 2, 6, and 10 μg/ml 25-DR were 0–0.0829, 0–3.1752, and 0–0.0317%, respectively. The recoveries for 5, 7, and 9 μg/ml 25-DR were 80.8661%, 91.3480% and 111.1457%, respectively. The limits of detection (LoD) and quantification (LoQ) were 0.51 and 1.7 μg/ml, respectively. The method fulfilled the International Conference on Harmonization (ICH) bioanalytical method validation guidelines, which include the parameters of specificity, linearity, precision, accuracy, LoD, and LoQ. The developed method is suitable for pharmacokinetic analysis of various concentrations of 25-DR in human urine.
Assessment of Uncertainties Related to Seismic Hazard Using Fuzzy Analysis
NASA Astrophysics Data System (ADS)
Jorjiashvili, N.; Yokoi, T.; Javakhishvili, Z.
2013-05-01
Seismic hazard analysis has become a very important issue over the last few decades. New technologies and improved data have helped many scientists understand where and why earthquakes happen and the physics behind them, and they have begun to appreciate the role of uncertainty in seismic hazard analysis. However, how to handle that uncertainty remains a significant problem, and the same lack of information makes it difficult to quantify uncertainty accurately. Attenuation curves are usually obtained statistically by regression analysis, and statistical and probabilistic analyses show overlapping results for the site coefficients. This overlap occurs not only at the border between two neighboring classes but also among more than three classes. Although the analysis begins by classifying sites in geological terms, the resulting site coefficients are not cleanly separated by class. In the present study this problem is addressed using fuzzy set theory: membership functions avoid the ambiguities at the borders between neighboring classes. The fuzzy approach is applied to southern California in the conventional way, and the standard deviations describing the variation within each site class obtained by fuzzy set theory and by the classical approach are compared. The results show that, when data for hazard assessment are insufficient, site classification based on fuzzy set theory yields smaller standard deviations than the classical approach, indicating reduced uncertainty.
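The membership-function idea can be illustrated with overlapping trapezoidal functions, so that a site near a class boundary receives partial membership in two classes instead of a hard assignment; the Vs30 values and class breakpoints below are purely illustrative and are not the classes or parameters used in the study.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, rising to 1 on [a, b],
    flat on [b, c], falling back to 0 on [c, d]."""
    return np.interp(x, [a, b, c, d], [0.0, 1.0, 1.0, 0.0])

# hypothetical Vs30 values (m/s) and purely illustrative class breakpoints
vs30 = np.array([180.0, 340.0, 520.0])
soft = trapezoid(vs30, 100, 150, 250, 400)
stiff = trapezoid(vs30, 250, 400, 560, 800)
rock = trapezoid(vs30, 560, 800, 1500, 2000)
# a site near a boundary (e.g., 340 m/s) has partial membership in two classes
print(np.round(np.column_stack([soft, stiff, rock]), 2))
```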
Refractive indices at visible wavelengths of soot emitted from buoyant turbulent diffusion flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, J.S.; Krishnan, S.K.; Faeth, G.M.
1996-11-01
Measurements of the optical properties of soot, emphasizing refractive indices, are reported for visible wavelengths. The experiments considered soot in the fuel-lean (overfire) region of buoyant turbulent diffusion flames in the long residence time regime where soot properties are independent of position in the overfire region and residence time. Flames fueled with acetylene, propylene, ethylene and propane burning in still air provided a range of soot physical and structure properties. Measurements included soot composition, density, structure, gravimetric volume fraction, scattering properties and absorption properties. These data were analyzed to find soot fractal dimensions, refractive indices and dimensionless extinction coefficients, assuming Rayleigh-Debye-Gans scattering for polydisperse mass fractal aggregates (RDG-PFA theory). RDG-PFA theory was successfully evaluated, based on measured scattering patterns. Soot fractal dimensions were independent of both fuel type and wavelength, yielding a mean value of 1.77 with a standard deviation of 0.04. Refractive indices were independent of fuel type within experimental uncertainties and were in reasonably good agreement with earlier measurements for soot in the fuel-lean region of diffusion flames due to Dalzell and Sarofim (1969). Dimensionless extinction coefficients were independent of both fuel type and wavelength, yielding a mean value of 5.1 with a standard deviation of 0.5, which is lower than earlier measurements for reasons that still must be explained.
Zhang, Pei-Feng; Hu, Yuan-Man; Xiong, Zai-Ping; Liu, Miao
2011-02-01
Based on a 1:10000 aerial photo from 1997 and three QuickBird images from 2002, 2005, and 2008, and by using Barista software and GIS and RS techniques, the three-dimensional information of the residential community in Tiexi District of Shenyang was extracted, and the variation pattern of the three-dimensional landscape in the district during its reconstruction in 1997-2008 and the related affecting factors were analyzed with the indices, i.e., road density, greening rate, average building height, building height standard deviation, building coverage rate, floor area rate, building shape coefficient, population density, and per capita GDP. The results showed that in 1997-2008, the building area for industry decreased, that for commerce and other public affairs increased, and the area for residences, education, and medical care remained basically stable. The building number, building coverage rate, and building shape coefficient decreased, while the floor area rate, average building height, height standard deviation, road density, and greening rate increased. Within the limited space of the residential community, the carrying capacity for population and economic activity increased, and the environmental quality also improved to some extent. The degree of variation of average building height increased, but the building energy consumption decreased. Population growth and economic development had positive correlations with floor area rate, road density, and greening rate, but a negative correlation with building coverage rate.
Optimization of Regression Models of Experimental Data Using Confirmation Points
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that was already tested at regression model independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
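As a minimal illustration (not the author's implementation), the search metric can be computed for a candidate math-term combination by fitting only the first subset, forming PRESS residuals from the hat-matrix leverages, and comparing their standard deviation with that of the confirmation-point residuals; `X_fit`, `y_fit`, `X_conf`, and `y_conf` are hypothetical arrays of regressors and responses.

```python
import numpy as np

def search_metric(X_fit, y_fit, X_conf, y_conf):
    """Candidate-model score: the larger of (i) the standard deviation of the
    PRESS residuals of the fitting points and (ii) the standard deviation of
    the response residuals of the confirmation points."""
    # Ordinary least-squares fit on the fitting subset only.
    coef, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)
    resid_fit = y_fit - X_fit @ coef

    # PRESS residuals: ordinary residuals inflated by the hat-matrix leverages.
    H = X_fit @ np.linalg.pinv(X_fit.T @ X_fit) @ X_fit.T
    press = resid_fit / (1.0 - np.diag(H))

    # Confirmation points never enter the fit; only their residuals are scored.
    resid_conf = y_conf - X_conf @ coef

    return max(np.std(press, ddof=1), np.std(resid_conf, ddof=1))
```

Under this convention, the candidate term combination with the smallest metric value would be preferred during the optimization.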
NASA Astrophysics Data System (ADS)
Wu, Zhisheng; Tao, Ou; Cheng, Wei; Yu, Lu; Shi, Xinyuan; Qiao, Yanjiang
2012-02-01
This study demonstrated that near-infrared chemical imaging (NIR-CI) was a promising technology for visualizing the spatial distribution and homogeneity of Compound Liquorice Tablets. The starch distribution (indirectly, plant extraction) could be spatially determined using the basic analysis of correlation between analytes (BACRA) method. The correlation coefficients between the starch spectrum and the spectrum of each sample were greater than 0.95. Building on the accurate determination of starch distribution, a method to assess distribution homogeneity was proposed based on histogram graphs. The result demonstrated that the starch distribution in sample 3 was relatively heterogeneous according to four statistical parameters. Furthermore, the agglomerate domains in each tablet were detected using score image layers from principal component analysis (PCA). Finally, a novel method named Standard Deviation of Macropixel Texture (SDMT) was introduced to detect agglomerates and heterogeneity based on the binary image. Each binary image was divided into macropixels of different side lengths, and the number of zero values in each macropixel was counted to calculate the standard deviation. Additionally, a curve was fitted to the relationship between the standard deviation and the macropixel side length. The results demonstrated inter-tablet heterogeneity of both the starch and total compound distributions; at the same time, the similarity of the starch distribution and the inconsistency of the total compound distribution within tablets were indicated by the values of the slope and intercept parameters of the curve.
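The macropixel statistic can be sketched as follows under stated assumptions: the tablet image has already been binarized, tiles are non-overlapping squares, and the function and variable names are hypothetical rather than taken from the paper.

```python
import numpy as np

def sdmt(binary_img: np.ndarray, macropixel_size: int) -> float:
    """Standard deviation of zero-pixel counts over non-overlapping macropixels."""
    h, w = binary_img.shape
    counts = []
    for i in range(0, h - macropixel_size + 1, macropixel_size):
        for j in range(0, w - macropixel_size + 1, macropixel_size):
            tile = binary_img[i:i + macropixel_size, j:j + macropixel_size]
            counts.append(np.count_nonzero(tile == 0))
    return float(np.std(counts, ddof=1))

# Curve of SDMT versus macropixel side length, as in the fitted graph described above.
# img = (chemical_image > threshold).astype(int)   # hypothetical binarization step
# curve = {L: sdmt(img, L) for L in (2, 4, 8, 16, 32)}
```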
Variability in Wechsler Adult Intelligence Scale-IV subtest performance across age.
Wisdom, Nick M; Mignogna, Joseph; Collins, Robert L
2012-06-01
Normal Wechsler Adult Intelligence Scale (WAIS)-IV performance relative to average normative scores alone can be an oversimplification as this fails to recognize disparate subtest heterogeneity that occurs with increasing age. The purpose of the present study is to characterize the patterns of raw score change and associated variability on WAIS-IV subtests across age groupings. Raw WAIS-IV subtest means and standard deviations for each age group were tabulated from the WAIS-IV normative manual along with the coefficient of variation (CV), a measure of score dispersion calculated by dividing the standard deviation by the mean and multiplying by 100. The CV further informs the magnitude of variability represented by each standard deviation. Raw mean scores predictably decreased across age groups. Increased variability was noted in Perceptual Reasoning and Processing Speed Index subtests, as Block Design, Matrix Reasoning, Picture Completion, Symbol Search, and Coding had CV percentage increases ranging from 56% to 98%. In contrast, Working Memory and Verbal Comprehension subtests were more homogeneous with Digit Span, Comprehension, Information, and Similarities percentage of the mean increases ranging from 32% to 43%. Little change in the CV was noted on Cancellation, Arithmetic, Letter/Number Sequencing, Figure Weights, Visual Puzzles, and Vocabulary subtests (<14%). A thorough understanding of age-related subtest variability will help to identify test limitations as well as further our understanding of cognitive domains which remain relatively steady versus those which steadily decline.
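For reference, the coefficient of variation used here is simply the standard deviation expressed as a percentage of the mean; the numbers in the snippet below are made up for illustration and are not WAIS-IV normative values.

```python
def coefficient_of_variation(mean: float, sd: float) -> float:
    """CV as a percentage: (SD / mean) * 100."""
    return sd / mean * 100.0

# Hypothetical raw-score summaries for one subtest in two age bands:
cv_young = coefficient_of_variation(mean=45.0, sd=9.0)   # 20.0
cv_old = coefficient_of_variation(mean=30.0, sd=9.0)     # 30.0 -> greater relative dispersion
```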
Health status convergence at the local level: empirical evidence from Austria
2011-01-01
Introduction Health is an important dimension of welfare comparisons across individuals, regions and states. Particularly from a long-term perspective, within-country convergence of the health status has rarely been investigated by applying methods well established in other scientific fields. In the following paper we study the relation between initial levels of the health status and its improvement at the local community level in Austria in the time period 1969-2004. Methods We use age standardized mortality rates from 2381 Austrian communities as an indicator for the health status and analyze the convergence/divergence of overall mortality for (i) the whole population, (ii) females, (iii) males and (iv) the gender mortality gap. Convergence/Divergence is studied by applying different concepts of cross-regional inequality (weighted standard deviation, coefficient of variation, Theil-Coefficient of inequality). Various econometric techniques (weighted OLS, Quantile Regression, Kendall's Rank Concordance) are used to test for absolute and conditional beta-convergence in mortality. Results Regarding sigma-convergence, we find rather mixed results. While the weighted standard deviation indicates an increase in equality for all four variables, the picture appears less clear when correcting for the decreasing mean in the distribution. However, we find highly significant coefficients for absolute and conditional beta-convergence between the periods. While these results are confirmed by several robustness tests, we also find evidence for the existence of convergence clubs. Conclusions The highly significant beta-convergence across communities might be caused by (i) the efforts to harmonize and centralize the health policy at the federal level in Austria since the 1970s, (ii) the diminishing returns of the input factors in the health production function, which might lead to convergence, as the general conditions (e.g. income, education etc.) improve over time, and (iii) the mobility of people across regions, as people tend to move to regions/communities which exhibit more favorable living conditions. JEL classification: I10, I12, I18 PMID:21864364
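The sigma-convergence measures named above have standard population-weighted forms; the sketch below uses generic textbook definitions, and the exact weighting in the paper may differ.

```python
import numpy as np

def weighted_sd(x, w):
    """Population-weighted standard deviation."""
    mean = np.average(x, weights=w)
    return np.sqrt(np.average((x - mean) ** 2, weights=w))

def weighted_cv(x, w):
    """Weighted SD divided by the weighted mean."""
    return weighted_sd(x, w) / np.average(x, weights=w)

def theil_index(x, w):
    """Weighted Theil T index of inequality (x must be strictly positive)."""
    mean = np.average(x, weights=w)
    ratio = x / mean
    return np.average(ratio * np.log(ratio), weights=w)

# mortality = np.array([...])   # age-standardized mortality rate per community
# pop = np.array([...])         # community population used as weight
# Declining values of these measures over time would indicate sigma-convergence.
```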
Intraobserver reliability of contact pachymetry in children.
Weise, Katherine K; Kaminski, Brett; Melia, Michele; Repka, Michael X; Bradfield, Yasmin S; Davitt, Bradley V; Johnson, David A; Kraker, Raymond T; Manny, Ruth E; Matta, Noelle S; Schloff, Susan
2013-04-01
Central corneal thickness (CCT) is an important measurement in the treatment and management of pediatric glaucoma and potentially of refractive error, but data regarding reliability of CCT measurement in children are limited. The purpose of this study was to evaluate the reliability of CCT measurement with the use of handheld contact pachymetry in children. We conducted a multicenter intraobserver test-retest reliability study of more than 3,400 healthy eyes in children aged from newborn to 17 years by using a handheld contact pachymeter (Pachmate DGH55; DGH Technology Inc, Exton, PA) in 2 clinical settings--with the use of topical anesthesia in the office and with the patient under general anesthesia in a surgical facility. The overall standard error of measurement, including only measurements with standard deviation ≤5 μm, was 8 μm; the corresponding coefficient of repeatability, or limits within which 95% of test-retest differences fell, was ±22.3 μm. However, standard error of measurement increased as CCT increased, from 6.8 μm for CCT less than 525 μm, to 12.9 μm for CCT 625 μm and greater. The standard error of measurement including measurements with standard deviation >5 μm was 10.5 μm. Age, sex, race/ethnicity group, and examination setting did not influence the magnitude of test-retest differences. CCT measurement reliability in children via the Pachmate DGH55 handheld contact pachymeter is similar to that reported for adults. Because thicker CCT measurements are less reliable than thinner measurements, a second measure may be helpful when the first exceeds 575 μm. Reliability is also improved by disregarding measurements with instrument-reported standard deviations >5 μm. Copyright © 2013 American Association for Pediatric Ophthalmology and Strabismus. Published by Mosby, Inc. All rights reserved.
Marchetti, Bárbara V; Candotti, Cláudia T; Raupp, Eduardo G; Oliveira, Eduardo B C; Furlanetto, Tássia S; Loss, Jefferson F
The purpose of this study was to assess a radiographic method for spinal curvature evaluation in children, based on spinous processes, and identify its normality limits. The sample consisted of 90 radiographic examinations of the spines of children in the sagittal plane. Thoracic and lumbar curvatures were evaluated using angular (apex angle [AA]) and linear (sagittal arrow [SA]) measurements based on the spinous processes. The same curvatures were also evaluated using the Cobb angle (CA) method, which is considered the gold standard. For concurrent validity (AA vs CA), Pearson's product-moment correlation coefficient, root-mean-square error, the Pitman-Morgan test, and Bland-Altman analysis were used. For reproducibility (AA, SA, and CA), the intraclass correlation coefficient, standard error of measurement, and minimal detectable change measurements were used. A significant correlation was found between CA and AA measurements, as was a low root-mean-square error. The mean difference between the measurements was 0° for thoracic and lumbar curvatures, and the mean standard deviations of the differences were ±5.9° and ±6.9°, respectively. The intraclass correlation coefficients of AA and SA were similar to or higher than the gold standard (CA). The standard error of measurement and minimal detectable change of the AA were always lower than those of the CA. This study determined the concurrent validity, as well as intra- and interrater reproducibility, of the radiographic measurements of kyphosis and lordosis in children. Copyright © 2017. Published by Elsevier Inc.
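The reported reliability statistics can be related by standard formulas (a generic sketch, not the authors' code): the standard error of measurement is derived from the sample SD and the ICC, and the minimal detectable change follows from the SEM; the example numbers are hypothetical.

```python
import math

def sem(sd: float, icc: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value: float) -> float:
    """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Hypothetical illustration: angle SD of 6.0 degrees and ICC of 0.90
s = sem(6.0, 0.90)   # ~1.90 degrees
print(mdc95(s))      # ~5.3 degrees
```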
Stoliker, Deborah L.; Liu, Chongxuan; Kent, Douglas B.; Zachara, John M.
2013-01-01
Rates of U(VI) release from individual dry-sieved size fractions of a field-aggregated, field-contaminated composite sediment from the seasonally saturated lower vadose zone of the Hanford 300-Area were examined in flow-through reactors to maintain quasi-constant chemical conditions. The principal source of variability in equilibrium U(VI) adsorption properties of the various size fractions was the impact of variable chemistry on adsorption. This source of variability was represented using surface complexation models (SCMs) with different stoichiometric coefficients with respect to hydrogen ion and carbonate concentrations for the different size fractions. A reactive transport model incorporating equilibrium expressions for cation exchange and calcite dissolution, along with rate expressions for aerobic respiration and silica dissolution, described the temporal evolution of solute concentrations observed during the flow-through reactor experiments. Kinetic U(VI) desorption was well described using a multirate SCM with an assumed lognormal distribution for the mass-transfer rate coefficients. The estimated mean and standard deviation of the rate coefficients were the same for all <2 mm size fractions but differed for the 2–8 mm size fraction. Micropore volumes, assessed using t-plots to analyze N2 desorption data, were also the same for all dry-sieved <2 mm size fractions, indicating a link between micropore volumes and mass-transfer rate properties. Pore volumes for dry-sieved size fractions exceeded values for the corresponding wet-sieved fractions. We hypothesize that repeated field wetting and drying cycles lead to the formation of aggregates and/or coatings containing (micro)pore networks which provided an additional mass-transfer resistance over that associated with individual particles. The 2–8 mm fraction exhibited a larger average and standard deviation in the distribution of mass-transfer rate coefficients, possibly caused by the abundance of microporous basaltic rock fragments.
Bower, W F; Vlantis, A C; Chung, T M L; Cheung, S K C; Bjordal, K; van Hasselt, C A
2009-07-01
High convergent and discriminant validity between subscales was achieved after the translation of EORTC QLQ-H&N35 into Cantonese. Most subscales were assessing distinct components of quality of life (QoL). The study aimed to translate the EORTC QLQ-H&N35 cancer module into Cantonese and to confirm validity and reliability for use in a Hong Kong head and neck (H&N) cancer population. An ethnocentric forward-backward translation of EORTC QLQ-H&N35 was conducted by bilingual head and neck health professionals. Discrepancies were identified and problematic wording and concepts revised. Further review preceded pilot testing in 119 postoperative H&N cancer patients. Internal consistency within each subscale, convergent and discriminant validity to check the item relevance and item representativeness within and between subscales were examined. Mean and standard deviations of each subscale and single item and Cronbach's alpha coefficients for subscales were calculated. Six of seven subscales achieved standard reliability (Cronbach's alpha coefficient >0.7). Correlation coefficients between an item and its own subscale were significantly higher than the coefficients with other subscales. Scaling success was found in all subscales. Pearson's correlation coefficient between subscales was <0.70, except between the subscales swallowing and trouble with social eating (r = 0.795), and speech problems and social contact (r = 0.754).
Altuntepe, Emrah; Emel'yanenko, Vladimir N; Forster-Rotgers, Maximilian; Sadowski, Gabriele; Verevkin, Sergey P; Held, Christoph
2017-10-01
Levulinic acid was esterified with methanol, ethanol, and 1-butanol with the final goal to predict the maximum yield of these equilibrium-limited reactions as a function of medium composition. In a first step, standard reaction data (standard Gibbs energy of reaction, Δ_Rg⁰) were determined from experimental formation properties. Unexpectedly, these Δ_Rg⁰ values strongly deviated from data obtained with classical group contribution methods, which are typically used if experimental standard data are not available. In a second step, reaction equilibrium concentrations obtained from esterification catalyzed by Novozym 435 at 323.15 K were measured, and the corresponding activity coefficients of the reacting agents were predicted with perturbed-chain statistical associating fluid theory (PC-SAFT). The so-obtained thermodynamic activities were used to determine Δ_Rg⁰ at 323.15 K. These results could be used to cross-validate Δ_Rg⁰ from experimental formation data. In a third step, reaction-equilibrium experiments showed that the equilibrium position of the reactions under consideration depends strongly on the concentration of water and on the ratio of levulinic acid to alcohol in the initial reaction mixtures. The maximum yield of the esters was calculated using the Δ_Rg⁰ data from this work and activity coefficients of the reacting agents predicted with PC-SAFT for varying feed compositions of the reaction mixtures. The use of the new Δ_Rg⁰ data combined with PC-SAFT gave good agreement with the measured yields, while predictions based on Δ_Rg⁰ values obtained with group contribution methods showed large deviations from the experimental yields.
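A compact way to see how PC-SAFT activity coefficients feed into the standard Gibbs energy of reaction is the generic relation Δ_Rg⁰ = -RT ln K_a, with K_a built from equilibrium mole fractions and activity coefficients; the sketch below uses placeholder composition and activity-coefficient values, not the paper's data.

```python
import math

R = 8.314462618  # J/(mol K)

def delta_rg0(T, x, gamma, nu):
    """Standard Gibbs energy of reaction from equilibrium activities.

    x, gamma, nu: dicts of mole fraction, activity coefficient, and signed
    stoichiometric coefficient (products positive, reactants negative).
    """
    ln_Ka = sum(nu[i] * math.log(x[i] * gamma[i]) for i in nu)
    return -R * T * ln_Ka

# Hypothetical esterification LA + EtOH <-> ester + H2O at 323.15 K:
nu = {"LA": -1, "EtOH": -1, "ester": 1, "H2O": 1}
x = {"LA": 0.10, "EtOH": 0.40, "ester": 0.25, "H2O": 0.25}   # placeholder composition
gamma = {"LA": 1.2, "EtOH": 1.1, "ester": 1.5, "H2O": 2.0}   # placeholder PC-SAFT output
print(delta_rg0(323.15, x, gamma, nu) / 1000, "kJ/mol")
```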
Optimization of Adaptive Intraply Hybrid Fiber Composites with Reliability Considerations
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1994-01-01
The reliability with bounded distribution parameters (mean, standard deviation) was maximized and the reliability-based cost was minimized for adaptive intra-ply hybrid fiber composites by using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties including those in constituent material properties, fabrication variables, structure geometry, and control-related parameters. Probabilistic sensitivity factors were computed and used in the optimization procedures. For actuated change in the angle of attack of an airfoil-like composite shell structure with an adaptive torque plate, the reliability was maximized to 0.9999 probability, with constraints on the mean and standard deviation of the actuation material volume ratio (percentage of actuation composite material in a ply) and the actuation strain coefficient. The reliability-based cost was minimized for an airfoil-like composite shell structure with an adaptive skin and a mean actuation material volume ratio as the design parameter. At a 0.9 mean actuation material volume ratio, the minimum cost was obtained.
Daily Magnesium Intake and Serum Magnesium Concentration among Japanese People
Akizawa, Yoriko; Koizumi, Sadayuki; Itokawa, Yoshinori; Ojima, Toshiyuki; Nakamura, Yosikazu; Tamura, Tarou; Kusaka, Yukinori
2008-01-01
Background The vitamins and minerals that are deficient in the daily diet of a normal adult remain unknown. To answer this question, we conducted a population survey focusing on the relationship between dietary magnesium intake and serum magnesium level. Methods The subjects were 62 individuals from Fukui Prefecture who participated in the 1998 National Nutrition Survey. The survey investigated the physical status, nutritional status, and dietary data of the subjects. Holidays and special occasions were avoided, and a day when people are most likely to be on an ordinary diet was selected as the survey date. Results The mean (±standard deviation) daily magnesium intake was 322 (±132), 323 (±163), and 322 (±147) mg/day for men, women, and the entire group, respectively. The mean (±standard deviation) serum magnesium concentration was 20.69 (±2.83), 20.69 (±2.88), and 20.69 (±2.83) ppm for men, women, and the entire group, respectively. The distribution of serum magnesium concentration was normal. Dietary magnesium intake showed a log-normal distribution, which was then transformed by logarithmic conversion for examining the regression coefficients. The slope of the regression line between the serum magnesium concentration (Y ppm) and daily magnesium intake (X mg) was determined using the formula Y = 4.93 (log10X) + 8.49. The coefficient of correlation (r) was 0.29. A regression line (Y = 14.65X + 19.31) was observed between the daily intake of magnesium (Y mg) and serum magnesium concentration (X ppm). The coefficient of correlation was 0.28. Conclusion The daily magnesium intake correlated with serum magnesium concentration, and a linear regression model between them was proposed. PMID:18635902
NASA Astrophysics Data System (ADS)
Peng, Zhenyang; Tian, Fuqiang; Wu, Jingwei; Huang, Jiesheng; Hu, Hongchang; Darnault, Christophe J. G.
2016-09-01
A one-dimensional numerical model of heat and water transport in freezing soils is developed by assuming that ice-water interfaces are not necessarily in equilibrium. The Clapeyron equation, which is derived from a static ice-water interface using thermal equilibrium theory, cannot be readily applied to a dynamic system such as freezing soils. Therefore, we handled the redistribution of liquid water with the Richards equation. In this application, the sink term is replaced by the freezing rate of pore water, which is proportional to the extent of supercooling and to the water content available for freezing through a coefficient, β. Three short-term laboratory column simulations show reasonable agreement with observations, with the standard error of the simulated water content ranging between 0.007 and 0.011 cm³ cm⁻³, showing improved accuracy over other models that assume equilibrium ice-water interfaces. Simulation results suggest that when the freezing front is fixed at a specific depth, the deviation of the ice-water interface from equilibrium at this location will increase with time. However, this deviation tends to weaken when the freezing front slowly penetrates to a greater depth, accompanied by thinner soil layers with significant deviation. The coefficient β plays an important role in the simulation of heat and water transport. A smaller β results in a larger deviation of the ice-water interface from equilibrium and backward estimation of the freezing front. It also leads to an underestimation of water content in soils that were previously frozen at a rapid freezing rate, and an overestimation of water content in the rest of the soil.
Self-broadened widths and shifts of ¹²C¹⁶O₂: 4750-7000 cm⁻¹
NASA Astrophysics Data System (ADS)
Toth, R. A.; Brown, L. R.; Miller, C. E.; Devi, V. Malathy; Benner, D. Chris
2006-10-01
In the previous paper, we report line strength measurements for 58 bands of ¹²CO₂ between 4550 and 7000 cm⁻¹ [R.A. Toth, L.R. Brown, C.E. Miller, V. Malathy Devi, D. Chris Benner, J. Mol. Spectrosc., this issue, doi:10.1016/j.jms.2006.008.001]. In the present study, self-broadened width and self-induced pressure shift coefficients are determined in two intervals: (a) between 4750 and 5400 cm⁻¹ for bands of the Fermi triad (20011 ← 00001, 20012 ← 00001, 20013 ← 00001), three corresponding hot bands (21111 ← 01101, 21112 ← 01101, 21113 ← 01101) and the 01121 ← 00001 combination band; (b) between 6100 and 7000 cm⁻¹ for the Fermi tetrad (30014 ← 00001, 30013 ← 00001, 30012 ← 00001, 30011 ← 00001), two associated hot bands (31113 ← 01101, 31112 ← 01101), as well as 00031 ← 00001 and its hot band 01131 ← 01101. Least-squares fits of the experimental width and pressure shift coefficients are modeled using empirical expressions: b₀ = exp[Σᵢ a(i)x(i)] for widths, where x(1) = 1, x(2) = m, x(3) = m², x(4) = m³, x(5) = m⁴, x(6) = 1/m; and d₀ = Σᵢ a(i)x(i) for pressure shifts, where x(1) = 1, x(2) = 1/m, x(3) = m, x(4) = m², x(5) = 1/m², x(6) = 1/m³, x(7) = m³, x(8) = |m|/m. The modeled width coefficients generally agree with the experimental values with standard deviations of less than 1%, while the standard deviations of the modeled values for the pressure-induced shift coefficients range between 2.3% and 6.7%. The largest percentage error is associated with the system of the three hot bands: 21111 ← 01101, 21112 ← 01101, and 21113 ← 01101. It is observed that transitions with the same rotational quantum numbers have slightly different widths in some of the bands. As expected, pressure-induced shift coefficients vary as a function of the band center, but there are also subtle differences from band to band for transitions with the same rotational quanta.
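A fit of this empirical form reduces to linear least squares on ln b₀ against the chosen basis functions of the rotational index m; the sketch below is generic, and the basis set shown is a placeholder rather than the exact set used in the paper.

```python
import numpy as np

def fit_width_model(m, b0, basis):
    """Least-squares coefficients a(i) for ln(b0) = sum_i a(i) * basis_i(m)."""
    X = np.column_stack([f(m) for f in basis])
    a, *_ = np.linalg.lstsq(X, np.log(b0), rcond=None)
    return a

# Placeholder basis functions of the signed rotational index m (not the paper's exact set):
basis = [lambda m: np.ones_like(m, dtype=float),
         lambda m: m,
         lambda m: m ** 2,
         lambda m: 1.0 / m]

# m = np.array([...], dtype=float); b0 = np.array([...])   # measured width coefficients
# a = fit_width_model(m, b0, basis)
# modeled = np.exp(np.column_stack([f(m) for f in basis]) @ a)
# percent_sd = 100 * np.std((modeled - b0) / b0, ddof=1)   # compare with the <1% quoted above
```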
Effects of vegetation canopy on the radar backscattering coefficient
NASA Technical Reports Server (NTRS)
Mo, T.; Blanchard, B. J.; Schmugge, T. J.
1983-01-01
Airborne L- and C-band scatterometer data, taken over both vegetation-covered and bare fields, were systematically analyzed and theoretically reproduced, using a recently developed model for calculating radar backscattering coefficients of rough soil surfaces. The results show that the model can reproduce the observed angular variations of radar backscattering coefficient quite well via a least-squares fit method. Best fits to the data provide estimates of the statistical properties of the surface roughness, which is characterized by two parameters: the standard deviation of surface height, and the surface correlation length. In addition, the processes of vegetation attenuation and volume scattering require two canopy parameters, the canopy optical thickness and a volume scattering factor. Canopy parameter values for individual vegetation types, including alfalfa, milo and corn, were also determined from the best-fit results. The uncertainties in the scatterometer data were also explored.
Respirable particulate monitoring with remote sensors. (Public health ecology: Air pollution)
NASA Technical Reports Server (NTRS)
Severs, R. K.
1974-01-01
The feasibility of monitoring atmospheric aerosols in the respirable range from air or space platforms was studied. Secondary reflectance targets were located in the industrial area and near Galveston Bay. Multichannel remote sensor data were utilized to calculate the aerosol extinction coefficient and thus determine the aerosol size distribution. Houston, Texas air sampling network high-volume data were utilized to generate computer isopleth maps of suspended particulates and to establish the mass loading of the atmosphere. In addition, a five-channel nephelometer and a multistage particulate air sampler were used to collect data. The extinction coefficient determined from remote sensor data proved more representative of wide areal phenomena than that calculated from on-site measurements. It was also demonstrated that a significant reduction in the standard deviation of the extinction coefficient could be achieved by reducing the bandwidths used in the remote sensor.
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Costa, S. R. X.; Paiao, L. B. F.; Mendonca, F. J.; Shimabukuro, Y. E.; Duarte, V.
1983-01-01
The two phase sampling technique was applied to estimate the area cultivated with sugar cane in an approximately 984 sq km pilot region of Campos. Correlation between existing aerial photography and LANDSAT data was used. The two phase sampling technique corresponded to 99.6% of the results obtained by aerial photography, taken as ground truth. This estimate has a standard deviation of 225 ha, which constitutes a coefficient of variation of 0.6%.
Thermoelectric property measurements with computer controlled systems
NASA Technical Reports Server (NTRS)
Chmielewski, A. B.; Wood, C.
1984-01-01
A joint JPL-NASA program to develop an automated system to measure the thermoelectric properties of newly developed materials is described. Consideration is given to the difficulties created by signal drift in measurements of Hall voltage and the Large Delta T Seebeck coefficient. The benefits of a computerized system were examined with respect to error reduction and time savings for human operators. It is shown that the time required to measure Hall voltage can be reduced by a factor of 10 when a computer is used to fit a curve to the ratio of the measured signal and its standard deviation. The accuracy of measurements of the Large Delta T Seebeck coefficient and thermal diffusivity was also enhanced by the use of computers.
Peer effects in risk aversion.
Balsa, Ana I; Gandelman, Néstor; González, Nicolás
2015-01-01
We estimate peer effects in risk attitudes in a sample of high school students. Relative risk aversion is elicited from surveys administered at school. Identification of peer effects is based on parents not being able to choose the class within the school of their choice, and on the use of instrumental variables conditional on school-grade fixed effects. We find a significant and quantitatively large impact of peers' risk attitudes on a male individual's coefficient of risk aversion. Specifically, a one standard deviation increase in the group's coefficient of risk aversion increases an individual's risk aversion by 43%. Our findings shed light on the origin and stability of risk attitudes and, more generally, on the determinants of economic preferences. © 2014 Society for Risk Analysis.
Petsch, Harold E.
1979-01-01
Statistical summaries of daily streamflow data for 246 stations east of the Continental Divide in Colorado and adjacent States are presented in this report. Duration tables, high-flow sequence tables, and low-flow sequence tables provide information about daily mean discharge. The mean, variance, standard deviation, skewness, and coefficient of variation are provided for monthly and annual flows. Percentages of average flow are provided for monthly flows and first-order serial-correlation coefficients are provided for annual flows. The text explains the nature and derivation of the data and illustrates applications of the tabulated information by examples. The data may be used by agencies and individuals engaged in water studies. (USGS)
Multi-technique comparison of troposphere zenith delays and gradients during CONT08
NASA Astrophysics Data System (ADS)
Teke, Kamil; Böhm, Johannes; Nilsson, Tobias; Schuh, Harald; Steigenberger, Peter; Dach, Rolf; Heinkelmann, Robert; Willis, Pascal; Haas, Rüdiger; García-Espada, Susana; Hobiger, Thomas; Ichikawa, Ryuichi; Shimizu, Shingo
2011-07-01
CONT08 was a 15-day campaign of continuous Very Long Baseline Interferometry (VLBI) sessions during the second half of August 2008 carried out by the International VLBI Service for Geodesy and Astrometry (IVS). In this study, VLBI estimates of troposphere zenith total delays (ZTD) and gradients during CONT08 were compared with those derived from observations with the Global Positioning System (GPS), Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), and water vapor radiometers (WVR) co-located with the VLBI radio telescopes. Similar geophysical models were used for the analysis of the space geodetic data, whereas the parameterization for the least-squares adjustment of the space geodetic techniques was optimized for each technique. In addition to space geodetic techniques and WVR, ZTD and gradients from numerical weather models (NWM) were used from the European Centre for Medium-Range Weather Forecasts (ECMWF) (all sites), the Japan Meteorological Agency (JMA) and Cloud Resolving Storm Simulator (CReSS) (Tsukuba), and the High Resolution Limited Area Model (HIRLAM) (European sites). Biases, standard deviations, and correlation coefficients were computed between the troposphere estimates of the various techniques for all eleven CONT08 co-located sites. ZTD from space geodetic techniques generally agree at the sub-centimetre level during CONT08, and—as expected—the best agreement is found for intra-technique comparisons: between the Vienna VLBI Software and the combined IVS solutions as well as between the Center for Orbit Determination (CODE) solution and an IGS PPP time series; both intra-technique comparisons show standard deviations of about 3-6 mm. The best inter-technique agreement of ZTD among the space geodetic techniques during CONT08 is found between the combined IVS and the IGS solutions with a mean standard deviation of about 6 mm over all sites, whereas the agreement with numerical weather models is between 6 and 20 mm. The standard deviations are generally larger at low latitude sites because of higher humidity, and the latter is also the reason why the standard deviations are larger at northern hemisphere stations during CONT08 in comparison to CONT02, which was observed in October 2002. The assessment of the troposphere gradients from the different techniques is not as clear because of different time intervals, different estimation properties, or different observables. However, the best inter-technique agreement is found between the IVS combined gradients and the GPS solutions with standard deviations between 0.2 and 0.7 mm.
Repeatability Modeling for Wind-Tunnel Measurements: Results for Three Langley Facilities
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Houlden, Heather P.
2014-01-01
Data from extensive check standard tests of seven measurement processes in three NASA Langley Research Center wind tunnels are statistically analyzed to test a simple model previously presented in 2000 for characterizing short-term, within-test and across-test repeatability. The analysis is intended to support process improvement and development of uncertainty models for the measurements. The analysis suggests that the repeatability can be estimated adequately as a function of only the test section dynamic pressure over a two-orders-of-magnitude dynamic pressure range. As expected for low instrument loading, short-term coefficient repeatability is determined by the resolution of the instrument alone (air off). However, as previously pointed out, for the highest dynamic pressure range the coefficient repeatability appears to be independent of dynamic pressure, thus presenting a lower floor for the standard deviation for all three time frames. The simple repeatability model is shown to be adequate for all of the cases presented and for all three time frames.
Fenske, Martin
2008-01-01
The present work describes a specific and rapid determination of cortisol in human plasma. The method includes liquid-liquid extraction of plasma samples, thin-layer chromatography (TLC) of ethanolic extracts on aluminium foil-backed silica gel 60 TLC plates, derivatization of cortisol with isonicotinic acid hydrazide, and densitometric measurement of the fluorescence intensity of cortisol hydrazone. The fluorescence was linearly related to cortisol amounts; the correlation coefficients of standard curve plots were r > 0.99. The coefficient of variation ranged between 2.8-7.9% (20 ng, within-assay/between-assay variation) and 1.6-6.8% (80 ng, within-assay/between-assay variation). The recovery of cortisol from plasma spiked with 21-deoxycortisol was 85% ± 4%. Cortisol concentration in the plasma was 66 ± 32 ng/mL (mean ± standard deviation, n = 24). The advantage of this method is its simplicity in separating cortisol from other steroids by TLC, its specificity (formation of cortisol hydrazone), and the rapid quantitation of cortisol by densitometry.
NASA Astrophysics Data System (ADS)
Chen, Qidan; Chen, Qixian; Wu, Fei; Liao, Jia; Zhao, Xi
2018-02-01
A method for detecting DEHP and DBP by high performance liquid chromatography coupled with ultraviolet detection (HPLC-UV) was developed and applied to the analysis of local water sources from agricultural, industrial, and residential areas. Under the optimized sample pretreatment and detection conditions, DEHP and DBP were well separated and detected within 4 min. The detection limits of DBP and DEHP were 0.002 mg/L and 0.006 mg/L, respectively, which meet the Chinese National Standard limits for drinking water quality. The linear correlation coefficients of the DBP and DEHP standard calibration curves were 0.9998 and 0.9995. The linear range of DBP was 0.020-20.0 mg/L, with a standard deviation of 0.560%-5.07%, and the linear range of DEHP was 0.060-15.0 mg/L, with a standard deviation of 0.546%-5.74%. Ten water samples from the Jinwan district of Zhuhai in Guangdong province, China, were analyzed. The PAE amounts found in the water sources from industrial areas were higher than those from the agricultural and residential areas; industry has grown rapidly in the district in recent years, and more attention should be paid to the increasing risk of water source pollution.
Predictive model for disinfection by-product in Alexandria drinking water, northern west of Egypt.
Abdullah, Ali M; Hussona, Salah El-dien
2013-10-01
Chlorine has been utilized in the early stages of water treatment processes as a disinfectant. Disinfection of drinking water reduces the risk of pathogenic infection but may pose a chemical threat to human health due to disinfection residues and their by-products (DBP) when the organic and inorganic precursors are present in water. In the last two decades, many modeling attempts have been made to predict the occurrence of DBP in drinking water. Models have been developed based on data generated in laboratory-scale and field-scale investigations. The objective of this paper is to develop a predictive model for DBP formation in the Alexandria governorate, located in the northwest of Egypt, based on field-scale investigations as well as laboratory-controlled experimentations. The present study showed that the correlation coefficient between predicted and measured trihalomethanes (THM) was R² = 0.88; the minimum deviation percentage between predicted and measured THM was 0.8%, the maximum deviation percentage was 89.3%, and the average deviation was 17.8%. The correlation coefficient between predicted and measured dichloroacetic acid (DCAA) was R² = 0.98; the minimum deviation percentage was 1.3%, the maximum deviation percentage was 47.2%, and the average deviation was 16.6%. In addition, the correlation coefficient between predicted and measured trichloroacetic acid (TCAA) was R² = 0.98; the minimum deviation percentage was 4.9%, the maximum deviation percentage was 43.0%, and the average deviation was 16.0%.
Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A
2013-07-01
Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α₀f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α₀ and β estimates, and by the overall power-law fit error (FE). For parameter-estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter-estimation regions, as in parametric image formation, the bias and standard deviation of the α₀ and β estimates depended on the size of the parameter-estimation region. Here, the multitaper method reduced the standard deviation of the α₀ and β estimates compared with those obtained using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
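Because the power law is linear in log-log coordinates, α₀ and β can be recovered by ordinary least squares once per-frequency attenuation estimates are available; the sketch below is generic (synthetic data, not the authors' processing chain).

```python
import numpy as np

def fit_power_law(freq_mhz, alpha_db_cm):
    """Fit alpha(f) = alpha0 * f**beta by linear regression of log(alpha) on log(f)."""
    logf = np.log(freq_mhz)
    loga = np.log(alpha_db_cm)
    beta, log_alpha0 = np.polyfit(logf, loga, 1)   # slope, intercept
    return np.exp(log_alpha0), beta

# Hypothetical per-frequency attenuation estimates over the usable bandwidth:
f = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
alpha = 0.5 * f ** 1.1 * np.exp(np.random.normal(0, 0.03, f.size))   # synthetic data
alpha0, beta = fit_power_law(f, alpha)
fit_error = np.mean(np.abs(alpha0 * f ** beta - alpha) / alpha)      # overall power-law fit error
```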
Resistance Training Increases the Variability of Strength Test Scores
2009-06-08
standard deviations for pretest and posttest strength measurements. This information was recorded for every strength test used in a total of 377 samples...significant if the posttest standard deviation consistently was larger than the pretest standard deviation. This condition could be satisfied even if...the difference in the standard deviations was small. For example, the posttest standard deviation might be 1% larger than the pretest standard
Meng, Jie; Zhu, Lijing; Zhu, Li; Wang, Huanhuan; Liu, Song; Yan, Jing; Liu, Baorui; Guan, Yue; Ge, Yun; He, Jian; Zhou, Zhengyang; Yang, Xiaofeng
2016-10-22
To explore the role of apparent diffusion coefficient (ADC) histogram shape related parameters in the early assessment of treatment response during the concurrent chemo-radiotherapy (CCRT) course of advanced cervical cancers. This prospective study was approved by the local ethics committee and informed consent was obtained from all patients. Thirty-two patients with advanced cervical squamous cell carcinomas underwent diffusion weighted magnetic resonance imaging (b values: 0 and 800 s/mm²) before CCRT, at the end of the 2nd and 4th weeks during CCRT and immediately after CCRT completion. Whole-lesion ADC histogram analysis generated several histogram shape related parameters including skewness, kurtosis, s-sDav, width, standard deviation, as well as first-order entropy and second-order entropies. The averaged ADC histograms of the 32 patients were generated to visually observe dynamic changes of the histogram shape following CCRT. All parameters except width and standard deviation showed significant changes during CCRT (all P < 0.05), and their variation trends fell into four different patterns. Skewness and kurtosis both showed a high early decline rate (43.10%, 48.29%) at the end of the 2nd week of CCRT. All entropies kept decreasing significantly from 2 weeks after CCRT initiation. The shape of the averaged ADC histogram also changed obviously following CCRT. ADC histogram shape analysis holds potential for monitoring early tumor response in patients with advanced cervical cancers undergoing CCRT.
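The histogram shape parameters are standard descriptive statistics of the voxel-wise ADC distribution; a generic sketch is shown below (s-sDav and the second-order entropies are omitted because their definitions are specific to the study).

```python
import numpy as np

def histogram_shape(adc_values, bins=128):
    """Skewness, excess kurtosis, width, SD, and first-order entropy of an ADC sample."""
    x = np.asarray(adc_values, dtype=float)
    mean, sd = x.mean(), x.std(ddof=1)
    z = (x - mean) / sd
    skewness = np.mean(z ** 3)
    kurtosis = np.mean(z ** 4) - 3.0                  # excess kurtosis
    width = x.max() - x.min()
    hist, _ = np.histogram(x, bins=bins)
    p = hist[hist > 0] / hist.sum()
    entropy = -np.sum(p * np.log2(p))                 # first-order (Shannon) entropy
    return {"skewness": skewness, "kurtosis": kurtosis,
            "width": width, "sd": sd, "entropy": entropy}
```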
Herek, Duygu; Karabulut, Nevzat; Kocyıgıt, Ali; Yagcı, Ahmet Baki
2016-01-01
Our aim was to compare the apparent diffusion coefficient (ADC) values of normal abdominal parenchymal organs and signal-to-noise ratio (SNR) measurements in the same patients with breath-hold (BH) and free-breathing (FB) diffusion weighted imaging (DWI). Forty-eight patients underwent both BH and FB DWI. A spherical region of interest (ROI) was placed on the right hepatic lobe, spleen, pancreas, and renal cortices. ADC values were calculated for each organ on each sequence using automated software. Image noise, defined as the standard deviation (SD) of the signal intensities in the most artifact-free area of the image background, was measured by placing the largest possible ROI on either the left or the right side of the body outside the object in the recorded field of view. SNR was calculated using the formula SNR = SI(organ)/SD(noise), where SI is the signal intensity of the organ and SD is the standard deviation of the noise. There were no statistically significant differences in ADC values of the abdominal organs between the BH and FB DWI sequences (p > 0.05). There were statistically significant differences between SNR values of organs on BH and FB DWI. SNRs were found to be better on FB DWI than BH DWI (p < 0.001). The free-breathing DWI technique reduces image noise and increases SNR for abdominal examinations. The free-breathing technique is therefore preferable to BH DWI in the evaluation of abdominal organs by DWI.
A meta-analysis of the validity of FFQ targeted to adolescents.
Tabacchi, Garden; Filippi, Anna Rita; Amodio, Emanuele; Jemni, Monèm; Bianco, Antonino; Firenze, Alberto; Mammina, Caterina
2016-05-01
The present work is aimed at meta-analysing validity studies of FFQ for adolescents, to investigate their overall accuracy and variables that can affect it negatively. A meta-analysis of sixteen original articles was performed within the ASSO Project (Adolescents and Surveillance System in the Obesity prevention). The articles assessed the validity of FFQ for adolescents, compared with food records or 24 h recalls, with regard to energy and nutrient intakes. Pearson's or Spearman's correlation coefficients, means/standard deviations, kappa agreement, percentiles and mean differences/limits of agreement (Bland-Altman method) were extracted. Pooled estimates were calculated and heterogeneity tested for correlation coefficients and means/standard deviations. A subgroup analysis assessed variables influencing FFQ accuracy. An overall fair/high correlation between FFQ and reference method was found; a good agreement, measured through the intake mean comparison for all nutrients except sugar, carotene and K, was observed. Kappa values showed fair/moderate agreement; an overall good ability to rank adolescents according to energy and nutrient intakes was evidenced by data of percentiles; absolute validity was not confirmed by mean differences/limits of agreement. Interviewer administration mode, consumption interval of the previous year/6 months and high number of food items are major contributors to heterogeneity and thus can reduce FFQ accuracy. The meta-analysis shows that FFQ are accurate tools for collecting data and could be used for ranking adolescents in terms of energy and nutrient intakes. It suggests how the design and the validation of a new FFQ should be addressed.
Du, Han; Wang, Lijuan
2018-04-23
Intraindividual variability can be measured by the intraindividual standard deviation ([Formula: see text]), intraindividual variance ([Formula: see text]), estimated hth-order autocorrelation coefficient ([Formula: see text]), and mean square successive difference ([Formula: see text]). Unresolved issues exist in the research on reliabilities of intraindividual variability indicators: (1) previous research only studied conditions with 0 autocorrelations in the longitudinal responses; (2) the reliabilities of [Formula: see text] and [Formula: see text] have not been studied. The current study investigates reliabilities of [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and the intraindividual mean, with autocorrelated longitudinal data. Reliability estimates of the indicators were obtained through Monte Carlo simulations. The impact of influential factors on reliabilities of the intraindividual variability indicators is summarized, and the reliabilities are compared across the indicators. Generally, all the studied indicators of intraindividual variability were more reliable with a more reliable measurement scale and more assessments. The reliabilities of [Formula: see text] were generally lower than those of [Formula: see text] and [Formula: see text], the reliabilities of [Formula: see text] were usually between those of [Formula: see text] and [Formula: see text] unless the scale reliability was large and/or the interindividual standard deviation in autocorrelation coefficients was large, and the reliabilities of the intraindividual mean were generally the highest. An R function is provided for planning longitudinal studies to ensure sufficient reliabilities of the intraindividual indicators are achieved.
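For a single individual's series of repeated measurements, the indicators compared in this study have simple closed forms; the sketch below uses conventional definitions (the bracketed formula placeholders in the abstract are symbols lost in extraction, so generic names are used here).

```python
import numpy as np

def intraindividual_indicators(y, h=1):
    """iSD, iVariance, lag-h autocorrelation, MSSD, and intraindividual mean
    for one individual's repeated measurements y."""
    y = np.asarray(y, dtype=float)
    n = y.size
    i_mean = y.mean()
    i_var = y.var(ddof=1)
    i_sd = np.sqrt(i_var)
    dev = y - i_mean
    autocorr_h = np.sum(dev[:-h] * dev[h:]) / np.sum(dev ** 2)
    mssd = np.sum(np.diff(y) ** 2) / (n - 1)
    return {"iMean": i_mean, "iSD": i_sd, "iVar": i_var,
            "autocorr": autocorr_h, "MSSD": mssd}

# Example: 10 daily ratings for one person (hypothetical data)
print(intraindividual_indicators([4, 5, 6, 5, 7, 6, 5, 4, 6, 5]))
```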
The impact of electronic health record use on physician productivity.
Adler-Milstein, Julia; Huckman, Robert S
2013-11-01
To examine the impact of the degree of electronic health record (EHR) use and delegation of EHR tasks on clinician productivity in ambulatory settings. We examined EHR use in primary care practices that implemented a web-based EHR from athenahealth (n = 42) over 3 years (695 practice-month observations). Practices were predominantly small and spread throughout the country. Data came from athenahealth practice management system and EHR task logs. We developed monthly measures of EHR use and delegation to support staff from task logs. Productivity was measured using work relative value units (RVUs). Using fixed effects models, we assessed the independent impacts on productivity of EHR use and delegation. We then explored the interaction between these 2 strategies and the role of practice size. Greater EHR use and greater delegation were independently associated with higher levels of productivity. An increase in EHR use of 1 standard deviation resulted in a 5.3% increase in RVUs per clinician workday; an increase in delegation of EHR tasks of 1 standard deviation resulted in an 11.0% increase in RVUs per clinician workday (P <.05 for both). Further, EHR use and delegation had a positive joint impact on productivity in large practices (coefficient, 0.058; P <.05), but a negative joint impact on productivity in small practices (coefficient, -0.142; P <.01). Clinicians in practices that increased EHR use and delegated EHR tasks were more productive, but practice size determined whether the 2 strategies were complements or substitutes.
Quan, Hui; Zhang, Ji
2003-09-15
Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
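One common version of this estimate assumes the untransformed variable is approximately log-normal, in which case the standard deviation of ln X follows directly from the arithmetic mean and SD as sigma_log = sqrt(ln(1 + (SD/mean)^2)); the sketch below shows only that relation and does not reproduce the paper's confidence-interval or change-from-baseline methods.

```python
import math

def sd_of_log(mean_x: float, sd_x: float) -> float:
    """SD of ln(X) when X is assumed log-normal with the given arithmetic mean and SD."""
    return math.sqrt(math.log(1.0 + (sd_x / mean_x) ** 2))

# Example: arithmetic mean 50 and SD 20 on the original scale
print(sd_of_log(50.0, 20.0))   # ~0.385 on the natural-log scale
```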
The Regionalization of National-Scale SPARROW Models for Stream Nutrients
Schwarz, G.E.; Alexander, R.B.; Smith, R.A.; Preston, S.D.
2011-01-01
This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models. © 2011 American Water Resources Association. This article is a U.S. Government work and is in the public domain in the USA.
Computation of turbulence and dispersion of cork in the NETL riser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiradilok, Veeraya; Gidaspow, Dimitri; Breault, R.W.
The knowledge of dispersion coefficients is essential for reliable design of gasifiers. However, a literature review had shown that dispersion coefficients in fluidized beds differ by more than five orders of magnitude. This study presents a comparison of the computed axial solids dispersion coefficients for cork particles to the NETL riser cork data. The turbulence properties, the Reynolds stresses, the granular temperature spectra and the radial and axial gas and solids dispersion coefficients are computed. The standard kinetic theory model described in Gidaspow's 1994 book, Multiphase Flow and Fluidization, Academic Press, and the IIT and Fluent codes were used to compute the measured axial solids volume fraction profiles for flow of cork particles in the NETL riser. The Johnson–Jackson boundary conditions were used. Standard drag correlations were used. This study shows that the computed solids volume fractions for the low flux flow are within the experimental error of those measured, using a two-dimensional model. At higher solids fluxes the simulated solids volume fractions are close to the experimental measurements, but deviate significantly at the top of the riser. This disagreement is due to use of simplified geometry in the two-dimensional simulation. There is good agreement between the experiment and the three-dimensional simulation for a high flux condition. This study concludes that the axial and radial gas and solids dispersion coefficients in risers operating in the turbulent flow regime can be computed using a multiphase computational fluid dynamics model.
Suárez, Inmaculada; Coto, Baudilio
2015-08-14
Average molecular weights and polydispersity indexes are among the most important parameters considered in polymer characterization. Usually, gel permeation chromatography (GPC) and multi-angle light scattering (MALS) are used for this determination, but GPC values are overestimated due to the dispersion introduced by the column separation. Several procedures have been proposed to correct this effect, usually involving more complex calibration processes. In this work, a new calculation method that includes diffusion effects is considered. An equation for the concentration profile due to diffusion effects along the GPC column was considered to be a Fickian function, and polystyrene narrow standards were used to determine effective diffusion coefficients. The molecular weight distribution function of mono- and polydisperse polymers was interpreted as a sum of several Fickian functions representing a sample formed by only a few kinds of polymer chains with specific molecular weights and diffusion coefficients. The proposed model accurately fits the concentration profile along the whole elution time range, as checked by the computed standard deviation. Molecular weights obtained by this new method are similar to those obtained by MALS or traditional GPC, while polydispersity index values are intermediate between those obtained by traditional GPC combined with the Universal Calibration method and the MALS method. Values of the Pearson and Lin coefficients show improvement in the correlation of polydispersity index values determined by the GPC and MALS methods when diffusion coefficients and the new method are used. Copyright © 2015 Elsevier B.V. All rights reserved.
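The Pearson and Lin (concordance) coefficients used to compare GPC and MALS values have standard definitions, sketched generically below (sample-variance conventions may differ slightly from the authors').

```python
import numpy as np

def pearson_and_lin(x, y):
    """Pearson correlation and Lin's concordance correlation coefficient (CCC)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    ccc = (2 * np.cov(x, y, ddof=1)[0, 1] /
           (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2))
    return r, ccc

# pdi_gpc = [...]; pdi_mals = [...]   # polydispersity indexes from the two methods
# r, ccc = pearson_and_lin(pdi_gpc, pdi_mals)
```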
Attenuation Coefficient Estimation of the Healthy Human Thyroid In Vivo
NASA Astrophysics Data System (ADS)
Rouyer, J.; Cueva, T.; Portal, A.; Yamamoto, T.; Lavarello, R.
Previous studies have demonstrated that attenuation coefficients can be useful towards characterizing thyroid tissues. In this work, ultrasonic attenuation coefficients were estimated from healthy human thyroids in vivo using a clinical scanner. The selected subjects were five young, healthy volunteers (age: 26 ± 6 years old, gender: three females, two males) with no reported history of thyroid diseases, no palpable thyroid nodules, no smoking habits, and body mass index less than 30 kg/m2. Echographic examinations were conducted by a trained sonographer using a SonixTouch system (Ultrasonix Medical Corporation, Richmond, BC) equipped with an L14-5 linear transducer array (nominal center frequency of 10 MHz, transducer footprint of 3.8 cm). Radiofrequency data corresponding to the collected echographic images in both transverse and longitudinal views were digitized at a sampling rate of 40 MHz and processed with Matlab codes (MathWorks, Natick, MA) to estimate attenuation coefficients using the spectral log difference method. The estimation was performed using an analysis bandwidth spanning from 4.0 to 9.0 MHz. The average value of the estimated ultrasonic attenuation coefficients was equal to 1.34 ± 0.15 dB/(cm.MHz). The standard deviation of the estimated average attenuation coefficient across different volunteers suggests a non-negligible inter-subject variability in the ultrasonic attenuation coefficient of the human thyroid.
Li, Fenfang; Wilkens, Lynne R.; Novotny, Rachel; Fialkowski, Marie K.; Paulino, Yvette C.; Nelson, Randall; Bersamin, Andrea; Martin, Ursula; Deenik, Jonathan; Boushey, Carol J.
2016-01-01
Objectives Anthropometric standardization is essential to obtain reliable and comparable data from different geographical regions. The purpose of this study is to describe anthropometric standardization procedures and findings from the Children’s Healthy Living (CHL) Program, a study on childhood obesity in 11 jurisdictions in the US-Affiliated Pacific Region, including Alaska and Hawai‘i. Methods Zerfas criteria were used to compare the measurement components (height, waist, and weight) between each trainee and a single expert anthropometrist. In addition, intra- and inter-rater technical error of measurement (TEM), coefficient of reliability, and average bias relative to the expert were computed. Results From September 2012 to December 2014, 79 trainees participated in at least 1 of 29 standardization sessions. A total of 49 trainees passed either standard or alternate Zerfas criteria and were qualified to assess all three measurements in the field. Standard Zerfas criteria were difficult to achieve: only 2 of 79 trainees passed at their first training session. Intra-rater TEM estimates for the 49 trainees compared well with the expert anthropometrist. Average biases were within acceptable limits of deviation from the expert. Coefficient of reliability was above 99% for all three anthropometric components. Conclusions Standardization based on comparison with a single expert ensured the comparability of measurements from the 49 trainees who passed the criteria. The anthropometric standardization process and protocols followed by CHL resulted in 49 standardized field anthropometrists and have helped build capacity in the health workforce in the Pacific Region. PMID:26457888
[Comparison of two methods for rapid determination of C-reactive protein with the Tina-quant].
Oremek, G M; Luksaite, R; Bretschneider, I
2008-03-01
C-reactive protein (CRP) as an acute phase protein is an important diagnostic marker for the presence and course of inflammatory processes. Among the acute phase proteins it is one of those whose concentration increases most rapidly, and its sensitivity is superior to that of other markers of inflammation, such as leukocytosis, erythrocyte sedimentation rate, and fever. This study compared two point-of-care assays with the standard laboratory method Tina-quant CRP processed on a Hitachi 917: the immunofiltration assay NycoCard CRP Whole Blood and the turbidimetric immunoassay Micros CRP. Both methods are carried out in the presence of the patient, using capillary or venous blood. Seventy-eight blood samples were analyzed first in the standard laboratory routine and then by both rapid test assays. The precision of both assays was determined from the confidence interval. The results were statistically analyzed by means of the arithmetic mean and standard deviation, coefficient of variation, Spearman correlation index, Wilcoxon and Bland-Altman tests, and Passing-Bablok regression. Compared with Tina-quant CRP, NycoCard CRP Whole Blood showed a correlation coefficient of R = 0.9838 and a precision corresponding to a coefficient of variation of CV = 1.8759%, while Micros CRP had R = 0.9934 and CV = 0.9160%. Both rapid assays indicated the same results as Tina-quant CRP and are well suited for the rapid determination of CRP.
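A hedged sketch of two of the agreement statistics listed above, within-run coefficient of variation and Bland-Altman bias with 95% limits of agreement; the CRP values are invented for illustration.

```python
import numpy as np

def coefficient_of_variation(replicates):
    replicates = np.asarray(replicates, float)
    return 100.0 * np.std(replicates, ddof=1) / np.mean(replicates)   # percent

def bland_altman(method_a, method_b):
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# illustrative CRP values (mg/L) from a rapid assay and the laboratory method
rapid = np.array([5.1, 12.3, 48.0, 3.2, 90.5, 22.1, 7.8, 61.4])
lab   = np.array([5.3, 11.9, 49.2, 3.0, 92.0, 21.5, 8.1, 60.8])
print("CV (%):", coefficient_of_variation([10.2, 10.1, 10.4, 10.0, 10.3]))
print("bias, limits of agreement:", bland_altman(rapid, lab))
```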
Roth, Philip L; Le, Huy; Oh, In-Sue; Van Iddekinge, Chad H; Bobko, Philip
2018-06-01
Meta-analysis has become a well-accepted method for synthesizing empirical research about a given phenomenon. Many meta-analyses focus on synthesizing correlations across primary studies, but some primary studies do not report correlations. Peterson and Brown (2005) suggested that researchers could use standardized regression weights (i.e., beta coefficients) to impute missing correlations. Indeed, their beta estimation procedures (BEPs) have been used in meta-analyses in a wide variety of fields. In this study, the authors evaluated the accuracy of BEPs in meta-analysis. We first examined how use of BEPs might affect results from a published meta-analysis. We then developed a series of Monte Carlo simulations that systematically compared the use of existing correlations (that were not missing) to data sets that incorporated BEPs (that impute missing correlations from corresponding beta coefficients). These simulations estimated ρ̄ (mean population correlation) and SDρ (true standard deviation) across a variety of meta-analytic conditions. Results from both the existing meta-analysis and the Monte Carlo simulations revealed that BEPs were associated with potentially large biases when estimating ρ̄ and even larger biases when estimating SDρ. Using only existing correlations often substantially outperformed use of BEPs and virtually never performed worse than BEPs. Overall, the authors urge a return to the standard practice of using only existing correlations in meta-analysis. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
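A hedged sketch of a bare-bones psychometric meta-analysis of correlations, estimating the mean population correlation (rho-bar) and SD-rho by subtracting expected sampling-error variance from the observed variance; the study correlations and sample sizes are placeholders, and the simulations above may use different estimation details.

```python
import numpy as np

def bare_bones_meta(r, n):
    r, n = np.asarray(r, float), np.asarray(n, float)
    r_bar = np.sum(n * r) / np.sum(n)                        # sample-size-weighted mean r
    var_obs = np.sum(n * (r - r_bar) ** 2) / np.sum(n)       # observed variance of r
    var_err = (1.0 - r_bar ** 2) ** 2 / (np.mean(n) - 1.0)   # expected sampling-error variance
    sd_rho = np.sqrt(max(var_obs - var_err, 0.0))            # estimate of true SD of rho
    return r_bar, sd_rho

# illustrative primary-study correlations and sample sizes
print(bare_bones_meta(r=[0.21, 0.35, 0.18, 0.40, 0.27], n=[120, 85, 230, 60, 150]))
```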
Parametric Blade Study Test Report Rotor Configuration. Number 2
1988-11-01
[List-of-figures excerpt: Rotor Incidence Angle, Rotor Relative Inlet Mach Number, Rotor Loss Coefficient, Rotor Diffusion Factor, Rotor Deviation Angle, and Stator Incidence Angle (100% N); Stator Deviation Angle and Stator Loss Coefficient (90% N); Static Pressure Distribution.]
Lapidus, Nathanael; Chevret, Sylvie; Resche-Rigon, Matthieu
2014-12-30
Agreement between two assays is usually based on the concordance correlation coefficient (CCC), estimated from the means, standard deviations, and correlation coefficient of these assays. However, such data will often suffer from left-censoring because of lower limits of detection of these assays. To handle such data, we propose to extend a multiple imputation approach by chained equations (MICE) developed in a close setting of one left-censored assay. The performance of this two-step approach is compared with that of a previously published maximum likelihood estimation through a simulation study. Results show close estimates of the CCC by both methods, although the coverage is improved by our MICE proposal. An application to cytomegalovirus quantification data is provided. Copyright © 2014 John Wiley & Sons, Ltd.
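A minimal sketch of Lin's concordance correlation coefficient (CCC) computed from the means, variances, and covariance of two assays in the complete-data case; the censoring-aware MICE and maximum-likelihood machinery of the paper is not reproduced, and the paired values below are illustrative.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient for paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return 2.0 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

# illustrative paired log10 viral-load measurements from two assays
assay1 = np.array([2.1, 3.4, 4.0, 2.8, 3.9, 5.1, 4.4])
assay2 = np.array([2.3, 3.2, 4.1, 2.9, 3.7, 5.0, 4.6])
print(ccc(assay1, assay2))
```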
Effect of Free Stream Turbulence on the Performance of a Marine Hydrokinetic Turbine
NASA Astrophysics Data System (ADS)
Vinod, Ashwin; Banerjee, Arindam
2015-11-01
The effects of controlled and elevated levels of free stream turbulence on the performance characteristics of a three-bladed, constant-chord, untwisted marine hydrokinetic turbine are tested experimentally. Controlled homogeneous free stream turbulence levels ranging from 3% to ~20% are achieved by employing an active grid turbulence generator that is placed at the entrance of the water channel test section and is equipped with motor-controlled winglet shafts. In addition to free stream turbulence, various (turbine) operating conditions such as the free stream velocity and rotational speed are varied. A comparison of performance characteristics that includes the mean and standard deviations of the power coefficient (CP) and thrust coefficient (CT) will be presented and compared to the case of a laminar free stream with FST levels <1%.
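A hedged sketch of how the mean and standard deviation of CP and CT follow from measured time series; the water density, rotor radius, and signal statistics below are illustrative assumptions, not the experimental values.

```python
import numpy as np

def performance_coefficients(torque, omega, thrust, U, rho=998.0, radius=0.14):
    """Mean and SD of power and thrust coefficients from time series (SI units)."""
    A = np.pi * radius ** 2                        # rotor swept area, m^2 (assumed radius)
    cp = torque * omega / (0.5 * rho * A * U ** 3) # power coefficient
    ct = thrust / (0.5 * rho * A * U ** 2)         # thrust coefficient
    return (cp.mean(), cp.std(ddof=1)), (ct.mean(), ct.std(ddof=1))

rng = np.random.default_rng(1)
n = 1000
torque = 0.6 + 0.05 * rng.standard_normal(n)       # N*m (illustrative)
omega = 15.0 + 0.5 * rng.standard_normal(n)        # rad/s
thrust = 17.0 + 1.5 * rng.standard_normal(n)       # N
U = 0.9 + 0.05 * rng.standard_normal(n)            # m/s
print(performance_coefficients(torque, omega, thrust, U))
```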
7 CFR 400.204 - Notification of deviation from standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 6 2010-01-01 2010-01-01 false Notification of deviation from standards. 400.204... Contract-Standards for Approval § 400.204 Notification of deviation from standards. A Contractor shall advise the Corporation immediately if the Contractor deviates from the requirements of these standards...
NASA Astrophysics Data System (ADS)
Pitoňák, Martin; Šprlák, Michal; Tenzer, Robert
2017-05-01
We investigate the numerical performance of four different schemes applied to a regional recovery of the gravity anomalies from the third-order gravitational tensor components (assumed to be observable in the future) synthesized at the satellite altitude of 200 km above the mean sphere. The first approach is based on applying a regional inversion without modelling the far-zone contribution or long-wavelength support. In the second approach we separate the integral formulas into two parts, that is, the effects of the third-order disturbing tensor data within near and far zones. Whereas the far-zone contribution is evaluated by using an existing global geopotential model (GGM) with spectral weights given by truncation error coefficients, the near-zone contribution is solved by applying a regional inversion. We then extend this approach with a smoothing procedure, in which we remove the gravitational contributions of the topographic-isostatic and atmospheric masses. Finally, we apply the remove-compute-restore (r-c-r) scheme in order to reduce the far-zone contribution by subtracting the reference (long-wavelength) gravity field, which is computed for maximum degree 80. We apply these four numerical schemes to a regional recovery of the gravity anomalies from individual components of the third-order gravitational tensor as well as from their combinations, while applying two different levels of white noise. We validated our results with respect to gravity anomalies evaluated at the mean sphere from EGM2008 up to degree 250. Not surprisingly, a better fit in terms of standard deviation (STD) was attained using the lower noise level. The worst results were obtained with the classical approach: the STD values of our solution from Tzzz are 1.705 mGal (noise with a standard deviation of 0.01 × 10⁻¹⁵ m⁻¹ s⁻²) and 2.005 mGal (noise with a standard deviation of 0.05 × 10⁻¹⁵ m⁻¹ s⁻²). The best results were obtained with the r-c-r scheme up to degree 80, for which the STD fit of the gravity anomalies from Tzzz with respect to the same counterpart from EGM2008 is 0.510 mGal (noise with a standard deviation of 0.01 × 10⁻¹⁵ m⁻¹ s⁻²) and 1.190 mGal (noise with a standard deviation of 0.05 × 10⁻¹⁵ m⁻¹ s⁻²).
Sun, Wenqing; Chen, Lei; Tuya, Wulan; He, Yong; Zhu, Rihong
2013-12-01
Chebyshev and Legendre polynomials are frequently used in rectangular pupils for wavefront approximation. Ideally, the dataset completely fits with the polynomial basis, which provides the full-pupil approximation coefficients and the corresponding geometric aberrations. However, if there are horizontal translation and scaling, the terms in the original polynomials become linear combinations of the coefficients of the other terms. This paper introduces analytical expressions for two typical situations after translation and scaling. For a small translation, a first-order Taylor expansion can be used to simplify the computation. Several representative terms were selected as inputs to compute the coefficient changes before and after translation and scaling. Results show that the outcomes of the analytical solutions and the approximated values under discrete sampling are consistent. Using a group of randomly generated coefficients, we compared the changes under different translation and scaling conditions; larger ratios correspond to larger deviations of the approximated values from the original ones. Finally, we analyzed the peak-to-valley (PV) and root mean square (RMS) deviations resulting from the use of the first-order approximation and the direct expansion under different translation values. The results show that when the translation is less than 4%, the most deviated 5th term in the first-order 1D-Legendre expansion has a PV deviation less than 7% and an RMS deviation less than 2%. The analytical expressions and the results computed under discrete sampling given in this paper for typical function bases under translation and scaling in rectangular areas can be applied in wavefront approximation and analysis.
Buhr, H; Büermann, L; Gerlach, M; Krumrey, M; Rabus, H
2012-12-21
For the first time the absolute photon mass energy-absorption coefficient of air in the energy range of 10 to 60 keV has been measured with relative standard uncertainties below 1%, considerably smaller than those of up to 2% assumed for calculated data. For monochromatized synchrotron radiation from the electron storage ring BESSY II, both the radiant power and the fraction of power deposited in dry air were measured using a cryogenic electrical substitution radiometer and a free air ionization chamber, respectively. The measured absorption coefficients were compared with state-of-the-art calculations and showed an average deviation of 2% from calculations by Seltzer. However, they agree within 1% with data calculated earlier by Hubbell. In the course of this work, an improvement of the data analysis of a previous experimental determination of the mass energy-absorption coefficient of air in the range of 3 to 10 keV was found to be possible, and corrected values of this preceding study are given.
Liu, Jinbao; Han, Jichang; Zhang, Yang; Wang, Huanyuan; Kong, Hui; Shi, Lei
2018-06-05
The storage of soil organic carbon (SOC) should improve soil fertility. Conventional determination of SOC is expensive and tedious. Visible-near infrared reflectance spectroscopy is a practical and cost-effective approach that has been used successfully to estimate SOC concentration, and soil spectral inversion models can determine SOC content quickly and efficiently. This paper presents a study of SOC estimation through the combination of soil spectroscopy with stepwise multiple linear regression (SMLR), partial least squares regression (PLSR), and principal component regression (PCR). Spectral measurements for 106 soil samples were acquired using an ASD FieldSpec 4 standard-res spectroradiometer (350-2500 nm). Six types of spectral transformations and three regression methods were applied to build models for soils developed from different parent materials. The results show that (1) for soil developed from basaltic volcanic clastics, the SOC spectral response bands are located near 500 nm and 800 nm, whereas for soil developed from trachytic volcanic clastics they are located near 405 nm, 465 nm, 575 nm, and 1105 nm; (2) for soil developed from basaltic volcanic clastics, the maximum correlation coefficient of the first derivative of reflectance is 0.8898, while for soil developed from trachytic volcanic clastics the maximum correlation coefficient of the first derivative of the log-reciprocal reflectance is 0.9029; and (3) the optimal SOC prediction model for soil developed from basaltic volcanic clastics is the SMLR model based on the first derivative of the log-reciprocal reflectance, with 7 independent variables, Rv² = 0.9720, RMSEP = 2.0590, sig = 0.003, whereas the optimal model for soil developed from trachytic volcanic clastics is the PLSR model based on the first derivative of the log-reciprocal reflectance, with Pc = 5 components, Rc = 0.9872, Rc² = 0.9745, RMSEC = 0.4821, SEC = 0.4906, and prediction statistics Rv² = 0.9702, RMSEP = 0.9563, SEP = 0.9711, Bias = 0.0637. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Wiggins, R. A.
1972-01-01
The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
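A hedged sketch of one common way to realize the eigenvector analysis described above, using a truncated SVD of the coefficient matrix: combinations whose singular value exceeds the ratio of observational to allowable model standard deviations are kept, and Vk Vkᵀ serves as the parameter-resolution matrix. The operator, noise levels, and cutoff rule are illustrative assumptions.

```python
import numpy as np

def truncated_svd_solution(G, d, sigma_d, sigma_m):
    """Keep the k parameter combinations resolvable given data error sigma_d and tolerance sigma_m."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    k = int(np.sum(s > sigma_d / sigma_m))          # cutoff from the error ratio
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T
    m_est = Vk @ np.diag(1.0 / sk) @ Uk.T @ d       # solution in the resolved subspace
    R = Vk @ Vk.T                                   # parameter resolution matrix
    return m_est, R, k

rng = np.random.default_rng(2)
# an ill-conditioned forward operator with rapidly decaying singular values
U0, _ = np.linalg.qr(rng.standard_normal((30, 10)))
V0, _ = np.linalg.qr(rng.standard_normal((10, 10)))
G = U0 @ np.diag(np.logspace(0, -3, 10)) @ V0.T
m_true = rng.standard_normal(10)
d = G @ m_true + 0.05 * rng.standard_normal(30)

m_est, R, k = truncated_svd_solution(G, d, sigma_d=0.05, sigma_m=0.5)
print(k, np.round(np.diag(R), 2))                   # how well each parameter is resolved
```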
Zuo, Ming; Gao, Jieying; Zhang, Xiaoqing; Cui, Yue; Fan, Zimian; Ding, Min
2015-07-01
Capillary electrophoresis with electrochemiluminescence detection for the simultaneous analysis of cisatracurium besylate and its degradation products (laudanosine, quaternary monoacrylate) in pharmaceutical preparation was developed and fully validated. The significant parameters that influence capillary electrophoresis separation and electrochemiluminescence detection were optimized. The total analysis time of the analytes was 15 min. The linearities of the method were 0.1∼40.0 μg/mL for cisatracurium besylate and 0.04∼8.00 μg/mL for laudanosine, with correlation coefficients (r) of 0.999 and 0.998, respectively. The detection limits (S/N = 3) were 83.0 ng/mL for cisatracurium besylate and 32.0 ng/mL for laudanosine. The intraday relative standard deviations of the analytes were <3.0%, and the interday relative standard deviations were <8.0%. The developed method was cost-effective, sensitive, fast, and resource-saving, which was suitable for the ingredient analysis in pharmaceutical preparation. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Nagarajan, R; Hariharan, M; Satiyan, M
2012-08-01
Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research and has attracted many researchers recently. In this paper, luminance-sticker-based facial expression recognition is proposed. Recognition of facial expression is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families with their different orders (db1 to db20, Coif1 to Coif5 and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expression and to evaluate their computational time. The standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of wavelet family. This standard deviation is used to form a set of feature vectors for classification. In this study, conventional validation and cross validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN) and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
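A hedged sketch of the feature-extraction step: a one-level DWT per sticker signal with the standard deviation of the first-level coefficients as the feature. The signals here are random placeholders, and the assumed input layout (one luminance trace per sticker) is an illustrative choice. Requires the PyWavelets package.

```python
import numpy as np
import pywt   # pip install PyWavelets

def dwt_std_features(signals, wavelet='db4'):
    """signals: (n_stickers, n_samples) array -> one std-of-detail-coefficients feature each."""
    feats = []
    for s in signals:
        _, cD = pywt.dwt(s, wavelet)          # first level of wavelet decomposition
        feats.append(np.std(cD, ddof=1))
    return np.array(feats)

rng = np.random.default_rng(3)
luminance = rng.standard_normal((8, 256))     # illustrative traces for 8 stickers
print(dwt_std_features(luminance, wavelet='coif3'))
```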
The geometry of proliferating dicot cells.
Korn, R W
2001-02-01
The distributions of cell size and cell cycle duration were studied in two-dimensional expanding plant tissues. Plastic imprints of the leaf epidermis of three dicot plants, jade (Crassula argentea), impatiens (Impatiens wallerana), and the common begonia (Begonia semperflorens), were made and cell outlines analysed. The average, standard deviation and coefficient of variation (CV = 100 x standard deviation/average) of cell size were determined, with the CV of mother cells less than the CV for daughter cells, and both less than that for all cells. An equation was devised as a simple description of the probability distribution of sizes for all cells of a tissue. Cell cycle durations as measured in arbitrary time units were determined by reconstructing the initial and final sizes of cells, and they collectively give the expected asymmetric bell-shaped probability distribution. Given the features of unequal cell division (an average of 11.6% difference in size of daughter cells) and the size variation of dividing cells, it appears that the range of cell size is more critically regulated than the size of a cell at any particular time.
The Standard Deviation of Launch Vehicle Environments
NASA Technical Reports Server (NTRS)
Yunis, Isam
2005-01-01
Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3dB is a conservative and reasonable standard deviation for the source environment and the payload environment.
NASA Astrophysics Data System (ADS)
Akimoto, Takuma; Yamamoto, Eiji
2016-12-01
Local diffusion coefficients in disordered systems such as spin glass systems and living cells are highly heterogeneous and may change over time. Such a time-dependent and spatially heterogeneous environment results in irreproducibility of single-particle-tracking measurements. Irreproducibility of time-averaged observables has been theoretically studied in the context of weak ergodicity breaking in stochastic processes. Here, we provide rigorous descriptions of equilibrium and non-equilibrium diffusion processes for the annealed transit time model, which is a heterogeneous diffusion model in living cells. We give analytical solutions for the mean square displacement (MSD) and the relative standard deviation of the time-averaged MSD for equilibrium and non-equilibrium situations. We find that the time-averaged MSD grows linearly with time and that the time-averaged diffusion coefficients are intrinsically random (irreproducible) even in the long-time measurements in non-equilibrium situations. Furthermore, the distribution of the time-averaged diffusion coefficients converges to a universal distribution in the sense that it does not depend on initial conditions. Our findings pave the way for a theoretical understanding of distributional behavior of the time-averaged diffusion coefficients in disordered systems.
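A minimal sketch of the time-averaged MSD and the spread of time-averaged diffusion coefficients across trajectories, which is the kind of irreproducibility discussed above; the trajectories are ordinary Brownian paths with an imposed particle-to-particle spread in D, not the annealed transit time model itself.

```python
import numpy as np

def ta_msd(x, lag):
    """Time-averaged mean square displacement of one 1-D trajectory at a given lag."""
    x = np.asarray(x, float)
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

def relative_sd_of_D(trajectories, lag=10, dt=1.0):
    """Relative standard deviation of time-averaged diffusion coefficients."""
    D = np.array([ta_msd(x, lag) / (2 * lag * dt) for x in trajectories])
    return np.std(D, ddof=1) / np.mean(D)

rng = np.random.default_rng(4)
trajs = [np.cumsum(np.sqrt(2 * D * 1.0) * rng.standard_normal(10_000))
         for D in rng.uniform(0.5, 2.0, size=50)]     # spread in D between particles
print(relative_sd_of_D(trajs, lag=10))
```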
The effect of time in use on the display performance of the iPad.
Caffery, Liam J; Manthey, Kenneth L; Sim, Lawrence H
2016-07-01
The aim of this study was to evaluate changes to the luminance, luminance uniformity and conformance to the Digital Imaging and Communications in Medicine (DICOM) greyscale standard display function (GSDF) as a function of time in use for the iPad. Luminance measurements of the American Association of Physicists in Medicine (AAPM) Group 18 task group (TG18) luminance uniformity and luminance test patterns (TG18-UNL and TG18-LN8) were performed using a calibrated near-range luminance meter. Nine sets of measurements were taken, where the time in use of the iPad ranged from 0 to 2500 h. The maximum luminance (Lmax) of the display decreased (367 to 338 cd m⁻²) as a function of time. The minimum luminance remained constant. The maximum non-uniformity coefficient was 11%. Luminance uniformity decreased slightly as a function of time in use. The conformance of the iPad deviated from the GSDF curve at commencement of use. Deviation did not increase as a function of time in use. This study has demonstrated that the iPad display exhibits luminance degradation typical of liquid crystal displays. The Lmax of the iPad fell below the American College of Radiology-AAPM-Society of Imaging Informatics in Medicine recommendations for primary displays (>350 cd m⁻²) at approximately 1000 h in use. The Lmax recommendation for secondary displays (>250 cd m⁻²) was exceeded during the entire study. The maximum non-uniformity coefficient did not exceed the recommendations for either primary or secondary displays. The deviation from the GSDF exceeded the recommendations of the TG18 for use as either a primary or secondary display. The brightness, uniformity and contrast response are reasonably stable over the useful lifetime of the device; however, the device fails to meet the contrast response standard for either a primary or secondary display.
Standardization of computer-assisted semen analysis using an e-learning application.
Ehlers, J; Behr, M; Bollwein, H; Beyerbach, M; Waberski, D
2011-08-01
Computer-assisted semen analysis (CASA) is primarily used to obtain accurate and objective kinetic sperm measurements. Additionally, AI centers use computer-assessed sperm concentration in the sample as a basis for calculating the number of insemination doses available from a given ejaculate. The reliability of data is often limited and results can vary even when the same CASA systems with identical settings are used. The objective of the present study was to develop a computer-based training module for standardized measurements with a CASA system and to evaluate its training effect on the quality of the assessment of sperm motility and concentration. A digital versatile disc (DVD) has been produced showing the standardization of sample preparation and analysis with the CASA system SpermVision™ version 3.0 (Minitube, Verona, WI, USA) in words, pictures, and videos, as well as the most probable sources of error. Eight test persons educated in spermatology, but with different levels of experience with the CASA system, prepared and assessed 10 aliquots from one prediluted bull ejaculate using the same CASA system and laboratory equipment before and after electronic learning (e-learning). After using the e-learning application, the coefficient of variation was reduced on average for the sperm concentration from 26.1% to 11.3% (P ≤ 0.01), and for motility from 5.8% to 3.1% (P ≤ 0.05). For five test persons, the difference in the coefficient of variation before and after use of the e-learning application was significant (P ≤ 0.05). Individual deviations of means from the group mean before e-learning were reduced compared with individual deviations from the group mean after e-learning. According to a survey, the e-learning application was highly accepted by users. In conclusion, e-learning presents an effective, efficient, and accepted tool for improvement of the precision of CASA measurements. This study provides a model for the standardization of other laboratory procedures using e-learning. Copyright © 2011 Elsevier Inc. All rights reserved.
Palta, Mari; Chen, Han-Yang; Kaplan, Robert M; Feeny, David; Cherepanov, Dasha; Fryback, Dennis G
2011-01-01
Standard errors of measurement (SEMs) of health-related quality of life (HRQoL) indexes are not well characterized. SEM is needed to estimate responsiveness statistics, and is a component of reliability. To estimate the SEM of 5 HRQoL indexes. The National Health Measurement Study (NHMS) was a population-based survey. The Clinical Outcomes and Measurement of Health Study (COMHS) provided repeated measures. A total of 3844 randomly selected adults from the noninstitutionalized population aged 35 to 89 y in the contiguous United States and 265 cataract patients. The SF-6D (SF-36v2™), QWB-SA, EQ-5D, HUI2, and HUI3 were included. An item-response theory approach captured joint variation in indexes into a composite construct of health (theta). The authors estimated 1) the test-retest standard deviation (SEM-TR) from COMHS, 2) the structural standard deviation (SEM-S) around theta from NHMS, and 3) reliability coefficients. SEM-TR was 0.068 (SF-6D), 0.087 (QWB-SA), 0.093 (EQ-5D), 0.100 (HUI2), and 0.134 (HUI3), whereas SEM-S was 0.071, 0.094, 0.084, 0.074, and 0.117, respectively. These yield reliability coefficients 0.66 (COMHS) and 0.71 (NHMS) for SF-6D, 0.59 and 0.64 for QWB-SA, 0.61 and 0.70 for EQ-5D, 0.64 and 0.80 for HUI2, and 0.75 and 0.77 for HUI3, respectively. The SEM varied across levels of health, especially for HUI2, HUI3, and EQ-5D, and was influenced by ceiling effects. Limitations. Repeated measures were 5 mo apart, and estimated theta contained measurement error. The 2 types of SEM are similar and substantial for all the indexes and vary across health.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomita, Tomohiko; Yanai, Michio
The link between the Asian monsoon and the El Nino/Southern Oscillation (ENSO) has been demonstrated by a number of studies. This study examines two ENSO withdrawal periods and discusses whether the Asian monsoon played a role in the differences between them. The 1986 event occurred in the latter half of 1986 and retreated in 1988. The 1951 and 1991 events were similar to each other and seemed to continue into the second year after onset and not to have a clear La Nina phase after the events. In the central and eastern Pacific, three variables progress in phase with the ENSO cycle: sea surface temperature (SST), heat source (Q1), and divergence. Correlation coefficients were calculated and examined with the mean SST on the equator and with the standard deviation of the interannual components of SST. In the central and eastern Pacific, the standard deviation is large and the three correlation coefficients are large (over 0.6). Strong air-sea interaction associated with the ENSO cycle is deduced. In the Indian Ocean and the western Pacific, the correlation coefficients with SST become small rapidly, while the correlation coefficient between Q1 and the divergence is still large. The interannual variability of SST may not be crucial for those of Q1 and of the divergence in this region because of the potential to generate well-organized convection through the high mean SST. This suggests that various factors, such as effects from mid-latitudes, may modify the interannual variability in the region. To examine the effects of the Asian winter monsoon, the anomalous wind field at 850 hPa was investigated. The conditions of the Asian winter monsoon were quite different between the withdrawal periods of the 1986 and 1991 ENSO events. The Asian winter monsoon seems to be a factor that modifies the ENSO cycle, especially in the retreat periods. In addition, the SST from the tropical Indian Ocean to the western Pacific may be important for the modulation of the ENSO/monsoon system. 9 refs., 10 figs.
Pedersen, T V; Olsen, D R; Skretting, A
1997-08-01
A method has been developed to determine the diffusion coefficients of ferric ions in ferrous sulphate doped gels. A radiation-induced edge was created in the gel, and two spin-echo sequences were used to acquire a pair of images of the gel at different points of time. For each of these image pairs, a longitudinal relaxation rate image was derived. From profiles through these images, the standard deviations of the Gaussian functions that characterize diffusion were determined. These data provided the basis for the determination of the ferric diffusion coefficients by two different methods. Simulations indicate that the use of single spin-echo images in this procedure may in some cases lead to a significant underestimation of the diffusion coefficient. The technique was applied to different agarose and gelatine gels that were prepared, irradiated and imaged simultaneously. The results indicate that the diffusion coefficient is lower in a gelatine gel than in an agarose gel. Addition of xylenol orange to a gelatine gel lowers the diffusion coefficient from 1.45 to 0.81 mm² h⁻¹, at the cost of significantly lower R1 sensitivity. The addition of benzoic acid to the latter gel did not increase the R1 sensitivity.
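A minimal sketch of the underlying relation, assuming the radiation-induced edge broadens as a step convolved with a Gaussian of variance σ² = 2Dt, so that D follows from the widths fitted at two imaging times; the widths and time interval below are made-up values.

```python
def ferric_diffusion_coefficient(sigma1_mm, sigma2_mm, dt_hours):
    """Return D in mm^2/h from Gaussian edge widths (mm) fitted at two times dt_hours apart."""
    return (sigma2_mm ** 2 - sigma1_mm ** 2) / (2.0 * dt_hours)

# illustrative widths from two relaxation-rate images taken 45 minutes apart
print(ferric_diffusion_coefficient(sigma1_mm=0.9, sigma2_mm=1.8, dt_hours=0.75))
```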
Estimated Probability of a Cervical Spine Injury During an ISS Mission
NASA Technical Reports Server (NTRS)
Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G.
2013-01-01
Introduction: The Integrated Medical Model (IMM) utilizes historical data, cohort data, and external simulations as input factors to provide estimates of crew health, resource utilization and mission outcomes. The Cervical Spine Injury Module (CSIM) is an external simulation designed to provide the IMM with parameter estimates for 1) a probability distribution function (PDF) of the incidence rate, 2) the mean incidence rate, and 3) the standard deviation associated with the mean resulting from injury/trauma of the neck. Methods: An injury mechanism based on an idealized low-velocity blunt impact to the superior posterior thorax of an ISS crewmember was used as the simulated mission environment. As a result of this impact, the cervical spine is inertially loaded from the mass of the head producing an extension-flexion motion deforming the soft tissues of the neck. A multibody biomechanical model was developed to estimate the kinematic and dynamic response of the head-neck system from a prescribed acceleration profile. Logistic regression was performed on a dataset containing AIS1 soft tissue neck injuries from rear-end automobile collisions with published Neck Injury Criterion values producing an injury transfer function (ITF). An injury event scenario (IES) was constructed such that crew 1 is moving through a primary or standard translation path transferring large volume equipment impacting stationary crew 2. The incidence rate for this IES was estimated from in-flight data and used to calculate the probability of occurrence. The uncertainty in the model input factors were estimated from representative datasets and expressed in terms of probability distributions. A Monte Carlo Method utilizing simple random sampling was employed to propagate both aleatory and epistemic uncertain factors. Scatterplots and partial correlation coefficients (PCC) were generated to determine input factor sensitivity. CSIM was developed in the SimMechanics/Simulink environment with a Monte Carlo wrapper (MATLAB) used to integrate the components of the module. Results: The probability of generating an AIS1 soft tissue neck injury from the extension/flexion motion induced by a low-velocity blunt impact to the superior posterior thorax was fitted with a lognormal PDF with mean 0.26409, standard deviation 0.11353, standard error of mean 0.00114, and 95% confidence interval [0.26186, 0.26631]. Combining the probability of an AIS1 injury with the probability of IES occurrence was fitted with a Johnson SI PDF with mean 0.02772, standard deviation 0.02012, standard error of mean 0.00020, and 95% confidence interval [0.02733, 0.02812]. The input factor sensitivity analysis in descending order was IES incidence rate, ITF regression coefficient 1, impactor initial velocity, ITF regression coefficient 2, and all others (equipment mass, crew 1 body mass, crew 2 body mass) insignificant. Verification and Validation (V&V): The IMM V&V, based upon NASA STD 7009, was implemented which included an assessment of the data sets used to build CSIM. The documentation maintained includes source code comments and a technical report. The software code and documentation is under Subversion configuration management. Kinematic validation was performed by comparing the biomechanical model output to established corridors.
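A hedged sketch of the uncertainty-propagation step only: sample uncertain inputs, pass a neck-injury metric through a logistic injury transfer function, and combine with an event incidence rate by simple Monte Carlo. Every coefficient, distribution, and the stand-in biomechanical response below is a placeholder, not a value from CSIM.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

beta0 = rng.normal(-4.0, 0.4, n)            # hypothetical ITF regression coefficient 1
beta1 = rng.normal(2.5, 0.3, n)             # hypothetical ITF regression coefficient 2
v0 = rng.normal(1.0, 0.15, n)               # impactor initial velocity, m/s (placeholder)
nic = 0.8 * v0 + rng.normal(0.0, 0.05, n)   # stand-in for the biomechanical model output

# logistic injury transfer function: probability of AIS1 injury given the event
p_injury_given_event = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * nic)))

# combine with an uncertain incidence rate of the injury event scenario
p_event = rng.lognormal(mean=np.log(0.1), sigma=0.3, size=n)
p_total = p_injury_given_event * p_event

print(p_total.mean(), p_total.std(ddof=1))  # summary statistics for the combined probability
```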
A new method to calibrate Lagrangian model with ASAR images for oil slick trajectory.
Tian, Siyu; Huang, Xiaoxia; Li, Hongga
2017-03-15
Since Lagrangian model coefficients vary with different conditions, it is necessary to calibrate the model to obtain the optimal coefficient combination for a specific oil spill accident. This paper focuses on proposing a new method to calibrate a Lagrangian model with a time series of Envisat ASAR images. Oil slicks extracted from the time series of images form a detected trajectory of the oil slick. The Lagrangian model is calibrated by minimizing the difference between the simulated trajectory and the detected trajectory. The mean center position distance difference (MCPD) and rotation difference (RD) of the oil slicks' or particles' standard deviational ellipses (SDEs) are calculated as two evaluation metrics. The two parameters are used to evaluate the performance of the Lagrangian transport model with different coefficient combinations. This method is applied to the Penglai 19-3 oil spill accident. The simulation result with the calibrated model agrees well with related satellite observations. It is suggested that the new method is effective for calibrating Lagrangian models. Copyright © 2016 Elsevier Ltd. All rights reserved.
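A hedged sketch of the two comparison metrics named above, with the standard deviational ellipse centre and orientation obtained from a covariance eigen-decomposition; the paper's exact SDE formulation may differ, and the point clouds below are synthetic placeholders for detected and simulated slicks.

```python
import numpy as np

def sde(points):
    """Mean centre, axis lengths, and orientation (deg) of a standard deviational ellipse."""
    pts = np.asarray(points, float)
    centre = pts.mean(axis=0)
    cov = np.cov(pts.T, ddof=1)
    evals, evecs = np.linalg.eigh(cov)
    major = evecs[:, np.argmax(evals)]
    theta = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    return centre, np.sqrt(evals), theta

def mcpd_rd(detected_pts, simulated_pts):
    """Mean centre position distance (MCPD) and rotation difference (RD) of the two SDEs."""
    c1, _, th1 = sde(detected_pts)
    c2, _, th2 = sde(simulated_pts)
    return np.linalg.norm(c1 - c2), abs(th1 - th2)

rng = np.random.default_rng(6)
detected = rng.normal([10.0, 5.0], [2.0, 0.8], size=(500, 2))   # slick pixels (km)
simulated = rng.normal([10.6, 5.3], [2.2, 0.9], size=(500, 2))  # model particles (km)
print(mcpd_rd(detected, simulated))
```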
Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube
NASA Astrophysics Data System (ADS)
Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng
2017-07-01
Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new shape of the cross-section of an averaging Pitot tube. Firstly, the structure of the averaging Pitot tube and the distribution of pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations are carried out to optimize the geometry of the tube. The distribution of the streamline and pressures around the tube are given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves are provided, linking the values of flow coefficient with the values of Reynolds number. With a maximum deviation of only ±3%, the results of the flow coefficient obtained from the numerical simulations were in agreement with those obtained from experimental methods. The proposed tube has a stable flow coefficient and favorable metrological characteristics.
Vispoel, Walter P; Kim, Han Yi
2014-09-01
[Correction Notice: An Erratum for this article was reported in Vol 26(3) of Psychological Assessment (see record 2014-16017-001). The mean, standard deviation and alpha coefficient originally reported in Table 1 should be 74.317, 10.214 and .802, respectively. The validity coefficients in the last column of Table 4 are affected as well. Correcting this error did not change the substantive interpretations of the results, but did increase the mean, standard deviation, alpha coefficient, and validity coefficients reported for the Honesty subscale in the text and in Tables 1 and 4. The corrected versions of Tables 1 and Table 4 are shown in the erratum.] Item response theory (IRT) models were applied to dichotomous and polytomous scoring of the Self-Deceptive Enhancement and Impression Management subscales of the Balanced Inventory of Desirable Responding (Paulhus, 1991, 1999). Two dichotomous scoring methods reflecting exaggerated endorsement and exaggerated denial of socially desirable behaviors were examined. The 1- and 2-parameter logistic models (1PLM, 2PLM, respectively) were applied to dichotomous responses, and the partial credit model (PCM) and graded response model (GRM) were applied to polytomous responses. For both subscales, the 2PLM fit dichotomous responses better than did the 1PLM, and the GRM fit polytomous responses better than did the PCM. Polytomous GRM and raw scores for both subscales yielded higher test-retest and convergent validity coefficients than did PCM, 1PLM, 2PLM, and dichotomous raw scores. Information plots showed that the GRM provided consistently high measurement precision that was superior to that of all other IRT models over the full range of both construct continuums. Dichotomous scores reflecting exaggerated endorsement of socially desirable behaviors provided noticeably weak precision at low levels of the construct continuums, calling into question the use of such scores for detecting instances of "faking bad." Dichotomous models reflecting exaggerated denial of the same behaviors yielded much better precision at low levels of the constructs, but it was still less precision than that of the GRM. These results support polytomous over dichotomous scoring in general, alternative dichotomous scoring for detecting faking bad, and extension of GRM scoring to situations in which IRT offers additional practical advantages over classical test theory (adaptive testing, equating, linking, scaling, detecting differential item functioning, and so forth). PsycINFO Database Record (c) 2014 APA, all rights reserved.
Zhi, Ruicong; Zhao, Lei; Xie, Nan; Wang, Houyin; Shi, Bolin; Shi, Jingye
2016-01-13
A framework for establishing a standard reference scale for texture is proposed by multivariate statistical analysis according to instrumental measurement and sensory evaluation. Multivariate statistical analysis is conducted to rapidly select typical reference samples with the characteristics of universality, representativeness, stability, substitutability, and traceability. The reasonableness of the framework is verified by establishing a standard reference scale for the texture attribute hardness with well-known Chinese foods. More than 100 food products in 16 categories were tested using instrumental measurement (TPA test), and the results were analyzed with clustering analysis, principal component analysis, relative standard deviation, and analysis of variance. As a result, nine kinds of foods were determined to construct the hardness standard reference scale. The results indicate that the regression coefficient between the estimated sensory value and the instrumentally measured value is significant (R² = 0.9765), which fits well with Stevens's theory. The research provides a reliable theoretical basis and practical guidance for establishing quantitative standard reference scales for food texture characteristics.
7 CFR 400.174 - Notification of deviation from financial standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 6 2010-01-01 2010-01-01 false Notification of deviation from financial standards... Agreement-Standards for Approval; Regulations for the 1997 and Subsequent Reinsurance Years § 400.174 Notification of deviation from financial standards. An insurer must immediately advise FCIC if it deviates from...
Method of estimating flood-frequency parameters for streams in Idaho
Kjelstrom, L.C.; Moffatt, R.L.
1981-01-01
Skew coefficients for the log-Pearson type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak-flow record can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different from the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson type III distribution. The mean and standard deviation of the logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)
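A minimal sketch of the final step, computing a log-Pearson Type III quantile from the three estimated statistics (mean, standard deviation, and generalized skew of the log10 annual peaks); the statistics and exceedance probability below are illustrative, not values from the report.

```python
from scipy.stats import pearson3

def lp3_quantile(mean_log, sd_log, skew, exceedance_prob):
    """Discharge with the given annual exceedance probability (e.g. 0.02 = 50-year flood)."""
    z = pearson3.ppf(1.0 - exceedance_prob, skew, loc=mean_log, scale=sd_log)
    return 10.0 ** z   # back-transform from log10 space

# illustrative statistics for a hypothetical ungaged site (log10 of discharge in cfs)
print(lp3_quantile(mean_log=3.2, sd_log=0.25, skew=0.1, exceedance_prob=0.02))
```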
Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks
2016-04-01
Allan deviation will be represented by σ and standard deviation will be represented by δ. In practice, when the Allan deviation of a ... the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by ... measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard.
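A minimal sketch of the overlapping Allan deviation estimator computed from phase (time-error) samples; the degrees-of-freedom and confidence-interval machinery for multiple clocks discussed in the report is not reproduced, and the white-frequency-noise data are synthetic.

```python
import numpy as np

def overlapping_adev(phase, tau0, m):
    """sigma_y(tau) for averaging factor m, with tau = m * tau0, from phase data."""
    x = np.asarray(phase, float)
    N = x.size
    d = x[2 * m:] - 2.0 * x[m:N - m] + x[:N - 2 * m]       # second differences of phase
    avar = np.sum(d ** 2) / (2.0 * (m * tau0) ** 2 * (N - 2 * m))
    return np.sqrt(avar)

rng = np.random.default_rng(7)
tau0 = 1.0                                                 # basic sampling interval, s
y = 1e-12 * rng.standard_normal(100_000)                   # white frequency noise
phase = np.concatenate([[0.0], np.cumsum(y) * tau0])       # integrate to phase/time error
for m in (1, 10, 100):
    print(m * tau0, overlapping_adev(phase, tau0, m))
```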
Isaksen, Jonas; Leber, Remo; Schmid, Ramun; Schmid, Hans-Jakob; Generali, Gianluca; Abächerli, Roger
2017-02-01
The first-order high-pass filter (AC coupling) has previously been shown to affect the ECG for higher cut-off frequencies. We seek to find a systematic deviation in computer measurements of the electrocardiogram when the AC coupling with a 0.05 Hz first-order high-pass filter is used. The standard 12-lead electrocardiogram from 1248 patients and the automated measurements of their DC and AC coupled version were used. We expect a large unipolar QRS-complex to produce a deviation in the opposite direction in the ST-segment. We found a strong correlation between the QRS integral and the offset throughout the ST-segment. The coefficient for J amplitude deviation was found to be -0.277 µV/(µV⋅s). Potential dangerous alterations to the diagnostically important ST-segment were found. Medical professionals and software developers for electrocardiogram interpretation programs should be aware of such high-pass filter effects since they could be misinterpreted as pathophysiology or some pathophysiology could be masked by these effects. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
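A hedged sketch of the effect under discussion: a first-order 0.05 Hz high-pass (AC-coupling-like) filter applied to a synthetic trace with large unipolar QRS-like deflections, showing a baseline offset of opposite sign in the "ST segment". The waveform, sampling rate, and filter realization are illustrative assumptions, not the study's recordings or hardware.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 500.0                                    # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
for beat in np.arange(0.5, 10, 1.0):          # crude 60 bpm rhythm of triangular "QRS" waves
    idx = (t > beat) & (t < beat + 0.08)
    ecg[idx] = 1000.0 * (1 - np.abs((t[idx] - beat - 0.04) / 0.04))   # µV

b, a = butter(1, 0.05 / (fs / 2), btype="highpass")   # first-order 0.05 Hz high-pass
filtered = lfilter(b, a, ecg)

# sample ~120 ms after one QRS onset: the filtered baseline is pulled opposite to the QRS
st_idx = int((3.5 + 0.12) * fs)
print("ST-level offset after filtering (µV):", filtered[st_idx] - ecg[st_idx])
```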
Deviations from LTE in a stellar atmosphere
NASA Technical Reports Server (NTRS)
Kalkofen, W.; Klein, R. I.; Stein, R. F.
1979-01-01
Deviations from LTE are investigated in an atmosphere of hydrogen atoms with one bound level, satisfying the equations of radiative, hydrostatic, and statistical equilibrium. The departure coefficient and the kinetic temperature as functions of the frequency dependence of the radiative cross section are studied analytically and numerically. Near the outer boundary of the atmosphere, the departure coefficient is smaller than unity when the radiative cross section grows with frequency faster than with the square of frequency; it exceeds unity otherwise. Far from the boundary the departure coefficient tends to exceed unity for any frequency dependence of the radiative cross section. Overpopulation always implies that the kinetic temperature in the statistical-equilibrium atmosphere is higher than the temperature in the corresponding LTE atmosphere. Upper and lower bounds on the kinetic temperature are given for an atmosphere with deviations from LTE only in the optically shallow layers when the emergent intensity can be described by a radiation temperature.
Dedicated vertical wind tunnel for the study of sedimentation of non-spherical particles.
Bagheri, G H; Bonadonna, C; Manzella, I; Pontelandolfo, P; Haas, P
2013-05-01
A dedicated 4-m-high vertical wind tunnel has been designed and constructed at the University of Geneva in collaboration with the Groupe de compétence en mécanique des fluides et procédés énergétiques. With its diverging test section, the tunnel is designed to study the aerodynamic behavior of non-spherical particles with terminal velocities between 5 and 27 m s⁻¹. A particle tracking velocimetry (PTV) code is developed to calculate the drag coefficient of particles in standard conditions based on the real projected area of the particles. Results of our wind tunnel and PTV code are validated by comparing drag coefficients of smooth spherical particles and cylindrical particles to existing literature. Experiments are repeatable with an average relative standard deviation of 1.7%. Our preliminary experiments on the effect of particle-to-fluid density ratio on the drag coefficient of cylindrical particles show that the drag coefficient of freely suspended particles in air is lower than those measured in water or in horizontal wind tunnels. It is found that increasing the aspect ratio of cylindrical particles reduces their secondary motions, and they tend to be suspended with their maximum area normal to the airflow. The use of the vertical wind tunnel in combination with the PTV code provides a reliable and precise instrument for measuring the drag coefficient of freely moving particles of various shapes. Our ultimate goal is the study of sedimentation and aggregation of volcanic particles (density between 500 and 2700 kg m⁻³), but the wind tunnel can be used in a wide range of applications.
NASA Astrophysics Data System (ADS)
Zhao, Tianzhuo; Fan, Zhongwei; Lian, Fuqiang; Liu, Yang; Lin, Weiran; Mo, Zeqiang; Nie, Shuzhen; Wang, Pu; Xiao, Hong; Li, Xin; Zhong, Qixiu; Zhang, Hongbo
2017-11-01
Laser-induced breakdown spectroscopy (LIBS) utilizing an echelle spectrograph-ICCD system is employed for on-line analysis of element concentrations in a vacuum induction melting workshop. Active temperature stabilization of the echelle spectrometer is implemented specially for industrial environment applications. The measurement precision is further improved by monitoring laser parameters, such as pulse energy and spatial and temporal profiles, in real time, and post-selecting laser pulses with specific pulse energies. Experimental results show that the major components of nickel-based alloys are stable and can be well detected. By using the internal standard method, calibration curves for chromium and aluminum are obtained for quantitative determination, with determination coefficients (relative standard deviations) of 0.9559 (< 2.2%) and 0.9723 (< 2.8%), respectively.
A New Approach to Extract Forest Water Use Efficiency from Eddy Covariance Data
NASA Astrophysics Data System (ADS)
Scanlon, T. M.; Sulman, B. N.
2016-12-01
Determination of forest water use efficiency (WUE) from eddy covariance data typically involves the following steps: (a) estimating gross primary productivity (GPP) from direct measurements of net ecosystem exchange (NEE) by extrapolating nighttime ecosystem respiration (ER) to daytime conditions, and (b) assuming direct evaporation (E) is minimal several days after rainfall, meaning that direct measurements of evapotranspiration (ET) are identical to transpiration (T). Both of these steps could lead to errors in the estimation of forest WUE. Here, we present a theoretical approach for estimating WUE through the analysis of standard eddy covariance data, which circumvents these steps. Only five statistics are needed from the high-frequency time series to extract WUE: CO2 flux, water vapor flux, standard deviation in CO2 concentration, standard deviation in water vapor concentration, and the correlation coefficient between CO2 and water vapor concentration for each half-hour period. The approach is based on the assumption that stomatal fluxes (i.e. photosynthesis and transpiration) lead to perfectly negative correlations and non-stomatal fluxes (i.e. ecosystem respiration and direct evaporation) lead to perfectly positive correlations within the CO2 and water vapor high frequency time series measured above forest canopies. A mathematical framework is presented, followed by a proof of concept using eddy covariance data and leaf-level measurements of WUE.
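A hedged sketch of the five per-half-hour statistics the partitioning approach needs from the high-frequency time series; the 10 Hz sampling, the covariance-based flux estimates, and the synthetic traces below are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np

def flux_partition_stats(w, c, q):
    """w: vertical wind (m/s); c: CO2 concentration; q: water vapour concentration."""
    wp, cp, qp = w - w.mean(), c - c.mean(), q - q.mean()
    return {
        "Fc": np.mean(wp * cp),                 # CO2 flux (eddy covariance)
        "Fq": np.mean(wp * qp),                 # water vapour flux
        "sigma_c": np.std(c, ddof=1),           # standard deviation of CO2 concentration
        "sigma_q": np.std(q, ddof=1),           # standard deviation of water vapour concentration
        "rho_cq": np.corrcoef(c, q)[0, 1],      # CO2 / water vapour correlation coefficient
    }

rng = np.random.default_rng(8)
n = 18_000                                      # 30 min at 10 Hz
w = rng.standard_normal(n)
c = 400 - 0.5 * w + 0.3 * rng.standard_normal(n)   # illustrative ppm trace (uptake-dominated)
q = 10 + 0.4 * w + 0.2 * rng.standard_normal(n)    # illustrative g/m^3 trace (transpiration-dominated)
print(flux_partition_stats(w, c, q))
```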
Silva, Andressa; Mello, Marco T.; Serrão, Paula R.; Luz, Roberta P.; Bittencourt, Lia R.; Mattiello, Stela M.
2015-01-01
OBJECTIVE: The aim of this study was to investigate whether obstructive sleep apnea (OSA) alters the fluctuation of submaximal isometric torque of the knee extensors in patients with early-grade osteoarthritis (OA). METHOD: The study included 60 male volunteers, aged 40 to 70 years, divided into four groups: Group 1 (G1) - Control (n=15): without OA and without OSA; Group 2 (G2) (n=15): with OA and without OSA; Group 3 (G3) (n=15): without OA and with OSA; and Group 4 (G4) (n=15): with OA and with OSA. Patients performed five maximal isometric contractions of 10 seconds duration each, with the knee at 60° of flexion, to determine peak torque at 60°. To evaluate the fluctuation of torque, 5 submaximal isometric contractions (50% of maximum peak torque) of 10 seconds each were performed, and the fluctuation was quantified by the standard deviation of torque and the coefficient of variation. RESULTS: Significant differences were observed between groups for maximum peak torque, with G4 showing a lower value compared with G1 (p=0.005). Additionally, for the average torque exerted, G4 showed a lower value compared to G1 (p=0.036). However, no differences were found between the groups for the standard deviation (p=0.844) and the coefficient of variation (p=0.143). CONCLUSION: The authors concluded that OSA did not change the parameters of the fluctuation of isometric submaximal torque of knee extensors in patients with early-grade OA. PMID:26443974
NASA Astrophysics Data System (ADS)
Singh, Gurjeet; Panda, Rabindra K.; Mohanty, Binayak P.; Jana, Raghavendra B.
2016-05-01
Strategic ground-based sampling of soil moisture across multiple scales is necessary to validate remotely sensed quantities such as NASA's Soil Moisture Active Passive (SMAP) product. In the present study, in-situ soil moisture data were collected at two nested scale extents (0.5 km and 3 km) to understand the trend of soil moisture variability across these scales. This ground-based soil moisture sampling was conducted in the 500 km2 Rana watershed situated in eastern India. The study area is characterized by a sub-humid, sub-tropical climate with average annual rainfall of about 1456 mm. Three 3x3 km square grids were sampled intensively once a day at 49 locations each, at a spacing of 0.5 km. These intensive sampling locations were selected on the basis of different topography, soil properties and vegetation characteristics. In addition, measurements were also made at 9 locations around each intensive sampling grid at 3 km spacing to cover a 9x9 km square grid. Intensive fine-scale soil moisture sampling as well as coarser-scale sampling were made using both impedance probes and gravimetric analyses in the study watershed. The ground-based soil moisture samplings were conducted during the day, concurrent with the SMAP descending overpass. An analysis of soil moisture spatial variability in terms of areal mean soil moisture and the statistics of higher-order moments, i.e., the standard deviation and the coefficient of variation, is presented. Results showed that the standard deviation and coefficient of variation of measured soil moisture decreased with extent scale as mean soil moisture increased.
1 CFR 21.14 - Deviations from standard organization of the Code of Federal Regulations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 1 General Provisions 1 2010-01-01 2010-01-01 false Deviations from standard organization of the... CODIFICATION General Numbering § 21.14 Deviations from standard organization of the Code of Federal Regulations. (a) Any deviation from standard Code of Federal Regulations designations must be approved in advance...
GPFA-AB_Phase1GeologicReservoirsContentModel10_26_2015.xls
Teresa E. Jordan
2015-09-30
This dataset conforms to the Tier 3 Content Model for Geologic Reservoirs Version 1.0. It contains the known hydrocarbon reservoirs within the study area of the GPFA-AB Phase 1 Task 2, Natural Reservoirs Quality Analysis (Project DE-EE0006726). The final values for Reservoir Productivity Index (RPI) and uncertainty (in terms of coefficient of variation, CV) are included. RPI is in units of liters per MegaPascal-second (L/MPa-s), quantified using permeability, thickness of formation, and depth. A higher RPI is more optimal. Coefficient of Variation (CV) is the ratio of the standard deviation to the mean RPI for each reservoir. A lower CV is more optimal. Details on these metrics can be found in the Reservoirs_Methodology_Memo.pdf uploaded to the Geothermal Data Repository Node of the NGDS in October of 2015.
Broken Ergodicity in Ideal, Homogeneous, Incompressible Turbulence
NASA Technical Reports Server (NTRS)
Morin, Lee; Shebalin, John; Fu, Terry; Nguyen, Phu; Shum, Victor
2010-01-01
We discuss the statistical mechanics of numerical models of ideal homogeneous, incompressible turbulence and their relevance for dissipative fluids and magnetofluids. These numerical models are based on Fourier series, and the relevant statistical theory predicts that Fourier coefficients of fluid velocity and magnetic fields (if present) are zero-mean random variables. However, numerical simulations clearly show that certain coefficients have a non-zero mean value that can be very large compared to the associated standard deviation. We explain this phenomenon in terms of 'broken ergodicity', which is defined to occur when dynamical behavior does not match ensemble predictions on very long time-scales. We review the theoretical basis of broken ergodicity, apply it to 2-D and 3-D fluid and magnetohydrodynamic simulations of homogeneous turbulence, and show new results from simulations using GPU (graphical processing unit) computers.
Radiometric calibration and SNR calculation of a SWIR imaging telescope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilmaz, Ozgur; Turk, Fethi; Selimoglu, Ozgur
2012-09-06
Radiometric calibration of an imaging telescope is usually performed using a uniform illumination sphere in a laboratory. In this study, we used open-sky images taken during bright day conditions to calibrate our telescope. We found a dark signal offset value and a linear response coefficient value for each pixel by using three different algorithms. We then applied these coefficients to the acquired images, which considerably lowered the image non-uniformity. Calibration can be repeated during operation of the telescope with an object that has better uniformity than the open sky. The SNR (signal-to-noise ratio) of each pixel was also calculated from the open-sky images using the temporal mean and standard deviations. It is found that the SNR is greater than 80 for all pixels even at low light levels.
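The per-pixel calibration described here amounts to fitting a dark offset and a linear gain for each pixel and then computing SNR from the temporal mean and standard deviation of repeated frames. The sketch below assumes a stack of co-registered frames with known relative scene levels; the data, names, and least-squares formulation are illustrative assumptions, not the authors' algorithms.

```python
import numpy as np

def calibrate_pixels(frames, levels):
    """Fit signal = offset + gain * level for each pixel.
    frames: (n_frames, H, W) raw images; levels: (n_frames,) relative radiance."""
    n, h, w = frames.shape
    A = np.column_stack([np.ones(n), levels])           # design matrix
    coeffs, *_ = np.linalg.lstsq(A, frames.reshape(n, -1), rcond=None)
    return coeffs[0].reshape(h, w), coeffs[1].reshape(h, w)   # offset, gain

def temporal_snr(frames):
    """Per-pixel SNR from the temporal mean and standard deviation."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0, ddof=1)
    return mean / np.where(std > 0, std, np.nan)

# Hypothetical data: 20 frames of a 64x64 detector at varying sky levels
rng = np.random.default_rng(0)
levels = np.linspace(0.2, 1.0, 20)
true_gain = rng.uniform(0.9, 1.1, (64, 64))
true_offset = rng.uniform(5, 15, (64, 64))
frames = true_offset + true_gain * levels[:, None, None] * 1000
frames += rng.normal(0, 5, frames.shape)                # temporal noise

offset, gain = calibrate_pixels(frames, levels)
flat = (frames - offset) / gain                         # response-flattened frames
print("pixel-to-pixel spread before/after:",
      round(float(frames[-1].std()), 1), "->", round(float(flat[-1].std()), 1))

# SNR estimated from a stack of frames at (approximately) constant illumination
snr_frames = true_offset + true_gain * 0.6 * 1000 + rng.normal(0, 5, (20, 64, 64))
print("median per-pixel SNR:", np.nanmedian(temporal_snr(snr_frames)).round(1))
```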
Zhu, Hui; Yang, Ri-Fang; Yun, Liu-Hong; Jiang, Yu; Li, Jin
2009-09-01
The aim of this paper is to establish a reversed-phase ion-pair chromatography (RP-IPC) method for universal estimation of the octanol/water partition coefficients (logP) of a wide range of structurally diverse compounds including acidic, basic, neutral and amphoteric species. The retention factors corresponding to 100% water (logk(w)) were derived from the linear part of the logk'/phi relationship, using at least four isocratic logk' values measured at different organic compositions. The logk(w) parameters obtained were close to the corresponding logP values obtained with the standard "shake flask" method. The mean deviation for the test drugs is 0.31. RP-IPC with trifluoroacetic acid as a non-classical ion-pair agent can be applied to determine logP values for a variety of drug-like molecules with increased accuracy.
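The extrapolation described, fitting the linear part of the log k' versus organic fraction (phi) relationship and reading off the intercept at phi = 0, is a straight-line fit. The sketch below uses hypothetical retention data purely for illustration; it is not the paper's measurements.

```python
import numpy as np

def extrapolate_logkw(phi, logk):
    """Fit log k' = log k_w + S * phi and return the intercept log k_w,
    i.e. the retention factor extrapolated to 100% water (phi = 0)."""
    slope, intercept = np.polyfit(phi, logk, 1)   # degree-1 polynomial fit
    return intercept, slope

# Hypothetical isocratic measurements at four organic-modifier fractions
phi = np.array([0.30, 0.40, 0.50, 0.60])      # volume fraction of organic modifier
logk = np.array([1.65, 1.22, 0.81, 0.38])     # measured log k' values

logkw, slope = extrapolate_logkw(phi, logk)
print(f"log k_w = {logkw:.2f} (slope S = {slope:.2f})")  # log k_w serves as the logP estimate
```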
Wagner, Bjoern; Fischer, Holger; Kansy, Manfred; Seelig, Anna; Assmus, Frauke
2015-02-20
Here we present a miniaturized assay, referred to as Carrier-Mediated Distribution System (CAMDIS) for fast and reliable measurement of octanol/water distribution coefficients, log D(oct). By introducing a filter support for octanol, phase separation from water is facilitated and the tendency of emulsion formation (emulsification) at the interface is reduced. A guideline for the best practice of CAMDIS is given, describing a strategy to manage drug adsorption at the filter-supported octanol/buffer interface. We validated the assay on a set of 52 structurally diverse drugs with known shake flask log D(oct) values. Excellent agreement with literature data (r(2) = 0.996, standard error of estimate, SEE = 0.111), high reproducibility (standard deviation, SD < 0.1 log D(oct) units), minimal sample consumption (10 μL of 100 μM DMSO stock solution) and a broad analytical range (log D(oct) range = -0.5 to 4.2) make CAMDIS a valuable tool for the high-throughput assessment of log D(oct). Copyright © 2014 Elsevier B.V. All rights reserved.
Upgraded FAA Airfield Capacity Model. Volume 1. Supplemental User’s Guide
1981-02-01
SIGMAR (P4.0), cc 1-4: standard deviation, in seconds, of arrival runway occupancy time (R.O.T.). SIGMAA (F4.0), cc 5-8: standard deviation, in seconds... SIGMAC: the standard deviation of the time from departure clearance to start of roll. SIGMAR: the standard deviation of the arrival runway
Darajeh, Negisa; Idris, Azni; Fard Masoumi, Hamid Reza; Nourani, Abolfazl; Truong, Paul; Rezania, Shahabaldin
2017-05-04
Artificial neural networks (ANNs) have been widely used to solve problems because of their reliable, robust, and salient characteristics in capturing the nonlinear relationships between variables in complex systems. In this study, an ANN was applied for modeling of Chemical Oxygen Demand (COD) and biodegradable organic matter (BOD) removal from palm oil mill secondary effluent (POMSE) by a vetiver system. The independent variables, POMSE concentration, vetiver slip density, and removal time, were considered as input parameters to optimize the network, while the removal percentages of COD and BOD were selected as outputs. To determine the number of hidden layer nodes, the root mean squared error of the testing set was minimized, and the topologies of the algorithms were compared by coefficient of determination and absolute average deviation. The comparison indicated that the quick propagation (QP) algorithm had the minimum root mean squared error and absolute average deviation, and the maximum coefficient of determination. The importance values of the variables were 42.41% for vetiver slip density, 29.8% for time, and 27.79% for POMSE concentration, showing that none of them is negligible. The results show that the ANN has great potential for predicting COD and BOD removal from POMSE, with a residual standard error (RSE) of less than 0.45%.
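As a rough illustration of the modeling strategy described (three inputs, two removal percentages as outputs, evaluated with root mean squared error, absolute average deviation, and the coefficient of determination), the sketch below trains a small multilayer perceptron with scikit-learn on synthetic data. The network size, training algorithm, and data are placeholder assumptions; the study used quick propagation and its own measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
# Hypothetical inputs: POMSE concentration, vetiver slip density, removal time
X = rng.uniform([50, 10, 1], [400, 60, 30], size=(200, 3))
# Hypothetical removal percentages (COD, BOD) with noise, clipped to 0-100%
y = np.column_stack([
    90 - 0.05 * X[:, 0] + 0.3 * X[:, 1] + 0.6 * X[:, 2],
    85 - 0.04 * X[:, 0] + 0.3 * X[:, 1] + 0.5 * X[:, 2],
]) + rng.normal(0, 2, (200, 2))
y = np.clip(y, 0, 100)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:150], y[:150])
pred = model.predict(X[150:])

rmse = np.sqrt(mean_squared_error(y[150:], pred))
aad = np.mean(np.abs(y[150:] - pred))          # absolute average deviation
print(f"RMSE={rmse:.2f}, AAD={aad:.2f}, R2={r2_score(y[150:], pred):.3f}")
```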
Measurement of stream channel habitat using sonar
Flug, Marshall; Seitz, Heather; Scott, John
1998-01-01
An efficient and low cost technique using a sonar system was evaluated for describing channel geometry and quantifying inundated area in a large river. The boat-mounted portable sonar equipment was used to record water depths and river width measurements for direct storage on a laptop computer. The field data collected from repeated traverses at a cross-section were evaluated to determine the precision of the system and field technique. Results from validation at two different sites showed average sample standard deviations (S.D.s) of 0.12 m for these complete cross-sections, with coefficients of variation of 10%. Validation using only the mid-channel river cross-section data yielded an average sample S.D. of 0.05 m, with a coefficient of variation below 5%, at a stable and gauged river site using only measurements of water depths greater than 0.6 m. Accuracy of the sonar system was evaluated by comparison to traditionally surveyed transect data from a regularly gauged site. We observed an average mean squared deviation of 46.0 cm2, considering only that portion of the cross-section inundated by more than 0.6 m of water. Our procedure proved to be a reliable, accurate, safe, quick, and economic method to record river depths, discharges, bed conditions, and substratum composition necessary for stream habitat studies.
A Visual Model for the Variance and Standard Deviation
ERIC Educational Resources Information Center
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
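The "average square" picture can be stated compactly. Whether the averaging uses $n$ or $n-1$ depends on the convention adopted; the population form below is only one choice:

$$s^2 \;=\; \frac{1}{n}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2, \qquad s \;=\; \sqrt{s^2}.$$

Each term $(x_i-\bar{x})^2$ is the area of a square whose side is the deviation $|x_i-\bar{x}|$; the variance is the mean of those areas, and the standard deviation is the side length of the square having that mean area.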
Genheden, Samuel
2017-10-01
We present the estimation of solvation free energies of small solutes in water, n-octanol and hexane using molecular dynamics simulations with two MARTINI models at different resolutions, viz. the coarse-grained (CG) and the hybrid all-atom/coarse-grained (AA/CG) models. From these estimates, we also calculate the water/hexane and water/octanol partition coefficients. More than 150 small, organic molecules were selected from the Minnesota solvation database and parameterized in a semi-automatic fashion. Using either the CG or hybrid AA/CG models, we find considerable deviations between the estimated and experimental solvation free energies in all solvents with mean absolute deviations larger than 10 kJ/mol, although the correlation coefficient is between 0.55 and 0.75 and significant. There is also no difference between the results when using the non-polarizable and polarizable water model, although we identify some improvements when using the polarizable model with the AA/CG solutes. In contrast to the estimated solvation energies, the estimated partition coefficients are generally excellent with both the CG and hybrid AA/CG models, giving mean absolute deviations between 0.67 and 0.90 log units and correlation coefficients larger than 0.85. We analyze the error distribution further and suggest avenues for improvements.
NASA Astrophysics Data System (ADS)
Rock, N. M. S.
ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.) (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality: Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations also can be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) also can be generated. The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess data distributions themselves.
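A few of the location and scale estimators in ROBUST's repertoire (trimmed mean, median absolute deviation, semi-interquartile range, moment-based normality statistics) can be computed directly with NumPy and SciPy. The sketch below is a generic illustration with synthetic data, not a port of the program.

```python
import numpy as np
from scipy import stats

def robust_summary(x, trim=0.1):
    """A small subset of ROBUST-style location, scale, and shape statistics."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))                  # median absolute deviation
    q1, q3 = np.percentile(x, [25, 75])
    return {
        "mean": x.mean(),
        "trimmed_mean": stats.trim_mean(x, trim),     # trim 10% from each tail
        "median": med,
        "std": x.std(ddof=1),
        "MAD": mad,
        "semi_IQR": (q3 - q1) / 2,                    # roughly the H-spread / 2
        "skewness": stats.skew(x),                    # related to sqrt(b1)
        "kurtosis_b2": stats.kurtosis(x, fisher=False),
    }

data = np.concatenate([np.random.default_rng(1).normal(10, 2, 100), [40.0]])  # one outlier
for name, value in robust_summary(data).items():
    print(f"{name:14s} {value:8.3f}")
```

Note how the median, MAD, and trimmed mean barely move in the presence of the single outlier, while the mean and standard deviation do.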
Martin, Jeffrey D.
2002-01-01
Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
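One common way to pool variability from duplicate field replicates, which appears to be the spirit of the estimates discussed here, is to combine the per-pair relative standard deviations on a degrees-of-freedom basis. The sketch below uses hypothetical replicate pairs and a generic pooling formula; it is not the NAWQA computation itself.

```python
import numpy as np

def pooled_rsd(pairs):
    """Pooled relative standard deviation (%) from duplicate replicate pairs.
    For a duplicate pair the standard deviation is |x1 - x2| / sqrt(2);
    pooling averages the squared RSDs (one degree of freedom per pair)."""
    pairs = np.asarray(pairs, dtype=float)
    sd = np.abs(pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    rsd = 100.0 * sd / pairs.mean(axis=1)
    return np.sqrt(np.mean(rsd ** 2))

# Hypothetical pesticide concentrations (µg/L) measured in replicate pairs
replicates = [(0.011, 0.013), (0.095, 0.104), (1.02, 0.97), (5.3, 5.4)]
print(f"pooled RSD = {pooled_rsd(replicates):.1f}%")
```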
Basic life support: evaluation of learning using simulation and immediate feedback devices1.
Tobase, Lucia; Peres, Heloisa Helena Ciqueto; Tomazini, Edenir Aparecida Sartorelli; Teodoro, Simone Valentim; Ramos, Meire Bruna; Polastri, Thatiane Facholi
2017-10-30
To evaluate students' learning in an online course on basic life support with immediate feedback devices, during a simulation of care during cardiorespiratory arrest. A quasi-experimental study using a before-and-after design. An online course on basic life support was developed and administered to participants as an educational intervention. Theoretical learning was evaluated by means of a pre- and post-test and, to verify practice, simulation with immediate feedback devices was used. There were 62 participants, 87% female, 90% in the first and second year of college, with a mean age of 21.47 years (standard deviation 2.39). With a 95% confidence level, the mean score was 6.4 in the pre-test (standard deviation 1.61) and 9.3 in the post-test (standard deviation 0.82, p < 0.001); in practice, the score was 9.1 (standard deviation 0.95), with performance equivalent to basic cardiopulmonary resuscitation according to the feedback device; mean duration of the compression cycle 43.7 (standard deviation 26.86), by second 20.5 (standard deviation 9.47); number of compressions 167.2 (standard deviation 57.06); depth of compressions 48.1 millimeters (standard deviation 10.49); ventilation volume 742.7 (standard deviation 301.12); flow fraction percentage 40.3 (standard deviation 10.03). The online course contributed to learning of basic life support. In view of the need for technological innovations in teaching and the systematization of cardiopulmonary resuscitation, simulation and feedback devices are resources that favor learning and performance awareness in performing the maneuvers.
Lorentz Symmetry Violations from Matter-Gravity Couplings with Lunar Laser Ranging
NASA Astrophysics Data System (ADS)
Bourgoin, A.; Le Poncin-Lafitte, C.; Hees, A.; Bouquillon, S.; Francou, G.; Angonin, M.-C.
2017-11-01
The standard-model extension (SME) is an effective field theory framework aiming at parametrizing any violation of Lorentz symmetry (LS) in all sectors of physics. In this Letter, we report the first direct experimental measurement of SME coefficients performed simultaneously within two sectors of the SME framework using lunar laser ranging observations. We consider the pure gravitational sector and the classical point-mass limit in the matter sector of the minimal SME. We report no deviation from general relativity and put new realistic stringent constraints on LS violations, improving on previous estimates by up to 3 orders of magnitude.
Income inequality, parental socioeconomic status, and birth outcomes in Japan.
Fujiwara, Takeo; Ito, Jun; Kawachi, Ichiro
2013-05-15
The purpose of this study was to investigate the impact of income inequality and parental socioeconomic status on several birth outcomes in Japan. Data were collected on birth outcomes and parental socioeconomic status by questionnaire from Japanese parents nationwide (n = 41,499) and then linked to Gini coefficients at the prefectural level in 2001. In multilevel analysis, z scores of birth weight for gestational age decreased by 0.018 (95% confidence interval (CI): -0.029, -0.006) per 1-standard-deviation (0.018-unit) increase in the Gini coefficient, while gestational age at delivery was not associated with the Gini coefficient. For dichotomous outcomes, mothers living in prefectures with middle and high Gini coefficients were 1.24 (95% CI: 1.05, 1.47) and 1.23 (95% CI: 1.02, 1.48) times more likely, respectively, to deliver a small-for-gestational-age infant than mothers living in more egalitarian prefectures (low Gini coefficients), although preterm births were not significantly associated with income distribution. Parental educational level, but not household income, was significantly associated with the z score of birth weight for gestational age and small-for-gestational-age status. Higher income inequality at the prefectural level and parental educational level, rather than household income, were associated with intrauterine growth but not with shorter gestational age at delivery.
Rosenberry, Donald O.; Stannard, David L.; Winter, Thomas C.; Martinez, Margo L.
2004-01-01
Evapotranspiration determined using the energy-budget method at a semi-permanent prairie-pothole wetland in east-central North Dakota, USA was compared with 12 other commonly used methods. The Priestley-Taylor and deBruin-Keijman methods compared best with the energy-budget values; mean differences were less than 0.1 mm d−1, and standard deviations were less than 0.3 mm d−1. Both methods require measurement of air temperature, net radiation, and heat storage in the wetland water. The Penman, Jensen-Haise, and Brutsaert-Stricker methods provided the next-best values for evapotranspiration relative to the energy-budget method. The mass-transfer, deBruin, and Stephens-Stewart methods provided the worst comparisons; the mass-transfer and deBruin comparisons with energy-budget values indicated a large standard deviation, and the deBruin and Stephens-Stewart comparisons indicated a large bias. The Jensen-Haise method proved to be cost effective, providing relatively accurate comparisons with the energy-budget method (mean difference=0.44 mm d−1, standard deviation=0.42 mm d−1) and requiring only measurements of air temperature and solar radiation. The Mather (Thornthwaite) method is the simplest, requiring only measurement of air temperature, and it provided values that compared relatively well with energy-budget values (mean difference=0.47 mm d−1, standard deviation=0.56 mm d−1). Modifications were made to several of the methods to make them more suitable for use in prairie wetlands. The modified Makkink, Jensen-Haise, and Stephens-Stewart methods all provided results that were nearly as close to energy-budget values as were the Priestley-Taylor and deBruin-Keijman methods, and all three of these modified methods only require measurements of air temperature and solar radiation. The modified Hamon method provided values that were within 20 percent of energy-budget values during 95 percent of the comparison periods, and it only requires measurement of air temperature. The mass-transfer coefficient, associated with the commonly used mass-transfer method, varied seasonally, with the largest values occurring during summer.
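For reference, the Priestley-Taylor estimate that compared best with the energy-budget values is commonly written in the following textbook form; this is a standard statement, not necessarily the exact parameterization used in the study:

$$\lambda E \;=\; \alpha\,\frac{\Delta}{\Delta+\gamma}\,(R_n - G), \qquad \alpha \approx 1.26,$$

where $\lambda E$ is the latent heat flux (converted to an evapotranspiration depth), $\Delta$ is the slope of the saturation vapor pressure curve at air temperature, $\gamma$ is the psychrometric constant, $R_n$ is net radiation, and $G$ is the heat stored in the wetland water, consistent with the inputs listed in the abstract (air temperature, net radiation, and heat storage).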
X-ray dual energy spectral parameter optimization for bone Calcium/Phosphorus mass ratio estimation
NASA Astrophysics Data System (ADS)
Sotiropoulou, P. I.; Fountos, G. P.; Martini, N. D.; Koukou, V. N.; Michail, C. M.; Valais, I. G.; Kandarakis, I. S.; Nikiforidis, G. C.
2015-09-01
The calcium (Ca) to phosphorus (P) bone mass ratio has been identified as an important, yet underutilized, risk factor in osteoporosis diagnosis. The purpose of this simulation study is to investigate the use of the effective or the mean mass attenuation coefficient in Ca/P mass ratio estimation with a dual-energy method. The investigation was based on minimizing the coefficient of variation of the Ca/P ratio, taken as a measure of its accuracy. Different set-ups were examined, based on the K-edge filtering technique and a single X-ray exposure. The modified X-ray output was attenuated by various Ca/P mass ratios, resulting in nine calibration points, while keeping the total bone thickness constant. The simulated data were obtained considering a photon-counting energy-discriminating detector. The standard deviation of the residuals was used to compare and evaluate the accuracy of the different dual-energy set-ups. The optimum mass attenuation coefficient for Ca/P mass ratio estimation was the effective coefficient in all the examined set-ups. The variation of the residuals between the different set-ups was not significant.
Hoche, S; Hussein, M A; Becker, T
2015-03-01
The accuracy of density, reflection coefficient, and acoustic impedance determination via the multiple reflection method was validated experimentally. The ternary system water-maltose-ethanol was used to carry out a systematic, temperature-dependent study over a wide range of densities and viscosities, aiming at application as an inline sensor in the beverage industry. The validation results of the presented method and setup show root mean square errors of 1.201E-3 g cm(-3) (±0.12%) for density, 0.515E-3 (0.15%) for the reflection coefficient, and 1.851E+3 kg s(-1) m(-2) (0.12%) for specific acoustic impedance. The results of the diffraction-corrected absorption showed an average standard deviation of only 0.12%. It was found that the absorption change shows a good correlation with concentration variations and may be useful for laboratory analysis of sufficiently pure liquids. The main part of the observed errors can be explained by the observed noise, temperature variation and the low signal resolution of 50 MHz. In particular, the poor signal-to-noise ratio of the second reflector echo was found to be a main accuracy limitation. Concerning the investigation of liquids, the unstable properties of the reference material PMMA, due to hygroscopicity, were identified as an additional, unpredictable source of uncertainty. While dimensional changes can be considered by adequate methodology, the impact of the time- and temperature-dependent water absorption on relevant reference properties like the buffer's sound velocity and density could not be considered and may explain part of the observed deviations. Copyright © 2014 Elsevier B.V. All rights reserved.
Scalar Resonant Relaxation of Stars around a Massive Black Hole
NASA Astrophysics Data System (ADS)
Bar-Or, Ben; Fouvry, Jean-Baptiste
2018-06-01
In nuclear star clusters, the potential is governed by the central massive black hole (MBH), so that stars move on nearly Keplerian orbits and the total potential is almost stationary in time. Yet, the deviations of the potential from the Keplerian one, due to the enclosed stellar mass and general relativity, will cause the stellar orbits to precess. Moreover, as a result of the finite number of stars, small deviations of the potential from spherical symmetry induce residual torques that can change the stars’ angular momentum faster than the standard two-body relaxation. The combination of these two effects drives a stochastic evolution of orbital angular momentum, a process named “resonant relaxation” (RR). Owing to recent developments in the description of the relaxation of self-gravitating systems, we can now fully describe scalar resonant relaxation (relaxation of the magnitude of the angular momentum) as a diffusion process. In this framework, the potential fluctuations due to the complex orbital motion of the stars are described by a random correlated noise with statistical properties that are fully characterized by the stars’ mean field motion. On long timescales, the cluster can be regarded as a diffusive system with diffusion coefficients that depend explicitly on the mean field stellar distribution through the properties of the noise. We show here, for the first time, how the diffusion coefficients of scalar RR, for a spherically symmetric system, can be fully calculated from first principles, without any free parameters. We also provide an open source code that evaluates these diffusion coefficients numerically.
Iutaka, Natalia A; Grochowski, Rubens A; Kasahara, Niro
2017-01-01
To evaluate the correlation between the visual field index (VFI) and both structural and functional measures of the optic disc in primary open angle glaucoma patients and suspects. In this retrospective study, 162 glaucoma patients and suspects underwent standard automated perimetry (SAP), retinography, and retinal nerve fiber layer (RNFL) measurement. The optic disc was stratified according to the vertical cup/disc ratio (C/D) and sorted by the disc damage likelihood scale (DDLS). RNFL was measured with optical coherence tomography. The VFI was correlated with the mean deviation (MD) and pattern standard deviation (PSD) obtained by SAP, and with the structural parameters, using Pearson's correlation coefficients. VFI displayed a strong correlation with MD (R = 0.959) and PSD (R = -0.744). The linear correlations between VFI and structural measures, including C/D (R = -0.179, P = 0.012), DDLS (R = -0.214, P = 0.006), and RNFL (R = 0.416, P < 0.001), were weak but statistically significant. VFI showed a strong correlation with MD and PSD but demonstrated a weak correlation with structural measures. It can possibly be used as a marker for functional impairment severity in patients with glaucoma.
Sun, Ting; Sun, Hefeng; Zhao, Feng
2017-09-01
In this work, reduced graphene oxide coated with ZnO nanocomposites was used as an efficient sorbent of dispersive solid-phase extraction and successfully applied for the extraction of organochlorine pesticides from apple juice followed by gas chromatography with mass spectrometry. Several experimental parameters affecting the extraction efficiencies, including the amount of adsorbent, extraction time, and the pH of the sample solution, as well as the type and volume of eluent solvent, were investigated and optimized. Under the optimal experimental conditions, good linearity existed in the range of 1.0-200.0 ng/mL for all the analytes with the correlation coefficients (R 2 ) ranging from 0.9964 to 0.9994. The limits of detection of the method for the compounds were 0.011-0.053 ng/mL. Good reproducibilities were acquired with relative standard deviations below 8.7% for both intraday and interday precision. The recoveries of the method were in the range of 78.1-105.8% with relative standard deviations of 3.3-6.9%. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina
2015-11-01
A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used for investigating the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting and integration to absorbed dose and BED. By introducing variabilities in these steps the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when removing a particular source. The obtained absorbed dose and BED standard deviations are approximately 6% and slightly higher if considering the root mean square error. The most important sources of variability are the compensation for partial volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.
Improved particle position accuracy from off-axis holograms using a Chebyshev model.
Öhman, Johan; Sjödahl, Mikael
2018-01-01
Side scattered light from micrometer-sized particles is recorded using an off-axis digital holographic setup. From holograms, a volume is reconstructed with information about both intensity and phase. Finding particle positions is non-trivial, since poor axial resolution elongates particles in the reconstruction. To overcome this problem, the reconstructed wavefront around a particle is used to find the axial position. The method is based on the change in the sign of the curvature around the true particle position plane. The wavefront curvature is directly linked to the phase response in the reconstruction. In this paper we propose a new method of estimating the curvature based on a parametric model. The model is based on Chebyshev polynomials and is fit to the phase anomaly and compared to a plane wave in the reconstructed volume. From the model coefficients, it is possible to find particle locations. Simulated results show increased performance in the presence of noise, compared to the use of finite difference methods. The standard deviation is decreased from 3-39 μm to 6-10 μm for varying noise levels. Experimental results show a corresponding improvement where the standard deviation is decreased from 18 μm to 13 μm.
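The curvature-based localization described, fitting a smooth parametric model to the axial phase profile and looking for the sign change of its second derivative, can be sketched with NumPy's Chebyshev polynomials. The data, polynomial degree, and detection rule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def axial_position_from_phase(z, phase, degree=8):
    """Fit a Chebyshev polynomial to the axial phase-anomaly profile and
    return the z positions where the curvature (2nd derivative) changes sign."""
    fit = C.Chebyshev.fit(z, phase, degree)
    curvature = fit.deriv(2)
    roots = curvature.roots()
    real = roots[np.isreal(roots)].real
    return real[(real > z.min()) & (real < z.max())]

# Hypothetical phase anomaly with an inflection near the true particle plane z0
z = np.linspace(-200e-6, 200e-6, 401)              # axial positions (m)
z0 = 35e-6
phase = np.arctan((z - z0) / 40e-6) + np.random.default_rng(0).normal(0, 0.01, z.size)

candidates = axial_position_from_phase(z, phase)
print("candidate particle plane(s) [µm]:", np.round(candidates * 1e6, 1))
```

Fitting a smooth global model before differentiating is what reduces sensitivity to noise compared with finite differences on the raw phase samples.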
Model for threading dislocations in metamorphic tandem solar cells on GaAs (001) substrates
NASA Astrophysics Data System (ADS)
Song, Yifei; Kujofsa, Tedi; Ayers, John E.
2018-02-01
We present an approximate model for the threading dislocations in III-V heterostructures and have applied this model to study the defect behavior in metamorphic triple-junction solar cells. This model represents a new approach in which the coefficient for second-order threading dislocation annihilation and coalescence reactions is considered to be determined by the length of misfit dislocations, LMD, in the structure, and we therefore refer to it as the LMD model. On the basis of this model we have compared the average threading dislocation densities in the active layers of triple junction solar cells using linearly-graded buffers of varying thicknesses as well as S-graded (complementary error function) buffers with varying thicknesses and standard deviation parameters. We have shown that the threading dislocation densities in the active regions of metamorphic tandem solar cells depend not only on the thicknesses of the buffer layers but on their compositional grading profiles. The use of S-graded buffer layers instead of linear buffers resulted in lower threading dislocation densities. Moreover, the threading dislocation densities depended strongly on the standard deviation parameters used in the S-graded buffers, with smaller values providing lower threading dislocation densities.
Temperature and current coefficients of lasing wavelength in tunable diode laser spectroscopy.
Fukuda, M; Mishima, T; Nakayama, N; Masuda, T
2010-08-01
The factors determining the temperature and current coefficients of the lasing wavelength are investigated and discussed while monitoring CO(2)-gas absorption spectra. The diffusion rate of Joule heating from the active layer to the surrounding region is observed by monitoring the change in the junction voltage, which is a function of temperature, and the wavelength (frequency) deviation under sinusoidal current modulation. Based on the experimental results, the time interval for monitoring the wavelength after changing the ambient temperature or injected current (the scanning rate) has to be kept constant, at least to eliminate the monitoring error induced by the deviation of the lasing wavelength, since the temperature and current coefficients of the lasing wavelength differ with this rate.
Qiu, Zhong-Feng; Xi, Hong-Yan; He, Yi-Jun; Chen, Jay-Chung; Jian, Wei-Jun
2006-08-01
To support research on the detection and forecasting of red tides and thereby reduce losses, a semi-analytic algorithm to retrieve chlorophyll-a concentrations was established for an area where red tides often break out, according to data collected during a red tide cruise in the East China Sea in April 2002. In the algorithm, empirical equations were built from coefficients derived from the in-situ data, including the optical properties of the research area. The in-situ data were used to validate the algorithm. The discrepancies in chlorophyll-a absorption coefficients and concentrations are mostly within about 30%. The root mean deviation of the chlorophyll-a concentrations between the observed and the calculated values is 0.24, the maximum relative deviation 40.93%, the mean relative deviation 18.83% and the correlation coefficient 0.83. The results show that the precision of the algorithm is high and that the algorithm is suitable for the research area.
Cuppo, F L S; Gómez, S L; Figueiredo Neto, A M
2004-04-01
This paper reports a systematic experimental study of the linear optical absorption coefficient of ferrofluid-doped isotropic lyotropic mixtures as a function of the magnetic-grain concentration. The linear optical absorption of ferrolyomesophases increases in a nonlinear manner with the concentration of magnetic grains, deviating from the usual Beer-Lambert law. This behavior is associated with the presence of correlated micelles in the mixture, which favors the formation of small-scale aggregates of magnetic grains (dimers) that have a higher absorption coefficient than isolated grains. We propose that the indirect heating of the micelles via the ferrofluid grains (hyperthermia) could account for this nonlinear increase of the linear optical absorption coefficient as a function of the grain concentration.
Processing of meteorological data with ultrasonic thermoanemometers
NASA Astrophysics Data System (ADS)
Telminov, A. E.; Bogushevich, A. Ya.; Korolkov, V. A.; Botygin, I. A.
2017-11-01
The article describes a software system intended to support scientific research on the atmosphere by processing data gathered by multi-level ultrasonic complexes for automated monitoring of meteorological and turbulent parameters in the atmospheric surface layer. The system processes files containing data sets of instantaneous values of temperature, the three orthogonal components of wind speed, humidity and pressure. Processing is carried out in multiple stages. During the first stage, the system executes the researcher's query for meteorological parameters. At the second stage, the system computes a series of standard statistical properties of the meteorological fields, such as averages, dispersion (variance), standard deviation, skewness and excess kurtosis coefficients, correlations, etc. The third stage prepares for computing the parameters of atmospheric turbulence. The computation results are displayed to the user and stored on a hard drive.
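The second processing stage described, standard statistical characteristics of the measured fields, is essentially a per-variable summary plus cross-correlations. A minimal sketch follows, with hypothetical column names and synthetic data standing in for the sonic-anemometer records; it is not the system's actual code.

```python
import numpy as np
import pandas as pd

def summarize(df):
    """Mean, variance, standard deviation, skewness and excess kurtosis for
    each column, plus the correlation matrix between variables."""
    summary = pd.DataFrame({
        "mean": df.mean(),
        "variance": df.var(),
        "std": df.std(),
        "skewness": df.skew(),
        "excess_kurtosis": df.kurt(),
    })
    return summary, df.corr()

# Hypothetical 20 Hz sonic-anemometer record (temperature and wind components)
rng = np.random.default_rng(0)
n = 20 * 60 * 30                      # 30 minutes at 20 Hz
df = pd.DataFrame({
    "T": 15.0 + 0.5 * rng.standard_normal(n),
    "u": 3.0 + 0.8 * rng.standard_normal(n),
    "v": 0.2 + 0.8 * rng.standard_normal(n),
    "w": 0.0 + 0.4 * rng.standard_normal(n),
})
summary, corr = summarize(df)
print(summary.round(3))
```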
Evaluation of CMIP5 twentieth century rainfall simulation over the equatorial East Africa
NASA Astrophysics Data System (ADS)
Ongoma, Victor; Chen, Haishan; Gao, Chujie
2018-02-01
This study assesses the performance of 22 Coupled Model Intercomparison Project Phase 5 (CMIP5) historical simulations of rainfall over East Africa (EA) against reanalyzed datasets during 1951-2005. The datasets were sourced from Global Precipitation Climatology Centre (GPCC) and Climate Research Unit (CRU). The metrics used to rank CMIP5 Global Circulation Models (GCMs) based on their performance in reproducing the observed rainfall include correlation coefficient, standard deviation, bias, percentage bias, root mean square error, and trend. Performances of individual models vary widely. The overall performance of the models over EA is generally low. The models reproduce the observed bimodal rainfall over EA. However, majority of them overestimate and underestimate the October-December (OND) and March-May (MAM) rainfall, respectively. The monthly (inter-annual) correlation between model and reanalyzed is high (low). More than a third of the models show a positive bias of the annual rainfall. High standard deviation in rainfall is recorded in the Lake Victoria Basin, central Kenya, and eastern Tanzania. A number of models reproduce the spatial standard deviation of rainfall during MAM season as compared to OND. The top eight models that produce rainfall over EA relatively well are as follows: CanESM2, CESM1-CAM5, CMCC-CESM, CNRM-CM5, CSIRO-Mk3-6-0, EC-EARTH, INMCM4, and MICROC5. Although these results form a fairly good basis for selection of GCMs for carrying out climate projections and downscaling over EA, it is evident that there is still need for critical improvement in rainfall-related processes in the models assessed. Therefore, climate users are advised to use the projections of rainfall from CMIP5 models over EA cautiously when making decisions on adaptation to or mitigation of climate change.
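The ranking metrics listed (correlation, standard deviation, bias, percentage bias, RMSE) are simple to compute once model and reference rainfall series are on a common grid or averaged over the region. The sketch below evaluates hypothetical model series against a hypothetical reference; the names and numbers are placeholders, not CMIP5 or GPCC/CRU data.

```python
import numpy as np

def skill_metrics(model, ref):
    """Basic model-evaluation statistics against a reference series."""
    model, ref = np.asarray(model, float), np.asarray(ref, float)
    bias = (model - ref).mean()
    return {
        "corr": np.corrcoef(model, ref)[0, 1],
        "std_ratio": model.std() / ref.std(),
        "bias": bias,
        "pbias_%": 100.0 * bias / ref.mean(),
        "rmse": np.sqrt(np.mean((model - ref) ** 2)),
    }

rng = np.random.default_rng(0)
ref = 80 + 40 * np.sin(np.linspace(0, 2 * np.pi, 12)) + rng.normal(0, 5, 12)  # mm/month
models = {"GCM-A": ref * 1.1 + rng.normal(0, 8, 12),
          "GCM-B": ref * 0.9 + rng.normal(0, 15, 12)}

for name, series in models.items():
    print(name, {k: round(v, 2) for k, v in skill_metrics(series, ref).items()})
```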
Cramer, Richard D.
2015-01-01
The possible applicability of the new template CoMFA methodology to the prediction of unknown biological affinities was explored. For twelve selected targets, all ChEMBL binding affinities were used as training and/or prediction sets, making these 3D-QSAR models the most structurally diverse and among the largest ever. For six of the targets, X-ray crystallographic structures provided the aligned templates required as input (BACE, cdk1, chk2, carbonic anhydrase-II, factor Xa, PTP1B). For all targets including the other six (hERG, cyp3A4 binding, endocrine receptor, COX2, D2, and GABAa), six modeling protocols applied to only three familiar ligands provided six alternate sets of aligned templates. The statistical qualities of the six or seven models thus resulting for each individual target were remarkably similar. Also, perhaps unexpectedly, the standard deviations of the errors of cross-validation predictions accompanying model derivations were indistinguishable from the standard deviations of the errors of truly prospective predictions. These standard deviations of prediction ranged from 0.70 to 1.14 log units and averaged 0.89 (8x in concentration units) over the twelve targets, representing an average reduction of almost 50% in uncertainty, compared to the null hypothesis of “predicting” an unknown affinity to be the average of known affinities. These errors of prediction are similar to those from Tanimoto coefficients of fragment occurrence frequencies, the predominant approach to side effect prediction, which template CoMFA can augment by identifying additional active structural classes, by improving Tanimoto-only predictions, by yielding quantitative predictions of potency, and by providing interpretable guidance for avoiding or enhancing any specific target response. PMID:26065424
NASA Astrophysics Data System (ADS)
Hast, J.; Myllylä, Risto; Sorvoja, H.; Miettinen, J.
2002-11-01
The self-mixing effect in a diode laser and the Doppler technique are used for quantitative measurements of the cardiovascular pulses from radial arteries of human individuals. 738 cardiovascular pulses from 10 healthy volunteers were studied. The Doppler spectrograms reconstructed from the Doppler signal, which is measured from the radial displacement of the radial artery, are compared to the first derivative of the blood pressure signals measured from the middle finger by the Penaz technique. The mean correlation coefficient between the Doppler spectrograms and the first derivative of the blood pressure signals was 0.84, with a standard deviation of 0.05. Pulses with the correlation coefficient less than 0.7 were neglected in the study. Percentage of successfully detected pulses was 95.7%. It is shown that cardiovascular pulse shape from the radial artery can be measured noninvasively by using the self-mixing interferometry.
Structural and High-Temperature Tensile Properties of Special Pitch-Coke Graphites
NASA Technical Reports Server (NTRS)
Kotlensky, W. V.; Martens, H. E.
1961-01-01
The room-temperature structural properties and the tensile properties up to 5000 F (2750 C) were determined for ten grades of specially prepared petroleum-coke coal-tar-pitch graphites which were graphitized at 5430 F (3000 C). One impregnation with coal-tar pitch increased the bulk density from 1.41 to 1.57 g/cm3 and the maximum strength at 4500 F (2500 C) from 4000 to 5700 psi. None of the processing parameters studied had a marked effect on the closed porosity or the X-ray structure or the per cent graphitization. The coarse-particle filler resulted in the lowest coefficient of thermal expansion and the fine-particle filler in the highest coefficient. A marked improvement in uniformity of tensile strength was observed. A standard-deviation analysis gave a one-sigma value of approximately 150 psi for one of these special grades and values of 340-420 psi for three commercial grades.
Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie
2015-01-01
We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using a generalized Gaussian density (GGD), and the similarity of two subbands is computed by the Jensen-Shannon divergence of the two GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine subbands at different frequencies. That is, the low-frequency subbands are fused using two activity measures based on the regional standard deviation and Shannon entropy, and the high-frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871
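The similarity measure described, a Jensen-Shannon divergence between two generalized Gaussian densities fitted to subband coefficients, can be approximated numerically by discretizing the two densities on a common grid. The GGD parameters and discretization below are illustrative assumptions; the paper's exact treatment may differ.

```python
import numpy as np
from scipy.stats import gennorm, entropy

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two discretized densities."""
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    return 0.5 * entropy(p, m) + 0.5 * entropy(q, m)   # entropy(p, q) is the KL divergence

# Two hypothetical generalized Gaussian densities (shape beta, scale alpha)
x = np.linspace(-5, 5, 2001)
p = gennorm.pdf(x, beta=0.8, scale=0.7)   # stands in for subband A
q = gennorm.pdf(x, beta=1.5, scale=1.0)   # stands in for subband B

print(f"JSD = {jensen_shannon(p, q):.4f}")  # 0 would mean identical subband statistics
```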
Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven
2010-09-01
To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.
NASA Technical Reports Server (NTRS)
Wang, J. R.; Shiue, J. C.; Engman, E. T.; Rusek, M.; Steinmeier, C.
1986-01-01
An experiment was conducted from an L-band SAR aboard Space Shuttle Challenger in October 1984 to study the microwave backscatter dependence on soil moisture, surface roughness, and vegetation cover. The results based on the analyses of an image obtained at 21-deg incidence angle show a positive correlation between scattering coefficient and soil moisture content, with a sensitivity comparable to that derived from the ground radar measurements reported by Ulaby et al. (1978). The surface roughness strongly affects the microwave backscatter. A factor of two change in the standard deviation of surface roughness height gives a corresponding change of about 8 dB in the scattering coefficient. The microwave backscatter also depends on the vegetation types. Under dry soil conditions, the scattering coefficient is observed to change from about -24 dB for an alfalfa or lettuce field to about -17 dB for a mature corn field. These results suggest that observations with a SAR system of multiple frequencies and polarizations are required to unravel the effects of soil moisture, surface roughness, and vegetation cover.
Vertical eddy diffusion coefficient from the LANDSAT imagery
NASA Technical Reports Server (NTRS)
Viswanadham, Y. (Principal Investigator); Torsani, J. A.
1982-01-01
Analysis of five stable cases of the smoke plumes that originated in eastern Cabo Frio (22 deg 59'S; 42 deg 02'W), Brazil using LANDSAT imagery is presented for different months and years. From these images the lateral standard deviation (sigma sub y) and the lateral eddy diffusion coefficient (K sub y) are obtained from the formula based on Taylor's theory of diffusion by continuous movements. The rate of kinetic energy dissipation (e) is evaluated from the diffusion parameters sigma sub y and K sub y. Then, the vertical diffusion coefficient (K sub z) is estimated using Weinstock's formulation. These results agree well with the previous experimental values obtained over water surfaces by various workers. Values of e and K sub z show the weaker mixing processes in the marine stable boundary layer. The data sample is apparently too small to include representative active turbulent regions because such regions are so intermittent in time and in space. These results form a data base for use in the development and validation of mesoscale atmospheric diffusion models.
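For orientation, the quantities named here are often related through the long-time limit of Taylor's theory and the commonly quoted form of Weinstock's result; the exact expressions used in the report may differ from these textbook forms:

$$\sigma_y^2(t) \;\approx\; 2\,K_y\,t \quad (t \gg T_L), \qquad K_z \;\approx\; 0.8\,\frac{\varepsilon}{N^2},$$

where $T_L$ is the Lagrangian integral time scale, $\varepsilon$ is the rate of kinetic energy dissipation, and $N$ is the Brunt-Vaisala frequency of the stably stratified layer.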
Accuracy of water displacement hand volumetry using an ethanol and water mixture.
Hargens, Alan R; Kim, Jong-Moon; Cao, Peihong
2014-02-01
The traditional water displacement method for measuring limb volume is improved by adding ethanol to water. Four solutions were tested (pure water, 0.5% ethanol, 3% ethanol, and 6% ethanol) to determine the most accurate method when measuring the volume of a known object. The 3% and 6% ethanol solutions significantly reduced (P < 0.001) the mean standard deviation of 10 measurements of a known sphere (390.1 +/- 0.25 ml) from 2.27 ml with pure water to 0.9 ml using the 3% alcohol solution and to 0.6 ml using the 6% ethanol solution (the mean coefficients of variation were reduced from 0.59% for water to 0.22% for 3% ethanol and 0.16% for 6% ethanol). The sphere's volume measured with pure water, 0.5% ethanol solution, 3% ethanol solution, and 6% ethanol solution was 383.2 +/- 2.27 ml, 384.4 +/- 1.9 ml, 389.4 +/- 0.9 ml, and 390.2 +/- 0.6 ml, respectively. Using the 3% and 6% ethanol solutions to measure hand volume blindly in 10 volunteers significantly reduced the mean coefficient of variation for hand volumetry from 0.91% for water to 0.52% for the 3% ethanol solution (P < 0.05) and to 0.46% for the 6% ethanol solution (P < 0.05). The mean standard deviation from all 10 subjects decreased from 4.2 ml for water to 2.3 ml for the 3% ethanol solution and 2.1 ml for the 6% solution. These findings document that the accuracy and reproducibility of hand volume measurements are improved by small additions of ethanol, most likely by reducing the surface tension of water.
Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L; Bakhtina, Marina M; Becker, Donald F; Bedwell, Gregory J; Bekdemir, Ahmet; Besong, Tabot M D; Birck, Catherine; Brautigam, Chad A; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B; Chaton, Catherine T; Cölfen, Helmut; Connaghan, Keith D; Crowley, Kimberly A; Curth, Ute; Daviter, Tina; Dean, William L; Díez, Ana I; Ebel, Christine; Eckert, Debra M; Eisele, Leslie E; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A; Fairman, Robert; Finn, Ron M; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E; Cifre, José G Hernández; Herr, Andrew B; Howell, Elizabeth E; Isaac, Richard S; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A; Kwon, Hyewon; Larson, Adam; Laue, Thomas M; Le Roy, Aline; Leech, Andrew P; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R; Ma, Jia; May, Carrie A; Maynard, Ernest L; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K; Park, Jin-Ku; Pawelek, Peter D; Perdue, Erby E; Perkins, Stephen J; Perugini, Matthew A; Peterson, Craig L; Peverelli, Martin G; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E; Raynal, Bertrand D E; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E; Rosenberg, Rose; Rowe, Arthur J; Rufer, Arne C; Scott, David J; Seravalli, Javier G; Solovyova, Alexandra S; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M; Streicher, Werner W; Sumida, John P; Swygert, Sarah G; Szczepanowski, Roman H; Tessmer, Ingrid; Toth, Ronald T; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F W; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M; Schuck, Peter
2015-01-01
Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.
Ardekani, Siamak; Selva, Luis; Sayre, James; Sinha, Usha
2006-11-01
Single-shot echo-planar based diffusion tensor imaging is prone to geometric and intensity distortions. Parallel imaging is a means of reducing these distortions while preserving spatial resolution. A quantitative comparison at 3 T of parallel imaging for diffusion tensor images (DTI) using k-space (generalized auto-calibrating partially parallel acquisitions; GRAPPA) and image domain (sensitivity encoding; SENSE) reconstructions at different acceleration factors, R, is reported here. Images were evaluated using 8 human subjects with repeated scans for 2 subjects to estimate reproducibility. Mutual information (MI) was used to assess the global changes in geometric distortions. The effects of parallel imaging techniques on random noise and reconstruction artifacts were evaluated by placing 26 regions of interest and computing the standard deviation of apparent diffusion coefficient and fractional anisotropy along with the error of fitting the data to the diffusion model (residual error). The larger positive values in mutual information index with increasing R values confirmed the anticipated decrease in distortions. Further, the MI index of GRAPPA sequences for a given R factor was larger than the corresponding mSENSE images. The residual error was lowest in the images acquired without parallel imaging and among the parallel reconstruction methods, the R = 2 acquisitions had the least error. The standard deviation, accuracy, and reproducibility of the apparent diffusion coefficient and fractional anisotropy in homogenous tissue regions showed that GRAPPA acquired with R = 2 had the least amount of systematic and random noise and of these, significant differences with mSENSE, R = 2 were found only for the fractional anisotropy index. Evaluation of the current implementation of parallel reconstruction algorithms identified GRAPPA acquired with R = 2 as optimal for diffusion tensor imaging.
Experimental comparison of icing cloud instruments
NASA Technical Reports Server (NTRS)
Olsen, W.; Takeuchi, D. M.; Adams, K.
1983-01-01
Icing cloud instruments were tested in the spray cloud of the Icing Research Tunnel (IRT) in order to determine their relative accuracy and their limitations over a broad range of conditions. It was found that the average of the readings from each of the liquid water content (LWC) instruments tested agreed closely with each other and with the IRT calibration, but all have a data scatter (± one standard deviation) of about ±20 percent. The effect of this ±20 percent uncertainty is probably acceptable in aero-penalty and deicer experiments. Existing laser spectrometers proved to be too inaccurate for LWC measurements. The error due to water runoff was the same for all ice accretion LWC instruments. Any given laser spectrometer proved to be highly repeatable in its indications of volume median drop size (DVM), LWC, and drop size distribution. However, there was a significant disagreement between different spectrometers of the same model, even after careful standard calibration and data analysis. The scatter about the mean of the DVM data from five Axial Scattering Spectrometer Probes was ±20 percent (± one standard deviation) and the average was 20 percent higher than the old IRT calibration. The ±20 percent uncertainty in DVM can cause an unacceptable variation in the drag coefficient of an airfoil with ice; however, the variation in a deicer performance test may be acceptable.
Evaluation of measurement uncertainty of glucose in clinical chemistry.
Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y
2007-04-01
The definition of the uncertainty of measurement used in the International Vocabulary of Basic and General Terms in Metrology (VIM) is a parameter, associated with the result of a measurement, which characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. In addition to every reported result, a measurement uncertainty value should be given by all institutions that have been accredited; this value shows the reliability of the measurement. The GUM, published by NIST, contains directions for evaluating uncertainty. Eurachem/CITAC Guide CG4 was also published by the Eurachem/CITAC Working Group in the year 2000. Both offer a mathematical model with which uncertainty can be calculated. There are two types of uncertainty evaluation in measurement: type A is the evaluation of uncertainty through statistical analysis, and type B is the evaluation of uncertainty through other means, for example, the certificate of a reference material. The Eurachem Guide uses four types of distribution functions: (1) a rectangular distribution, which gives limits without specifying a level of confidence (u(x) = a/√3) for a certificate value; (2) a triangular distribution, for values near to the same point (u(x) = a/√6); (3) a normal distribution, in which an uncertainty is given in the form of a standard deviation s, a relative standard deviation s/√n, or a coefficient of variation CV% without specifying the distribution (a = certificate value, u = standard uncertainty); and (4) a confidence interval.
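As a rough illustration of the distribution-based rules quoted above (u = a/√3 for rectangular limits, u = a/√6 for a triangular distribution, u = s for a stated standard deviation), the following minimal Python sketch combines such components by root-sum-of-squares; the numerical values are hypothetical and are not taken from the cited study.

# Minimal sketch of combining Type B standard uncertainties as described above.
# All numbers are hypothetical examples, not values from the cited work.
import math

def u_rectangular(a):
    """Half-width a of limits given with no stated confidence level."""
    return a / math.sqrt(3)

def u_triangular(a):
    """Half-width a when values cluster near the same point."""
    return a / math.sqrt(6)

def u_normal(s, n=1):
    """Standard deviation s (use n > 1 for a standard error of the mean)."""
    return s / math.sqrt(n)

# hypothetical glucose components: certificate limits, repeatability, calibration
components = [u_rectangular(0.06), u_normal(0.08, n=10), u_triangular(0.03)]
u_combined = math.sqrt(sum(u**2 for u in components))
U_expanded = 2 * u_combined          # coverage factor k = 2 (about 95% confidence)
print(f"combined u = {u_combined:.3f} mmol/L, expanded U = {U_expanded:.3f} mmol/L")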
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 1: January
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of January. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Li, Wei Bo; Höllriegl, Vera; Roth, Paul; Oeh, Uwe
2006-07-01
Intestinal absorption of strontium (Sr) in thirteen healthy adult German volunteers has been investigated by simultaneous oral and intravenous administration of two stable tracer isotopes, i.e. 84Sr and 86Sr. The measured Sr tracer concentration in plasma was analyzed using the convolution integral technique to obtain the intestinal absorption rate. The results showed that the Sr labeled in different foodstuffs was absorbed into the body fluids over a wide range. The maximum Sr absorption rates were observed within 60-120 min after administration. The rate of absorption was used to evaluate the intestinal absorption fraction, i.e. the f1 value, for the various foodstuffs. The equivalent and effective dose coefficients for ingestion of 90Sr were calculated using these f1 values, and they were compared with those recommended by the International Commission on Radiological Protection (ICRP). The geometric and arithmetic means of the f1 values are 0.38 and 0.45, associated with a geometric standard deviation of 1.88 and a standard deviation of 0.22, respectively. The 90% confidence interval of the f1 values obtained in the present study ranges from 0.13 to 0.98. Expressed as the ratio of the 95th and 50th percentiles of the estimated probability distribution, the uncertainty for the f1 value corresponds to a factor of 2.58. The effective dose coefficients of 90Sr after ingestion are 6.1 x 10^-9 Sv Bq^-1 for an f1 value of 0.05, 1.0 x 10^-8 Sv Bq^-1 for 0.1, 1.9 x 10^-8 Sv Bq^-1 for 0.2, 2.8 x 10^-8 Sv Bq^-1 for 0.3, 3.6 x 10^-8 Sv Bq^-1 for 0.4, 5.3 x 10^-8 Sv Bq^-1 for 0.6, 7.1 x 10^-8 Sv Bq^-1 for 0.8, and 7.9 x 10^-8 Sv Bq^-1 for 0.9. Taking as a reference the effective dose coefficient of 2.8 x 10^-8 Sv Bq^-1 for an f1 value of 0.3, which is recommended by the ICRP, the effective dose coefficient of 90Sr after ingestion varies by a factor of 2.8 when the f1 value changes by a factor of 3, i.e. when it decreases from 0.3 to 0.1 or increases from 0.3 to 0.9.
Fully automated contour detection of the ascending aorta in cardiac 2D phase-contrast MRI.
Codari, Marina; Scarabello, Marco; Secchi, Francesco; Sforza, Chiarella; Baselli, Giuseppe; Sardanelli, Francesco
2018-04-01
In this study we proposed a fully automated method for localizing and segmenting the ascending aortic lumen with phase-contrast magnetic resonance imaging (PC-MRI). Twenty-five phase-contrast series were randomly selected out of a large population dataset of patients whose cardiac MRI examination, performed from September 2008 to October 2013, was unremarkable. The local Ethical Committee approved this retrospective study. The ascending aorta was automatically identified on each phase of the cardiac cycle using a priori knowledge of aortic geometry. The frame that maximized the area, eccentricity, and solidity parameters was chosen for unsupervised initialization. Aortic segmentation was performed on each frame using the active contours without edges technique. The entire algorithm was developed using Matlab R2016b. To validate the proposed method, the manual segmentation performed by a highly experienced operator was used. Dice similarity coefficient, Bland-Altman analysis, and Pearson's correlation coefficient were used as performance metrics. Comparing automated and manual segmentation of the aortic lumen on 714 images, Bland-Altman analysis showed a bias of -6.68 mm^2, a coefficient of repeatability of 91.22 mm^2, a mean area measurement of 581.40 mm^2, and a reproducibility of 85%. Automated and manual segmentation were highly correlated (R = 0.98). The Dice similarity coefficient versus the manual reference standard was 94.6 ± 2.1% (mean ± standard deviation). A fully automated and robust method for identification and segmentation of the ascending aorta on PC-MRI was developed. Its application to patients with a variety of pathologic conditions is advisable. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Schramm, G.; Maus, J.; Hofheinz, F.; Petr, J.; Lougovski, A.; Beuthien-Baumann, B.; Platzek, I.; van den Hoff, J.
2014-06-01
The aim of this paper is to describe a new automatic method for compensation of metal-implant-induced segmentation errors in MR-based attenuation maps (MRMaps) and to evaluate the quantitative influence of those artifacts on the reconstructed PET activity concentration. The developed method uses a PET-based delineation of the patient contour to compensate metal-implant-caused signal voids in the MR scan that is segmented for PET attenuation correction. PET emission data of 13 patients with metal implants examined in a Philips Ingenuity PET/MR were reconstructed with the vendor-provided method for attenuation correction (MRMaporig, PETorig) and additionally with a method for attenuation correction (MRMapcor, PETcor) developed by our group. MRMaps produced by both methods were visually inspected for segmentation errors. The segmentation errors in MRMaporig were classified into four classes (L1 and L2 artifacts inside the lung and B1 and B2 artifacts inside the remaining body, depending on the assigned attenuation coefficients). The average relative SUV differences (ε_rel^av) between PETorig and PETcor of all regions showing wrong attenuation coefficients in MRMaporig were calculated. Additionally, relative SUVmean differences (ε_rel) of tracer accumulations in hot focal structures inside or in the vicinity of these regions were evaluated. MRMaporig showed erroneous attenuation coefficients inside the regions affected by metal artifacts and inside the patients' lung in all 13 cases. In MRMapcor, all regions with metal artifacts, except for the sternum, were filled with the soft-tissue attenuation coefficient and the lung was correctly segmented in all patients. MRMapcor only showed small residual segmentation errors in eight patients. ε_rel^av (mean ± standard deviation) were: (-56 ± 3)% for B1, (-43 ± 4)% for B2, (21 ± 18)% for L1, and (120 ± 47)% for L2 regions. ε_rel (mean ± standard deviation) of hot focal structures were: (-52 ± 12)% in B1, (-45 ± 13)% in B2, (19 ± 19)% in L1, and (51 ± 31)% in L2 regions. Consequently, metal-implant-induced artifacts severely disturb MR-based attenuation correction and SUV quantification in PET/MR. The developed algorithm is able to compensate for these artifacts and improves SUV quantification accuracy distinctly.
Palta, Mari; Chen, Han-Yang; Kaplan, Robert M.; Feeny, David; Cherepanov, Dasha; Fryback, Dennis
2011-01-01
Background: Standard errors of measurement (SEMs) of health related quality of life (HRQoL) indexes are not well characterized. SEM is needed to estimate responsiveness statistics and provides guidance on using indexes on the individual and group level. SEM is also a component of reliability. Purpose: To estimate the SEM of five HRQoL indexes. Design: The National Health Measurement Study (NHMS) was a population based telephone survey. The Clinical Outcomes and Measurement of Health Study (COMHS) provided repeated measures 1 and 6 months post cataract surgery. Subjects: 3844 randomly selected adults from the non-institutionalized population 35 to 89 years old in the contiguous United States and 265 cataract patients. Measurements: The SF-36v2™, QWB-SA, EQ-5D, HUI2 and HUI3 were included. An item-response theory (IRT) approach captured joint variation in indexes into a composite construct of health (theta). We estimated: (1) the test-retest standard deviation (SEM-TR) from COMHS, (2) the structural standard deviation (SEM-S) around the composite construct from NHMS and (3) corresponding reliability coefficients. Results: SEM-TR was 0.068 (SF-6D), 0.087 (QWB-SA), 0.093 (EQ-5D), 0.100 (HUI2) and 0.134 (HUI3), while SEM-S was 0.071, 0.094, 0.084, 0.074 and 0.117, respectively. These translate into reliability coefficients for SF-6D: 0.66 (COMHS) and 0.71 (NHMS), for QWB: 0.59 and 0.64, for EQ-5D: 0.61 and 0.70, for HUI2: 0.64 and 0.80, and for HUI3: 0.75 and 0.77, respectively. The SEM varied considerably across levels of health, especially for HUI2, HUI3 and EQ-5D, and was strongly influenced by ceiling effects. Limitations: Repeated measures were five months apart and estimated theta contain measurement error. Conclusions: The two types of SEM are similar and substantial for all the indexes, and vary across the range of health. PMID:20935280
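The conversion from an SEM to a reliability coefficient used above follows the standard classical-test-theory identity (a general reminder, not a formula quoted from the paper), with σ_θ the between-person standard deviation of the underlying health construct:

r = 1 - \left(\frac{\mathrm{SEM}}{\sigma_\theta}\right)^2, \qquad \text{equivalently} \qquad \mathrm{SEM} = \sigma_\theta\sqrt{1-r}.

For example, under the hypothetical assumption σ_θ ≈ 0.12, an SEM of 0.068 would imply r ≈ 1 - (0.068/0.12)^2 ≈ 0.68, the same order as the reliabilities reported above for the SF-6D.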
Comparing Standard Deviation Effects across Contexts
ERIC Educational Resources Information Center
Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.
2017-01-01
Studies using tests scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 7: July
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of July. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 10: October
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of October. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 3: March
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-11-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of March. Included are global analyses of: (1) Mean Temperature Standard Deviation; (2) Mean Geopotential Height Standard Deviation; (3) Mean Density Standard Deviation; (4) Height and Vector Standard Deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean Dew Point Standard Deviation for levels 1000 through 30 mb; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 2: February
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-09-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of February. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 4: April
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of April. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M
2010-03-29
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image, which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
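The analytical expression itself is not reproduced in the abstract, but the dependence it describes can be illustrated with a small Monte Carlo sketch; all parameter values (photon count, background level, pixel size, PSF width) are hypothetical, and this simulation is a stand-in for, not a reproduction of, the authors' formula.

# Monte Carlo illustration of the precision of a standard deviation measured from
# a pixelated, noisy single-molecule intensity profile. Parameters are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def measured_profile_sd(n_photons=2000, psf_sd=150.0, pixel=100.0,
                        background=2.0, n_pixels=15):
    edges = (np.arange(n_pixels + 1) - n_pixels / 2) * pixel        # nm
    centers = 0.5 * (edges[:-1] + edges[1:])
    # expected photon fraction per pixel for a Gaussian profile centred at zero
    frac = norm.cdf(edges[1:], scale=psf_sd) - norm.cdf(edges[:-1], scale=psf_sd)
    counts = rng.poisson(n_photons * frac) + rng.poisson(background, n_pixels)
    counts = np.clip(counts - background, 0, None)                  # mean background subtraction
    w = counts / counts.sum()
    mu = (w * centers).sum()
    return np.sqrt((w * (centers - mu) ** 2).sum())

sds = np.array([measured_profile_sd() for _ in range(2000)])
print(f"profile SD = {sds.mean():.1f} nm, measurement error (1 sigma) = {sds.std():.1f} nm")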
MR-Consistent Simultaneous Reconstruction of Attenuation and Activity for Non-TOF PET/MR
NASA Astrophysics Data System (ADS)
Heußer, Thorsten; Rank, Christopher M.; Freitag, Martin T.; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Beyer, Thomas; Kachelrieß, Marc
2016-10-01
Attenuation correction (AC) is required for accurate quantification of the reconstructed activity distribution in positron emission tomography (PET). For simultaneous PET/magnetic resonance (MR), however, AC is challenging, since the MR images do not provide direct information on the attenuating properties of the underlying tissue. Standard MR-based AC does not account for the presence of bone and thus leads to an underestimation of the activity distribution. To improve quantification for non-time-of-flight PET/MR, we propose an algorithm which simultaneously reconstructs activity and attenuation distribution from the PET emission data using available MR images as anatomical prior information. The MR information is used to derive voxel-dependent expectations on the attenuation coefficients. The expectations are modeled using Gaussian-like probability functions. An iterative reconstruction scheme incorporating the prior information on the attenuation coefficients is used to update attenuation and activity distribution in an alternating manner. We tested and evaluated the proposed algorithm for simulated 3D PET data of the head and the pelvis region. Activity deviations were below 5% in soft tissue and lesions compared to the ground truth whereas standard MR-based AC resulted in activity underestimation values of up to 12%.
Pernik, Meribeth
1987-01-01
The sensitivity of a multilayer finite-difference regional flow model was tested by changing the calibrated values for five parameters in the steady-state model and one in the transient-state model. The parameters that were changed under the steady-state condition were those that had been routinely adjusted during the calibration process as part of the effort to match pre-development potentiometric surfaces and elements of the water budget. The tested steady-state parameters include: recharge, riverbed conductance, transmissivity, confining unit leakance, and boundary location. In the transient-state model, the storage coefficient was adjusted. The sensitivity of the model to changes in the calibrated values of these parameters was evaluated with respect to the simulated response of net base flow to the rivers, and the mean value of the absolute head residual. To provide a standard measurement of sensitivity from one parameter to another, the standard deviation of the absolute head residual was calculated. The steady-state model was shown to be most sensitive to changes in rates of recharge. When the recharge rate was held constant, the model was more sensitive to variations in transmissivity. Near the rivers, the riverbed conductance becomes the dominant parameter in controlling the heads. Changes in confining unit leakance had little effect on simulated base flow, but greatly affected head residuals. The model was relatively insensitive to changes in the location of no-flow boundaries and to moderate changes in the altitude of constant head boundaries. The storage coefficient was adjusted under transient conditions to illustrate the model's sensitivity to changes in storativity. The model is less sensitive to an increase in storage coefficient than it is to a decrease in storage coefficient. As the storage coefficient decreased, the aquifer drawdown increased and the base flow decreased. The opposite response occurred when the storage coefficient was increased. (Author's abstract)
Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation
NASA Astrophysics Data System (ADS)
Lychak, Oleh V.; Holyns'kiy, Ivan S.
2016-03-01
The use of the Williams' series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is the development of a method for estimating the standard deviation of the random errors of the Williams' series parameters obtained from the measured components of the stress field. Criteria for choosing the optimal number of terms in the truncated Williams' series, so that the parameters are derived with minimal errors, are also proposed. The method was used for the evaluation of the Williams' parameters obtained from stress-field data measured by the digital image correlation technique during testing of a three-point bending specimen.
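For context, the Williams eigenfunction expansion referred to here writes the planar crack-tip stress field as a series in the distance r from the crack tip (shown in one commonly used normalization; the paper's own convention may differ):

\sigma_{ij}(r,\theta) = \sum_{n=1}^{N} A_n\, r^{\,n/2-1}\, f^{(n)}_{ij}(\theta), \qquad K_I = \sqrt{2\pi}\,A_1,

so the leading coefficient is proportional to the mode-I stress intensity factor and the n = 2 term gives the T-stress; choosing the truncation order N is exactly the bias-versus-random-error trade-off addressed above.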
Estimation of spectral distribution of sky radiance using a commercial digital camera.
Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao
2016-01-10
Methods for estimating spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating spectral responses of the camera are proposed. Spectral distribution of sky radiance is represented as a polynomial of the wavelength, with coefficients obtained from digital RGB counts by linear transformation. The spectral distribution of radiance as measured is consistent with that obtained by spectrometer and radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
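A minimal sketch of the linear-transformation idea described above, with entirely synthetic spectra and synthetic camera response curves standing in for the calibrated quantities; the polynomial order and all array shapes are assumptions, not values from the paper.

# Toy illustration: learn a linear map from RGB counts to polynomial coefficients
# of the radiance spectrum by least squares. Spectra and camera responses are
# synthetic placeholders, not the calibrated responses estimated in the paper.
import numpy as np

rng = np.random.default_rng(1)
wl = np.linspace(430, 680, 126)                      # nm, range quoted above
deg = 3                                              # assumed polynomial order

centers = rng.uniform(450, 650, (50, 1))
spectra = np.exp(-((wl - centers) / 120.0) ** 2)     # 50 toy sky spectra
cam = np.stack([np.exp(-((wl - c) / 40.0) ** 2) for c in (600.0, 540.0, 470.0)])

rgb = spectra @ cam.T                                           # simulated RGB counts (50, 3)
coef = np.polynomial.polynomial.polyfit(wl, spectra.T, deg).T   # (50, deg+1)

M, *_ = np.linalg.lstsq(rgb, coef, rcond=None)                  # RGB -> coefficient map
recon = np.polynomial.polynomial.polyval(wl, (rgb @ M).T)       # reconstructed spectra (50, 126)

rel_err = np.abs(recon - spectra) / spectra.max()
print(f"median relative reconstruction error: {np.median(rel_err):.3%}")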
NASA Astrophysics Data System (ADS)
Cabral, TS; da Silva, CNM; Potiens, MPA; Soares, CMA; Silveira, RR; Khoury, H.; Saito, V.; Fernandes, E.; Cardoso, WF; de Oliveira, HPS; Pires, MA; de Amorim, AS; Balthar, M.
2018-03-01
The results of a comparison involving 9 laboratories in Brazil are reported. The measured quantity was the air kerma in 137Cs and 60Co beams, at radiation protection levels. The comparison was conducted by the National Laboratory for Metrology of Ionizing Radiation (LNMRI/IRD) from October 2016 to March 2017. The largest deviation between the calibration coefficients was 0.8% for 137Cs and 0.7% for 60Co. This proficiency exercise demonstrated the technical capability of the Brazilian calibration network for radiation monitors, and the results were used by some of the laboratories in the implementation of the ISO/IEC 17025 standard.
Determination of the rate coefficient for the N2(+) + O reaction in the ionosphere
NASA Technical Reports Server (NTRS)
Torr, D. G.; Torr, M. R.; Orsini, N.; Hanson, W. B.; Hoffman, J. H.; Walker, J. C. G.
1977-01-01
Using approximately 400 simultaneous measurements of ion and neutral densities and temperatures, and the spectrum of the solar flux measured by the Atmosphere Explorer C satellite, we have determined the rate constant k1 for the reaction between N2(+) and O in the ionosphere for ion temperatures between 600 and 700 K. We find that k1 = 1.1 x 10^-10 cm^3 per second, with a standard deviation of ±15%. If we use the temperature dependence for this reaction determined in the laboratory, then at 300 K we find excellent agreement with the recommended laboratory value.
López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Ma; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa
2015-01-01
The Yale-Brown Obsessive-Compulsive Scale for children and adolescents (CY-BOCS) is a frequently applied test to assess obsessive-compulsive symptoms. We conducted a reliability generalization meta-analysis on the CY-BOCS to estimate the average reliability, search for reliability moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of CY-BOCS scores. A total of 47 studies reporting a reliability coefficient with the data at hand were included in the meta-analysis. The results showed good reliability and a large variability associated with the standard deviation of total scores and sample size.
Erratum: Sloan Magnitudes for the Brightest Stars
NASA Astrophysics Data System (ADS)
Mallama, A.
2018-06-01
In the article "Sloan Magnitudes for the Brightest Stars" (JAAVSO, 2014, 42, 443), Equation 3 in section A.1. of the Appendix is incorrect; the coefficient of ((R-I) - C1) should be 0.935, rather than 0.953. The mean differences between the new and old results are 0.00 in all cases, and the standard deviations are all 0.00 or 0.01, which is less than the photometric uncertainties of the Johnson or Sloan values. A revised version of the catalog has been published at https://arxiv.org/abs/1805.09324. The revision is proposed as a bright star extension to the APASS database.
NASA Technical Reports Server (NTRS)
Przybyszewski, J.
1972-01-01
Computer-processed data from low-speed (10 rpm) slipring experiments with two similar (but of opposite polarity) gallium-lubricated tantalum slipring assemblies (hemisphere against disk) carrying 50 amperes dc in vacuum (10^-9 torr) showed that the slipring assembly with the anodic hemisphere had significantly lower peak-to-peak values and standard deviations of coefficient-of-friction samples (a measure of smoothness of operation) than the slipring assembly with the cathodic hemisphere. Similar data from an experiment with the same slipring assemblies running currentless showed more random differences in the frictional behavior between the two assemblies.
NASA Astrophysics Data System (ADS)
Weingart, Robert
This thesis concerns the validation of a computational fluid dynamics simulation of a ground vehicle by means of a low-budget coast-down test. The vehicle is built to the standards of the 2014 Formula SAE rules. It is equipped with large wings at the front and rear of the car; the vertical loads on the tires are measured by specifically calibrated shock potentiometers. The coast-down test was performed on a runway of a local airport and is used to determine vehicle-specific coefficients such as drag, downforce, aerodynamic balance, and rolling resistance for different aerodynamic setups. The test results are then compared to the respective simulated results. The drag deviates by about 5% from the simulated to the measured results, and the downforce numbers show deviations of up to 18%. Moreover, a sensitivity analysis of inlet velocities, ride heights, and pitch angles was performed with the help of the computational simulation.
Toward unbiased estimations of the statefinder parameters
NASA Astrophysics Data System (ADS)
Aviles, Alejandro; Klapp, Jaime; Luongo, Orlando
2017-09-01
With the use of simulated supernova catalogs, we show that the statefinder parameters are poorly estimated, and with substantial bias, by standard cosmography. To this end, we compute their standard deviations and several bias statistics on cosmologies near the concordance model, demonstrating that these are very large, making standard cosmography unsuitable for future and wider compilations of data. To overcome this issue, we propose a new method that consists in introducing the series of the Hubble function into the luminosity distance, instead of considering the usual direct Taylor expansions of the luminosity distance. Moreover, in order to speed up the numerical computations, we estimate the coefficients of our expansions in a hierarchical manner, in which the order of the expansion depends on the redshift of every single piece of data. In addition, we propose two hybrid methods that incorporate standard cosmography at low redshifts. The methods presented here perform better than the standard approach of cosmography, both in the errors and in the bias of the estimated statefinders. We further propose a one-parameter diagnostic to reject non-viable methods in cosmography.
Air- and N2-Broadening Coefficients and Pressure-Shift Coefficients in the 12C16O2 Laser Bands
NASA Technical Reports Server (NTRS)
Devi, V. Malathy; Benner, D. Chris; Smith, Mary Ann H.; Rinsland, Curtis P.
1998-01-01
In this paper we report the pressure broadening and the pressure-induced line shift coefficients for 46 individual rovibrational lines in both the 12C16O2 00^0 1-(10^0 0-02^0 0)I and 00^0 1-(10^0 0-02^0 0)II laser bands (laser band I centered at 960.959 cm^-1 and laser band II centered at 1063.735 cm^-1), determined from spectra recorded with the McMath-Pierce Fourier transform spectrometer. The results were obtained from analysis of 10 long-path laboratory absorption spectra recorded at room temperature using a multispectrum nonlinear least-squares technique. Pressure effects caused by both air and nitrogen have been investigated. The air-broadening coefficients determined in this study agree well with the values in the 1996 HITRAN database; ratios and standard deviations of the ratios of the present air-broadening measurements to the 1996 HITRAN values for the two laser bands are 1.005(15) for laser band I and 1.005(14) for laser band II. Broadening by nitrogen is 3 to 4% larger than that by air. The pressure-induced line shift coefficients are found to be transition dependent and different for the P- and R-branch lines with the same J" value. No noticeable differences in the shift coefficients caused by air and nitrogen were found. The results obtained are compared with available values previously reported in the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sirisomboon, Panmanas; Chowbankrang, Rawiphan; Williams, Phil
2012-05-01
Near-infrared spectroscopy in diffuse reflection mode was used to evaluate the apparent viscosity of Para rubber field latex and concentrated latex over the wavelength range of 1100 to 2500 nm, using partial least squares regression (PLSR). The model with ten principal components (PCs) developed using the raw spectra accurately predicted the apparent viscosity with a correlation coefficient (r), standard error of prediction (SEP), and bias of 0.974, 8.6 cP, and -0.4 cP, respectively. The ratio of the standard deviation to the SEP (RPD) and the ratio of the range to the SEP (RER) for the prediction were 4.4 and 16.7, respectively. Therefore, the model can be used for measurement of the apparent viscosity of field latex and concentrated latex in quality assurance and process control in the factory.
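The prediction statistics quoted above follow directly from the reference and predicted values; a short sketch of how they are conventionally computed (RPD = SD of reference values / SEP, RER = range / SEP), with synthetic arrays standing in for the latex viscosity data.

# Bias, SEP, RPD and RER as commonly defined in NIR calibration work.
# The reference and predicted viscosities below are synthetic examples.
import numpy as np

y_ref = np.array([35.0, 52.0, 61.0, 78.0, 90.0, 110.0, 125.0, 140.0])   # cP
y_hat = np.array([33.5, 54.0, 59.0, 80.5, 88.0, 112.5, 123.0, 143.0])   # cP

residuals = y_hat - y_ref
bias = residuals.mean()
sep = np.sqrt(((residuals - bias) ** 2).sum() / (len(y_ref) - 1))
rpd = y_ref.std(ddof=1) / sep
rer = (y_ref.max() - y_ref.min()) / sep
r = np.corrcoef(y_ref, y_hat)[0, 1]
print(f"r = {r:.3f}, bias = {bias:.2f} cP, SEP = {sep:.2f} cP, RPD = {rpd:.1f}, RER = {rer:.1f}")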
Abazov, Victor Mukhamedovich
2016-03-25
Here, we present a measurement of the correlation between the spins of t and tbar quarks produced in proton-antiproton collisions at the Tevatron Collider at a center-of-mass energy of 1.96 TeV. We apply a matrix element technique to dilepton and single-lepton+jets final states in data accumulated with the D0 detector that correspond to an integrated luminosity of 9.7 fb^-1. The measured value of the correlation coefficient in the off-diagonal basis, O_off = 0.89 ± 0.22 (stat + syst), is in agreement with the standard model prediction, and represents evidence for a top-antitop quark spin correlation differing from zero at a level of 4.2 standard deviations.
Kolobe, Thubi H A; Bulanda, Michelle; Susman, Louisa
2004-12-01
Accurate diagnostic measures are central to early identification and intervention with infants who are at risk for developmental delays or disabilities. The purpose of this study was to examine (1) the ability of infants' Test of Infant Motor Performance (TIMP) scores at 7, 30, 60, and 90 days after term age to predict motor development at preschool age and (2) the contribution of the home environment and medical risk to the prediction. Sixty-one children from an original cohort of 90 infants, who were assessed weekly with the TIMP between 34 weeks gestational age and 4 months after term age, participated in this follow-up study. The Peabody Developmental Motor Scales, 2nd edition (PDMS-2), were administered to the children at the mean age of 57 months (SD=4.8 months). The quality and quantity of the home environment also were assessed at this age using the Early Childhood Home Observation for Measurement of the Environment (EC-HOME). Pearson product moment correlation coefficients, multiple regression, sensitivity and specificity, and positive and negative predictive values were used to assess the relationship among the TIMP, HOME, medical risk, and PDMS-2 scores. The correlation coefficients between the TIMP and PDMS-2 scores were statistically significant for all ages except 7 days. The highest correlation coefficient was at 90 days (r=.69, P=.001). The TIMP scores at 30, 60, and 90 days after term, medical risk scores, and EC-HOME scores explained 24%, 23%, and 52% of the variance in the PDMS-2 scores, respectively. The TIMP score at 90 days after term was the most significant contributor to the prediction. A TIMP cutoff score of 0.5 standard deviations below the mean correctly classified 80%, 79%, and 87% of the children, using a cutoff score of 2 standard deviations below the mean on the PDMS-2, at 30, 60, and 90 days, respectively. The results compare favorably with those of developmental tests administered to infants at 6 months of age or older. These findings underscore the need for age-specific test values and developmental surveillance of infants before making referrals.
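The classification statistics reported above (sensitivity, specificity, and predictive values obtained by applying a cutoff to each test) come from a simple 2 x 2 table; a generic sketch with hypothetical score arrays follows, using the cutoffs named in the abstract.

# Generic 2x2-table statistics for a screening cutoff, as used above
# (TIMP cutoff of -0.5 SD versus PDMS-2 cutoff of -2 SD). Scores are hypothetical.
import numpy as np

timp_z = np.array([-1.2, -0.8, 0.3, -0.6, 1.1, -0.2, -1.5, 0.9, -0.7, 0.4])
pdms_z = np.array([-2.5, -2.1, -0.5, -1.0, 0.8, -2.3, -2.8, 1.2, -0.9, 0.1])

test_pos = timp_z <= -0.5          # screened positive on the TIMP
delayed = pdms_z <= -2.0           # delayed on the PDMS-2 (reference standard)

tp = np.sum(test_pos & delayed)
fp = np.sum(test_pos & ~delayed)
fn = np.sum(~test_pos & delayed)
tn = np.sum(~test_pos & ~delayed)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"Se = {sensitivity:.2f}, Sp = {specificity:.2f}, PPV = {ppv:.2f}, NPV = {npv:.2f}")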
Kirov, Ivan I; George, Ilena C; Jayawickrama, Nikhil; Babb, James S; Perry, Nissa N; Gonen, Oded
2012-01-01
The longitudinal repeatability of proton MR spectroscopy (1H-MRS) in the healthy human brain at high fields over long periods is not established. Therefore, we assessed the inter- and intra-subject repeatability of 1H-MRS in an approach suited for diffuse pathologies in 10 individuals, at 3 T, annually for 3 years. Spectra from 480 voxels over 360 cm^3 (∼30%) of the brain were individually phased, frequency-aligned, and summed into one average spectrum. This dramatically increases metabolites' signal-to-noise ratios while maintaining narrow linewidths that improve quantification precision. The resulting concentrations of N-acetylaspartate, creatine, choline, and myo-inositol are 8.9 ± 0.8, 5.9 ± 0.6, 1.4 ± 0.1, and 4.5 ± 0.5 mM (mean ± standard deviation). The inter-subject coefficients of variation are 8.7%, 10.2%, 10.7%, and 11.8%; and the longitudinal (intra-subject) coefficients of variation are lower still: 6.6%, 6.8%, 6.8%, and 10%, much better than the 35%, 44%, 55%, and 62% intra-voxel coefficients of variation. The biological and nonbiological components of the summed-spectra coefficients of variation had similar contributions to the overall variance. Copyright © 2011 Wiley-Liss, Inc.
Liu, Cong; Kolarik, Barbara; Gunnarsen, Lars; Zhang, Yinping
2015-10-20
Polychlorinated biphenyls (PCBs) have been found to be persistent in the environment and possibly harmful. Many buildings are characterized by high PCB concentrations. Knowledge about partitioning between primary sources and building materials is critical for exposure assessment and practical remediation of PCB contamination. This study develops a C-depth method to determine the diffusion coefficient (D) and partition coefficient (K), two key parameters governing the partitioning process. For concrete, a primary material studied here, relative standard deviations of results among five data sets are 5%-22% for K and 42-66% for D. Compared with existing methods, the C-depth method overcomes the inability of nonlinear regression to yield unique estimates and does not require assumed correlations for D and K among congeners. Comparison with a more sophisticated two-term approach implies significant uncertainty for D, and smaller uncertainty for K. However, considering the uncertainties associated with sampling and chemical analysis, and the impact of environmental factors, the results are acceptable for engineering applications. This was supported by good agreement between model prediction and measurement. Sensitivity analysis indicated that effective diffusion distance, contact time of materials with primary sources, and depth of measured concentrations are critical for determining D, and that PCB concentration in primary sources is critical for K.
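The specifics of the C-depth method are not given in the abstract. As a generic point of comparison only, D and K can also be estimated by nonlinear least squares against the standard solution for diffusion from a constant-concentration source into a semi-infinite slab, C(x, t) = K·C_air·erfc(x / (2√(Dt))). The sketch below uses that generic model with entirely hypothetical numbers; it is not the paper's procedure.

# Generic least-squares estimation of a diffusion coefficient D and partition
# coefficient K from a concentration-depth profile, assuming the semi-infinite
# erfc solution. All numbers are hypothetical, for illustration only.
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

t = 30 * 365.25 * 24 * 3600            # assumed 30 years of contact, in seconds
c_air = 1.0e-6                         # assumed gas-phase PCB concentration (arbitrary units)

def profile(x_cm, D_cm2_s, K):
    return K * c_air * erfc(x_cm / (2.0 * np.sqrt(D_cm2_s * t)))

depth = np.array([0.2, 0.5, 1.0, 2.0, 3.0, 4.0])          # cm
conc = profile(depth, 2.0e-8, 5.0e4) * (1 + 0.1 * np.random.default_rng(2).normal(size=6))

(D_fit, K_fit), _ = curve_fit(profile, depth, conc, p0=(1e-8, 1e4))
print(f"D = {D_fit:.2e} cm^2/s, K = {K_fit:.2e}")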
Exploring Students' Conceptions of the Standard Deviation
ERIC Educational Resources Information Center
delMas, Robert; Liu, Yan
2005-01-01
This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2012 CFR
2012-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2014 CFR
2014-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2011 CFR
2011-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2013 CFR
2013-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation
ERIC Educational Resources Information Center
Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann
2017-01-01
This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2010 CFR
2010-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.6 - Tolerances for moisture meters.
Code of Federal Regulations, 2010 CFR
2010-01-01
... moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat Mid ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat High ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat...
Lack of transferability between two automated immunoassays for serum IGF-I measurement.
Gomez-Gomez, Carolina; Iglesias, Eva M; Barallat, Jaume; Moreno, Fernando; Biosca, Carme; Pastor, Mari-Cruz; Granada, Maria-Luisa
2014-01-01
IGF-I is a clinically relevant protein in the diagnosis and monitoring of treatment of growth disorders. The Growth Hormone Research Society and the International IGF Research Society have encouraged the adoption of a universal calibration for immunoassays to improve standardization of IGF-I measurements, but currently commercial assays are calibrated either against the old WHO IRR 87/518 or the new WHO 02/254. We compared two IGF-I immunochemiluminescent assays, IMMULITE® 2000 (Siemens) and LIAISON® (DiaSorin), which differ in their standardization, and verified their precision according to quality specifications based on biological variation and their linear range. 62 patient serum samples were analyzed with both assays and compared according to the Clinical and Laboratory Standards Institute (CLSI) standard EP9-A2-IR. Precision was verified according to CLSI EP15-A2. The optimal coefficient of variation (CVo) and desirable coefficient of variation (CVd) for IGF-I assays were calculated as quality specifications based on biological variability, in order to assess whether the interassay analytical CV (CVa) of the two methods was appropriate. Two dilution series using the 1st WHO International Standard (WHO IS) for IGF-I 02/254 were used to verify and compare the linearity range. The regression analysis showed constant and proportional differences between assays for serum samples (slope b = 0.8115 (95% CI: 0.7575-0.8556); intercept a = 33.6873 (95% CI: 23.3613-44.0133)) and similar proportional differences between assays for the WHO IS 02/254 standard dilution series (slope b = 0.8024 (95% CI: 0.7560-0.8616); intercept a = 6.9623 (95% CI: -2.0819-18.4383)). Within-laboratory coefficients of variation for low and high levels were 2.82% and 3.80% for IMMULITE® 2000 and 3.58% and 2.14% for LIAISON®, respectively. IGF-I concentrations measured by the two assays are not transferable. The results emphasize the need to express IGF-I concentrations as a standard deviation score (SDS) according to a matched normal population of the same age and gender. Within-laboratory precision of both methods met quality specifications derived from biological variation.
Matsudaira, Ko; Oka, Hiroyuki; Kikuchi, Norimasa; Haga, Yuri; Sawada, Takayuki; Tanaka, Sakae
2016-01-01
The STarT Back Tool uses prognostic indicators to classify patients with low back pain into three risk groups to guide early secondary prevention in primary care. The present study aimed to evaluate the psychometric properties of the Japanese version of the tool (STarT-J). An online survey was conducted among Japanese patients with low back pain aged 20-64 years. Reliability was assessed by examining the internal consistency of the overall and psychosocial subscales using Cronbach's alpha coefficients. Spearman's correlation coefficients were used to evaluate the concurrent validity between the STarT-J total score/psychosocial subscore and standard reference questionnaires. Discriminant validity was evaluated by calculating the areas under the curve (AUCs) for the total and psychosocial subscale scores against standard reference cases. Known-groups validity was assessed by examining the relationship between low back pain-related disability and STarT-J scores. The analysis included data for 2000 Japanese patients with low back pain; the mean (standard deviation [SD]) age was 47.7 (9.3) years, and 54.1% were male. The mean (SD) STarT-J score was 2.2 (2.1). The Cronbach's alpha coefficient was 0.75 for the overall scale and 0.66 for the psychosocial subscale. Spearman's correlation coefficients ranged from 0.30 to 0.59, demonstrating moderate to strong concurrent validity. The AUCs for the total score ranged from 0.65 to 0.83, mostly demonstrating acceptable discriminative ability. For known-groups validity, participants with more somatic symptoms had higher total scores. Those in higher STarT-J risk groups had experienced more low back pain-related absences. The overall STarT-J scale was internally consistent and had acceptable concurrent, discriminant, and known-groups validity. The STarT-J can be used with Japanese patients with low back pain.
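Cronbach's alpha, used above for internal consistency, is computed from the item variances and the total-score variance; a compact sketch with a hypothetical item-response matrix (not STarT-J data) follows.

# Cronbach's alpha for a k-item scale: alpha = k/(k-1) * (1 - sum of item variances
# / variance of the total score). The 5-item response matrix below is hypothetical.
import numpy as np

X = np.array([[1, 0, 1, 1, 0],
              [1, 1, 1, 1, 1],
              [0, 0, 1, 0, 0],
              [1, 1, 0, 1, 1],
              [0, 0, 0, 1, 0],
              [1, 1, 1, 0, 1]], dtype=float)     # respondents x items

k = X.shape[1]
item_var = X.var(axis=0, ddof=1).sum()
total_var = X.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")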
Hou, Siyuan; Riley, Christopher B; Mitchell, Cynthia A; Shaw, R Anthony; Bryanton, Janet; Bigsby, Kathryn; McClure, J Trenton
2015-09-01
Immunoglobulin G (IgG) is crucial for the protection of the host from invasive pathogens. Due to its importance for human health, tools that enable the monitoring of IgG levels are highly desired. Consequently there is a need for methods to determine the IgG concentration that are simple, rapid, and inexpensive. This work explored the potential of attenuated total reflectance (ATR) infrared spectroscopy as a method to determine IgG concentrations in human serum samples. Venous blood samples were collected from adults and children, and from the umbilical cord of newborns. The serum was harvested and tested using ATR infrared spectroscopy. Partial least squares (PLS) regression provided the basis to develop the new analytical methods. Three PLS calibrations were determined: one for the combined set of the venous and umbilical cord serum samples, the second for only the umbilical cord samples, and the third for only the venous samples. The number of PLS factors was chosen by critical evaluation of Monte Carlo-based cross validation results. The predictive performance of each PLS calibration was evaluated using the Pearson correlation coefficient, scatter plot and Bland-Altman plot, and percent deviations for independent prediction sets. The repeatability was evaluated by standard deviation and relative standard deviation. The results showed that ATR infrared spectroscopy is potentially a simple, quick, and inexpensive method to measure IgG concentrations in human serum samples. The results also showed that it is possible to build a combined calibration curve for the umbilical cord and the venous samples. Copyright © 2015 Elsevier B.V. All rights reserved.
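A minimal sketch of the modelling pipeline described above (PLS regression with the number of factors chosen by repeated random train/test splits), using scikit-learn and synthetic data in place of the ATR spectra and IgG concentrations; the component range and split fraction are assumptions.

# PLS calibration with Monte Carlo cross-validation to pick the number of factors,
# in the spirit of the approach described above. Data are synthetic stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 120, 300
latent = rng.normal(size=(n_samples, 4))
X = latent @ rng.normal(size=(4, n_wavenumbers)) + 0.05 * rng.normal(size=(n_samples, n_wavenumbers))
y = latent @ np.array([8.0, -3.0, 1.5, 0.5]) + rng.normal(scale=0.5, size=n_samples)

splitter = ShuffleSplit(n_splits=50, test_size=0.3, random_state=0)   # Monte Carlo CV
rmse = {}
for n_comp in range(1, 9):
    errs = []
    for train, test in splitter.split(X):
        model = PLSRegression(n_components=n_comp).fit(X[train], y[train])
        errs.append(mean_squared_error(y[test], model.predict(X[test])) ** 0.5)
    rmse[n_comp] = np.mean(errs)

best = min(rmse, key=rmse.get)
print(f"chosen number of PLS factors: {best}, CV RMSE = {rmse[best]:.2f}")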
Exploring extended scalar sectors with di-Higgs signals: a Higgs EFT perspective
NASA Astrophysics Data System (ADS)
Corbett, Tyler; Joglekar, Aniket; Li, Hao-Lin; Yu, Jiang-Hao
2018-05-01
We consider extended scalar sectors of the Standard Model as ultraviolet complete motivations for studying the effective Higgs self-interaction operators of the Standard Model effective field theory. We investigate all motivated heavy scalar models which generate the dimension-six effective operator |H|^6 at tree level and proceed to identify the full set of tree-level dimension-six operators by integrating out the heavy scalars. Of seven models which generate |H|^6 at tree level only two, quadruplets of hypercharge Y = 3Y_H and Y = Y_H, generate only this operator. Next we perform global fits to constrain the relevant Wilson coefficients from the LHC single Higgs measurements as well as the electroweak oblique parameters S and T. We find that the T parameter puts very strong constraints on the Wilson coefficient of the |H|^6 operator in the triplet and quadruplet models, while the singlet and doublet models could still have Higgs self-couplings which deviate significantly from the Standard Model prediction. To determine the extent to which the |H|^6 operator could be constrained, we study the di-Higgs signatures at the future 100 TeV collider and explore the future sensitivity to this operator. Projected onto the Higgs potential parameters of the extended scalar sectors, with 30 ab^-1 of luminosity we will be able to explore the Higgs potential parameters in all seven models.
Using 3 Tesla magnetic resonance imaging in the pre-operative evaluation of tongue carcinoma.
Moreno, K F; Cornelius, R S; Lucas, F V; Meinzen-Derr, J; Patil, Y J
2017-09-01
This study aimed to evaluate the role of 3 Tesla magnetic resonance imaging in predicting tongue tumour thickness via direct and reconstructed measures, and their correlations with corresponding histological measures, nodal metastasis and extracapsular spread. A prospective study was conducted of 25 patients with histologically proven squamous cell carcinoma of the tongue and pre-operative 3 Tesla magnetic resonance imaging from 2009 to 2012. Correlations between 3 Tesla magnetic resonance imaging and histological measures of tongue tumour thickness were assessed using the Pearson correlation coefficient: r values were 0.84 (p < 0.0001) and 0.81 (p < 0.0001) for direct and reconstructed measurements, respectively. For magnetic resonance imaging, direct measures of tumour thickness (mean ± standard deviation, 18.2 ± 7.3 mm) did not significantly differ from the reconstructed measures (mean ± standard deviation, 17.9 ± 7.2 mm; r = 0.879). Moreover, 3 Tesla magnetic resonance imaging had 83 per cent sensitivity, 82 per cent specificity, 82 per cent accuracy and a 90 per cent negative predictive value for detecting cervical lymph node metastasis. In this cohort, 3 Tesla magnetic resonance imaging measures of tumour thickness correlated highly with the corresponding histological measures. Further, 3 Tesla magnetic resonance imaging was an effective method of detecting malignant adenopathy with extracapsular spread.
An Overview of Interrater Agreement on Likert Scales for Researchers and Practitioners
O'Neill, Thomas A.
2017-01-01
Applications of interrater agreement (IRA) statistics for Likert scales are plentiful in research and practice. IRA may be implicated in job analysis, performance appraisal, panel interviews, and any other approach to gathering systematic observations. Any rating system involving subject-matter experts can also benefit from IRA as a measure of consensus. Further, IRA is fundamental to aggregation in multilevel research, which is becoming increasingly common in order to address nesting. Although several technical descriptions of a few specific IRA statistics exist, this paper aims to provide a tractable orientation to common IRA indices to support application. The introductory overview is written with the intent of facilitating contrasts among IRA statistics by critically reviewing equations, interpretations, strengths, and weaknesses. Statistics considered include rwg, rwg*, r′wg, rwg(p), average deviation (AD), awg, standard deviation (Swg), and the coefficient of variation (CVwg). Equations support quick calculation and contrasting of different agreement indices. The article also includes a “quick reference” table and three figures in order to help readers identify how IRA statistics differ and how interpretations of IRA will depend strongly on the statistic employed. A brief consideration of recommended practices involving statistical and practical cutoff standards is presented, and conclusions are offered in light of the current literature. PMID:28553257
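For orientation, several of the indices listed above can be computed in a few lines; the sketch below uses the usual definitions (rwg against a uniform null distribution, average deviation about the mean, within-group standard deviation, and the within-group coefficient of variation) with a hypothetical set of ratings on a 5-point scale. It is a generic illustration, not code from the article.

# Common interrater-agreement indices for a single item rated on an A-point
# Likert scale by one group of judges. The ratings below are hypothetical.
import numpy as np

ratings = np.array([4, 4, 5, 3, 4, 4, 5, 4])   # one group of judges, 5-point scale
A = 5

s2 = ratings.var(ddof=1)                 # observed variance of the ratings
sigma2_eu = (A**2 - 1) / 12.0            # expected variance of a uniform (random) null
rwg = 1.0 - s2 / sigma2_eu               # James, Demaree and Wolf's rwg
ad_mean = np.abs(ratings - ratings.mean()).mean()   # average deviation about the mean
swg = ratings.std(ddof=1)                # within-group standard deviation
cvwg = swg / ratings.mean()              # within-group coefficient of variation

print(f"rwg = {rwg:.2f}, AD = {ad_mean:.2f}, Swg = {swg:.2f}, CVwg = {cvwg:.2f}")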
Ginat, Daniel T; Mangla, Rajiv; Yeaney, Gabrielle; Schaefer, Pamela W; Wang, Henry
2012-08-01
To determine whether there is a correlation between vascular endothelial growth factor (VEGF) expression and cerebral blood volume (CBV) measurements in dynamic contrast-enhanced susceptibility perfusion magnetic resonance imaging (MRI) and to correlate the perfusion characteristics in high- versus low-grade meningiomas. A total of 48 (24 high-grade and 24 low-grade) meningiomas with available dynamic susceptibility-weighted MRI were retrospectively reviewed for maximum CBV and semiquantitative VEGF immunoreactivity. Correlation between normalized CBV and VEGF was made using the Spearman rank test and comparison between CBV in high- versus low-grade meningiomas was made using the Wilcoxon test. There was a significant (P = .01) correlation between normalized maximum CBV and VEGF scores with a Spearman correlation coefficient of 0.37. In addition, there was a significant (P < .01) difference in normalized maximum CBV ratios between high-grade meningiomas (mean 12.6; standard deviation 5.2) and low-grade meningiomas (mean 8.2; standard deviation 5.2). The data suggest that CBV accurately reflects VEGF expression and tumor grade in meningiomas. Perfusion-weighted MRI can potentially serve as a useful biomarker for meningiomas, pending prospective studies. Copyright © 2012 AUR. Published by Elsevier Inc. All rights reserved.
Alves, Vera; Gonçalves, João; Conceição, Carlota; Teixeira, Helena M; Câmara, José S
2015-08-21
A powerful and sensitive method, based on microextraction by packed sorbent (MEPS) and ultra-high performance liquid chromatography (UHPLC) with photodiode array (PDA) detection, is described for the determination of fluoxetine, clomipramine and their active metabolites in human urine samples. The MEPS variables, such as sample volume, pH, number of extraction cycles (draw-eject), and desorption conditions (solvent and solvent volume of elution), were optimized. The analyses were carried out using small sample volumes (500 μL) and in a short time period (5 min for the entire sample preparation step). Good linearity was obtained for all antidepressants, with correlation coefficients (R^2) above 0.9965. The limits of detection (LOD) ranged from 0.068 to 0.087 μg mL^-1. The recoveries were from 93% to 98%, with relative standard deviations less than 6%. The inter-day precision, expressed as the relative standard deviation, varied between 3.8% and 8.5%, while the intra-day precision varied between 3.0% and 7.1%. In order to evaluate the proposed method for clinical use, the MEPS/UHPLC-PDA method was applied to the analysis of urine samples from depressed patients. Copyright © 2015 Elsevier B.V. All rights reserved.
Yang, Pan; Peng, Yulan; Zhao, Haina; Luo, Honghao; Jin, Ya; He, Yushuang
2015-01-01
Static shear wave elastography (SWE) is used to detect breast lesions, but slice and plane selections result in discrepancies. To evaluate the intraobserver reproducibility of continuous SWE, and whether quantitative elasticities in orthogonal planes perform better in the differential diagnosis of breast lesions. One hundred and twenty-two breast lesions scheduled for ultrasound-guided biopsy were recruited. Continuous SWE scans were conducted in orthogonal planes separately. Quantitative elasticities and histopathology results were collected. Reproducibility in the same plane and diagnostic performance in different planes were evaluated. The maximum and mean elasticities of the hardest portion, and the standard deviation of the whole lesion, had high intraclass correlation coefficients (0.87 to 0.95) and large areas under the receiver operating characteristic curve (0.887 to 0.899). Without loss of accuracy, sensitivities increased in orthogonal planes compared with a single plane (from 73.17% up to 82.93% at most). Mean elasticity of the whole lesion and the lesion-to-parenchyma ratio were significantly less reproducible and less accurate. Continuous SWE is highly reproducible for the same observer. The maximum and mean elasticities of the hardest portion and the standard deviation of the whole lesion are most reliable. Furthermore, the sensitivities of the three parameters are improved in orthogonal planes without loss of accuracy.
Nakanishi, Masaki; Wang, Yu-Te; Jung, Tzyy-Ping; Zao, John K; Chien, Yu-Yi; Diniz-Filho, Alberto; Daga, Fabio B; Lin, Yuan-Pin; Wang, Yijun; Medeiros, Felipe A
2017-06-01
The current assessment of visual field loss in diseases such as glaucoma is affected by the subjectivity of patient responses and the lack of portability of standard perimeters. To describe the development and initial validation of a portable brain-computer interface (BCI) for objectively assessing visual function loss. This case-control study involved 62 eyes of 33 patients with glaucoma and 30 eyes of 17 healthy participants. Glaucoma was diagnosed based on a masked grading of optic disc stereophotographs. All participants underwent testing with a BCI device and standard automated perimetry (SAP) within 3 months. The BCI device integrates wearable, wireless, dry electroencephalogram and electrooculogram systems and a cellphone-based head-mounted display to enable the detection of multifocal steady state visual-evoked potentials associated with visual field stimulation. The performances of global and sectoral multifocal steady state visual-evoked potentials metrics to discriminate glaucomatous from healthy eyes were compared with global and sectoral SAP parameters. The repeatability of the BCI device measurements was assessed by collecting results of repeated testing in 20 eyes of 10 participants with glaucoma for 3 sessions of measurements separated by weekly intervals. Receiver operating characteristic curves summarizing diagnostic accuracy. Intraclass correlation coefficients and coefficients of variation for assessing repeatability. Among the 33 participants with glaucoma, 19 (58%) were white, 12 (36%) were black, and 2 (6%) were Asian, while among the 17 participants with healthy eyes, 9 (53%) were white, 8 (47%) were black, and none were Asian. The receiver operating characteristic curve area for the global BCI multifocal steady state visual-evoked potentials parameter was 0.92 (95% CI, 0.86-0.96), which was larger than for SAP mean deviation (area under the curve, 0.81; 95% CI, 0.72-0.90), SAP mean sensitivity (area under the curve, 0.80; 95% CI, 0.69-0.88; P = .03), and SAP pattern standard deviation (area under the curve, 0.77; 95% CI, 0.66-0.87; P = .01). No statistically significant differences were seen for the sectoral measurements between the BCI and SAP. Intraclass coefficients for global and sectoral parameters ranged from 0.74 to 0.92, and mean coefficients of variation ranged from 3.03% to 7.45%. The BCI device may be useful for assessing the electrical brain responses associated with visual field stimulation. The device discriminated eyes with glaucomatous neuropathy from healthy eyes in a clinically based setting. Further studies should investigate the feasibility of the BCI device for home-based testing as well as for detecting visual function loss over time.
Davis, William E; Li, Yongtao
2008-07-15
A new isotope dilution gas chromatography/chemical ionization/tandem mass spectrometric method was developed for the analysis of carcinogenic hydrazine in drinking water. The sample preparation was performed by using the optimized derivatization and multiple liquid-liquid extraction techniques. Using the direct aqueous-phase derivatization with acetone, hydrazine and isotopically labeled hydrazine-(15)N2 used as the surrogate standard formed acetone azine and acetone azine-(15)N2, respectively. These derivatives were then extracted with dichloromethane. Prior to analysis using methanol as the chemical ionization reagent gas, the extract was dried with anhydrous sodium sulfate, concentrated through evaporation, and then fortified with isotopically labeled N-nitrosodimethylamine-d6 used as the internal standard to quantify the extracted acetone azine-(15)N2. The extracted acetone azine was quantified against the extracted acetone azine-(15)N2. The isotope dilution standard calibration curve resulted in a linear regression correlation coefficient (R) of 0.999. The obtained method detection limit was 0.70 ng/L for hydrazine in reagent water samples, fortified at a concentration of 1.0 ng/L. For reagent water samples fortified at a concentration of 20.0 ng/L, the mean recoveries were 102% with a relative standard deviation of 13.7% for hydrazine and 106% with a relative standard deviation of 12.5% for hydrazine-(15)N2. Hydrazine at 0.5-2.6 ng/L was detected in 7 out of 13 chloraminated drinking water samples but was not detected in the rest of the chloraminated drinking water samples and the studied chlorinated drinking water sample.
Akata, Takashi; Setoguchi, Hidekazu; Shirozu, Kazuhiro; Yoshino, Jun
2007-06-01
It is essential to estimate the brain temperature of patients during deliberate deep hypothermia. Using jugular bulb temperature as a standard for brain temperature, we evaluated the accuracy and precision of 5 standard temperature monitoring sites (ie, pulmonary artery, nasopharynx, forehead deep-tissue, urinary bladder, and fingertip skin-surface tissue) during deep hypothermic cardiopulmonary bypass conducted for thoracic aortic reconstruction. In 20 adult patients with thoracic aortic aneurysms, temperatures at the 5 monitoring sites were recorded every minute during deep hypothermic (<20 degrees C) cardiopulmonary bypass. The accuracy was evaluated by the difference from jugular bulb temperature, and the precision was evaluated by its standard deviation, as well as by the correlation with jugular bulb temperature. Pulmonary artery temperature and jugular bulb temperature began to change immediately after the start of cooling or rewarming, closely matching each other, and the other temperatures lagged behind these two temperatures. During both cooling and rewarming, the accuracy of pulmonary artery temperature measurement (0.3 degrees C-0.5 degrees C) was far superior to that of the other measurements, and its precision (standard deviation of the difference from jugular bulb temperature = 1.5 degrees C-1.8 degrees C; correlation coefficient = 0.94-0.95) was also the best among the measurements, with the rank order being pulmonary artery > or = nasopharynx > forehead > bladder > fingertip. However, the accuracy and precision of pulmonary artery temperature measurement were significantly impaired during and for several minutes after infusion of cold cardioplegic solution. Pulmonary artery temperature measurement is recommended to estimate brain temperature during deep hypothermic cardiopulmonary bypass, even if it is conducted with the sternum opened; however, caution needs to be exercised in interpreting its measurements during periods of cardioplegic solution infusion.
Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P
2017-08-14
The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies, and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks are the top performing classifiers, highlighting the added value of Deep Neural Networks over other more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, scoring almost one standard deviation above the mean performance. Furthermore, Multi-task and PCM implementations were shown to improve performance over single task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around the mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with unoptimized 'DNN_PCM'). Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi-task learning is offered by providing the data and the protocols.
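As a point of reference for one of the standardized metrics named above, the Matthews Correlation Coefficient can be computed directly from the binary confusion matrix. The sketch below is only a generic illustration of that formula on hypothetical labels; it is not the benchmark code used in the study.

```python
import numpy as np

def matthews_corrcoef(y_true, y_pred):
    """Matthews Correlation Coefficient for binary labels (0/1)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# toy example with hypothetical active/inactive predictions
print(matthews_corrcoef([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))
```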
Conkle, Joel; Ramakrishnan, Usha; Flores-Ayala, Rafael; Suchdev, Parminder S; Martorell, Reynaldo
2017-01-01
Anthropometric data collected in clinics and surveys are often inaccurate and unreliable due to measurement error. The Body Imaging for Nutritional Assessment Study (BINA) evaluated the ability of 3D imaging to correctly measure stature, head circumference (HC) and mid-upper arm circumference (MUAC) for children under five years of age. This paper describes the protocol for and the quality of manual anthropometric measurements in BINA, a study conducted in 2016-17 in Atlanta, USA. Quality was evaluated by examining digit preference, biological plausibility of z-scores, z-score standard deviations, and reliability. We calculated z-scores and analyzed plausibility based on the 2006 WHO Child Growth Standards (CGS). For reliability, we calculated intra- and inter-observer Technical Error of Measurement (TEM) and the Intraclass Correlation Coefficient (ICC). We found low digit preference; 99.6% of z-scores were biologically plausible, with z-score standard deviations ranging from 0.92 to 1.07. Total TEM was 0.40 for stature, 0.28 for HC, and 0.25 for MUAC, in centimeters. ICC ranged from 0.99 to 1.00. The quality of manual measurements in BINA was high and similar to that of the anthropometric data used to develop the WHO CGS. We attributed the high quality to rigorous training, motivated and competent field staff, reduction of non-measurement error through the use of technology, and reduction of measurement error through adequate monitoring and supervision. Our anthropometry measurement protocol, which builds on and improves upon the protocol used for the WHO CGS, can be used to improve anthropometric data quality. The discussion illustrates the need to standardize anthropometric data quality assessment, and we conclude that BINA can provide a valuable evaluation of 3D imaging for child anthropometry because there is comparison to gold-standard manual measurements.
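For concreteness, the intra-observer TEM reported above is conventionally computed from paired repeat measurements as the square root of the summed squared differences divided by twice the number of pairs. The snippet below is a generic sketch of that textbook formula on made-up numbers, not the BINA analysis code.

```python
import numpy as np

def intra_observer_tem(m1, m2):
    """Technical Error of Measurement for two repeated measurements by the
    same observer: sqrt(sum(d^2) / (2 * N)), d = paired difference."""
    d = np.asarray(m1, float) - np.asarray(m2, float)
    return np.sqrt(np.sum(d ** 2) / (2 * d.size))

# hypothetical repeated stature measurements, in cm
first  = np.array([87.2, 95.1, 102.4, 78.9])
second = np.array([87.6, 94.8, 102.1, 79.3])
print(round(intra_observer_tem(first, second), 2))
```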
Genome-scale cluster analysis of replicated microarrays using shrinkage correlation coefficient.
Yao, Jianchao; Chang, Chunqi; Salmi, Mari L; Hung, Yeung Sam; Loraine, Ann; Roux, Stanley J
2008-06-18
Currently, clustering with some form of correlation coefficient as the gene similarity metric has become a popular method for profiling genomic data. The Pearson correlation coefficient and the standard deviation (SD)-weighted correlation coefficient are the two most widely-used correlations as the similarity metrics in clustering microarray data. However, these two correlations are not optimal for analyzing replicated microarray data generated by most laboratories. An effective correlation coefficient is needed to provide statistically sufficient analysis of replicated microarray data. In this study, we describe a novel correlation coefficient, shrinkage correlation coefficient (SCC), that fully exploits the similarity between the replicated microarray experimental samples. The methodology considers both the number of replicates and the variance within each experimental group in clustering expression data, and provides a robust statistical estimation of the error of replicated microarray data. The value of SCC is revealed by its comparison with two other correlation coefficients that are currently the most widely-used (Pearson correlation coefficient and SD-weighted correlation coefficient) using statistical measures on both synthetic expression data as well as real gene expression data from Saccharomyces cerevisiae. Two leading clustering methods, hierarchical and k-means clustering were applied for the comparison. The comparison indicated that using SCC achieves better clustering performance. Applying SCC-based hierarchical clustering to the replicated microarray data obtained from germinating spores of the fern Ceratopteris richardii, we discovered two clusters of genes with shared expression patterns during spore germination. Functional analysis suggested that some of the genetic mechanisms that control germination in such diverse plant lineages as mosses and angiosperms are also conserved among ferns. This study shows that SCC is an alternative to the Pearson correlation coefficient and the SD-weighted correlation coefficient, and is particularly useful for clustering replicated microarray data. This computational approach should be generally useful for proteomic data or other high-throughput analysis methodology.
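The abstract does not reproduce the SCC formula itself, so the sketch below only illustrates the surrounding workflow it plugs into: a correlation-based dissimilarity fed to hierarchical clustering. Pearson correlation is used as a stand-in; swapping in a shrinkage estimate would change only the similarity step. The data and parameters are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# hypothetical expression matrix: genes x (replicate-averaged) conditions
expr = np.random.default_rng(0).normal(size=(50, 6))

# correlation-based dissimilarity d = 1 - r (Pearson here; the shrinkage
# correlation coefficient would replace this similarity estimate)
r = np.corrcoef(expr)
dist = squareform(1.0 - r, checks=False)

tree = linkage(dist, method='average')     # hierarchical clustering
labels = fcluster(tree, t=2, criterion='maxclust')
print(labels[:10])
```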
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitri, F.G., E-mail: F.G.Mitri@ieee.org; Li, R.X., E-mail: rxli@mail.xidian.edu.cn; Collaborative Innovation Center of Information Sensing and Understanding, Xidian University, Xi’an 710071
A complete description of vector Bessel (vortex) beams in the context of the generalized Lorenz–Mie theory (GLMT) for the electromagnetic (EM) resonance scattering by a dielectric sphere is presented, using the method of separation of variables and the subtraction of a non-resonant background (corresponding to a perfectly conducting sphere of the same size) from the standard Mie scattering coefficients. Unlike the conventional results of standard optical radiation, the resonance scattering of a dielectric sphere in air in the field of EM Bessel beams is examined and demonstrated with particular emphasis on the EM field’s polarization and beam order (or topological charge). Linear, circular, radial, azimuthal polarizations as well as unpolarized Bessel vortex beams are considered. The conditions required for the resonance scattering are analyzed, stemming from the vectorial description of the EM field using the angular spectrum decomposition, the derivation of the beam-shape coefficients (BSCs) using the integral localized approximation (ILA) and Neumann–Graf’s addition theorem, and the determination of the scattering coefficients of the sphere using Debye series. In contrast with the standard scattering theory, the resonance method presented here allows the quantitative description of the scattering using Debye series by separating diffraction effects from the external and internal reflections from the sphere. Furthermore, the analysis is extended to include rainbow formation in Bessel beams and the derivation of a generalized formula for the deviation angle of high-order rainbows. Potential applications for this analysis include Bessel beam-based laser imaging spectroscopy, atom cooling and quantum optics, electromagnetic instrumentation and profilometry, optical tweezers and tractor beams, to name a few emerging areas of research.
Visualizing the Sample Standard Deviation
ERIC Educational Resources Information Center
Sarkar, Jyotirmoy; Rashid, Mamunur
2017-01-01
The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
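Under the usual n − 1 sample-variance convention (assumed here; the article's exact convention is not shown in this excerpt), the interpretation follows from the pairwise identity:

```latex
s^2 \;=\; \frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})^2
    \;=\; \frac{1}{2n(n-1)}\sum_{i\neq j}(x_i-x_j)^2
    \;=\; 2\cdot\underbrace{\frac{1}{n(n-1)}\sum_{i\neq j}\left(\frac{x_i-x_j}{2}\right)^{2}}_{\text{mean square of pairwise half deviations}},
\qquad
s \;=\; \sqrt{2\,\overline{h^{2}}}.
```

That is, the sample SD is the square root of twice the mean squared pairwise half deviation, with no reference to the sample mean.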
Cruise Summary of WHP P6, A10, I3 and I4 Revisits in 2003
NASA Astrophysics Data System (ADS)
Kawano, T.; Uchida, H.; Schneider, W.; Kumamoto, Y.; Nishina, A.; Aoyama, M.; Murata, A.; Sasaki, K.; Yoshikawa, Y.; Watanabe, S.; Fukasawa, M.
2004-12-01
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) conducted a research cruise around the southern hemisphere aboard R/V Mirai. In this presentation, we introduce an outline of the cruise and the quality of the data obtained during it. The cruise started on Aug. 3, 2003 in Brisbane, Australia and sailed eastward until it reached Fremantle, Australia on Feb. 19, 2004. It contained six legs, and legs 1, 2, 4 and 5 were revisits of WOCE Hydrographic Program (WHP) sections P6W, P6E, A10 and I3/I4, respectively. The sections consisted of about 500 hydrographic stations in total. At each station, CTD profiles and up to 36 water samples in 12 L Niskin-X bottles were taken from the surface to within 10 m of the bottom. Water samples were analyzed at every station for salinity, dissolved oxygen (DO), and nutrients, and at alternate stations for concentrations of freons, dissolved inorganic carbon (CT), total alkalinity (AT), pH, and so on. Approximately 17,000 samples were obtained for salinity. Standard seawater was measured repeatedly to estimate the uncertainty caused by the setting and stability of the salinometer. The standard deviation of 699 repeated runs of standard seawater was 0.0002 in salinity. Replicate samples, which are a pair of samples drawn from the same Niskin bottle into different sample bottles, were taken to evaluate the overall uncertainty. The standard deviation of the absolute differences of 2,769 replicates was also 0.0002 in salinity. For DO, about 13,400 samples were obtained. The analysis was made by a photometric titration technique. The reproducibility estimated from the absolute standard deviation of 1,625 replicates was about 0.09 umol/kg. CTD temperature was calibrated against a deep ocean standards thermometer (SBE35) attached to the CTD, using the polynomial expression Tcal = T - (a + b*P + c*t), where Tcal is calibrated temperature, T is CTD temperature, P is CTD pressure and t is time. The calibration coefficients a, b and c were determined for each station by minimizing the sum of absolute deviations from the SBE35 temperature below 2,000 dbar. CTD salinity and DO were fitted to values obtained by sampled water analysis using similar polynomials. These corrections yielded deviations of about 0.0002 K in temperature, 0.0003 in salinity and 0.6 umol/kg in DO. Nutrient analyses were performed on 16,000 samples using the reference material of nutrients in seawater (RMNS). To establish traceability and to get higher quality data, 500 bottles of RMNS from the same lot and 150 sets of RMNSs were used. The precisions of phosphate, nitrate and silicate measurements were 0.18%, 0.17% and 0.16%, respectively, in terms of the median across 493 stations. The nutrient concentrations could be expressed with explicit uncertainties because of the repeated runs of RMNSs. All the analyses for the CO2-system parameters in water columns were finished onboard. Analytical precisions of CT, AT and pH were estimated to be ~1.0 umol/kg, ~2.0 umol/kg, and ~7×10^-4 pH units, respectively. Approximately 6,300 samples were obtained for CFC-11 and CFC-12. The concentrations were determined with an electron capture detector - gas chromatograph (ECD-GC) attached to a purge-and-trap system. The reproducibility estimated from the absolute standard deviation of 365 replicates was less than 1% with respect to the surface concentrations.
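The temperature calibration step described above (a per-station fit of Tcal = T - (a + b*P + c*t) minimizing the sum of absolute deviations from SBE35 below 2,000 dbar) can be sketched as a small least-absolute-deviation fit. The snippet below uses SciPy on fabricated arrays; it is not the cruise's processing code, and the noise levels and profile shape are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def fit_station(T_ctd, P, t, T_sbe35):
    """Fit a, b, c in Tcal = T - (a + b*P + c*t) by minimizing the sum of
    absolute deviations from the SBE35 reference below 2,000 dbar."""
    deep = P > 2000.0
    def cost(coef):
        a, b, c = coef
        resid = (T_ctd - (a + b * P + c * t)) - T_sbe35
        return np.sum(np.abs(resid[deep]))
    return minimize(cost, x0=np.zeros(3), method='Nelder-Mead').x

# hypothetical station profile
rng = np.random.default_rng(1)
P = np.linspace(10, 5000, 30)              # pressure, dbar
t = np.linspace(0, 3, 30)                  # elapsed time, h
T_sbe35 = 20 * np.exp(-P / 1000) + 1.5     # reference temperature, deg C
T_ctd = T_sbe35 + 0.002 + 1e-6 * P + rng.normal(0, 2e-4, P.size)
print(fit_station(T_ctd, P, t, T_sbe35))   # estimated a, b, c
```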
Verification of spectrophotometric method for nitrate analysis in water samples
NASA Astrophysics Data System (ADS)
Kurniawati, Puji; Gusrianti, Reny; Dwisiwi, Bledug Bernanti; Purbaningtias, Tri Esti; Wiyantoko, Bayu
2017-12-01
The aim of this research was to verify a spectrophotometric method for the analysis of nitrate in water samples using the APHA 2012 Section 4500-NO3- B method. The verification parameters used were linearity, method detection limit, limit of quantitation, level of linearity, accuracy and precision. Linearity was assessed using 0 to 50 mg/L nitrate standard solutions, and the correlation coefficient of the standard calibration linear regression equation was 0.9981. The method detection limit (MDL) was 0.1294 mg/L and the limit of quantitation (LOQ) was 0.4117 mg/L. The level of linearity (LOL) was 50 mg/L, and nitrate concentrations from 10 to 50 mg/L were linear at a 99% confidence level. The accuracy, determined through the recovery value, was 109.1907%. The precision, assessed as the percent relative standard deviation (%RSD) of repeatability, was 1.0886%. The tested performance criteria showed that the methodology was verified under the laboratory conditions.
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
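A short note on why a larger lure-evidence standard deviation steepens the z-ROC slope, under a standard Gaussian UVSD parameterization (assumed here; the paper's exact parameterization may differ):

```latex
z(H) = \frac{\mu_T - c}{\sigma_T}, \qquad
z(F) = \frac{\mu_L - c}{\sigma_L}
\;\Longrightarrow\;
z(H) = \frac{\mu_T - \mu_L}{\sigma_T} + \frac{\sigma_L}{\sigma_T}\, z(F),
```

so the slope of the z-ROC equals the ratio of the lure (L) to the target (T) evidence standard deviations; increasing the lure standard deviation with the target distribution held fixed raises the slope, consistent with the priming result described above.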
Wu, Ping-gu; Ma, Bing-jie; Wang, Li-yuan; Shen, Xiang-hong; Zhang, Jing; Tan, Ying; Jiang, Wei
2013-11-01
To establish a method for the simultaneous determination of methylcarbamate (MC) and ethylcarbamate (EC) in yellow rice wine by gas chromatography-mass spectrometry (GC/MS). MC and EC in yellow rice wine were derivatized with 9-xanthydrol, and the derivatives were then detected by GC/MS and quantitatively analyzed by the D5-EC isotope internal standard method. The linearity of MC and EC ranged from 2.0 µg/L to 400.0 µg/L, with correlation coefficients of 0.998 and 0.999, respectively. The limits of detection (LOD) and quantitation (LOQ) were 0.67 and 2.0 µg/kg, respectively. When MC and EC were added to yellow rice wine in the range of 2.0-300.0 µg/kg, the intraday average recovery rate was 78.8%-102.3% with a relative standard deviation of 3.2%-11.6%, and the interday average recovery rate was 75.4%-101.3% with a relative standard deviation of 3.8%-13.4%. Twenty samples of yellow rice wine from supermarkets were analyzed using this method; the contents of MC ranged from ND (not detected) to 1.2 µg/kg with a detection rate of 6% (3/20), and the contents of EC ranged from 18.6 µg/kg to 432.3 µg/kg, with an average level of 135.2 µg/kg. The method is simple, rapid and useful for the simultaneous determination of MC and EC in yellow rice wine.
Zhang, Xiaotao; Zhang, Li; Ruan, Yibin; Wang, Weiwei; Ji, Houwei; Wan, Qiang; Lin, Fucheng; Liu, Jian
2017-10-08
A method for the simultaneous determination of 15 polycyclic aromatic hydrocarbons in cigarette filters was developed using an isotope internal standard combined with gas chromatography-tandem mass spectrometry. The cigarette filters were extracted with dichloromethane, and the extract was filtered with a 0.22 μm organic phase membrane. The samples were separated on a DB-5MS column (30 m×0.25 mm, 0.25 μm) and detected using the multiple reaction monitoring mode of an electron impact source under positive ion mode. The linearities of the 15 polycyclic aromatic hydrocarbons (acenaphthylene, acenaphthene, fluorene, phenanthrene, anthracene, fluoranthene, pyrene, benz[a]anthracene, chrysene, benzo[b]fluoranthene, benzo[k]fluoranthene, benzo[a]pyrene, dibenzo[a,h]anthracene, benzo[g,h,i]perylene and indeno[1,2,3-c,d]pyrene) were good, and the correlation coefficients (R2) ranged from 0.9914 to 0.9999. The average recoveries of the 15 polycyclic aromatic hydrocarbons were 81.6%-109.6% at low, middle and high spiked levels, and the relative standard deviations were less than 16%, except that the relative standard deviation of fluorene at the low spiked level was 19.2%. The limits of detection of the 15 polycyclic aromatic hydrocarbons were 0.02 to 0.24 ng/filter, and the limits of quantification were 0.04 to 0.80 ng/filter. The method is simple, rapid, accurate, sensitive and reproducible. It is suitable for the quantitative analysis of the 15 polycyclic aromatic hydrocarbons in cigarette filters.
Down-Looking Interferometer Study II, Volume I,
1980-03-01
According to Eq. (2), Z is the standard deviation of the observed contrast spectral radiance ΔN divided by the effective rms system noise, where T'rm is the "reference spectrum", an estimate of the actual spectrum T_v·c_gv.
40 CFR 61.207 - Radium-226 sampling and measurement procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... B, Method 114. (3) Calculate the mean, x̄1, and the standard deviation, s1, of the n1 radium-226... owner or operator of a phosphogypsum stack shall report the mean, standard deviation, 95th percentile..., Method 114. (4) Recalculate the mean and standard deviation of the entire set of n2 radium-226...
A Novel Analysis Of The Connection Between Indian Monsoon Rainfall And Solar Activity
NASA Astrophysics Data System (ADS)
Bhattacharyya, S.; Narasimha, R.
2005-12-01
The existence of possible correlations between the solar cycle period as extracted from the yearly means of sunspot numbers and any periodicities that may be present in the Indian monsoon rainfall has been addressed using wavelet analysis. The wavelet transform coefficient maps of sunspot-number time series and those of the homogeneous Indian monsoon rainfall annual time series data reveal striking similarities, especially around the 11-year period. A novel method to analyse and quantify this similarity by devising statistical schemes is suggested in this paper. The wavelet transform coefficient maxima at the 11-year period for the sunspot numbers and the monsoon rainfall have each been modelled as a point process in time, and a statistical scheme for identifying a trend or dependence between the two processes has been devised. A regression analysis of parameters in these processes reveals a nearly linear trend with small but systematic deviations from the regressed line. Suitable function models for these deviations have been obtained through an unconstrained error minimisation scheme. These models provide an excellent fit to the time series of the given wavelet transform coefficient maxima obtained from actual data. Statistical significance tests on these deviations suggest with 99% confidence that the deviations are sample fluctuations obtained from normal distributions. In fact our earlier studies (see Bhattacharyya and Narasimha, 2005, Geophys. Res. Lett., Vol. 32, No. 5) revealed that average rainfall is higher during periods of greater solar activity for all cases, at confidence levels varying from 75% to 99%, being 95% or greater in 3 out of 7 of them. Analysis using standard wavelet techniques reveals higher power in the 8-16 y band during the higher solar activity period, in 6 of the 7 rainfall time series, at confidence levels exceeding 99.99%. Furthermore, a comparison between the wavelet cross spectra of solar activity with rainfall and noise (including those simulating the rainfall spectrum and probability distribution) revealed that over the two test-periods respectively of high and low solar activity, the average cross power of the solar activity index with rainfall exceeds that with the noise at z-test confidence levels exceeding 99.99% over period-bands covering the 11.6 y sunspot cycle (see Bhattacharyya and Narasimha, SORCE 2005, 14-16 September, Durango, Colorado, USA). These results provide strong evidence for connections between Indian rainfall and solar activity. The present study reveals in addition the presence of subharmonics of the solar cycle period in the monsoon rainfall time series together with information on their phase relationships.
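As a rough illustration of the kind of analysis described (locating wavelet-coefficient maxima near the 11-year period in two annual series), the sketch below uses PyWavelets with a Morlet wavelet on fabricated series; the data, scales and band limits are placeholders, and the authors' point-process modelling and regression steps are not reproduced.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
years = np.arange(1871, 2001)
# hypothetical annual series standing in for sunspot number and rainfall
sunspots = 60 + 50 * np.sin(2 * np.pi * years / 11.0) + rng.normal(0, 10, years.size)
rainfall = 850 + 40 * np.sin(2 * np.pi * years / 11.0 + 0.6) + rng.normal(0, 60, years.size)

scales = np.arange(2, 64)
coef_s, freqs = pywt.cwt(sunspots - sunspots.mean(), scales, 'morl', sampling_period=1.0)
coef_r, _     = pywt.cwt(rainfall - rainfall.mean(), scales, 'morl', sampling_period=1.0)

periods = 1.0 / freqs
band = (periods > 8) & (periods < 16)   # band around the ~11-yr cycle
# time of the largest coefficient magnitude within the band, per series
t_max_s = years[np.argmax(np.abs(coef_s[band]).max(axis=0))]
t_max_r = years[np.argmax(np.abs(coef_r[band]).max(axis=0))]
print(t_max_s, t_max_r)
```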
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using the proposed multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are weighted and combined into a detail-enhanced layer. As the directional filter is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviation is used as the activity level measurement for fusing the directional coefficients. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift-invariance, directional selectivity, and a detail-enhanced property, is efficient in preserving and enhancing detail information of multimodality medical images.
McCormick, Matthew M.; Madsen, Ernest L.; Deaner, Meagan E.; Varghese, Tomy
2011-01-01
Absolute backscatter coefficients in tissue-mimicking phantoms were experimentally determined in the 5–50 MHz frequency range using a broadband technique. A focused broadband transducer from a commercial research system, the VisualSonics Vevo 770, was used with two tissue-mimicking phantoms. The phantoms differed regarding the thin layers covering their surfaces to prevent desiccation and regarding glass bead concentrations and diameter distributions. Ultrasound scanning of these phantoms was performed through the thin layer. To avoid signal saturation, the power spectra obtained from the backscattered radio frequency signals were calibrated by using the signal from a liquid planar reflector, a water-brominated hydrocarbon interface with acoustic impedance close to that of water. Experimental values of absolute backscatter coefficients were compared with those predicted by the Faran scattering model over the frequency range 5–50 MHz. The mean percent difference and standard deviation was 54% ± 45% for the phantom with a mean glass bead diameter of 5.40 μm and was 47% ± 28% for the phantom with 5.16 μm mean diameter beads. PMID:21877789
NASA Astrophysics Data System (ADS)
Hamzah, Esah; Ali, Mubarak; Toff, Mohd Radzi Hj. Mohd
In the present study, TiN coatings have been deposited on D2 tool steel substrates using the cathodic arc physical vapor deposition technique. The objective of this research work is to determine the usefulness of TiN coatings in improving the micro-Vickers hardness and friction coefficient of coated D2 tool steel, which is widely used in tooling applications. A pin-on-disc test was carried out to study the coefficient of friction versus sliding distance of TiN coatings deposited at various substrate biases. The standard deviation of the friction coefficient during the tribo-test showed that the coating deposited at a substrate bias of -75 V was the most stable coating. A significant increase in micro-Vickers hardness was recorded when the substrate bias was reduced from -150 V to zero. A scratch tester was used to compare the critical loads for coatings deposited at different bias voltages, and the achievable adhesion was characterized with reference to the various modes, macroscopic scratch analysis, critical load, acoustic emission and penetration depth. A considerable improvement in the TiN coatings was observed as a function of the substrate bias voltage.
Hager, Stephen W.
1994-01-01
Particulate matter was collected at Rio Vista, California, in two study periods: the first from January 3 to May 26, 1983, and the second from October 31, 1983 to November 29, 1984. Concentrations of suspended particulate matter were measured gravimetrically on silver membrane filters. The pooled standard deviation on replicated samples was 1.4 mg/L, giving a coefficient of variation of 5.7 percent. Concentrations of particulate carbon and nitrogen were measured using a Perkin-Elmer model 240C elemental analyzer to combust material collected on glass fiber filters. Refrigeration of samples prior to filtration was shown to be a likely influence on the precision of duplicate analyses. Median deviations between duplicates for carbon were 5.4 percent during the first study period and 8.9 percent during the second. For nitrogen, median deviations were 4.9 percent and 7.2 percent, respectively. This report presents the data for concentrations of suspended particulate material, the duplicate analyses for particulate carbon and nitrogen, and the volumes of sample filtered for the particulate carbon and nitrogen analyses for both studies. Not all samples collected during the second study have been analyzed for particulate carbon and nitrogen.
Jiang, Wanfeng; Zhang, Ning; Zhang, Fengyan; Yang, Zhao
2017-07-08
A method for the determination of the content of olive oil in olive blend oil by headspace gas chromatography-mass spectrometry (SH-GC/MS) was established. The amount of sample, the heating temperature, the heating time, the injection volume, the injection mode and the chromatographic column were optimized. The characteristic compounds of olive oil were identified by a chemometric method. A sample of 1.0 g was placed in a 20 mL headspace vial and heated at 180℃ for 2700 s. Then, 1.0 mL of headspace gas was injected into the instrument. An HP-88 chromatographic column was used for the separation, and the analysis was performed by GC/MS. The results showed that the linear range was 0-100% (olive oil content). The linear correlation coefficient (r2) was more than 0.995, and the limits of detection were 1.26%-2.13%. The deviations of the olive oil contents in the olive blend oil were from -0.65% to 1.02%, with relative deviations from -1.3% to 6.8% and relative standard deviations from 1.18% to 4.26% (n=6). The method is simple, rapid, environmentally friendly, sensitive and accurate. It is suitable for the determination of the content of olive oil in olive blend oil.
Briehl, Margaret M; Nelson, Mark A; Krupinski, Elizabeth A; Erps, Kristine A; Holcomb, Michael J; Weinstein, John B; Weinstein, Ronald S
2016-01-01
Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, "Mechanisms of Human Disease." Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master's: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nieroda, Pawel; Zybala, Rafal; Wojciechowski, Krzysztof T.
The aim of the study was to develop a fast and simple method for preparation of polycrystalline Mg2Si. For this purpose a Spark Plasma Sintering (SPS) method was used and synthesis conditions were adjusted in such a manner that no excess Mg was required. Materials were synthesized by the direct reaction of Mg and Si raw powders. To determine the phase and chemical composition, the fabricated samples were studied by X-ray diffraction and SEM microscopy coupled with EDX chemical analysis. Thermoelectric properties of samples (thermal conductivity, electrical conductivity and Seebeck coefficient) were measured over the temperature range of 300-650 K. The analysis by the scanning thermoelectric microprobe (STM) shows that samples have uniform distribution of Seebeck coefficient with mean value of about -405 μV K⁻¹ and standard deviation of 94 μV K⁻¹. Prepared materials have intrinsic band gap of 0.45 eV and thermal conductivity λ = 7.5 W m⁻¹ K⁻¹ at room temperature.
The Yale-Brown Obsessive Compulsive Scale: A Reliability Generalization Meta-Analysis.
López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Maria; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa
2015-10-01
The Yale-Brown Obsessive Compulsive Scale (Y-BOCS) is the most frequently applied test to assess obsessive compulsive symptoms. We conducted a reliability generalization meta-analysis on the Y-BOCS to estimate the average reliability, examine the variability among the reliability estimates, search for moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of the Y-BOCS. We included studies where the Y-BOCS was applied to a sample of adults and reliability estimate was reported. Out of the 11,490 references located, 144 studies met the selection criteria. For the total scale, the mean reliability was 0.866 for coefficients alpha, 0.848 for test-retest correlations, and 0.922 for intraclass correlations. The moderator analyses led to a predictive model where the standard deviation of the total test and the target population (clinical vs. nonclinical) explained 38.6% of the total variability among coefficients alpha. Finally, clinical implications of the results are discussed. © The Author(s) 2014.
Some common indices of group diversity: upper boundaries.
Solanas, Antonio; Selvam, Rejina M; Navarro, José; Leiva, David
2012-12-01
Workgroup diversity can be conceptualized as variety, separation, or disparity. Thus, the proper operationalization of diversity depends on how a diversity dimension has been defined. Analytically, minimal diversity is obtained when there are no differences on an attribute among the members of a group, whereas maximal diversity takes a different form for each conceptualization of diversity. Previous work on diversity indices indicated maximum values for variety (e.g., Blau's index and Teachman's index), separation (e.g., standard deviation and mean Euclidean distance), and disparity (e.g., coefficient of variation and the Gini coefficient of concentration), although these maximum values are not valid for all group characteristics (i.e., group size and group size parity) and attribute scales (i.e., number of categories). We analytically derive appropriate upper boundaries for conditional diversity determined by specific group characteristics, avoiding the bias related to absolute diversity. This will allow applied researchers to make better interpretations regarding the relationship between group diversity and group outcomes.
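For reference, the indices named above have simple textbook forms; the sketch below computes one per conceptualization on a made-up group. It follows the common definitions and is not the paper's conditional-boundary derivation.

```python
import numpy as np

def blau(categories):
    """Variety: Blau's index 1 - sum(p_k^2) over category proportions."""
    _, counts = np.unique(categories, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def separation_sd(x):
    """Separation: standard deviation of a continuous attribute."""
    return float(np.std(x))

def coefficient_of_variation(x):
    """Disparity: standard deviation divided by the (positive) mean."""
    x = np.asarray(x, float)
    return float(np.std(x) / np.mean(x))

def gini(x):
    """Disparity: Gini coefficient via the mean absolute pairwise difference."""
    x = np.asarray(x, float)
    mad = np.abs(x[:, None] - x[None, :]).mean()
    return float(mad / (2.0 * x.mean()))

team_roles = ["eng", "eng", "sales", "hr"]          # hypothetical group
salaries = [40_000, 45_000, 52_000, 120_000]
print(blau(team_roles), separation_sd(salaries),
      coefficient_of_variation(salaries), gini(salaries))
```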
Correlation of Zn2+ content with aflatoxin content of corn.
Failla, L J; Lynn, D; Niehaus, W G
1986-01-01
Forty-nine samples from the 1983 Virginia corn harvest were analyzed for aflatoxin, zinc, copper, iron, and manganese content. Values (mean +/- standard deviation) were as follows: aflatoxin, 117 +/- 360 micrograms/kg; zinc, 22.5 +/- 3.4 mg/kg; copper, 2.27 +/- 0.56 mg/kg; iron, 40.8 +/- 18.7 mg/kg; and manganese, 5.1 +/- 1.1 mg/kg. Aflatoxin levels positively correlated with zinc (Spearman correlation coefficient, 0.385; P less than 0.006) and copper levels (Spearman correlation coefficient, 0.573; P less than 0.0001). Based on biochemical data in the literature, we believe that the correlation with zinc is important and that there may be a cause-and-effect relationship between zinc levels in corn and aflatoxin levels which are produced upon infection with Aspergillus flavus or A. parasiticus. Control of aflatoxin contamination in field corn by decreasing the zinc levels may be feasible, but no methods to decrease zinc levels are currently available. PMID:3729406
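The Spearman rank correlations reported here can be reproduced on any paired measurements with SciPy; the arrays below are hypothetical and only show the call, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# hypothetical paired measurements: zinc (mg/kg) and aflatoxin (ug/kg)
zinc      = np.array([19.8, 21.2, 22.5, 23.0, 24.4, 26.1, 27.5])
aflatoxin = np.array([5.0, 0.0, 12.0, 30.0, 25.0, 140.0, 260.0])

rho, p_value = spearmanr(zinc, aflatoxin)
print(round(rho, 3), round(p_value, 4))
```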
Experimental Measurements and Comparison of Cable Performance for Mine Hunting Applications
NASA Astrophysics Data System (ADS)
Mangum, Katherine
2005-11-01
The Naval Surface Warfare Center (NSWCCD) conducted testing of multiple faired synthetic cables in the High Speed Basin in April 2005. The objective of the test was to determine the hydrodynamic characteristics of bare cables, ribbon faired cables, and cables with extruded plastic "strakes." Faired cables are used to gain on-station time and improve performance of the MH-60 helicopter when towing mine hunting vehicles. Drag and strum were compared for all cases. Strum was quantified by computing standard deviations of lateral cable acceleration amplitudes. Drag coefficients were calculated using cable tension and angle readings. While the straked cables strummed less than the bare synthetic cable, they did not reduce strum levels as well as ribbon fairing at steep cable angles for speeds of 10, 15, 20 and 25 knots. The drag coefficient of the straked cables was calculated to be higher than that of a bare cable, although further testing is needed to determine an exact number.
Mörschbächer, Ana Paula; Dullius, Anja; Dullius, Carlos Henrique; Bandt, Cassiano Ricardo; Kuhn, Daniel; Brietzke, Débora Tairini; Malmann Kuffel, Fernando José; Etgeton, Henrique Pretto; Altmayer, Taciélen; Gonçalves, Tamara Engelmann; Oreste, Eliézer Quadro; Ribeiro, Anderson Schwingel; de Souza, Claucia Fernanda Volken; Hoehne, Lucélia
2018-07-30
The present paper describes the validation of a spectrophotometric method involving molecular absorption in the ultraviolet-visible (UV-Vis) region for selenium (Se) determination in the bacterial biomass produced by lactic acid bacteria (LAB). The method was found to be suitable for the target application and presented a linearity range from 0.025 to 0.250 mg/L Se. The angular and linear coefficients of the linear equation were 1.0678 and 0.0197 mg/L Se, respectively, and the linear correlation coefficient (R2) was 0.9991. Analyte recovery exceeded 96% with a relative standard deviation (RSD) below 3%. The Se contents in LAB ranged from 0.01 to 20 mg/g. The Se contents in the bacterial biomass determined by UV-Vis were not significantly different (p > 0.05) from those determined by graphite furnace atomic absorption spectrometry. Thus, Se can be quantified in LAB biomass using this relatively simple technique. Copyright © 2018 Elsevier Ltd. All rights reserved.
Correlation of track irregularities and vehicle responses based on measured data
NASA Astrophysics Data System (ADS)
Karis, Tomas; Berg, Mats; Stichel, Sebastian; Li, Martin; Thomas, Dirk; Dirks, Babette
2018-06-01
Track geometry quality and dynamic vehicle response are closely related, but do not always correspond with each other in terms of maximum values and standard deviations, which often leads to poor results in analyses based on correlation coefficients or regression. Measured data from both the EU project DynoTRAIN and the Swedish Green Train (Gröna Tåget) research programme are used in this paper to evaluate the track-vehicle response for three vehicles. A single-degree-of-freedom model is used as an inspiration to divide track-vehicle interaction into three parts, which are analysed in terms of correlation. One part, the vertical axle box acceleration divided by vehicle speed squared and the second spatial derivative of the vertical track irregularities, is shown to be the weak link, with lower correlation coefficients than the other parts. Future efforts should therefore be directed towards investigating the relation between axle box accelerations and track irregularity second derivatives.
A study of atmospheric diffusion from the LANDSAT imagery. [pollution transport over the ocean
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Viswanadham, Y.; Torsani, J. A.
1981-01-01
LANDSAT multispectral scanner data of the smoke plumes which originated in eastern Cabo Frio, Brazil and crossed over into the Atlantic Ocean, are analyzed to illustrate how high resolution LANDSAT imagery can aid meteorologists in evaluating specific air pollution events. The eleven LANDSAT images selected are for different months and years. The results show that diffusion is governed primarily by water and air temperature differences. With colder water, low level air is very stable and the vertical diffusion is minimal; but water warmer than the air induces vigorous diffusion. The applicability of three empirical methods for determining the horizontal eddy diffusivity coefficient in the Gaussian plume formula was evaluated with the estimated standard deviation of the crosswind distribution of material in the plume from the LANDSAT imagery. The vertical diffusion coefficient in stable conditions is estimated using Weinstock's formulation. These results form a data base for use in the development and validation of meso scale atmospheric diffusion models.
Hazard avoidance via descent images for safe landing
NASA Astrophysics Data System (ADS)
Yan, Ruicheng; Cao, Zhiguo; Zhu, Lei; Fang, Zhiwen
2013-10-01
In planetary or lunar landing missions, hazard avoidance is critical for landing safety. Therefore, it is very important to correctly detect hazards and effectively find a safe landing area during the last stage of descent. In this paper, we propose a passive-sensing-based HDA (hazard detection and avoidance) approach via descent images to lower the landing risk. In the hazard detection stage, a statistical probability model based on hazard similarity is adopted to evaluate the image and detect hazardous areas, so that a binary hazard image can be generated. Afterwards, a safety coefficient, which jointly utilizes the proportion of hazards in the local region and their distribution inside it, is proposed to find potential regions with fewer hazards in the binary hazard image. By using the safety coefficient in a coarse-to-fine procedure and combining it with the local ISD (intensity standard deviation) measure, the safe landing area is determined. The algorithm is evaluated and verified with many simulated descent downward-looking images rendered from lunar orbital satellite images.
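The exact safety-coefficient formula is not given in the abstract, so the sketch below only illustrates the local ISD (intensity standard deviation) measure mentioned above, computed with box filters on a fabricated image; the window size and test locations are placeholders rather than the authors' settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_isd(image, size=15):
    """Local intensity standard deviation in a size x size window,
    via sqrt(E[x^2] - E[x]^2) computed with box (uniform) filters."""
    img = image.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    return np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))

# hypothetical descent image: smooth terrain plus a rough (hazardous) patch
rng = np.random.default_rng(3)
terrain = rng.normal(0.5, 0.01, (128, 128))
terrain[40:70, 40:70] += rng.normal(0.0, 0.15, (30, 30))   # rocky region
isd = local_isd(terrain)
print(isd[55, 55] > isd[10, 10])   # rough patch shows higher local ISD
```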
Johnston, Patrick A; Brown, Robert C
2014-08-13
A rapid method for the quantitation of total sugars in pyrolysis liquids using high-performance liquid chromatography (HPLC) was developed. The method avoids the tedious and time-consuming sample preparation required by current analytical methods. It is possible to directly analyze hydrolyzed pyrolysis liquids, bypassing the neutralization step usually required in determination of total sugars. A comparison with traditional methods was used to determine the validity of the results. The calibration curve coefficient of determination on all standard compounds was >0.999 using a refractive index detector. The relative standard deviation for the new method was 1.13%. The spiked sugar recoveries on the pyrolysis liquid samples were between 104 and 105%. The research demonstrates that it is possible to obtain excellent accuracy and efficiency using HPLC to quantitate glucose after acid hydrolysis of polymeric and oligomeric sugars found in fast pyrolysis bio-oils without neutralization.
Shama, S A
2002-11-07
Simple and rapid spectrophotometric methods have been developed for the microdetermination of phenylephrine HCl (I) and orphenadrine citrate (II). The proposed methods are based on the formation of ion-pair complexes between the examined drugs and alizarine (Aliz), alizarine red S (ARS), alizarine yellow G (AYG) or quinalizarine (Qaliz), which can be measured at the optimum lambda(max). The optimization of the reaction conditions is investigated. Beer's law is obeyed in the concentration range 2-36 microgram ml(-1), whereas the optimum concentration range, as adopted from Ringbom plots, was 3.5-33 microgram ml(-1). The molar absorptivity, Sandell sensitivity, and detection limit are also calculated. The correlation coefficient was ≥0.9988 (n=6), with a relative standard deviation of ≤1.7% for six determinations of 20 microgram ml(-1). The proposed methods are successfully applied to the determination of drugs I and II in their dosage forms using the standard addition technique.
NASA Astrophysics Data System (ADS)
Alamgir, Malik; Khuhawar, Muhammad Yar; Memon, Saima Q.; Hayat, Amir; Zounr, Rizwan Ali
2015-01-01
A sensitive and simple spectrofluorimetric method has been developed for the analysis of famotidine in pharmaceutical preparations and biological fluids after derivatization with benzoin. The reaction was carried out in alkaline medium, with measurement of fluorescence intensity at 446 nm and an excitation wavelength of 286 nm. Linear calibration was obtained over 0.5-15 μg/ml with a coefficient of determination (r2) of 0.997. The factors affecting the fluorescence intensity were optimized. Pharmaceutical additives and amino acids did not interfere with the determination. The mean percentage recovery (n = 4), calculated by standard addition from pharmaceutical preparations, was 94.8-98.2% with a relative standard deviation (RSD) of 1.56-3.34%, and recovery from deproteinized spiked serum and urine of healthy volunteers was 98.6-98.9% and 98.0-98.4% with RSDs of 0.34-0.84% and 0.29-0.87%, respectively.
NASA Astrophysics Data System (ADS)
Hanasaki, Itsuo; Kawano, Satoyuki
2013-11-01
Bacterial motility is usually assessed from trajectory data and compared with Brownian motion, but the diffusion coefficient alone is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce chemotaxis in a specific direction, and it is applicable to various types of self-propelling motion for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility.
Probability of stress-corrosion fracture under random loading.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
A method is developed for predicting the probability of stress-corrosion fracture of structures under random loadings. The formulation is based on the cumulative damage hypothesis and the experimentally determined stress-corrosion characteristics. Under both stationary and nonstationary random loadings, the mean value and the variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy. It is shown that, under stationary random loadings, the standard deviation of the cumulative damage increases in proportion to the square root of time, while the coefficient of variation (dispersion) decreases in inverse proportion to the square root of time. Numerical examples are worked out to illustrate the general results.
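The stated scaling can be summarized compactly. If, under stationary loading, the mean cumulative damage grows linearly in time while its variance also grows linearly (as for a sum of weakly correlated increments, an assumption used here only to restate the result), then

```latex
\mu_D(t) \propto t, \qquad \sigma_D(t) \propto \sqrt{t}
\quad\Longrightarrow\quad
\mathrm{CV}(t) \;=\; \frac{\sigma_D(t)}{\mu_D(t)} \;\propto\; \frac{1}{\sqrt{t}} ,
```

which reproduces the growth of the standard deviation and the decay of the coefficient of variation with the square root of time.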
Angular radiation models for Earth-atmosphere system. Volume 1: Shortwave radiation
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Green, R. N.; Minnis, P.; Smith, G. L.; Staylor, W. F.; Wielicki, B. A.; Walker, I. J.; Young, D. F.; Taylor, V. R.; Stowe, L. L.
1988-01-01
Presented are shortwave angular radiation models which are required for the analysis of satellite measurements of Earth radiation, such as those from the Earth Radiation Budget Experiment (ERBE). The models consist of both bidirectional and directional parameters. The bidirectional parameters are the anisotropic function, the standard deviation of mean radiance, and the shortwave-longwave radiance correlation coefficient. The directional parameters are mean albedo as a function of Sun zenith angle and mean albedo normalized to overhead Sun. Derivation of these models from the Nimbus 7 ERB (Earth Radiation Budget) and Geostationary Operational Environmental Satellite (GOES) data sets is described. Tabulated values and computer-generated plots are included for the bidirectional and directional models.
NASA Astrophysics Data System (ADS)
Dahire, S. L.; Morey, Y. C.; Agrawal, P. S.
2015-12-01
Density (ρ), viscosity (η), and ultrasonic velocity (U) of binary mixtures of aliphatic solvents like dimethylformamide (DMF) and dimethylsulfoxide (DMSO) with aromatic solvents viz. chlorobenzene (CB), bromobenzene (BB), and nitrobenzene (NB) have been determined at 313 K. These parameters were used to calculate the adiabatic compressibility (β), intermolecular free length (Lf), molar volume (Vm), and acoustic impedance (Z). From the experimental data, excess molar volume (VmE), excess intermolecular free length (LfE), excess adiabatic compressibility (βE), and excess acoustic impedance (ZE) have been computed. The excess values were correlated using the Redlich-Kister polynomial equation to obtain their coefficients and standard deviations (σ).
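The Redlich-Kister correlation referred to above is conventionally written, for an excess property Y^E of a binary mixture, as below; the number of terms k and the definition of σ follow common practice and are assumptions here rather than the authors' exact choices.

```latex
Y^{E} = x_1 x_2 \sum_{i=0}^{k} A_i\,(x_1 - x_2)^{i},
\qquad
\sigma = \left[\frac{\sum_{j=1}^{N}\left(Y^{E}_{\mathrm{expt},j} - Y^{E}_{\mathrm{calc},j}\right)^{2}}{N - p}\right]^{1/2},
```

where x1 and x2 are the mole fractions, the A_i are the fitted coefficients, N is the number of data points, and p is the number of fitted coefficients.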
Study of intermolecular interactions in binary mixtures of ethanol in methanol
NASA Astrophysics Data System (ADS)
Maharolkar, Aruna P.; Khirade, P. W.; Murugkar, A. G.
2016-05-01
The present paper deals with the study of physicochemical properties, namely viscosity, density, and refractive index, of binary mixtures of ethanol and methanol, measured over the entire concentration range at 298.15 K. The experimental data were further used to determine the excess properties, viz. excess molar volume, excess viscosity, and excess molar refraction. The values of the excess properties were then fitted with the Redlich-Kister (R-K Fit) equation to calculate the binary coefficients and standard deviation. The resulting excess parameters are used to indicate the presence and strength of intermolecular interactions between the molecules in the binary mixtures. The excess parameters indicate that a structure-making factor predominates in the mixture.
Ozcan, Hakki Mevlut; Sagiroglu, Ayten
2010-08-01
In this study a biosensor was constructed by immobilizing tissue homogenate of banana peel onto a glassy carbon electrode surface. The effects of the amounts of immobilization materials, pH, buffer concentration, and temperature on the biosensor response were studied. In addition, the detection ranges of 13 phenolic compounds were obtained with the help of calibration graphs. Storage stability, repeatability of the biosensor, inhibitory effects, and sample applications were also investigated. A typical calibration curve for the sensor revealed a linear range of 10-80 microM catechol. In reproducibility studies, the variation coefficient and standard deviation were calculated as 2.69% and 1.44 x 10(-3) microM, respectively.
Analysis of titanium content in titanium tetrachloride solution
NASA Astrophysics Data System (ADS)
Bi, Xiaoguo; Dong, Yingnan; Li, Shanshan; Guan, Duojiao; Wang, Jianyu; Tang, Meiling
2018-03-01
Strontium titanate, barium titanate, and lead titanate are new types of functional ceramic materials with good prospects, and titanium tetrachloride is a raw material commonly used in the production of such products, whose excellent electrochemical performance derives from the ferroelectric temperature coefficient effect. In this article, samples of titanium tetrachloride solution are calibrated by three methods: back titration, replacement titration, and gravimetric analysis. The results show that the back titration method has several advantages, for example relatively simple operation, easy judgment of the titration end point, and better accuracy and precision of the analytical results, with a relative standard deviation within 0.2%. It is therefore the ideal method for conventional analysis in mass production.
Spatial trends in Pearson Type III statistical parameters
Lichty, R.W.; Karlinger, M.R.
1995-01-01
Spatial trends in the statistical parameters (mean, standard deviation, and skewness coefficient) of a Pearson Type III distribution of the logarithms of annual flood peaks for small rural basins (less than 90 km2) are delineated using a climate factor CT (T = 2-, 25-, and 100-yr recurrence intervals), which quantifies the effects of long-term climatic data (rainfall and pan evaporation) on observed T-yr floods. Maps showing trends in average parameter values demonstrate the geographically varying influence of climate on the magnitude of Pearson Type III statistical parameters. The spatial trends in variability of the parameter values characterize the sensitivity of statistical parameters to the interaction of basin-runoff characteristics (hydrology) and climate. -from Authors
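A sketch of how the three Pearson Type III parameters of the log-transformed peaks are computed at a single station (station statistics only; the regional mapping and climate-factor adjustment used in the study are not reproduced here, and the peak values are invented for illustration):

```python
import numpy as np
from scipy import stats

def pearson3_log_parameters(annual_peaks):
    """Mean, standard deviation and skewness coefficient of the log10 annual
    flood peaks -- the three Pearson Type III parameters mapped in the study
    (station skew only, no regional weighting)."""
    logq = np.log10(np.asarray(annual_peaks, dtype=float))
    return logq.mean(), logq.std(ddof=1), stats.skew(logq, bias=False)

# hypothetical annual peak flows in m^3/s
peaks = [12.0, 8.5, 20.1, 15.3, 9.8, 30.2, 11.4, 18.7, 7.9, 25.6]
print(pearson3_log_parameters(peaks))
```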
Family nonuniversal Z' models with protected flavor-changing interactions
NASA Astrophysics Data System (ADS)
Celis, Alejandro; Fuentes-Martín, Javier; Jung, Martin; Serôdio, Hugo
2015-07-01
We define a new class of Z' models with neutral flavor-changing interactions at tree level in the down-quark sector. They are related in an exact way to elements of the quark mixing matrix due to an underlying flavored U(1)' gauge symmetry, rendering these models particularly predictive. The same symmetry implies lepton-flavor nonuniversal couplings, fully determined by the gauge structure of the model. Our models allow us to address presently observed deviations from the standard model, and specific correlations among the new-physics contributions to the Wilson coefficients C9,10'ℓ can be tested in b → s ℓ+ℓ- transitions. We furthermore predict lepton-universality violations in Z' decays, testable at the LHC.
Gas-film coefficients for the volatilization of ketones from water
Rathbun, R.E.; Tai, D.Y.
1986-01-01
Volatilization is a significant process in determining the fate of many organic compounds in streams and rivers. Quantifying this process requires knowledge of the mass-transfer coefficient from water, which is a function of the gas-film and liquid-film coefficients. The gas-film coefficient can be determined by measuring the flux for the volatilization of pure organic liquids. Volatilization fluxes for acetone, 2-butanone, 2-pentanone, 3-pentanone, 4-methyl-2-pentanone, 2-heptanone, and 2-octanone were measured in the laboratory over a range of temperatures. Gas-film coefficients were then calculated from these fluxes and from vapor pressure data from the literature. An equation was developed for predicting the volatilization flux of pure liquid ketones as a function of vapor pressure and molecular weight. Large deviations were found for acetone, and these were attributed to the possibility that acetone may be hydrogen bonded. A second equation for predicting the flux as a function of molecular weight and temperature resulted in large deviations for 4-methyl-2-pentanone. These deviations were attributed to the branched structure of this ketone. Four factors based on the theory of volatilization and relating the volatilization flux or rate to the vapor pressure, molecular weight, temperature, and molecular diffusion coefficient were not constant as suggested by the literature. The factors generally increased with molecular weight and with temperature. Values for acetone corresponded to ketones with a larger molecular weight, and the acetone factors showed the greatest dependence on temperature. Both of these results are characteristic of compounds that are hydrogen bonded. Relations from the literature commonly used for describing the dependence of the gas-film coefficient on molecular weight and molecular diffusion coefficient were not applicable to the ketone gas-film coefficients. The dependence on molecular weight and molecular diffusion coefficient was in general U-shaped, with the largest coefficients observed for acetone, the next largest for 2-octanone, and the smallest for 2-pentanone and 3-pentanone. The gas-film coefficient for acetone was much more dependent on temperature than were the coefficients for the other ketones. Such behavior is characteristic of hydrogen-bonded substances. Temperature dependencies of the other ketones were about twice the theoretical value, but were comparable to a literature value for water. Ratios of the ketone gas-film coefficients to the gas-film coefficients for the evaporation of water were approximately constant for all the ketones except for acetone, whose values were considerably larger. The ratios increased with temperature; however, the increases were small except for acetone. These ratios can be combined with an equation from the literature for predicting the gas-film coefficient for evaporation of water from a canal to predict the gas-film coefficients for the volatilization of ketones from streams and rivers.
Patel, Sanjay R.; Weng, Jia; Rueschman, Michael; Dudley, Katherine A.; Loredo, Jose S.; Mossavar-Rahmani, Yasmin; Ramirez, Maricelle; Ramos, Alberto R.; Reid, Kathryn; Seiger, Ashley N.; Sotres-Alvarez, Daniela; Zee, Phyllis C.; Wang, Rui
2015-01-01
Study Objectives: While actigraphy is considered objective, the process of setting rest intervals to calculate sleep variables is subjective. We sought to evaluate the reproducibility of actigraphy-derived measures of sleep using a standardized algorithm for setting rest intervals. Design: Observational study. Setting: Community-based. Participants: A random sample of 50 adults aged 18–64 years free of severe sleep apnea participating in the Sueño sleep ancillary study to the Hispanic Community Health Study/Study of Latinos. Interventions: N/A. Measurements and Results: Participants underwent 7 days of continuous wrist actigraphy and completed daily sleep diaries. Studies were scored twice by each of two scorers. Rest intervals were set using a standardized hierarchical approach based on event marker, diary, light, and activity data. Sleep/wake status was then determined for each 30-sec epoch using a validated algorithm, and this was used to generate 11 variables: mean nightly sleep duration, nap duration, 24-h sleep duration, sleep latency, sleep maintenance efficiency, sleep fragmentation index, sleep onset time, sleep offset time, sleep midpoint time, standard deviation of sleep duration, and standard deviation of sleep midpoint. Intra-scorer intraclass correlation coefficients (ICCs) were high, ranging from 0.911 to 0.995 across all 11 variables. Similarly, inter-scorer ICCs were high, also ranging from 0.911 to 0.995, and mean inter-scorer differences were small. Bland-Altman plots did not reveal any systematic disagreement in scoring. Conclusions: With use of a standardized algorithm to set rest intervals, scoring of actigraphy for the purpose of generating a wide array of sleep variables is highly reproducible. Citation: Patel SR, Weng J, Rueschman M, Dudley KA, Loredo JS, Mossavar-Rahmani Y, Ramirez M, Ramos AR, Reid K, Seiger AN, Sotres-Alvarez D, Zee PC, Wang R. Reproducibility of a standardized actigraphy scoring algorithm for sleep in a US Hispanic/Latino population. SLEEP 2015;38(9):1497–1503. PMID:25845697
Measuring inequality: tools and an illustration.
Williams, Ruth F G; Doessel, D P
2006-05-22
This paper examines an aspect of the problem of measuring inequality in health services. The measures that are commonly applied can be misleading because such measures obscure the difficulty in obtaining a complete ranking of distributions. The nature of the social welfare function underlying these measures is important. The overall object is to demonstrate that varying implications for the welfare of society result from inequality measures. Various tools for measuring a distribution are applied to some illustrative data on four distributions about mental health services. Although these data refer to this one aspect of health, the exercise is of broader relevance than mental health. The summary measures of dispersion conventionally used in empirical work are applied to the data here, such as the standard deviation, the coefficient of variation, the relative mean deviation and the Gini coefficient. Other, less commonly used measures also are applied, such as Theil's Index of Entropy, Atkinson's Measure (using two differing assumptions about the inequality aversion parameter). Lorenz curves are also drawn for these distributions. Distributions are shown to have differing rankings (in terms of which is more equal than another), depending on which measure is applied. The scope and content of the literature from the past decade about health inequalities and inequities suggest that the economic literature from the past 100 years about inequality and inequity may have been overlooked, generally speaking, in the health inequalities and inequity literature. An understanding of economic theory and economic method, partly introduced in this article, is helpful in analysing health inequality and inequity.
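A brief sketch of three of the summary measures compared in the paper, computed on two invented distributions; the formulas are the standard textbook ones, not code from the study.

```python
import numpy as np

def inequality_measures(x):
    """Coefficient of variation, Gini coefficient and Theil entropy index
    for a distribution of non-negative service quantities."""
    x = np.sort(np.asarray(x, dtype=float))
    n, mean = len(x), x.mean()
    cv = x.std(ddof=1) / mean
    # Gini via the sorted-values (rank) formula
    gini = (2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum())) - (n + 1) / n
    # Theil index of entropy; terms with x_i = 0 contribute nothing
    pos = x[x > 0]
    theil = np.sum((pos / x.sum()) * np.log(pos / mean))
    return cv, gini, theil

# two hypothetical distributions of mental health service use
equal_ish = [10, 11, 9, 10, 12, 8, 10, 10]
skewed = [1, 1, 2, 3, 5, 8, 20, 40]
print(inequality_measures(equal_ish))
print(inequality_measures(skewed))
```

Because the measures weight different parts of the distribution differently, the two distributions can be ranked differently depending on which index is used, which is the point the paper illustrates.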
Luis, Patricia; Wouters, Christine; Van der Bruggen, Bart; Sandler, Stanley I
2013-08-09
Head-space gas chromatography (HS-GC) is an applicable method for performing vapor-liquid equilibrium measurements and determining activity coefficients. However, the reproducibility of the data may be conditioned by the experimental procedure of the automated pressure-balanced system. The study developed in this work shows that a minimum volume of liquid in the vial is necessary to ensure the reliability of the activity coefficients, since it may become a parameter that influences the magnitude of the peak areas: the helium introduced during the pressurization step may produce significant variations in the results when too small a volume of liquid is selected. The minimum volume required should thus be evaluated before experimentally obtaining the concentration in the vapor phase and the activity coefficients. In this work, the mixture acetonitrile-toluene is taken as an example, requiring a sample volume of more than 5 mL (more than about 25% of the vial volume). The vapor-liquid equilibrium and activity coefficients of mixtures at different concentrations (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 molar fraction) and four temperatures (35, 45, 55 and 70°C) have been determined. Relative standard deviations (RSD) lower than 5% have been obtained, indicating the good reproducibility of the method when a sample volume larger than 5 mL is used. Finally, a general procedure to measure activity coefficients by means of pressure-balanced head-space gas chromatography is proposed. Copyright © 2013 Elsevier B.V. All rights reserved.
An infrared-visible image fusion scheme based on NSCT and compressed sensing
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Maldague, Xavier
2015-05-01
Image fusion, currently a research hotspot in the field of infrared computer vision, has been developed using a wide variety of methods. Traditional image fusion algorithms tend to introduce problems such as data storage shortages and increased computational complexity. Compressed sensing (CS) uses sparse sampling without a priori knowledge and reconstructs the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients can be obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, the adaptive regional energy weighting rule is utilized. Thus only the high-frequency coefficients are specially measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block can be fused via the absolute maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Eventually, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of the experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets, and extracts more useful information.
Berenbrock, Charles
2003-01-01
Improved flood-frequency estimates for short-term (10 or fewer years of record) streamflow-gaging stations were needed to support instream flow studies by the U.S. Forest Service, which are focused on quantifying water rights necessary to maintain or restore productive fish habitat. Because peak-flow data for short-term gaging stations can be biased by having been collected during an unusually wet, dry, or otherwise unrepresentative period of record, the data may not represent the full range of potential floods at a site. To test whether peak-flow estimates for short-term gaging stations could be improved, the two-station comparison method was used to adjust the logarithmic mean and logarithmic standard deviation of peak flows for seven short-term gaging stations in the Salmon and Clearwater River Basins, central Idaho. Correlation coefficients determined from regression of peak flows for paired short-term and long-term (more than 10 years of record) gaging stations over a concurrent period of record indicated that the mean and standard deviation of peak flows for all short-term gaging stations would be improved. Flood-frequency estimates for seven short-term gaging stations were determined using the adjusted mean and standard deviation. The original (unadjusted) flood-frequency estimates for three of the seven short-term gaging stations differed from the adjusted estimates by less than 10 percent, probably because the data were collected during periods representing the full range of peak flows. Unadjusted flood-frequency estimates for four short-term gaging stations differed from the adjusted estimates by more than 10 percent; unadjusted estimates for Little Slate Creek and Salmon River near Obsidian differed from adjusted estimates by nearly 30 percent. These large differences probably are attributable to unrepresentative periods of peak-flow data collection.
Prentice, J C; Pizer, S D; Conlin, P R
2016-12-01
To characterize the relationship between HbA1c variability and adverse health outcomes among US military veterans with Type 2 diabetes. This retrospective cohort study used Veterans Affairs and Medicare claims for veterans with Type 2 diabetes taking metformin who initiated a second diabetes medication (n = 50 861). The main exposure of interest was HbA1c variability during a 3-year baseline period. HbA1c variability, categorized into quartiles, was defined as standard deviation, coefficient of variation and adjusted standard deviation, which accounted for the number and mean number of days between HbA1c tests. Cox proportional hazard models predicted mortality, hospitalization for ambulatory care-sensitive conditions, and myocardial infarction or stroke and were controlled for mean HbA1c levels and the direction of change in HbA1c levels during the baseline period. Over a mean 3.3 years of follow-up, all HbA1c variability measures significantly predicted each outcome. Using the adjusted standard deviation measure for HbA1c variability, the hazard ratios for the third and fourth quartile predicting mortality were 1.14 (95% CI 1.04, 1.25) and 1.42 (95% CI 1.28, 1.58), for myocardial infarction and stroke they were 1.25 (95% CI 1.10, 1.41) and 1.23 (95% CI 1.07, 1.42) and for ambulatory care-sensitive condition hospitalization they were 1.10 (95% CI 1.03, 1.18) and 1.11 (95% CI 1.03, 1.20). Higher baseline HbA1c levels independently predicted the likelihood of each outcome. In veterans with Type 2 diabetes, greater HbA1c variability was associated with an increased risk of adverse long-term outcomes, independently of HbA1c levels and direction of change. Limiting HbA1c fluctuations over time may reduce complications. © 2016 Diabetes UK.
Conrads, P.A.; Smith, P.A.
1996-01-01
The one-dimensional, unsteady-flow model, BRANCH, and the Branched Lagrangian Transport Model (BLTM) were calibrated and validated for the Cooper and Wando Rivers near Charleston, South Carolina. Data used to calibrate the BRANCH model included water-level data at four locations on the Cooper River and two locations on the Wando River, measured tidal-cycle streamflows at five locations on the Wando River, and simulated tidal-cycle streamflows (using an existing validated BRANCH model of the Cooper River) for four locations on the Cooper River. The BRANCH model was used to generate the necessary hydraulic data used in the BLTM model. The BLTM model was calibrated and validated using time series of salinity concentrations at two locations on the Cooper River and at two locations on the Wando River. Successful calibration and validation of the BRANCH and BLTM models to water levels, stream flows, and salinity were achieved after applying a positive 0.45 foot datum correction to the downstream boundary. The sensitivity of the simulated salinity concentrations to changes in the downstream gage datum, channel geometry, and roughness coefficient in the BRANCH model, and to the dispersion factor in the BLTM model was evaluated. The simulated salinity concentrations were most sensitive to changes in the downstream gage datum. A decrease of 0.5 feet in the downstream gage datum increased the simulated 3-day mean salinity concentration by 107 percent (12.7 to 26.3 parts per thousand). The range of the salinity concentration went from a tidal oscillation with a standard deviation of 3.9 parts per thousand to a nearly constant concentration with a standard deviation of 0.0 parts per thousand. An increase in the downstream gage datum decreased the simulated 3-day mean salinity concentration by 47 percent (12.7 to 6.7 parts per thousand) and decreased the standard deviation from 3.9 to 3.4 parts per thousand.
Torabizadeh, Mahsa; Talebpour, Zahra; Adib, Nuoshin; Aboul-Enein, Hassan Y
2016-04-01
A new monolithic coating based on vinylpyrrolidone-ethylene glycol dimethacrylate polymer was introduced for stir bar sorptive extraction. The polymerization step was performed using different contents of monomer, cross-linker and porogenic solvent, and the best formulation was selected. The quality of the prepared vinylpyrrolidone-ethylene glycol dimethacrylate stir bars was satisfactory, demonstrating good repeatability within batch (relative standard deviation < 3.5%) and acceptable reproducibility between batches (relative standard deviation < 6.0%). The prepared stir bar was utilized in combination with ultrasound-assisted liquid desorption, followed by high-performance liquid chromatography with ultraviolet detection for the simultaneous determination of diazepam and nordazepam in human plasma samples. To optimize the extraction step, a three-level, four-factor, three-block Box-Behnken design was applied. Under the optimum conditions, the analytical performance of the proposed method displayed excellent linear dynamic ranges for diazepam (36-1200 ng/mL) and nordazepam (25-1200 ng/mL), with correlation coefficients of 0.9986 and 0.9968 and detection limits of 12 and 10 ng/mL, respectively. The intra- and interday recovery ranged from 93 to 106%, and the relative standard deviations were less than 6%. Finally, the proposed method was successfully applied to the analysis of diazepam and nordazepam at their therapeutic levels in human plasma. The novelty of this study is the improved polarity of the stir bar coating and its application for the simultaneous extraction of diazepam and its active metabolite, nordazepam in human plasma sample. The method was more rapid than previously reported stir bar sorptive extraction techniques based on monolithic coatings, and exhibited lower detection limits in comparison with similar methods for the determination of diazepam and nordazepam in biological fluids. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Spalding, Steven J; Kwoh, C Kent; Boudreau, Robert; Enama, Joseph; Lunich, Julie; Huber, Daniel; Denes, Louis; Hirsch, Raphael
2008-01-01
Introduction: The assessment of joints with active arthritis is a core component of widely used outcome measures. However, substantial variability exists within and across examiners in the assessment of these active joint counts. Swelling and temperature changes, two qualities estimated during active joint counts, are amenable to quantification using noncontact digital imaging technologies. We sought to explore the ability of three-dimensional (3D) and thermal imaging to reliably measure joint shape and temperature. Methods: A Minolta 910 Vivid non-contact 3D laser scanner and a Meditherm med2000 Pro Infrared camera were used to create digital representations of wrist and metacarpophalangeal (MCP) joints. Specialized software generated 3 quantitative measures for each joint region: 1) volume; 2) Surface Distribution Index (SDI), a marker of joint shape representing the standard deviation of vertical distances from points on the skin surface to a fixed reference plane; 3) Heat Distribution Index (HDI), representing the standard error of temperatures. Seven wrists and 6 MCP regions from 5 subjects with arthritis were used to develop and validate 3D image acquisition and processing techniques. HDI values from 18 wrist and 9 MCP regions were obtained from 17 patients with active arthritis and compared to data from 10 wrist and MCP regions from 5 controls. Standard deviation (SD), coefficient of variation (CV), and intraclass correlation coefficients (ICC) were calculated for each quantitative measure to establish their reliability. Results: CVs for volume and SDI were <1.3% and ICCs were greater than 0.99. Thermal measures were less reliable than 3D measures. However, significant differences were observed between control and arthritis HDI values. Two case studies of arthritic joints demonstrated quantifiable changes in swelling and temperature corresponding with changes in symptoms and physical exam findings. Conclusion: 3D and thermal imaging provide reliable measures of joint volume, shape, and thermal patterns. Further refinement may lead to the use of these technologies to improve the assessment of disease activity in arthritis. PMID:18215307
A Note on Standard Deviation and Standard Error
ERIC Educational Resources Information Center
Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth
2010-01-01
Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.
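The distinction can be illustrated in a few lines (the sample values are invented):

```python
import numpy as np

# The distinction the note addresses: the standard deviation describes the
# spread of the observations themselves; the standard error of the mean
# (SD / sqrt(n)) describes the uncertainty of the sample mean.
data = np.array([4.2, 5.1, 4.8, 5.5, 4.9, 5.0, 4.6, 5.3])  # illustrative values
sd = data.std(ddof=1)
se = sd / np.sqrt(len(data))
print(f"SD = {sd:.3f}  (use when describing variability among observations)")
print(f"SE = {se:.3f}  (use when reporting the precision of the mean)")
```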
Self-Broadening and Self-Shift Coefficients in the Fundamental Band of 12C 16O
NASA Technical Reports Server (NTRS)
Devi, Malathy V.; Benner, D. Chris; Smith, Mary Ann H.; Rinsland, Curtis P.
1998-01-01
High quality and precise measurements of self-broadened and self-shift coefficients in the fundamental band of C-12O-16 were made using spectra recorded at room temperature with the high-resolution (0.0027 cm(exp -1)) McMath-Pierce Fourier transform spectrometer located at the National Solar Observatory on Kitt Peak, Arizona. The spectral region under investigation (2008-2247 cm(exp -1)) contains the P(31) to R(31) transitions. The data were obtained using a high-purity natural isotopic sample of carbon monoxide and two absorption cells with pathlengths of 4.08 and 9.98 cm, respectively. Various pressures of CO were used, ranging between 0.25 and 201.2 Torr. The results were obtained by analyzing five spectra simultaneously, using a multispectrum nonlinear least-squares fitting technique. The self-broadened coefficients ranged from 0.0426(2) cm(exp -1) atm(exp -1) at 296 K to 0.0924(2) cm(exp -1) atm(exp -1) at 296 K, while the pressure-induced shift coefficients varied between -0.0042(3) cm(exp -1) atm(exp -1) at 296 K and +0.0005(1) cm(exp -1) atm(exp -1) at 296 K. The value in parentheses is the estimated uncertainty in units of the last digit. The self-broadened coefficients of lines with the same values of m in the P and R branches agree to within experimental uncertainties, while the self-shift coefficients showed considerable variation within and between the two branches. The mean value of the ratios of P branch to R branch self-broadened coefficients was found to be 1.01 with a standard deviation of +/- 0.01. Comparisons of the results with other published data were made.
NASA Astrophysics Data System (ADS)
Suciu, N.; Vamos, C.; Vereecken, H.; Vanderborght, J.; Hardelauf, H.
2003-04-01
When the small-scale transport is modeled by a Wiener process and the large-scale heterogeneity by a random velocity field, the effective coefficients, Deff, can be decomposed as sums of the local coefficient, D, a contribution of the random advection, Dadv, and a contribution of the randomness of the trajectory of the plume center of mass, Dcm: Deff = D + Dadv - Dcm. The coefficient Dadv is similar to that introduced by Taylor in 1921, and more recent works associate it with thermodynamic equilibrium. The "ergodic hypothesis" says that over large time intervals Dcm vanishes and the effect of the heterogeneity is described by Dadv = Deff - D. In this work we investigate numerically the long-time behavior of the effective coefficients as well as the validity of the ergodic hypothesis. The transport in every realization of the velocity field is modeled with the Global Random Walk Algorithm, which is able to track as many particles as necessary to achieve a statistically reliable simulation of the process. Averages over realizations are further used to estimate mean coefficients and standard deviations. In order to remain in the frame of most of the theoretical approaches, the velocity field was generated in a linear approximation and the logarithm of the hydraulic conductivity was taken to have an exponentially decaying correlation with variance equal to 0.1. Our results show that even under these idealized conditions, the effective coefficients tend to asymptotic constant values only when the plume travels thousands of correlation lengths (while the first-order theories usually predict Fickian behavior after tens of correlation lengths), and that the ergodicity conditions are still far from being met.
Crescenti, Remo A; Bamber, Jeffrey C; Partridge, Mike; Bush, Nigel L; Webb, Steve
2007-11-21
Research on polymer-gel dosimetry has been driven by the need for three-dimensional dosimetry, and because alternative dosimeters are unsatisfactory or too slow for that task. Magnetic resonance tomography is currently the most well-developed technique for determining radiation-induced changes in polymer structure, but quick low-cost alternatives remain of significant interest. In previous work, ultrasound attenuation and speed of sound were found to change as a function of absorbed radiation dose in polymer-gel dosimeters, although the investigations were restricted to one ultrasound frequency. Here, the ultrasound attenuation coefficient mu in one polymer gel (MAGIC) was investigated as a function of radiation dose D and as a function of ultrasonic frequency f in a frequency range relevant for imaging dose distributions. The nonlinearity of the frequency dependence was characterized, fitting a power-law model mu = a f^b; the fitting parameters were examined for potential use as additional dose readout parameters. In the observed relationship between the attenuation coefficient and dose, the slopes in a quasi-linear dose range from 0 to 30 Gy were found to vary with the gel batch but lie between 0.0222 and 0.0348 dB cm(-1) Gy(-1) at 2.3 MHz, between 0.0447 and 0.0608 dB cm(-1) Gy(-1) at 4.1 MHz and between 0.0663 and 0.0880 dB cm(-1) Gy(-1) at 6.0 MHz. The mean standard deviation of the slope for all samples and frequencies was 15.8%. The slope was greater at higher frequencies, but so were the intra-batch fluctuations and intra-sample standard deviations. Further investigations are required to overcome the observed variability, which was largely associated with the sample preparation technique, before it can be determined whether any frequency is superior to others in terms of accuracy and precision in dose determination. Nevertheless, lower frequencies will allow measurements through larger samples. The fit parameter a of the frequency dependence, describing the attenuation coefficient at 1 MHz, was found to be dose dependent, which is consistent with our expectations, as polymerization is known to be associated with increased absorption of ultrasound. No significant dose dependence was found for the fit parameter b, which describes the nonlinearity with frequency. This is consistent with the increased absorption being due to the introduction of new relaxation processes with characteristic frequencies similar to those of existing processes. The data presented here will help with optimizing the design of future 3D dose-imaging systems using ultrasound methods.
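A sketch of the power-law fit described above, performed as a linear regression in log-log space; the attenuation values below are illustrative placeholders, not the measured data.

```python
import numpy as np

def fit_power_law(freq_mhz, atten_db_per_cm):
    """Fit mu = a * f**b by linear regression of log(mu) on log(f).
    Returns a (the attenuation coefficient at 1 MHz) and the exponent b."""
    logf = np.log(np.asarray(freq_mhz, dtype=float))
    logmu = np.log(np.asarray(atten_db_per_cm, dtype=float))
    b, loga = np.polyfit(logf, logmu, 1)
    return np.exp(loga), b

# hypothetical attenuation readings at the three frequencies used in the paper
f = [2.3, 4.1, 6.0]        # MHz
mu = [0.55, 1.05, 1.60]    # dB/cm, illustrative only
a, b = fit_power_law(f, mu)
print(f"a = {a:.3f} dB/cm at 1 MHz, b = {b:.2f}")
```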
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) the stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
A high-fidelity Monte Carlo evaluation of CANDU-6 safety parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Y.; Hartanto, D.
2012-07-01
Important safety parameters such as the fuel temperature coefficient (FTC) and the power coefficient of reactivity (PCR) of the CANDU-6 (CANada Deuterium Uranium) reactor have been evaluated by using a modified MCNPX code. For accurate analysis of the parameters, the DBRC (Doppler Broadening Rejection Correction) scheme was implemented in MCNPX in order to account for the thermal motion of the heavy uranium nucleus in the neutron-U scattering reactions. In this work, a standard fuel lattice has been modeled, the fuel is depleted by using MCNPX, and the FTC value is evaluated for several burnup points including the mid-burnup representing a near-equilibrium core. The Doppler effect has been evaluated by using several cross-section libraries such as ENDF/B-VI, ENDF/B-VII, JEFF, and JENDL. The PCR value is also evaluated at mid-burnup conditions to characterize safety features of the equilibrium CANDU-6 reactor. To improve the reliability of the Monte Carlo calculations, a huge number of neutron histories is considered in this work, and the standard deviation of the k-inf values is only 0.5~1 pcm. It has been found that the FTC is significantly enhanced by accounting for the Doppler broadening of the scattering resonances, and the PCR is clearly improved. (authors)
Özyol, Pelin; Özyol, Erhan; Karalezli, Aylin
2018-01-01
To examine the effect of a single dose of artificial tear administration on automated visual field (VF) testing in patients with glaucoma and dry eye syndrome. A total of 35 patients with primary open-angle glaucoma, experienced in VF testing and with symptoms of dry eye, were enrolled in this study. At the first visit, standard VF testing was performed. At the second and third visits, with an interval of one week, while the left eyes served as controls, one drop of artificial tear was administered to each patient's right eye, and then VF testing was performed again. The reliability parameters, VF indices, number of depressed points at the probability levels of the pattern deviation plots, and test times were compared between visits. No significant difference was observed in any VF testing parameters of the control eyes (P>0.05). In artificial tear-administered eyes, significant improvement was observed in test duration, mean deviation, and the number of depressed points at the probability levels (P<0.5%, P<1%, P<2%) of the pattern deviation plots (P<0.05). The post-hoc test revealed that artificial tear administration elicited an improvement in test duration, mean deviation, and the number of depressed points at the probability levels (P<0.5%, P<1%, P<2%) of the pattern deviation plots from the first visit to the second and third visits (P<0.01 for all comparisons). The intraclass correlation coefficient for the three VF test indices was found to be between 0.735 and 0.85 (P<0.001 for all). A single dose of artificial tear administered immediately before VF testing seems to improve test results and decrease test time.
Code of Federal Regulations, 2010 CFR
2010-01-01
... defined in section 1 of this appendix is as follows: (a) The standard deviation of lateral track errors shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean... standard deviation about the mean encompasses approximately 68 percent of the data and plus or minus 2...
Prediction of soil organic carbon partition coefficients by soil column liquid chromatography.
Guo, Rongbo; Liang, Xinmiao; Chen, Jiping; Wu, Wenzhong; Zhang, Qing; Martens, Dieter; Kettrup, Antonius
2004-04-30
To avoid the limitations of the widely used methods for predicting soil organic carbon partition coefficients (KOC) from hydrophobic parameters, e.g., the n-octanol/water partition coefficients (KOW) and reversed-phase high-performance liquid chromatographic (RP-HPLC) retention factors, the soil column liquid chromatographic (SCLC) method was developed for KOC prediction. Real soils were used as the packing materials of RP-HPLC columns, and the correlations between the retention factors of organic compounds on soil columns (ksoil) and KOC measured by the batch equilibrium method were studied. Good correlations were achieved between ksoil and KOC for three types of soils with different properties. All the squared correlation coefficients (R2) of the linear regression between log ksoil and log KOC were higher than 0.89, with standard deviations of less than 0.21. In addition, the prediction of KOC from KOW and from the RP-HPLC retention factors on a cyanopropyl (CN) stationary phase (kCN) was comparatively evaluated for the three types of soils. The results show that the prediction of KOC from kCN and KOW is only applicable to some specific types of soils. The results obtained in the present study prove that the SCLC method is appropriate for KOC prediction for different types of soils; however, the applicability of using hydrophobic parameters to predict KOC largely depends on the properties of the soil concerned.
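The log-log regression underlying the reported R2 and standard deviation can be reproduced schematically as follows (the data pairs are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy import stats

def koc_regression(log_ksoil, log_koc):
    """Linear regression log KOC = intercept + slope * log ksoil; returns the
    slope, intercept, R^2 and residual standard deviation of the fit."""
    res = stats.linregress(log_ksoil, log_koc)
    pred = res.intercept + res.slope * np.asarray(log_ksoil, dtype=float)
    resid = np.asarray(log_koc, dtype=float) - pred
    sd = np.sqrt(np.sum(resid ** 2) / (len(resid) - 2))
    return res.slope, res.intercept, res.rvalue ** 2, sd

# hypothetical measurements for one soil type
log_ksoil = [0.2, 0.5, 0.9, 1.3, 1.8, 2.1, 2.6]
log_koc   = [1.4, 1.8, 2.3, 2.7, 3.3, 3.6, 4.2]
print(koc_regression(log_ksoil, log_koc))
```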
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation of empirical ground-motion prediction models obtained by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
Jiang, Ting-Fu; Lv, Zhi-Hua; Wang, Yuan-Hong; Yue, Mei-E
2006-06-01
A new, simple and rapid capillary electrophoresis (CE) method, using hexadimethrine bromide (HDB) as electroosmotic flow (EOF) modifier, was developed for the identification and quantitative determination of four plant hormones, including gibberellin A3 (GA3), indole-3-acetic acid (IAA), alpha-naphthaleneacetic acid (NAA) and 4-chlorophenoxyacetic acid (4-CA). The optimum separation was achieved with 20 mM borate buffer at pH 10.00 containing 0.005% (w/v) of HDB. The applied voltage was -25 kV and the capillary temperature was kept constant at 25 degrees C. Salicylic acid was used as internal standard for quantification. The calibration curves exhibited good linearity between the concentration ratios of the standard samples to the internal standard and the corresponding peak-area ratios. The correlation coefficients were from 0.9952 to 0.9997. The relative standard deviations of migration times and peak areas were less than 1.93 and 6.84%, respectively. The effects of buffer pH, the concentration of HDB and the voltage on the resolution were studied systematically. By this method, the contents of the plant hormones in biofertilizer were successfully determined within 7 min, with satisfactory repeatability and recovery.
NASA Astrophysics Data System (ADS)
Besemer, Abigail E.; Titz, Benjamin; Grudzinski, Joseph J.; Weichert, Jamey P.; Kuo, John S.; Robins, H. Ian; Hall, Lance T.; Bednarz, Bryan P.
2017-08-01
Variations in tumor volume segmentation methods in targeted radionuclide therapy (TRT) may lead to dosimetric uncertainties. This work investigates the impact of PET and MRI threshold-based tumor segmentation on TRT dosimetry in patients with primary and metastatic brain tumors. In this study, PET/CT images of five brain cancer patients were acquired at 6, 24, and 48 h post-injection of 124I-CLR1404. The tumor volume was segmented using two standardized uptake value (SUV) threshold levels, two tumor-to-background ratio (TBR) threshold levels, and a T1 Gadolinium-enhanced MRI threshold. The dice similarity coefficient (DSC), jaccard similarity coefficient (JSC), and overlap volume (OV) metrics were calculated to compare differences in the MRI and PET contours. The therapeutic 131I-CLR1404 voxel-level dose distribution was calculated from the 124I-CLR1404 activity distribution using RAPID, a Geant4 Monte Carlo internal dosimetry platform. The TBR, SUV, and MRI tumor volumes ranged from 2.3-63.9 cc, 0.1-34.7 cc, and 0.4-11.8 cc, respectively. The average ± standard deviation (range) was 0.19 ± 0.13 (0.01-0.51), 0.30 ± 0.17 (0.03-0.67), and 0.75 ± 0.29 (0.05-1.00) for the JSC, DSC, and OV, respectively. The DSC and JSC values were small and the OV values were large for both the MRI-SUV and MRI-TBR combinations because the regions of PET uptake were generally larger than the MRI enhancement. Notable differences in the tumor dose volume histograms were observed for each patient. The mean (standard deviation) 131I-CLR1404 tumor doses ranged from 0.28-1.75 Gy GBq-1 (0.07-0.37 Gy GBq-1). The ratio of maximum-to-minimum mean doses for each patient ranged from 1.4-2.0. The tumor volume and the interpretation of the tumor dose is highly sensitive to the imaging modality, PET enhancement metric, and threshold level used for tumor volume segmentation. The large variations in tumor doses clearly demonstrate the need for standard protocols for multimodality tumor segmentation in TRT dosimetry.
Scanning laser polarimetry in eyes with exfoliation syndrome.
Dimopoulos, Antonios T; Katsanos, Andreas; Mikropoulos, Dimitrios G; Giannopoulos, Theodoros; Empeslidis, Theodoros; Teus, Miguel A; Holló, Gábor; Konstas, Anastasios G P
2013-01-01
To compare retinal nerve fiber layer thickness (RNFLT) of normotensive eyes with exfoliation syndrome (XFS) and healthy eyes. Sixty-four consecutive individuals with XFS and normal office-time intraocular pressure (IOP) and 72 consecutive healthy controls were prospectively enrolled for a cross-sectional analysis in this hospital-based observational study. The GDx-VCC parameters (temporal-superior-nasal-inferior-temporal [TSNIT] average, superior average, inferior average, TSNIT standard deviation (SD), and nerve fiber indicator [NFI]) were compared between groups. Correlation between various clinical parameters and RNFLT parameters was investigated with Spearman coefficient. The NFI, although within normal limits for both groups, was significantly greater in the XFS group compared to controls: the respective median and interquartile range (IQR) values were 25.1 (22.0-29.0) vs 15.0 (12.0-20.0), p<0.001. In the XFS group, all RNFLT values were significantly lower compared to controls (p<0.001). However, they were all within the normal clinical ranges for both groups: TSNIT average median (IQR): 52.8 (49.7-55.7) vs 56.0 (53.0-59.3) µm; superior average mean (SD): 62.3 (6.7) vs 68.8 (8.2) µm; inferior average mean (SD): 58.0 (7.2) vs 64.8 (7.7) µm, respectively. TSNIT SD was significantly lower in the XFS group, median (IQR): 18.1 (15.4-20.4) vs 21.0 (18.4-23.8), p<0.001. There was no systematic relationship between RNFLT and visual acuity, cup-to-disc ratio, IOP, central corneal thickness, Humphrey mean deviation, and pattern standard deviation in either group. Compared to control eyes, polarimetry-determined RNFLT was lower in XFS eyes with normal IOP. Therefore, close monitoring of RNFLT may facilitate early identification of those XFS eyes that convert to exfoliative glaucoma.
A better norm-referenced grading using the standard deviation criterion.
Chan, Wing-shing
2014-01-01
The commonly used norm-referenced grading assigns grades to rank-ordered students in fixed percentiles. It has the disadvantage of ignoring the actual distance of scores among students. A simple norm-referenced grading via the standard deviation is suggested for routine educational grading. The number of standard deviations of a student's score from the class mean was used as the common yardstick to measure achievement level. The cumulative probability of a normal distribution was referenced to help decide the number of students included within a grade. Results of the foremost 12 students from a medical examination were used to illustrate this grading method. Grading by standard deviation seemed to produce better cutoffs, allocating grades to students more in accordance with their differential achievements, and had less chance of creating arbitrary cutoffs between two similarly scored students than grading by fixed percentile. Grading by standard deviation has more advantages and is more flexible than grading by fixed percentile for norm-referenced grading.
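A minimal sketch of the grading idea, assuming illustrative grade cutoffs at z = 1, 0, and -1 standard deviations from the class mean (the paper's actual cutoffs may differ):

```python
import numpy as np

def grade_by_sd(scores, cutoffs=(1.0, 0.0, -1.0)):
    """Assign letter grades from the number of standard deviations each score
    lies from the class mean: z >= 1 -> A, z >= 0 -> B, z >= -1 -> C, else D
    (cutoffs are illustrative), in the spirit of the method described above."""
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean()) / scores.std(ddof=1)
    letters = []
    for zi in z:
        if zi >= cutoffs[0]:
            letters.append("A")
        elif zi >= cutoffs[1]:
            letters.append("B")
        elif zi >= cutoffs[2]:
            letters.append("C")
        else:
            letters.append("D")
    return list(zip(scores.tolist(), np.round(z, 2).tolist(), letters))

exam = [92, 88, 85, 83, 80, 78, 77, 75, 73, 70, 66, 60]  # hypothetical scores
for row in grade_by_sd(exam):
    print(row)
```

Unlike fixed-percentile grading, two students with nearly identical scores receive the same grade unless their z-scores straddle a cutoff, which is the advantage the paper argues for.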
Johnson, Craig W; Johnson, Ronald; Kim, Mira; McKee, John C
2009-11-01
During the 2004 and 2005 orientations, all 187 and 188 new matriculates, respectively, in two southwestern U.S. nursing schools completed Personal Background and Preparation Surveys (PBPS) in the first predictive validity study of a diagnostic and prescriptive instrument for averting adverse academic status events (AASE) among nursing or health science professional students. A one-standard-deviation increase in PBPS risk (p < 0.05) multiplied the odds of first-year or second-year AASE by approximately 150%, controlling for school affiliation and underrepresented minority student (URMS) status. AASE odds one standard deviation above the mean were 216% to 250% of those one standard deviation below the mean. Odds of first-year or second-year AASE for URMS one standard deviation above the 2004 PBPS mean were 587% of those for non-URMS one standard deviation below the mean. The PBPS consistently and significantly facilitated early identification of nursing students at risk for AASE, enabling proactive targeting of interventions for risk amelioration and AASE or attrition prevention. Copyright 2009, SLACK Incorporated.
de Vasconcellos, Ilmeire Ramos Rosembach; Griep, Rosane Härter; Portela, Luciana; Alves, Márcia Guimarães de Mello; Rotenberg, Lúcia
2016-01-01
OBJECTIVE To describe the steps in the transcultural adaptation of the scale in the Effort-reward imbalance model to household and family work to the Brazilian context. METHODS We performed the translation, back-translation, and initial psychometric evaluation of the questionnaire that comprised three dimensions: (i) effort (eight items, emphasizing quantitative workload), (ii) reward (11 items that seek to capture the intrinsic value of family and household work, societal esteem, recognition from the spouse/partner, and affection from the children), and (iii) overcommitment (four items related to intrinsic effort). The scale was included in a sectional study conducted with 1,045 nursing workers. A subsample of 222 subjects answered the questionnaire for a second time, seven to 15 days thereafter. The data were collected between October 2012 and May 2013. The internal consistency of the scale was evaluated using Cronbach’s alpha and test-retest reliability analysis, square weighted kappa, prevalence and bias adjusted Kappa, and intraclass correlation coefficient. RESULTS Prevalence and bias-adjusted Kappa (ka) of the scale dimensions ranged from 0.80-0.83 for overcommitment, 0.78-0.90 for effort, and 0.76-0.93 for reward. In most dimensions, the values of minimum and maximum scores, average, standard deviation, and Cronbach’s alpha were similar in test and retest scores. Only on the societal esteem subdimension (reward) was there little variation in standard deviation (test score of 2.24 and retest score of 3.36) and in Cronbach’s alpha coefficient (test score of 0.38 and retest score of 0.59). CONCLUSIONS The Brazilian version of the scale was found to have proper reliability indices regarding time stability, which suggests adapting it for use in populations with characteristics similar to those in this study. PMID:27355466
Effects of work-related stress on work ability index among refinery workers
Habibi, Ehsanollah; Dehghan, Habibollah; Safari, Shahram; Mahaki, Behzad; Hassanzadeh, Akbar
2014-01-01
Introduction: Work-related stress is one of the basic problems in industry and among the top 10 work-related health problems, and it is increasingly implicated in the development of a number of problems such as cardiovascular disease, musculoskeletal diseases, and early retirement of employees. In turn, early retirement of employees from the workplace has added to the problems of today's industries. Therefore, improving work ability is one of the most effective ways to prevent disability and early retirement. The aim of this study was to determine the relationship between job stress score and the work ability index (WAI) among refinery workers. Materials and Methods: This is a cross-sectional study in which 171 workers from a refinery in Isfahan in 2012, working in different occupational groups, participated. Based on appropriate assignment sampling, 33 office workers, 69 operational workers, and 69 maintenance workers, respectively, were invited to participate in this study. Two questionnaires, covering work-related stress and the WAI, were filled in. Finally, the information was analyzed using SPSS-20 and statistical tests, namely analysis of covariance, the Kruskal-Wallis test, the Pearson correlation coefficient, ANOVA, and the t-test. Results: Data analysis revealed that 86% and 14% of participants had moderate and severe stress, respectively. The mean stress score and standard deviation were 158.7 ± 17.3, which is in the extreme stress range. The mean score and standard deviation of the WAI questionnaire were 37.18 and 3.86, respectively, which is in the good range. The Pearson correlation coefficient showed that the WAI score had a significant inverse relationship with the stress score. Conclusion: According to the results, the mean stress score among refinery workers was high, and high stress was one factor affecting work ability; hence, training in communication skills and a safe working environment, in order to decrease stress, would enhance the work ability of workers. PMID:24741658
Mitsuoka, Motoki; Shinzawa, Hideyuki; Morisawa, Yusuke; Kariyama, Naomi; Higashi, Noboru; Tsuboi, Motohiro; Ozaki, Yukihiro
2011-01-01
Far-ultraviolet (FUV) spectra in the 190-300 nm region were measured for spring water in the Awaji-Akashi area, Tamba area, and Rokko-Arima area in Hyogo Prefecture, Japan; these areas have quite different geological features. The spectra of the spring water in the Awaji-Akashi area can be divided into two groups: the spring water samples containing large amounts of NO(3)(-) and/or Cl(-), and those containing only small amounts of NO(3)(-) and Cl(-). The former shows a saturated band below 190 nm due to NO(3)(-) and/or Cl(-). These two types of spectra correspond to different lithological areas in the Awaji-Akashi area: sedimentary lithology near the seashore, containing many ions from the seawater, and granitic lithology far from the sea. The spring water from the Tamba area, which is far from the sea, contains relatively small amounts of NO(3)(-) and Cl(-); it does not yield a strong band in the region observed. The FUV spectra of three of four kinds of spring water samples in the Arima Hotspring show characteristic spectral patterns. They are quite different from the spectra of the spring water samples of the Rokko area. Calibration models were developed for NO(3)(-), Cl(-), SO(4)(2-), Na(+), and Mg(2+) in the nine kinds of spring water collected in the Awaji-Akashi, Tamba, and Rokko-Arima areas by using univariate analysis of the first-derivative spectra and the actual values obtained by ion chromatography. NO(3)(-) yields the best results: a correlation coefficient of 0.999 and a standard deviation of 0.09 ppm at the wavelength of 212 nm. Cl(-) also gives good results: a correlation coefficient of 0.993 and a standard deviation of 0.5 ppm at the wavelength of 192 nm.
Rhee, Sun Jung; Hong, Hyun Sook; Kim, Chul-Hee; Lee, Eun Hye; Cha, Jang Gyu; Jeong, Sun Hye
2015-12-01
This study aimed to evaluate the usefulness of Acoustic Structure Quantification (ASQ; Toshiba Medical Systems Corporation, Nasushiobara, Japan) values in the diagnosis of Hashimoto thyroiditis using B-mode sonography and to identify a cutoff ASQ level that differentiates Hashimoto thyroiditis from normal thyroid tissue. A total of 186 thyroid lobes with Hashimoto thyroiditis and normal thyroid glands underwent sonography with ASQ imaging. The quantitative results were reported in an echo amplitude analysis (Cm(2)) histogram with average, mode, ratio, standard deviation, blue mode, and blue average values. Receiver operating characteristic curve analysis was performed to assess the diagnostic ability of the ASQ values in differentiating Hashimoto thyroiditis from normal thyroid tissue. Intraclass correlation coefficients of the ASQ values were obtained between 2 observers. Of the 186 thyroid lobes, 103 (55%) had Hashimoto thyroiditis, and 83 (45%) were normal. There was a significant difference between the ASQ values of Hashimoto thyroiditis glands and those of normal glands (P < .001). The ASQ values in patients with Hashimoto thyroiditis were significantly greater than those in patients with normal thyroid glands. The areas under the receiver operating characteristic curves for the ratio, blue average, average, blue mode, mode, and standard deviation were: 0.936, 0.902, 0.893, 0.855, 0.846, and 0.842, respectively. The ratio cutoff value of 0.27 offered the best diagnostic performance, with sensitivity of 87.38% and specificity of 95.18%. The intraclass correlation coefficients ranged from 0.86 to 0.94, which indicated substantial agreement between the observers. Acoustic Structure Quantification is a useful and promising sonographic method for diagnosing Hashimoto thyroiditis. Not only could it be a helpful tool for quantifying thyroid echogenicity, but it also would be useful for diagnosis of Hashimoto thyroiditis. © 2015 by the American Institute of Ultrasound in Medicine.
Empirical forecast of the quiet time Ionosphere over Europe: a comparative model investigation
NASA Astrophysics Data System (ADS)
Badeke, R.; Borries, C.; Hoque, M. M.; Minkwitz, D.
2016-12-01
The purpose of this work is to find the best empirical model for a reliable 24 hour forecast of the ionospheric Total Electron Content (TEC) over Europe under geomagnetically quiet conditions. It will be used as an improved reference for the description of storm-induced perturbations in the ionosphere. The observational TEC data were obtained from the International GNSS Service (IGS). Four different forecast model approaches were validated with observational IGS TEC data: a 27 day median model (27d), a Fourier Analysis (FA) approach, the Neustrelitz TEC global model (NTCM-GL) and NeQuick 2. Two years were investigated depending on the solar activity: 2015 (high activity) and 2008 (low activity). The time periods of magnetic storms, which were identified with the Dst index, were excluded from the validation. For both years the two models 27d and FA show better results than NTCM-GL and NeQuick 2. For example, for the year 2015 and 15° E / 50° N the difference between the IGS data and the predicted 27d model shows a mean value of 0.413 TEC units (TECU), a standard deviation of 3.307 TECU and a correlation coefficient of 0.921, while NTCM-GL and NeQuick 2 have mean differences of around 2-3 TECU, standard deviations of 4.5-5 TECU and correlation coefficients below 0.85. Since 27d and FA predictions strongly depend on observational data, the results confirm that data-driven forecasts perform better than the climatological models NTCM-GL and NeQuick 2. However, the benefit of NTCM-GL and NeQuick 2 is their lower data dependency, i.e. they do not lose precision when observational IGS TEC data are unavailable. Hence a combination of the different models, chosen according to the available data, is recommended.
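A minimal sketch of the 27-day median (27d) forecast and of the validation statistics quoted above (mean difference, standard deviation, correlation coefficient) is shown below, assuming hourly TEC values; the synthetic series stands in for IGS data.

```python
# 27-day median forecast of hourly TEC and its validation statistics (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
hours, days = 24, 60
tec = 10 + 5 * np.sin(2 * np.pi * np.arange(days * hours) / hours) \
        + rng.normal(0, 2, days * hours)                 # TECU, placeholder diurnal series

def forecast_27d(series, hours_per_day=24, window_days=27):
    """Predict each hour of the next day as the median of the same hour
    over the preceding 27 days."""
    daily = series.reshape(-1, hours_per_day)
    return np.median(daily[-window_days:], axis=0)

pred = forecast_27d(tec[:-hours])
obs = tec[-hours:]

diff = obs - pred
print(f"mean diff = {diff.mean():.3f} TECU, "
      f"SD = {diff.std(ddof=1):.3f} TECU, "
      f"r = {np.corrcoef(obs, pred)[0, 1]:.3f}")
```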
Chuang, Shin-Shin; Wu, Kung-Tai; Lin, Chen-Yang; Lee, Steven; Chen, Gau-Yang; Kuo, Cheng-Deng
2014-08-01
The Poincaré plot of RR intervals (RRI) is obtained by plotting RRIn+1 against RRIn. The Pearson correlation coefficient (ρRRI), slope (SRRI), Y-intercept (YRRI), standard deviation of instantaneous beat-to-beat RRI variability (SD1RR), and standard deviation of continuous long-term RRI variability (SD2RR) can be defined to characterize the plot. Similarly, the Poincaré plot of autocorrelation function (ACF) of RRI can be obtained by plotting ACFk+1 against ACFk. The corresponding Pearson correlation coefficient (ρACF), slope (SACF), Y-intercept (YACF), SD1ACF, and SD2ACF can be defined similarly to characterize the plot. By comparing the indices of Poincaré plots of RRI and ACF between patients with acute myocardial infarction (AMI) and patients with patent coronary artery (PCA), we found that the ρACF and SACF were significantly larger, whereas the RMSSDACF/SDACF and SD1ACF/SD2ACF were significantly smaller in AMI patients. The ρACF and SACF correlated significantly and negatively with normalized high-frequency power (nHFP), and significantly and positively with normalized very low-frequency power (nVLFP) of heart rate variability in both groups of patients. On the contrary, the RMSSDACF/SDACF and SD1ACF/SD2ACF correlated significantly and positively with nHFP, and significantly and negatively with nVLFP and low-/high-frequency power ratio (LHR) in both groups of patients. We concluded that the ρACF, SACF, RMSSDACF/SDACF, and SD1ACF/SD2ACF, among many other indices of ACF Poincaré plot, can be used to differentiate between patients with AMI and patients with PCA, and that the increase in ρACF and SACF and the decrease in RMSSDACF/SDACF and SD1ACF/SD2ACF suggest an increased sympathetic and decreased vagal modulations in both groups of patients.
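The Poincaré-plot indices named above can be computed as in the following sketch; the same function applied to the autocorrelation sequence yields the ACF-based indices (ρACF, SACF, SD1ACF, SD2ACF). The RR-interval series here is synthetic.

```python
# Poincare-plot indices (rho, slope, intercept, SD1, SD2) for a synthetic RRI series.
import numpy as np
from scipy.stats import linregress

def poincare_indices(x):
    """Return (rho, slope, intercept, SD1, SD2) of the lag-1 Poincare plot of x."""
    x_n, x_n1 = x[:-1], x[1:]
    fit = linregress(x_n, x_n1)
    sd1 = np.std((x_n1 - x_n) / np.sqrt(2), ddof=1)   # instantaneous beat-to-beat variability
    sd2 = np.std((x_n1 + x_n) / np.sqrt(2), ddof=1)   # continuous long-term variability
    return fit.rvalue, fit.slope, fit.intercept, sd1, sd2

rng = np.random.default_rng(2)
rri = 800 + np.cumsum(rng.normal(0, 5, 500))          # ms, placeholder RR intervals

rho, slope, intercept, sd1, sd2 = poincare_indices(rri)
print(f"rho={rho:.3f} slope={slope:.3f} Y-intercept={intercept:.1f} ms "
      f"SD1={sd1:.1f} ms SD2={sd2:.1f} ms")

# ACF-based plot: apply the same function to the autocorrelation sequence
centered = rri - rri.mean()
acf = np.correlate(centered, centered, mode="full")
acf = acf[acf.size // 2:] / acf[acf.size // 2]
print(poincare_indices(acf[:50]))
```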
Effects of work-related stress on work ability index among refinery workers.
Habibi, Ehsanollah; Dehghan, Habibollah; Safari, Shahram; Mahaki, Behzad; Hassanzadeh, Akbar
2014-01-01
Work-related stress is one of the basic problems in industry and among the top 10 work-related health problems, and it is increasingly implicated in the development of a number of problems such as cardiovascular disease, musculoskeletal disease and early retirement. Early retirement of employees, in turn, adds to the problems of today's industries. Improving work ability is therefore one of the most effective ways to enhance capability and prevent disability and early retirement. The aim of this study was to determine the relationship between job stress score and work ability index (WAI) among refinery workers. This is a cross-sectional study in which 171 workers from a refinery in Isfahan in 2012, working in different occupational groups, participated. Based on appropriate assignment sampling, 33 office workers, 69 operational workers and 69 maintenance workers were invited to participate in this study. Two questionnaires, on work-related stress and the WAI, were completed. The information was analyzed using SPSS-20 and statistical tests, namely analysis of covariance, the Kruskal-Wallis test, the Pearson correlation coefficient, ANOVA and the t-test. Data analysis revealed that 86% and 14% of participants had moderate and severe stress, respectively. The mean stress score and standard deviation were 158.7 ± 17.3, which was in the extreme stress range. The mean and standard deviation of the WAI questionnaire were 37.18 and 3.86, respectively, which falls in the good range. The Pearson correlation coefficient showed that the WAI score had a significant inverse relationship with the stress score. According to the results, the mean stress score among refinery workers was high, and high stress was one factor that affected work ability; hence, training in communication skills and a safe working environment, in order to decrease stress, would enhance the work ability of workers.
Effect of uncertainties on probabilistic-based design capacity of hydrosystems
NASA Astrophysics Data System (ADS)
Tung, Yeou-Koung
2018-02-01
Hydrosystems engineering designs involve analysis of hydrometric data (e.g., rainfall, floods) and use of hydrologic/hydraulic models, all of which contribute various degrees of uncertainty to the design process. Uncertainties in hydrosystem designs can be generally categorized into aleatory and epistemic types. The former arises from the natural randomness of hydrologic processes whereas the latter are due to knowledge deficiency in model formulation and model parameter specification. This study shows that the presence of epistemic uncertainties induces uncertainty in determining the design capacity. Hence, the designer needs to quantify the uncertainty features of design capacity to determine the capacity with a stipulated performance reliability under the design condition. Using detention basin design as an example, the study illustrates a methodological framework by considering aleatory uncertainty from rainfall and epistemic uncertainties from the runoff coefficient, curve number, and sampling error in design rainfall magnitude. The effects of including different items of uncertainty and performance reliability on the design detention capacity are examined. A numerical example shows that the mean value of the design capacity of the detention basin increases with the design return period and this relation is found to be practically the same regardless of the uncertainty types considered. The standard deviation associated with the design capacity, when subject to epistemic uncertainty, increases with both design frequency and items of epistemic uncertainty involved. It is found that the epistemic uncertainty due to sampling error in rainfall quantiles should not be ignored. Even with a sample size of 80 (relatively large for a hydrologic application) the inclusion of sampling error in rainfall quantiles resulted in a standard deviation about 2.5 times higher than that considering only the uncertainty of the runoff coefficient and curve number. Furthermore, the presence of epistemic uncertainties in the design would result in under-estimation of the annual failure probability of the hydrosystem and has a discounting effect on the anticipated design return period.
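The following toy Monte Carlo illustrates the general idea of propagating aleatory (rainfall) and epistemic (runoff coefficient) uncertainty into a distribution of required detention capacity, from which a capacity at a stipulated reliability can be read; the rainfall-runoff relation and all parameter values are hypothetical and far simpler than the paper's framework.

```python
# Toy Monte Carlo: aleatory rainfall + epistemic runoff coefficient -> capacity distribution.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

area_ha = 50.0                                     # catchment area (assumed)
rain_mm = rng.gumbel(loc=80.0, scale=15.0, size=n) # aleatory design rainfall (placeholder)
runoff_c = rng.normal(0.60, 0.08, size=n).clip(0.1, 0.95)  # epistemic runoff coefficient

# Required storage volume (very simplified stand-in relation): V = C * P * A
volume_m3 = runoff_c * (rain_mm / 1000.0) * (area_ha * 1e4)

mean, sd = volume_m3.mean(), volume_m3.std(ddof=1)
design_90 = np.quantile(volume_m3, 0.90)           # capacity at 90% performance reliability
print(f"mean = {mean:,.0f} m3, SD = {sd:,.0f} m3, 90%-reliable capacity = {design_90:,.0f} m3")
```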
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, J.
We present the preliminary measurement of CP-violating asymmetries in B0 → (ρπ)0 → π+π-π0 decays using a time-dependent Dalitz plot analysis. The results are obtained from a data sample of 213 million Υ(4S) → BB̄ decays, collected by the BABAR detector at the PEP-II asymmetric-energy B Factory at SLAC. This analysis extends the narrow-ρ quasi-two-body approximation used in the previous analysis by taking into account the interference between the ρ resonances of the three charges. We measure 16 coefficients of the bilinear form-factor terms occurring in the time-dependent decay rate of the B0 meson with the use of a maximum-likelihood fit. We derive the physically relevant quantities from these coefficients. We measure the direct CP-violation parameters A(ρπ) = -0.088 ± 0.049 ± 0.013 and C = 0.34 ± 0.11 ± 0.05, where the first errors are statistical and the second systematic. For the mixing-induced CP-violation parameter we find S = -0.10 ± 0.14 ± 0.04, and for the dilution and strong phase shift parameters, respectively, we obtain ΔC = 0.15 ± 0.11 ± 0.03 and ΔS = 0.22 ± 0.15 ± 0.03. For the angle α of the Unitarity Triangle we measure (113 +27/-17 ± 6)°, with only a weak constraint achieved at the significance level of more than two standard deviations. Finally, for the relative strong phase δ(+-) between the B0 → ρ-π+ and B0 → ρ+π- transitions we find (-67 +28/-31 ± 7) deg, with a similarly weak constraint at two standard deviations and beyond.
Scalar-tensor theories and modified gravity in the wake of GW170817
NASA Astrophysics Data System (ADS)
Langlois, David; Saito, Ryo; Yamauchi, Daisuke; Noui, Karim
2018-03-01
Theories of dark energy and modified gravity can be strongly constrained by astrophysical or cosmological observations, as illustrated by the recent observation of the gravitational wave event GW170817 and of its electromagnetic counterpart GRB 170817A, which showed that the speed of gravitational waves, cg , is the same as the speed of light, within deviations of order 10-15 . This observation implies severe restrictions on scalar-tensor theories, in particular theories whose action depends on second derivatives of a scalar field. Working in the very general framework of degenerate higher-order scalar-tensor (DHOST) theories, which encompass Horndeski and beyond Horndeski theories, we present the DHOST theories that satisfy cg=c . We then examine, for these theories, the screening mechanism that suppresses scalar interactions on small scales, namely the Vainshtein mechanism, and compute the corresponding gravitational laws for a nonrelativistic spherical body. We show that it can lead to a deviation from standard gravity inside matter, parametrized by three coefficients which satisfy a consistency relation and can be constrained by present and future astrophysical observations.
Demonstration of the Gore Module for Passive Ground Water Sampling
2014-06-01
ACRONYMS AND ABBREVIATIONS (excerpt): % RSD, percent relative standard deviation; 12DCA, 1,2-dichloroethane; 112TCA, 1,1,2-trichloroethane; 1122TetCA, ...; Analysis of Variance; ROD, Record of Decision; RSD, relative standard deviation; SBR, Southern Bush River; SVOC, semi-volatile organic compound. ... replicate samples had a relative standard deviation (RSD) that was 20% or less. For the remaining analytes (PCE, cDCE, and chloroform), at least 70 ...
Stochastic uncertainty analysis for unconfined flow systems
Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming
2006-01-01
A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field (lnKS) is first expanded into a series in terms of orthogonal Gaussian standard random variables with their coefficients obtained as the eigenvalues and eigenfunctions of the covariance function of lnKS. Next, head h is decomposed as a perturbation expansion series Σh(m), where h(m) represents the mth-order head term with respect to the standard deviation of lnKS. Then h(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients hi1,i2,...,im(m) are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on hi1,i2,...,im(m). A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort than the traditional Monte Carlo simulation technique.
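As a numerical sketch of the first step of the KLME approach, the following code performs a truncated Karhunen-Loeve expansion of a log-conductivity field by eigen-decomposition of a discretized exponential covariance; the grid, variance and correlation length are illustrative assumptions, and the subsequent head expansions and MODFLOW-2000 solves are not reproduced.

```python
# Truncated Karhunen-Loeve expansion of a 1-D lnK field with exponential covariance.
import numpy as np

nx = 200
x = np.linspace(0.0, 100.0, nx)            # 1-D domain (m), assumed
sigma2, corr_len = 1.0, 20.0               # variance and correlation length of lnK (assumed)

# Exponential covariance of lnK and its spectral decomposition
cov = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Truncated KL expansion: lnK = <lnK> + sum_k sqrt(lambda_k) * f_k(x) * xi_k
n_modes = 20
rng = np.random.default_rng(4)
xi = rng.standard_normal(n_modes)          # orthogonal standard Gaussian variables
lnK = -5.0 + eigvecs[:, :n_modes] @ (np.sqrt(eigvals[:n_modes]) * xi)

print("energy captured by 20 modes:",
      f"{eigvals[:n_modes].sum() / eigvals.sum():.1%}")
```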
Liu, Yuan; Chen, Wei-Hua; Hou, Qiao-Juan; Wang, Xi-Chang; Dong, Ruo-Yan; Wu, Hao
2014-04-01
Near infrared spectroscopy (NIR) was used in this experiment to evaluate the freshness of ice-stored large yellow croaker (Pseudosciaena crocea) during different storage periods, with TVB-N used as the index of freshness. By comparing the correlation coefficients and standard deviations of the calibration and validation sets of models established with different pretreatment methods (used singly and in combination), different modeling methods and different wavelength regions, the best TVB-N models for ice-stored large yellow croaker sold in the market were established to predict freshness quickly. The best-performing model was obtained by using normalization by closure (Ncl) with 1st derivative (Dbl) and normalization to unit length (Nle) with 1st derivative as the pretreatment methods, partial least squares (PLS) as the modeling method, and the wavelength regions of 5 000-7 144 and 7 404-10 000 cm(-1). The calibration model gave a correlation coefficient of 0.992 with a standard error of calibration of 1.045, and the validation model gave a correlation coefficient of 0.999 with a standard error of prediction of 0.990. This experiment combined several pretreatment methods and selected the best wavelength regions, with good results. The approach has good prospects for application to rapid freshness detection and quality evaluation of large yellow croaker in the market.
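A sketch of this modeling route (derivative pretreatment in the selected wavenumber windows followed by PLS regression) is given below; the spectra, TVB-N values, derivative settings and number of latent variables are placeholders rather than the authors' choices.

```python
# First-derivative pretreatment + PLS regression against TVB-N (placeholder data).
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
wavenumbers = np.arange(4000.0, 10000.0, 4.0)            # cm-1 grid (assumed)
spectra = rng.random((60, wavenumbers.size))             # 60 fish samples (placeholder)
tvbn = rng.uniform(5, 40, 60)                            # TVB-N reference values (placeholder)

# Keep only the 5000-7144 and 7404-10000 cm-1 regions used in the paper
mask = ((wavenumbers >= 5000) & (wavenumbers <= 7144)) | \
       ((wavenumbers >= 7404) & (wavenumbers <= 10000))
deriv = savgol_filter(spectra[:, mask], window_length=11, polyorder=2, deriv=1)

X_cal, X_val, y_cal, y_val = train_test_split(deriv, tvbn, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8).fit(X_cal, y_cal)

y_hat = pls.predict(X_val).ravel()
sep = np.std(y_val - y_hat, ddof=1)                      # standard error of prediction
r = np.corrcoef(y_val, y_hat)[0, 1]
print(f"validation r = {r:.3f}, SEP = {sep:.2f}")
```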
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
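For the "minimum, median, maximum, n" scenario, estimators of the kind discussed here are commonly cited in the following form (Hozo-type mean plus a sample-size-aware standard deviation); the exact expressions should be verified against the article before use.

```python
# Commonly cited estimators of mean and SD from (min, median, max, n).
from scipy.stats import norm

def estimate_mean_sd_from_range(a, m, b, n):
    """Estimate (mean, SD) from minimum a, median m, maximum b, sample size n."""
    mean = (a + 2 * m + b) / 4.0
    # Expected range of n standard normal draws is about 2 * Phi^-1((n - 0.375)/(n + 0.25))
    sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
    return mean, sd

# Hypothetical trial reporting min=10, median=25, max=45, n=50
print(estimate_mean_sd_from_range(10, 25, 45, 50))
```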
Flexner 3.0—Democratization of Medical Knowledge for the 21st Century
Krupinski, Elizabeth A.; Weinstein, John B.; Graham, Anna R.; Barker, Gail P.; Erps, Kristine A.; Holtrust, Angelette L.; Holcomb, Michael J.
2016-01-01
A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test scores of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18; F = 0.258, P = .6128). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the student’s expectations. One class voted K-12 general pathology their “elective course-of-the-year.” PMID:28725762
NASA Astrophysics Data System (ADS)
Akbarnejad, Shahin; Saffari Pour, Mohsen; Jonsson, Lage Tord Ingemar; Jönsson, Pӓr Göran
2017-02-01
Ceramic foam filters (CFFs) are used to remove solid particles and inclusions from molten metal. In general, molten metal poured on top of a CFF needs to reach a certain height to build the pressure (metal head) required to prime the filter. To estimate the required metal head, it is necessary to obtain permeability coefficients using permeametry experiments. It has been mentioned in the literature that, to avoid fluid bypassing during permeametry, samples need to be sealed. However, the effect of fluid bypassing on the experimentally obtained pressure gradients seems not to have been explored. Therefore, this research focused on studying the effect of fluid bypassing on the experimentally obtained pressure gradients as well as on the empirically obtained Darcy and non-Darcy permeability coefficients. Specifically, the aim of the research was to investigate the effect of fluid bypassing on the liquid permeability of 30, 50, and 80 pores per inch (PPI) commercial alumina CFFs. In addition, the experimental data were compared to numerically modeled findings. Both studies showed that the absence of sealing results in extremely poor estimates of the pressure gradients and the Darcy and non-Darcy permeability coefficients for all studied filters. The average deviations between the pressure gradients of the sealed and unsealed 30, 50, and 80 PPI samples were calculated to be 57.2, 56.8, and 61.3 pct. The deviations between the Darcy coefficients of the sealed and unsealed 30, 50, and 80 PPI samples were found to be 9, 20, and 31 pct. The deviations between the non-Darcy coefficients of the sealed and unsealed 30, 50, and 80 PPI samples were calculated to be 59, 58, and 63 pct.
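Darcian and non-Darcian permeability coefficients are typically extracted from permeametry data by fitting the Forchheimer equation, as in the sketch below; the velocities, pressure gradients and fluid properties are placeholders, not the measurements of this study.

```python
# Fit the Forchheimer equation dP/dL = (mu/k1)*v + (rho/k2)*v**2 to permeametry data.
import numpy as np

mu, rho = 1.0e-3, 1000.0                      # water viscosity (Pa s), density (kg/m3), assumed
v = np.linspace(0.05, 0.5, 10)                # superficial velocity (m/s), placeholder
dpdl = 2.0e5 * v + 6.0e5 * v**2               # measured pressure gradient (Pa/m), placeholder

# Least-squares fit dP/dL = A*v + B*v^2 (no intercept)
X = np.column_stack([v, v**2])
(A, B), *_ = np.linalg.lstsq(X, dpdl, rcond=None)

k1 = mu / A                                   # Darcy permeability coefficient (m2)
k2 = rho / B                                  # non-Darcy permeability coefficient (m)
print(f"k1 = {k1:.3e} m^2, k2 = {k2:.3e} m")
```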
Estimation of the neural drive to the muscle from surface electromyograms
NASA Astrophysics Data System (ADS)
Hofmann, David
Muscle force is highly correlated with the standard deviation of the surface electromyogram (sEMG) produced by the active muscle. Correctly estimating this quantity of non-stationary sEMG and understanding its relation to neural drive and muscle force is of paramount importance. The single constituents of the sEMG are called motor unit action potentials whose biphasic amplitude can interfere (named amplitude cancellation), potentially affecting the standard deviation (Keenan etal. 2005). However, when certain conditions are met the Campbell-Hardy theorem suggests that amplitude cancellation does not affect the standard deviation. By simulation of the sEMG, we verify the applicability of this theorem to myoelectric signals and investigate deviations from its conditions to obtain a more realistic setting. We find no difference in estimated standard deviation with and without interference, standing in stark contrast to previous results (Keenan etal. 2008, Farina etal. 2010). Furthermore, since the theorem provides us with the functional relationship between standard deviation and neural drive we conclude that complex methods based on high density electrode arrays and blind source separation might not bear substantial advantages for neural drive estimation (Farina and Holobar 2016). Funded by NIH Grant Number 1 R01 EB022872 and NSF Grant Number 1208126.
Comparison of a novel fixation device with standard suturing methods for spinal cord stimulators.
Bowman, Richard G; Caraway, David; Bentley, Ishmael
2013-01-01
Spinal cord stimulation is a well-established treatment for chronic neuropathic pain of the trunk or limbs. Currently, the standard method of fixation is to affix the leads of the neuromodulation device to soft tissue, fascia or ligament, through the use of manually tying general suture. A novel semiautomated device is proposed that may be advantageous to the current standard. Comparison testing in an excised caprine spine and simulated bench top model was performed. Three tests were performed: 1) perpendicular pull from fascia of caprine spine; 2) axial pull from fascia of caprine spine; and 3) axial pull from Mylar film. Six samples of each configuration were tested for each scenario. Standard 2-0 Ethibond was compared with a novel semiautomated device (Anulex fiXate). Upon completion of testing statistical analysis was performed for each scenario. For perpendicular pull in the caprine spine, the failure load for standard suture was 8.95 lbs with a standard deviation of 1.39 whereas for fiXate the load was 15.93 lbs with a standard deviation of 2.09. For axial pull in the caprine spine, the failure load for standard suture was 6.79 lbs with a standard deviation of 1.55 whereas for fiXate the load was 12.31 lbs with a standard deviation of 4.26. For axial pull in Mylar film, the failure load for standard suture was 10.87 lbs with a standard deviation of 1.56 whereas for fiXate the load was 19.54 lbs with a standard deviation of 2.24. These data suggest a novel semiautomated device offers a method of fixation that may be utilized in lieu of standard suturing methods as a means of securing neuromodulation devices. Data suggest the novel semiautomated device in fact may provide a more secure fixation than standard suturing methods. © 2012 International Neuromodulation Society.
Validity of a Self-Report Recall Tool for Estimating Sedentary Behavior in Adults.
Gomersall, Sjaan R; Pavey, Toby G; Clark, Bronwyn K; Jasman, Adib; Brown, Wendy J
2015-11-01
Sedentary behavior is continuing to emerge as an important target for health promotion. The purpose of this study was to determine the validity of a self-report use of time recall tool, the Multimedia Activity Recall for Children and Adults (MARCA) in estimating time spent sitting/lying, compared with a device-based measure. Fifty-eight participants (48% female, [mean ± standard deviation] 28 ± 7.4 years of age, 23.9 ± 3.05 kg/m(2)) wore an activPAL device for 24-h and the following day completed the MARCA. Pearson correlation coefficients (r) were used to analyze convergent validity of the adult MARCA compared with activPAL estimates of total sitting/lying time. Agreement was examined using Bland-Altman plots. According to activPAL estimates, participants spent 10.4 hr/day [standard deviation (SD) = 2.06] sitting or lying down while awake. The correlation between MARCA and activPAL estimates of total sit/lie time was r = .77 (95% confidence interval = 0.64-0.86; P < .001). Bland-Altman analyses revealed a mean bias of +0.59 hr/day with moderately wide limits of agreement (-2.35 hr to +3.53 hr/day). This study found a moderate to strong agreement between the adult MARCA and the activPAL, suggesting that the MARCA is an appropriate tool for the measurement of time spent sitting or lying down in an adult population.
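The validity statistics reported above (Pearson r, Bland-Altman bias and limits of agreement) can be reproduced on any paired dataset as in this sketch; the MARCA and activPAL values are simulated placeholders.

```python
# Pearson correlation and Bland-Altman agreement statistics for paired measurements.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
activpal = rng.normal(10.4, 2.0, 58)                 # h/day, device-based (placeholder)
marca = activpal + rng.normal(0.6, 1.5, 58)          # self-report with bias (placeholder)

r, p = pearsonr(marca, activpal)

diff = marca - activpal
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                        # half-width of 95% limits of agreement
print(f"r = {r:.2f} (p = {p:.3g}); bias = {bias:+.2f} h/day, "
      f"LoA = [{bias - loa:.2f}, {bias + loa:.2f}] h/day")
```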
Gao, Chen-chen; Li, Feng-min; Lu, Lun; Sun, Yue
2015-10-01
For the determination of trace amounts of phthalic acid esters (PAEs) in a complex seawater matrix, a stir bar sorptive extraction gas chromatography mass spectrometry (SBSE-GC-MS) method was established. Dimethyl phthalate (DMP), diethyl phthalate (DEP), dibutyl phthalate (DBP), butyl benzyl phthalate (BBP), di(2-ethylhexyl) phthalate (DEHP) and dioctyl phthalate (DOP) were selected as study objects. The effects of extraction time, amount of methanol, amount of sodium chloride, desorption time and desorption solvent were optimized. The SBSE-GC-MS method was validated through recoveries and relative standard deviations. The optimal extraction time was 2 h, the optimal methanol content was 10%, the optimal sodium chloride content was 5%, the optimal desorption time was 50 min, and the optimal desorption solvent was a mixture of methanol and acetonitrile (4:1, volume:volume). The peak area showed good linearity with the concentration of PAEs; the correlation coefficients were greater than 0.997. The detection limits were between 0.25 and 174.42 ng x L(-1). The recoveries at different concentrations were between 56.97% and 124.22%. The relative standard deviations were between 0.41% and 14.39%. Using this method, several estuarine water samples from Jiaozhou Bay were analyzed. DEP was detected in all samples, and the concentrations of BBP, DEHP and DOP were much higher than those of the rest.
Spectral relative standard deviation: a practical benchmark in metabolomics.
Parsons, Helen M; Ekman, Drew R; Collette, Timothy W; Viant, Mark R
2009-03-01
Metabolomics datasets, by definition, comprise measurements of large numbers of metabolites. Both technical (analytical) and biological factors will induce variation within these measurements that is not consistent across all metabolites. Consequently, criteria are required to assess the reproducibility of metabolomics datasets that are derived from all the detected metabolites. Here we calculate spectrum-wide relative standard deviations (RSDs; also termed coefficient of variation, CV) for ten metabolomics datasets, spanning a variety of sample types from mammals, fish, invertebrates and a cell line, and display them succinctly as boxplots. We demonstrate multiple applications of spectral RSDs for characterising technical as well as inter-individual biological variation: for optimising metabolite extractions, comparing analytical techniques, investigating matrix effects, and comparing biofluids and tissue extracts from single and multiple species for optimising experimental design. Technical variation within metabolomics datasets, recorded using one- and two-dimensional NMR and mass spectrometry, ranges from 1.6 to 20.6% (reported as the median spectral RSD). Inter-individual biological variation is typically larger, ranging from as low as 7.2% for tissue extracts from laboratory-housed rats to 58.4% for fish plasma. In addition, for some of the datasets we confirm that the spectral RSD values are largely invariant across different spectral processing methods, such as baseline correction, normalisation and binning resolution. In conclusion, we propose spectral RSDs and their median values contained herein as practical benchmarks for metabolomics studies.
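Computing a spectrum-wide RSD profile and its median summary is straightforward, as the following sketch shows; the replicate "spectra" are simulated placeholders.

```python
# Spectrum-wide relative standard deviations (RSD, i.e. CV) and their median summary.
import numpy as np

rng = np.random.default_rng(7)
n_samples, n_features = 20, 500
spectra = rng.lognormal(mean=2.0, sigma=0.3, size=(n_samples, n_features))  # placeholder data

rsd_percent = 100.0 * spectra.std(axis=0, ddof=1) / spectra.mean(axis=0)
print(f"median spectral RSD = {np.median(rsd_percent):.1f}%, "
      f"IQR = {np.percentile(rsd_percent, 25):.1f}-{np.percentile(rsd_percent, 75):.1f}%")

# For the boxplot display: import matplotlib.pyplot as plt; plt.boxplot(rsd_percent); plt.show()
```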
Comparison of different functional EIT approaches to quantify tidal ventilation distribution.
Zhao, Zhanqi; Yun, Po-Jen; Kuo, Yen-Liang; Fu, Feng; Dai, Meng; Frerichs, Inez; Möller, Knut
2018-01-30
The aim of the study was to examine the pros and cons of different types of functional EIT (fEIT) for quantifying tidal ventilation distribution in a clinical setting. fEIT images were calculated with (1) standard deviation of the pixel time curve, (2) regression coefficients of global and local impedance time curves, or (3) mean tidal variations. To characterize temporal heterogeneity of tidal ventilation distribution, another fEIT image of pixel inspiration times is also proposed. fEIT-regression is very robust to signals with different phase information. When the respiratory signal should be distinguished from the heart-beat-related signal, or during high-frequency oscillatory ventilation, fEIT-regression is superior to the other types. fEIT-tidal variation is the most stable image type with regard to baseline shift. We recommend using this type of fEIT image for preliminary evaluation of the acquired EIT data. However, all these fEITs would be misleading in their assessment of ventilation distribution in the presence of temporal heterogeneity. The analysis software provided by the currently available commercial EIT equipment only offers either fEIT of standard deviation or tidal variation. Considering the pros and cons of each fEIT type, we recommend embedding more types into the analysis software to help physicians deal with more complex clinical applications involving on-line EIT measurements.
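The three fEIT image types compared above can be computed from a pixel-by-time impedance array roughly as follows; the impedance data, frame rate and breath-phase indices are synthetic assumptions, not the output of any particular EIT device.

```python
# Three functional-EIT image types from a pixel-by-time impedance array (synthetic data).
import numpy as np

rng = np.random.default_rng(8)
n_pix, n_t = 32 * 32, 600
t = np.arange(n_t) / 20.0                                  # 20 frames/s, assumed
breath = np.sin(2 * np.pi * 0.25 * t)                      # 15 breaths/min, placeholder
pixels = np.outer(rng.random(n_pix), breath) + rng.normal(0, 0.05, (n_pix, n_t))

# (1) standard-deviation image: SD of each pixel time curve
feit_sd = pixels.std(axis=1, ddof=1)

# (2) regression-coefficient image: slope of each pixel curve on the global curve
global_sig = pixels.sum(axis=0)
g = global_sig - global_sig.mean()
feit_reg = (pixels - pixels.mean(axis=1, keepdims=True)) @ g / (g @ g)

# (3) tidal-variation image: mean end-inspiratory minus end-expiratory value
insp = np.arange(20, n_t, 80)                              # breath-phase indices for the 4 s period
exp_ = np.arange(60, n_t, 80)
feit_tv = pixels[:, insp].mean(axis=1) - pixels[:, exp_].mean(axis=1)

print(feit_sd.shape, feit_reg.shape, feit_tv.shape)
```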
Portable device for the detection of colorimetric assays
Nowak, E.; Kawchuk, J.; Hoorfar, M.; Najjaran, H.
2017-01-01
In this work, a low-cost, portable device is developed to detect colorimetric assays for in-field and point-of-care (POC) analysis. The device can rapidly detect both pH values and nitrite concentrations of five different samples, simultaneously. After mixing samples with specific reagents, a high-resolution digital camera collects a picture of the sample, and a single-board computer processes the image in real time to identify the hue–saturation–value coordinates of the image. An internal light source reduces the effect of any ambient light so the device can accurately determine the corresponding pH values or nitrite concentrations. The device was purposefully designed to be low-cost, yet versatile, and the accuracy of the results have been compared to those from a conventional method. The results obtained for pH values have a mean standard deviation of 0.03 and a correlation coefficient R2 of 0.998. The detection of nitrites is between concentrations of 0.4–1.6 mg l−1, with a low detection limit of 0.2 mg l−1, and has a mean standard deviation of 0.073 and an R2 value of 0.999. The results represent great potential of the proposed portable device as an excellent analytical tool for POC colorimetric analysis and offer broad accessibility in resource-limited settings. PMID:29291093
Wu, Qingqing; Xiang, Shengnan; Wang, Wenjun; Zhao, Jinyan; Xia, Jinhua; Zhen, Yueran; Liu, Bang
2018-05-01
Various detection methods have been developed to date for identification of animal species. New techniques based on the PCR approach have raised the hope of developing better identification methods that can overcome the limitations of the existing methods. PCR-based methods use mitochondrial DNA (mtDNA) as well as nuclear DNA sequences. In this study, by targeting nuclear DNA, multiplex PCR and real-time PCR methods were developed to assist with qualitative and quantitative analysis. The multiplex PCR was found to simultaneously and effectively distinguish ingredients from four species (fox, dog, mink, and rabbit) by the different sizes of electrophoretic bands: 480, 317, 220, and 209 bp. The amplification profiles and standard curves of real-time fluorescent PCR showed good quantitative measurement responses and linearity, as indicated by good repeatability and a coefficient of determination R2 > 0.99. The quantitative results for quaternary DNA mixtures including mink, fox, dog, and rabbit DNA were in line with expectations: R.D. (relative deviation) varied between 1.98 and 12.23% and R.S.D. (relative standard deviation) varied between 3.06 and 11.51%, both of which are well within the acceptance criterion of ≤ 25%. Combining the two methods is suitable for the rapid identification and accurate quantification of fox-, dog-, mink-, and rabbit-derived ingredients in animal products.
Estimating terpene and terpenoid emissions from conifer oleoresin composition
NASA Astrophysics Data System (ADS)
Flores, Rosa M.; Doskey, Paul V.
2015-07-01
The following algorithm, which is based on the thermodynamics of nonelectrolyte partitioning, was developed to predict emission rates of terpenes and terpenoids from specific storage sites in conifers: E_i = x_i^or * γ_i^or * p_i°, where E_i is the emission rate (μg C gdw^-1 h^-1), p_i° is the vapor pressure (mm Hg) of the pure liquid terpene or terpenoid, and x_i^or and γ_i^or are the mole fraction and activity coefficient (on a Raoult's law convention), respectively, of the terpene or terpenoid in the oleoresin. Activity coefficients are calculated with Hansen solubility parameters that account for dispersive, polar, and H-bonding interactions of the solutes with the oleoresin matrix. Estimates of p_i° at 25 °C and molar enthalpies of vaporization are made with the SIMPOL.1 method and are used to estimate p_i° at environmentally relevant temperatures. Estimated mixing ratios of terpenes and terpenols were comparatively higher above resin-acid-rich and monoterpene-rich oleoresins, respectively. The results indicated a greater affinity of terpenes and terpenols for the non-functionalized and carboxylic acid containing matrix through dispersive and H-bonding interactions, which are expressed in the emission algorithm by the activity coefficient. The correlation between measured emission rates of terpenes and terpenoids for Pinus strobus and emission rates predicted with the algorithm was very good (R = 0.95). Standard errors for the range and average of monoterpene emission rates were ±6 to ±86% and ±54%, respectively, and were similar in magnitude to reported standard deviations of monoterpene composition of foliar oils (±38 to ±51% and ±67%, respectively).
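An illustrative implementation of the stated emission relation, with the pure-compound vapor pressure shifted from 25 °C to ambient temperature via the Clausius-Clapeyron relation, is sketched below; the mole fraction, activity coefficient, vapor pressure and enthalpy values are placeholders, and the Hansen/SIMPOL.1 estimation steps are not reproduced.

```python
# E_i = x_i * gamma_i * p_i(T) with a Clausius-Clapeyron temperature adjustment (illustrative).
import numpy as np

R = 8.314  # J mol-1 K-1

def vapor_pressure(p25_mmHg, dHvap_J_mol, T_K):
    """Clausius-Clapeyron adjustment of the 25 C vapor pressure to temperature T_K."""
    return p25_mmHg * np.exp(-dHvap_J_mol / R * (1.0 / T_K - 1.0 / 298.15))

def emission_rate(x_oleoresin, gamma_oleoresin, p25_mmHg, dHvap_J_mol, T_K):
    """E_i = x_i * gamma_i * p_i(T); the proportionality to physical emission units
    used in the study is not reproduced, so the result is in arbitrary units."""
    return x_oleoresin * gamma_oleoresin * vapor_pressure(p25_mmHg, dHvap_J_mol, T_K)

# Hypothetical alpha-pinene-like numbers
print(emission_rate(x_oleoresin=0.15, gamma_oleoresin=1.8,
                    p25_mmHg=3.5, dHvap_J_mol=45_000.0, T_K=303.15))
```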
A fuzzy logic-based model for noise control at industrial workplaces.
Aluclu, I; Dalgic, A; Toprak, Z F
2008-05-01
Ergonomics is a broad science encompassing the wide variety of working conditions that can affect worker comfort and health, including factors such as lighting, noise, temperature, vibration, workstation design, tool design, machine design, etc. This paper describes the noise-human response and a fuzzy logic model developed through comprehensive field studies of noise measurements (including atmospheric parameters) and control measures. The model has two subsystems built around the noise reduction quantity in dB. The first subsystem of the fuzzy model, which depends on 549 linguistic rules, comprises the acoustical features of all materials used in any workplace. In total 984 patterns were used: 503 patterns for model development and the remaining 481 patterns for testing the model. The second subsystem deals with atmospheric parameter interactions with noise and has 52 linguistic rules. Similarly, 94 field patterns were obtained; 68 patterns were used for the training stage of the model and the remaining 26 patterns for testing. These rules were determined by taking into consideration formal standards, the experience of specialists and the measurement patterns. The results of the model were compared using various statistics (correlation coefficients, max-min, standard deviation, average and coefficient of skewness) and error measures (root mean square error and relative error). The correlation coefficients were significantly high, the errors were quite low and the other statistics were very close to the data, indicating the validity of the model. Therefore, the model can be used for noise control in any workplace and is helpful to the designer in the planning stage of a workplace.
Analyzing Spatial and Temporal Variation in Precipitation Estimates in a Coupled Model
NASA Astrophysics Data System (ADS)
Tomkins, C. D.; Springer, E. P.; Costigan, K. R.
2001-12-01
Integrated modeling efforts at the Los Alamos National Laboratory aim to simulate the hydrologic cycle and study the impacts of climate variability and land use changes on water resources and ecosystem function at the regional scale. The integrated model couples three existing models independently responsible for addressing the atmospheric, land surface, and ground water components: the Regional Atmospheric Model System (RAMS), the Los Alamos Distributed Hydrologic System (LADHS), and the Finite Element and Heat Mass (FEHM). The upper Rio Grande Basin, extending 92,000 km2 over northern New Mexico and southern Colorado, serves as the test site for this model. RAMS uses nested grids to simulate meteorological variables, with the smallest grid over the Rio Grande having 5-km horizontal grid spacing. As LADHS grid spacing is 100 m, a downscaling approach is needed to estimate meteorological variables from the 5km RAMS grid for input into LADHS. This study presents daily and cumulative precipitation predictions, in the month of October for water year 1993, and an approach to compare LADHS downscaled precipitation to RAMS-simulated precipitation. The downscaling algorithm is based on kriging, using topography as a covariate to distribute the precipitation and thereby incorporating the topographical resolution achieved at the 100m-grid resolution in LADHS. The results of the downscaling are analyzed in terms of the level of variance introduced into the model, mean simulated precipitation, and the correlation between the LADHS and RAMS estimates. Previous work presented a comparison of RAMS-simulated and observed precipitation recorded at COOP and SNOTEL sites. The effects of downscaling the RAMS precipitation were evaluated using Spearman and linear correlations and by examining the variance of both populations. The study focuses on determining how the downscaling changes the distribution of precipitation compared to the RAMS estimates. Spearman correlations computed for the LADHS and RAMS cumulative precipitation reveal a disassociation over time, with R equal to 0.74 at day eight and R equal to 0.52 at day 31. Linear correlation coefficients (Pearson) returned a stronger initial correlation of 0.97, decreasing to 0.68. The standard deviations for the 2500 LADHS cells underlying each 5km RAMS cell range from 8 mm to 695 mm in the Sangre de Cristo Mountains and 2 mm to 112 mm in the San Luis Valley. Comparatively, the standard deviations of the RAMS estimates in these regions are 247 mm and 30 mm respectively. The LADHS standard deviations provide a measure of the variability introduced through the downscaling routine, which exceeds RAMS regional variability by a factor of 2 to 4. The coefficient of variation for the average LADHS grid cell values and the RAMS cell values in the Sangre de Cristo Mountains are 0.66 and 0.27, respectively, and 0.79 and 0.75 in the San Luis Valley. The coefficients of variation evidence the uniformity of the higher precipitation estimates in the mountains, especially for RAMS, and also the lower means and variability found in the valley. Additionally, Kolmogorov-Smirnov tests indicate clear spatial and temporal differences in mean simulated precipitation across the grid.
NASA Astrophysics Data System (ADS)
Gardner, Stephen J.; Wen, Ning; Kim, Jinkoo; Liu, Chang; Pradhan, Deepak; Aref, Ibrahim; Cattaneo, Richard, II; Vance, Sean; Movsas, Benjamin; Chetty, Indrin J.; Elshaikh, Mohamed A.
2015-06-01
This study was designed to evaluate contouring variability of human- and deformable-generated contours on planning CT (PCT) and CBCT for ten patients with low- or intermediate-risk prostate cancer. For each patient in this study, five radiation oncologists contoured the prostate, bladder, and rectum on one PCT dataset and five CBCT datasets. Consensus contours were generated using the STAPLE method in the CERR software package. Observer contours were compared to the consensus contour, and contour metrics (Dice coefficient, Hausdorff distance, Contour Distance, Center-of-Mass [COM] Deviation) were calculated. In addition, the first-day CBCT was registered to subsequent CBCT fractions (CBCTn: CBCT2-CBCT5) via B-spline Deformable Image Registration (DIR). Contours were transferred from CBCT1 to CBCTn via the deformation field, and contour metrics were calculated through comparison with consensus contours generated from the human contour set. The average contour metrics for prostate contours on PCT and CBCT were as follows: Dice coefficient: 0.892 (PCT), 0.872 (CBCT-Human), 0.824 (CBCT-Deformed); Hausdorff distance: 4.75 mm (PCT), 5.22 mm (CBCT-Human), 5.94 mm (CBCT-Deformed); Contour Distance (overall contour): 1.41 mm (PCT), 1.66 mm (CBCT-Human), 2.30 mm (CBCT-Deformed); COM Deviation: 2.01 mm (PCT), 2.78 mm (CBCT-Human), 3.45 mm (CBCT-Deformed). For human contours on PCT and CBCT, the difference in average Dice coefficient between PCT and CBCT (approx. 2%) and Hausdorff distance (approx. 0.5 mm) was small compared to the variation between observers for each patient (standard deviation in Dice coefficient of 5% and Hausdorff distance of 2.0 mm). However, additional contouring variation was found for the deformable-generated contours (approximately 5.0% decrease in Dice coefficient and 0.7 mm increase in Hausdorff distance relative to human-generated contours on CBCT). Though deformable contours provide a reasonable starting point for contouring on CBCT, we conclude that contours generated with B-spline DIR require physician review and editing if they are to be used in the clinic.
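Two of the agreement metrics used above, the Dice coefficient and a symmetric Hausdorff distance, can be computed as in the following sketch; the masks are synthetic stand-ins for observer and consensus contours, and the clinical tools (CERR, STAPLE, the DIR engine) are not reproduced.

```python
# Dice coefficient and symmetric Hausdorff distance for two binary masks (synthetic).
import numpy as np
from scipy.spatial.distance import cdist

def dice(mask_a, mask_b):
    """Dice overlap of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (here, all mask voxels)."""
    d = cdist(points_a, points_b)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two overlapping disks as stand-ins for observer and consensus contours
yy, xx = np.mgrid[0:100, 0:100]
mask_obs = (xx - 50) ** 2 + (yy - 50) ** 2 < 20 ** 2
mask_ref = (xx - 53) ** 2 + (yy - 50) ** 2 < 20 ** 2

pts_obs = np.argwhere(mask_obs)
pts_ref = np.argwhere(mask_ref)
print(f"Dice = {dice(mask_obs, mask_ref):.3f}, "
      f"Hausdorff = {hausdorff(pts_obs, pts_ref):.1f} px")
```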
NASA Astrophysics Data System (ADS)
Schoellhamer, David H.; Manning, Andrew J.; Work, Paul A.
2017-06-01
Erodibility of cohesive sediment in the Sacramento-San Joaquin River Delta (Delta) was investigated with an erosion microcosm. Erosion depths in the Delta and in the microcosm were estimated to be about one floc diameter over a range of shear stresses and times comparable to half of a typical tidal cycle. Using the conventional assumption of horizontally homogeneous bed sediment, data from 27 of 34 microcosm experiments indicate that the erosion rate coefficient increased as eroded mass increased, contrary to theory. We believe that small erosion depths, erosion rate coefficient deviation from theory, and visual observation of horizontally varying biota and texture at the sediment surface indicate that erosion cannot solely be a function of depth but must also vary horizontally. We test this hypothesis by developing a simple numerical model that includes horizontal heterogeneity, use it to develop an artificial time series of suspended-sediment concentration (SSC) in an erosion microcosm, then analyze that time series assuming horizontal homogeneity. A shear vane was used to estimate that the horizontal standard deviation of critical shear stress was about 30% of the mean value at a site in the Delta. The numerical model of the erosion microcosm included a normal distribution of initial critical shear stress, a linear increase in critical shear stress with eroded mass, an exponential decrease of erosion rate coefficient with eroded mass, and a stepped increase in applied shear stress. The maximum SSC for each step increased gradually, thus confounding identification of a single well-defined critical shear stress as encountered with the empirical data. Analysis of the artificial SSC time series with the assumption of a homogeneous bed reproduced the original profile of critical shear stress, but the erosion rate coefficient increased with eroded mass, similar to the empirical data. Thus, the numerical experiment confirms the small-depth erosion hypothesis. A linear model of critical shear stress and eroded mass is proposed to simulate small-depth erosion, assuming that the applied and critical shear stresses quickly reach equilibrium.
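A toy version of the horizontally heterogeneous bed described above is sketched below: patches with normally distributed initial critical shear stress, critical stress increasing linearly and the erosion-rate coefficient decaying exponentially with eroded mass, under a stepped applied stress; all parameter values are illustrative, not those calibrated in the study.

```python
# Toy heterogeneous-bed erosion model under a stepped applied shear stress.
import numpy as np

rng = np.random.default_rng(9)
n_patch = 1000
tau_c = rng.normal(0.20, 0.06, n_patch).clip(0.02)   # Pa, initial critical shear stress per patch
eroded = np.zeros(n_patch)                           # kg/m2 eroded per patch

a_tau = 2.0          # Pa per kg/m2: linear increase of critical stress with eroded mass
m0, k = 5e-4, 10.0   # erosion-rate coefficient M = m0 * exp(-k * eroded)
dt = 60.0            # s, time step

for tau in np.arange(0.1, 0.8, 0.1):                 # stepped applied shear stress (Pa)
    for _ in range(30):                              # 30 minutes per step
        excess = np.clip(tau - (tau_c + a_tau * eroded), 0.0, None)
        flux = m0 * np.exp(-k * eroded) * excess     # kg m-2 s-1 per patch
        eroded += flux * dt
    print(f"tau = {tau:.1f} Pa, cumulative eroded mass = {eroded.mean():.4f} kg/m2")
```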
The second virial coefficient of the (nitrogen-water) system
NASA Astrophysics Data System (ADS)
Podmurnaya, O. A.
2004-01-01
Virial coefficient data for various components of the atmosphere are of interest because they permit evaluation of deviations from the ideal gas model. These data may also be useful when investigating cluster generation and determining its contribution to absorption. The second cross virial coefficient Baw for the (nitrogen-water) system has been calculated from +9°C to +50°C using the latest experimental data on the water vapor mole fraction. The reliability of this coefficient has been tested by analysing error sources and by comparing the results with other available experimental data.
Computer Programs for the Semantic Differential: Further Modifications.
ERIC Educational Resources Information Center
Lawson, Edwin D.; And Others
The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…
Determining a one-tailed upper limit for future sample relative reproducibility standard deviations.
McClure, Foster D; Lee, Jung K
2006-01-01
A formula was developed to determine a one-tailed 100p% upper limit for future sample percent relative reproducibility standard deviations (RSD(R),% = 100·s(R)/ȳ), where s(R) is the sample reproducibility standard deviation, i.e., the square root of the sum of the sample repeatability variance (s(r)^2) and the sample laboratory-to-laboratory variance (s(L)^2), s(R) = sqrt(s(r)^2 + s(L)^2), and ȳ is the sample mean. The future RSD(R),% is expected to arise from a population of potential RSD(R),% values whose true mean is zeta(R),% = 100·sigma(R)/mu, where sigma(R) and mu are the population reproducibility standard deviation and mean, respectively.
Standards for Standardized Logistic Regression Coefficients
ERIC Educational Resources Information Center
Menard, Scott
2011-01-01
Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars
2015-10-01
A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion and the variability of the spatial center of motion of the infant, respectively. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively; and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. There were significantly lower CSD values in the recordings with continual FMs compared to the recordings with intermittent FMs (p<0.05). This study showed high test-retest reliability of computer-based video analysis of GMs, and a significant association between our computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sirunyan, Albert M; et al.
A measurement is performed of the cross section of top quark pair production in association with a W or Z boson using proton-proton collisions at a center-of-mass energy of 13 TeV at the LHC. The data sample corresponds to an integrated luminosity of 35.9 fb$$^{-1}$$, collected by the CMS experiment in 2016. The measurement is performed in the same-sign dilepton, three- and four-lepton final states. The production cross sections are measured to be $$\sigma(\mathrm{t}\overline{\mathrm{t}}\mathrm{W})= 0.77^{+0.12}_{-0.11}\text{(stat)}^{+0.13}_{-0.12}\text{(syst)}$$ pb and $$\sigma(\mathrm{t}\overline{\mathrm{t}}\mathrm{Z})=0.99^{+0.09}_{-0.08}\text{(stat)}^{+0.12}_{-0.10}\text{(syst)}$$ pb. The expected (observed) signal significance for $$\mathrm{t}\overline{\mathrm{t}}\mathrm{W}$$ production in the same-sign dilepton channel is found to be 4.5 (5.3) standard deviations, while for $$\mathrm{t}\overline{\mathrm{t}}\mathrm{Z}$$ production in the three- and four-lepton channels both the expected and the observed significances are found to be in excess of 5 standard deviations. The results are in agreement with the standard model predictions and are used to constrain the Wilson coefficients for eight dimension-six operators describing new interactions that would modify $$\mathrm{t}\overline{\mathrm{t}}\mathrm{W}$$ and $$\mathrm{t}\overline{\mathrm{t}}\mathrm{Z}$$ production.
Pradhan, Richeek; Singh, Sonal
2018-04-11
Inconsistencies in data on serious adverse events (SAEs) and mortality in ClinicalTrials.gov and corresponding journal articles pose a challenge to research transparency. The objective of this study was to compare data on SAEs and mortality from clinical trials reported in ClinicalTrials.gov and corresponding journal articles with US Food and Drug Administration (FDA) medical reviews. We conducted a cross-sectional study of a randomly selected sample of new molecular entities approved during the study period 1 January 2013 to 31 December 2015. We extracted data on SAEs and mortality from 15 pivotal trials from ClinicalTrials.gov and corresponding journal articles (the two index resources), and FDA medical reviews (reference standard). We estimated the magnitude of deviations in rates of SAEs and mortality between the index resources and the reference standard. We found deviations in rates of SAEs (30% in ClinicalTrials.gov and 30% in corresponding journal articles) and mortality (72% in ClinicalTrials.gov and 53% in corresponding journal articles) when compared with the reference standard. The intra-class correlation coefficient between the three resources was 0.99 (95% confidence interval [CI] 0.98-0.99) for SAE rates and 0.99 (95% CI 0.97-0.99) for mortality rates. There are differences in data on rates of SAEs and mortality in randomized clinical trials in both ClinicalTrials.gov and journal articles compared with FDA reviews. Further efforts should focus on decreasing existing discrepancies to enhance the transparency and reproducibility of data reporting in clinical trials.
NASA Astrophysics Data System (ADS)
Raghu, M. S.; Basavaiah, K.; Ramesh, P. J.; Abdulrahman, Sameer A. M.; Vinay, K. B.
2012-03-01
A sensitive, precise, and cost-effective UV-spectrophotometric method is described for the determination of pheniramine maleate (PAM) in bulk drug and tablets. The method is based on the measurement of absorbance of a PAM solution in 0.1 N HCl at 264 nm. As per the International Conference on Harmonization (ICH) guidelines, the method was validated for linearity, accuracy, precision, limits of detection (LOD) and quantification (LOQ), and robustness and ruggedness. A linear relationship between absorbance and concentration of PAM in the range of 2-40 μg/ml with a correlation coefficient (r) of 0.9998 was obtained. The LOD and LOQ values were found to be 0.18 and 0.39 μg/ml PAM, respectively. The precision of the method was satisfactory: the value of relative standard deviation (RSD) did not exceed 3.47%. The proposed method was applied successfully to the determination of PAM in tablets with good accuracy and precision. Percentages of the label claims ranged from 101.8 to 102.01% with the standard deviation (SD) from 0.64 to 0.72%. The accuracy of the method was further ascertained by recovery studies via a standard addition procedure. In addition, the forced degradation of PAM was conducted in accordance with the ICH guidelines. Acidic and basic hydrolysis, thermal stress, peroxide, and photolytic degradation were used to assess the stability-indicating power of the method. A substantial degradation was observed during oxidative and alkaline degradations. No degradation was observed under other stress conditions.
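The linearity, LOD and LOQ figures of the kind reported above can be reproduced from a calibration data set with a few lines of code. The sketch below is illustrative only: it assumes the common ICH-style formulas LOD = 3.3·σ/S and LOQ = 10·σ/S, with σ taken as the standard deviation of the regression residuals and S the calibration slope, and the concentration/absorbance values are invented, not data from this study.

```python
import numpy as np

# Hypothetical calibration data: concentration (ug/mL) vs absorbance at 264 nm
conc = np.array([2, 5, 10, 20, 30, 40], dtype=float)
absorbance = np.array([0.052, 0.128, 0.255, 0.509, 0.762, 1.018])

# Ordinary least-squares fit: A = slope * C + intercept
slope, intercept = np.polyfit(conc, absorbance, 1)
pred = slope * conc + intercept
r = np.corrcoef(conc, absorbance)[0, 1]        # correlation coefficient of the calibration
resid_sd = np.std(absorbance - pred, ddof=2)   # SD of the regression residuals (assumed sigma)

# ICH-style detection and quantification limits
lod = 3.3 * resid_sd / slope
loq = 10.0 * resid_sd / slope

print(f"slope={slope:.4f}, intercept={intercept:.4f}, r={r:.4f}")
print(f"LOD={lod:.2f} ug/mL, LOQ={loq:.2f} ug/mL")
```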
Technical performance of lactate biosensors and a test-strip device during labour.
Luttkus, A K; Fotopoulou, C; Sehouli, J; Stupin, J; Dudenhausen, J W
2010-04-01
Lactate in fetal blood has a high diagnostic power to detect fetal compromise due to hypoxia, as lactate allows an estimation of the duration and intensity of metabolic acidemia. Biosensor technology allows an instantaneous diagnosis of fetal compromise in the delivery room. The goal of the current investigation is to define the preanalytical and analytical biases of this technology under routine conditions in a labour ward in comparison to test-strip technology, which allows measurement of lactate alone. Three lactate biosensors (RapidLab 865, Siemens Medical Solutions Diagnostics, Bad Nauheim, Germany; Radiometer ABL625 and ABL700, Radiometer Copenhagen, Denmark) and one test-strip device (Lactate Pro, Oxford Instruments, UK) were evaluated regarding precision in serial and repetitive measurements in over 1350 samples of fetal whole blood. The coefficient of variation (CV) and the standard deviation (SD) were calculated. The average value of all three biosensors was defined as an artificial reference value (refval). Blood tonometry was performed in order to test the quality of respiratory parameters and to simulate conditions of fetal hypoxia (pO2: 10 and 20 mmHg). The precision of serial measurements of all biosensors indicated a coefficient of variation (CV) between 1.55 and 3.16% with an SD from 0.042 to 0.053 mmol/L. For the test-strip device (Lactate Pro) the corresponding values amounted to an SD of 0.117 mmol/L and a CV of 3.99%. When compared to our reference value (refval), the ABL625 showed the closest agreement, deviating by -0.1%, while the Siemens RapidLab 865 showed an overestimation of +8.9%, the ABL700 an underestimation of -6.2% and the Lactate Pro of -3.7%. For routine use all tested biosensors show sufficient precision. The test-strip device shows a slightly higher standard deviation. A direct comparison of measured lactate values from the various devices needs to be interpreted with caution as each method detects different lactate concentrations. Furthermore, the 40 min process of tonometry led to an increase of SD and coefficient of variation in all devices. This results in the important preanalytical finding that the precision of replicated measurements worsens significantly with time. The clinician should be aware of the type of analyser used and of preanalytical biases before making clinical decisions on the basis of lactate values.
Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Ghaffari, Farhad
2012-01-01
Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
Nebuya, S; Noshiro, M; Yonemoto, A; Tateno, S; Brown, B H; Smallwood, R H; Milnes, P
2006-05-01
Inter-subject variability has caused the majority of previous electrical impedance tomography (EIT) techniques to focus on the derivation of relative or difference measures of in vivo tissue resistivity. Implicit in these techniques is the requirement for a reference or previously defined data set. This study assesses the accuracy and optimum electrode placement strategy for a recently developed method which estimates an absolute value of organ resistivity without recourse to a reference data set. Since this measurement of tissue resistivity is absolute, in ohm metres, it should be possible to use EIT measurements for the objective diagnosis of lung diseases such as pulmonary oedema and emphysema. However, the stability and reproducibility of the method have not yet been investigated fully. To investigate these problems, this study used a Sheffield Mk3.5 system which was configured to operate with eight measurement electrodes. As a result of this study, the absolute resistivity measurement was found to be insensitive to the electrode level between 4 and 5 cm above the xiphoid process. The level of the electrode plane was varied between 2 cm and 7 cm above the xiphoid process. Absolute lung resistivity in 18 normal subjects (age 22.6 ± 4.9, height 169.1 ± 5.7 cm, weight 60.6 ± 4.5 kg, body mass index 21.2 ± 1.6: mean ± standard deviation) was measured during both normal and deep breathing for 1 min. Three sets of measurements were made over a period of several days on each of nine of the normal male subjects. No significant differences in absolute lung resistivity were found, either during normal tidal breathing between the electrode levels of 4 and 5 cm (9.3 ± 2.4 Ω m, 9.6 ± 1.9 Ω m at 4 and 5 cm, respectively: mean ± standard deviation) or during deep breathing between the electrode levels of 4 and 5 cm (10.9 ± 2.9 Ω m and 11.1 ± 2.3 Ω m, respectively: mean ± standard deviation). However, the differences in absolute lung resistivity between normal and deep tidal breathing at the same electrode level are significant. No significant difference was found in the coefficient of variation between the electrode levels of 4 and 5 cm (9.5 ± 3.6%, 8.5 ± 3.2% at 4 and 5 cm, respectively: mean ± standard deviation in individual subjects). Therefore, the electrode levels of 4 and 5 cm above the xiphoid process showed reasonable reliability in the measurement of absolute lung resistivity both among individuals and over time.
Nonlinear Elastic Effects on the Energy Flux Deviation of Ultrasonic Waves in GR/EP Composites
NASA Technical Reports Server (NTRS)
Prosser, William H.; Kriz, R. D.; Fitting, Dale W.
1992-01-01
In isotropic materials, the direction of the energy flux (energy per unit time per unit area) of an ultrasonic plane wave is always along the same direction as the normal to the wave front. In anisotropic materials, however, this is true only along symmetry directions. Along other directions, the energy flux of the wave deviates from the intended direction of propagation. This phenomenon is known as energy flux deviation and is illustrated. The direction of the energy flux is dependent on the elastic coefficients of the material. This effect has been demonstrated in many anisotropic crystalline materials. In transparent quartz crystals, Schlieren photographs have been obtained which allow visualization of the ultrasonic waves and the energy flux deviation. The energy flux deviation in graphite/epoxy (gr/ep) composite materials can be quite large because of their high anisotropy. The flux deviation angle has been calculated for unidirectional gr/ep composites as a function of both fiber orientation and fiber volume content. Experimental measurements have also been made in unidirectional composites. It has been further demonstrated that changes in composite materials which alter the elastic properties such as moisture absorption by the matrix or fiber degradation, can be detected nondestructively by measurements of the energy flux shift. In this research, the effects of nonlinear elasticity on energy flux deviation in unidirectional gr/ep composites were studied. Because of elastic nonlinearity, the angle of the energy flux deviation was shown to be a function of applied stress. This shift in flux deviation was modeled using acoustoelastic theory and the previously measured second and third order elastic stiffness coefficients for T300/5208 gr/ep. Two conditions of applied uniaxial stress were considered. In the first case, the direction of applied uniaxial stress was along the fiber axis (x3) while in the second case it was perpendicular to the fiber axis along the laminate stacking direction (x1).
Statistical core design methodology using the VIPRE thermal-hydraulics code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, M.W.; Feltus, M.A.
1994-12-31
This Penn State Statistical Core Design Methodology (PSSCDM) is unique because it not only includes the EPRI correlation/test data standard deviation but also the computational uncertainty for the VIPRE code model and the new composite box design correlation. The resultant PSSCDM equation mimics the EPRI DNBR correlation results well, with an uncertainty of 0.0389. The combined uncertainty yields a new DNBR limit of 1.18 that will provide more plant operational flexibility. This methodology and its associated correlation and unique coefficients are for a very particular VIPRE model; thus, the correlation will be specifically linked with the lumped channel and subchannel layout. The results of this research and methodology, however, can be applied to plant-specific VIPRE models.
Fault Identification Based on Nlpca in Complex Electrical Engineering
NASA Astrophysics Data System (ADS)
Zhang, Yagang; Wang, Zengping; Zhang, Jinfang
2012-07-01
The fault is inevitable in any complex systems engineering. The electric power system is essentially a typical nonlinear system and one of the most complex artificial systems in the world. In our research, based on real-time measurements from phasor measurement units, and under the influence of white Gaussian noise (with an assumed standard deviation of 0.01 and zero mean error), we mainly used nonlinear principal component analysis (NLPCA) to resolve the fault identification problem in complex electrical engineering. The simulation results show that a fault in complex electrical engineering usually corresponds to the variable with the maximum absolute coefficient in the first principal component. This research has significant theoretical value and practical engineering significance.
NASA Astrophysics Data System (ADS)
Tsai, Cheng-Mu; Fang, Yi-Chin; Chen, Zhen Hsiang
2011-10-01
This study used aspheric lenses to realize laser flat-top beam optimization and applied a genetic algorithm (GA) to find the optimal results. Exploiting the characteristics of aspheric lenses to obtain an optimized, high-quality flat-top optical system for the Nd:YAG 355 nm waveband, the study employed the LightTools LDS (least damped square) optimizer together with the GA, an artificial-intelligence optimization method, to determine the optimal aspheric coefficients and obtain the optimal solution. Using two aspheric lenses in the aspheric-surface optical system, the design achieved 80% spot narrowing with a standard deviation of 0.6142.
Microwave dielectric study of polar liquids at 298 K
NASA Astrophysics Data System (ADS)
Maharolkar, Aruna P.; Murugkar, A.; Khirade, P. W.
2018-05-01
The present paper deals with microwave dielectric properties, namely the dielectric constant, viscosity, density and refractive index, of binary mixtures of dimethylsulphoxide (DMSO) and methanol, measured over the entire concentration range at 298 K. The experimental data were further used to determine the excess properties, viz. excess static dielectric constant, excess molar volume and excess viscosity, and the derived properties, viz. molar refraction and Bruggeman factor. The values of the excess properties were fitted with the Redlich-Kister (R-K) equation to calculate the binary coefficients and standard deviation. The resulting excess parameters are used to indicate the presence and strength of intermolecular interactions between the molecules in the binary mixtures. The excess parameters indicate that a structure-breaking effect predominates in the system.
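As a rough illustration of the Redlich-Kister fitting step described above, the sketch below fits an excess property Y^E(x1) = x1·x2·Σ A_k·(x1 − x2)^k by linear least squares and reports the binary coefficients A_k and the standard deviation of the fit. The mole fractions and excess values are made-up placeholders, not data from this study.

```python
import numpy as np

def redlich_kister_fit(x1, y_excess, order=3):
    """Fit Y^E = x1*x2*sum_k A_k*(x1-x2)^k; return (coefficients A_k, standard deviation)."""
    x1 = np.asarray(x1, dtype=float)
    x2 = 1.0 - x1
    # Design matrix: each column is x1*x2*(x1-x2)^k
    M = np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(order + 1)])
    coeffs, *_ = np.linalg.lstsq(M, y_excess, rcond=None)
    resid = y_excess - M @ coeffs
    sd = np.sqrt(np.sum(resid ** 2) / (len(x1) - (order + 1)))
    return coeffs, sd

# Hypothetical DMSO mole fractions and excess molar volumes (cm^3/mol)
x_dmso = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
v_excess = np.array([-0.12, -0.22, -0.30, -0.35, -0.37, -0.35, -0.29, -0.21, -0.11])

A, sigma = redlich_kister_fit(x_dmso, v_excess)
print("R-K coefficients:", np.round(A, 4), "standard deviation:", round(float(sigma), 4))
```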
NASA Astrophysics Data System (ADS)
Wen, D. S.; Wen, H.; Shi, Y. G.; Su, B.; Li, Z. C.; Fan, G. Z.
2018-01-01
The B-spline interpolation fitting baseline in electrochemical analysis by differential pulse voltammetry was established for determining low concentrations (less than 5.0 mg/L) of 2,6-di-tert-butyl-p-cresol (BHT) in jet fuel in the presence of 6-tert-butyl-2,4-xylenol. The experimental results have shown that the relative errors are less than 2.22%, the sum of standard deviations is less than 0.134 mg/L, and the correlation coefficient is more than 0.9851. If the 2,6-di-tert-butyl-p-cresol concentration is higher than 5.0 mg/L, a linear fitting baseline method would be more applicable and simpler.
Polynomials with Restricted Coefficients and Their Applications
1987-01-01
sums of exponentials of quadratics, he reduced such sums to exponentials of linear terms (geometric sums!) by simply multiplying by their conjugates ... the same algebraic manipulations as before lead to [an expression garbled in the source] with ... = a+(2r+1)t, A = a+(2r+2m+1)t. To estimate the right ... coefficients. These random polynomials represent the deviation in frequency response of a linear, equispaced antenna array caused by coefficient
Packing Fraction of a Two-dimensional Eden Model with Random-Sized Particles
NASA Astrophysics Data System (ADS)
Kobayashi, Naoki; Yamazaki, Hiroshi
2018-01-01
We have performed a numerical simulation of a two-dimensional Eden model with random-size particles. In the present model, the particle radii are generated from a Gaussian distribution with mean μ and standard deviation σ. First, we have examined the bulk packing fraction for the Eden cluster and investigated the effects of the standard deviation and the total number of particles NT. We show that the bulk packing fraction depends on the number of particles and the standard deviation. In particular, for the dependence on the standard deviation, we have determined the asymptotic value of the bulk packing fraction in the limit of the dimensionless standard deviation. This value is larger than the packing fraction obtained in a previous study of the Eden model with uniform-size particles. Secondly, we have investigated the packing fraction of the entire Eden cluster including the effect of the interface fluctuation. We find that the entire packing fraction depends on the number of particles while it is independent of the standard deviation, in contrast to the bulk packing fraction. In a similar way to the bulk packing fraction, we have obtained the asymptotic value of the entire packing fraction in the limit NT → ∞. The obtained value of the entire packing fraction is smaller than that of the bulk value. This fact suggests that the interface fluctuation of the Eden cluster influences the packing fraction.
Complexities of follicle deviation during selection of a dominant follicle in Bos taurus heifers.
Ginther, O J; Baldrighi, J M; Siddiqui, M A R; Araujo, E R
2016-11-01
Follicle deviation during a follicular wave is a continuation in growth rate of the dominant follicle (F1) and decreased growth rate of the largest subordinate follicle (F2). The reliability of using an F1 of 8.5 mm to represent the beginning of expected deviation for experimental purposes during waves 1 and 2 (n = 26 per wave) was studied daily in heifers. Each wave was subgrouped as follows: standard subgroup (F1 larger than F2 for 2 days preceding deviation and F2 > 7.0 mm on the day of deviation), undersized subgroup (F2 did not attain 7.0 mm by the day of deviation), and switched subgroup (F2 larger than F1 at least once on the 2 days before or on the day of deviation). For each wave, mean differences in diameter between F1 and F2 changed abruptly at expected deviation in the standard subgroup but began 1 day before expected deviation in the undersized and switched subgroups. Concentrations of FSH in the wave-stimulating FSH surge and an increase in LH centered on expected deviation did not differ among subgroups. Results for each wave indicated that (1) expected deviation (F1, 8.5 mm) was a reliable representation of actual deviation in the standard subgroup but not in the undersized and switched subgroups; (2) concentrations of the gonadotropins normalized to expected deviation were similar among the three subgroups, indicating that the day of deviation was related to diameter of F1 and not F2; and (3) defining an expected day of deviation for experimental use should consider both diameter of F1 and the characteristics of deviation. Copyright © 2016 Elsevier Inc. All rights reserved.
An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks
Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime
2014-01-01
We propose an adaptive handover prediction (AHP) scheme for seamless mobility based wireless networks. That is, the AHP scheme incorporates fuzzy logic with AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, mobile node relative direction towards the access points in the vicinity, and access point load, are collected and considered inputs of the fuzzy decision making system in order to select the best preferable AP around WLANs. The obtained handover decision which is based on the calculated quality cost using fuzzy inference system is also based on adaptable coefficients instead of fixed coefficients. In other words, the mean and the standard deviation of the normalized network prediction metrics of fuzzy inference system, which are collected from available WLANs are obtained adaptively. Accordingly, they are applied as statistical information to adjust or adapt the coefficients of membership functions. In addition, we propose an adjustable weight vector concept for input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each MN independently after knowing RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representatives of the prediction approaches. PMID:25574490
COSTA, Yuri Martins; PORPORATTI, André Luís; HILGENBERG-SYDNEY, Priscila Brenner; BONJARDIM, Leonardo Rigoldi; CONTI, Paulo César Rodrigues
2015-01-01
ABSTRACT A low Pressure Pain Threshold (PPT) is considered a risk factor for Temporomandibular Disorders (TMD) and is influenced by psychological variables. Objectives To correlate deep pain sensitivity of masticatory muscles with prosthetic factors and Oral-Health-Related Quality of Life (OHRQoL) in completely edentulous subjects. Material and Methods A total of 29 complete denture wearers were recruited. The variables were: a) Pressure Pain Threshold (PPT) of the masseter and temporalis; b) retention, stability, and tooth wear of dentures; c) Vertical Dimension of Occlusion (VDO); d) Oral Health Impact Profile (OHIP) adapted to orofacial pain. The Kolmogorov-Smirnov test, the Pearson Product-Moment correlation coefficient, the Spearman Rank correlation coefficient, the Point-Biserial correlation coefficient, and the Bonferroni correction (α=1%) were applied to the data. Results The mean age (standard deviation) of the participants was 70.1 (9.5) years and 82% of them were females. There were no significant correlations with prosthetic factors, but significant negative correlations were found between the OHIP and the PPT of the anterior temporalis (r=-0.50, 95% CI -0.73 to -0.17, p=0.005). Discussion The deep pain sensitivity of masticatory muscles in complete denture wearers is associated with OHRQoL, but not with prosthetic factors. PMID:26814457
Drexler, Judith Z.; Anderson, Frank E.; Snyder, Richard L.
2008-01-01
The surface renewal method was used to estimate evapotranspiration (ET) for a restored marsh on Twitchell Island in the Sacramento–San Joaquin Delta, California, USA. ET estimates for the marsh, together with reference ET measurements from a nearby climate station, were used to determine crop coefficients over a 3-year period during the growing season. The mean ET rate for the study period was 6 mm day⁻¹, which is high compared with other marshes with similar vegetation. High ET rates at the marsh may be due to the windy, semi-arid Mediterranean climate of the region, and the permanently flooded nature of the marsh, which results in very low surface resistance of the vegetation. Crop coefficient (Kc) values for the marsh ranged from 0.73 to 1.18. The mean Kc value over the entire study period was 0.95. The daily Kc values for any given month varied from year to year, and the standard deviation of daily Kc values varied between months. Although several climate variables were undoubtedly responsible for this variation, our analysis revealed that wind direction and the temperature of standing water in the wetland were of particular importance in determining ET rates and Kc values.
NASA Astrophysics Data System (ADS)
Ye, Huping; Li, Junsheng; Zhu, Jianhua; Shen, Qian; Li, Tongji; Zhang, Fangfang; Yue, Huanyin; Zhang, Bing; Liao, Xiaohan
2017-10-01
The absorption coefficient of water is an important bio-optical parameter for water optics and water color remote sensing. However, scattering correction is essential to obtain accurate absorption coefficient values in situ using the nine-wavelength absorption and attenuation meter AC9. Establishing the correction always fails in Case 2 water when the correction assumes zero absorption in the near-infrared (NIR) region and underestimates the absorption coefficient in the red region, which affect processes such as semi-analytical remote sensing inversion. In this study, the scattering contribution was evaluated by an exponential fitting approach using AC9 measurements at seven wavelengths (412, 440, 488, 510, 532, 555, and 715 nm) and by applying scattering correction. The correction was applied to representative in situ data of moderately turbid coastal water, highly turbid coastal water, eutrophic inland water, and turbid inland water. The results suggest that the absorption levels in the red and NIR regions are significantly higher than those obtained using standard scattering error correction procedures. Knowledge of the deviation between this method and the commonly used scattering correction methods will facilitate the evaluation of the effect on satellite remote sensing of water constituents and general optical research using different scattering-correction methods.
Cheng, Dengmiao; Feng, Yao; Liu, Yuanwang; Li, Jinpeng; Xue, Jianming; Li, Zhaojun
2018-09-01
Understanding antibiotic adsorption in livestock manures is crucial to assess the fate and risk of antibiotics in the environment. In this study, three quantitative models were developed for the swine manure-water distribution coefficients (lg Kd) of oxytetracycline (OTC), ciprofloxacin (CIP) and sulfamerazine (SM1) in swine manures. Physicochemical parameters (n=12) of the swine manure were used as independent variables in partial least-squares (PLS) analysis. The cumulative cross-validated regression coefficient (Q²cum) values, standard deviations (SDs) and external validation coefficients (Q²ext) ranged from 0.761 to 0.868, 0.027 to 0.064, and 0.743 to 0.827 for the three models; as such, the internal and external predictability of the models were strong. The pH, soluble organic carbon (SOC) and nitrogen (SON), and Ca were important explanatory variables for the OTC-model; pH, SOC, and SON for the CIP-model; and pH, total organic nitrogen (TON), and SOC for the SM1-model. The high VIPs (variable importance in the projections) of pH (1.178-1.396), SOC (0.968-1.034), and SON (0.822 and 0.865) established these physicochemical parameters as likely being dominant (associatively) in affecting transport of antibiotics in swine manures. Copyright © 2018 Elsevier B.V. All rights reserved.
40 CFR 90.708 - Cumulative Sum (CumSum) procedure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... is 5.0×σ, and is a function of the standard deviation, σ. σ=is the sample standard deviation and is... individual engine. FEL=Family Emission Limit (the standard if no FEL). F=.25×σ. (2) After each test pursuant...
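The regulation excerpt above is truncated, so the exact CumSum equation is not reproduced here. Purely as an illustration of the general idea, the sketch below implements a generic one-sided cumulative-sum statistic with an allowance F = 0.25·σ and an action limit of 5.0·σ; the emission values, FEL and σ are invented, and the formula is a textbook CUSUM, not necessarily the exact form specified in 40 CFR 90.708.

```python
def cusum(emissions, fel, sigma):
    """Generic one-sided CUSUM: C_i = max(0, C_{i-1} + x_i - (FEL + F)); flag when C_i >= H."""
    F = 0.25 * sigma          # allowance
    H = 5.0 * sigma           # action limit
    c, flags = 0.0, []
    for x in emissions:
        c = max(0.0, c + x - (fel + F))
        flags.append(c >= H)
    return c, flags

# Hypothetical test results (g/kW-h) against an invented FEL and sample standard deviation
results = [10.2, 10.8, 11.5, 10.9, 12.1, 11.8]
final_c, exceedances = cusum(results, fel=11.0, sigma=0.5)
print(final_c, exceedances)
```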
2015-01-01
The goal of this study was to analyse perceptually and acoustically the voices of patients with Unilateral Vocal Fold Paralysis (UVFP) and compare them to the voices of normal subjects. These voices were analysed perceptually with the GRBAS scale and acoustically using the following parameters: mean fundamental frequency (F0), standard-deviation of F0, jitter (ppq5), shimmer (apq11), mean harmonics-to-noise ratio (HNR), mean first (F1) and second (F2) formant frequencies, and standard-deviation of F1 and F2 frequencies. Statistically significant differences were found in all of the perceptual parameters. Also the jitter, shimmer, HNR, standard-deviation of F0, and standard-deviation of the frequency of F2 were statistically different between groups, for both genders. In the male data, differences were also found in F1 and F2 frequency values and in the standard-deviation of the frequency of F1. This study allowed the documentation of the alterations resulting from UVFP and addressed the exploration of parameters with limited information for this pathology. PMID:26557690
NASA Astrophysics Data System (ADS)
Krasnenko, N. P.; Kapegesheva, O. F.; Shamanaeva, L. G.
2017-11-01
Spatiotemporal dynamics of the standard deviations of three wind velocity components measured with a mini-sodar in the atmospheric boundary layer is analyzed. During the day on September 16 and at night on September 12 values of the standard deviation changed for the x- and y-components from 0.5 to 4 m/s, and for the z-component from 0.2 to 1.2 m/s. An analysis of the vertical profiles of the standard deviations of three wind velocity components for a 6-day measurement period has shown that the increase of σx and σy with altitude is well described by a power law dependence with exponent changing from 0.22 to 1.3 depending on the time of day, and σz depends linearly on the altitude. The approximation constants have been found and their errors have been estimated. The established physical regularities and the approximation constants allow the spatiotemporal dynamics of the standard deviation of three wind velocity components in the atmospheric boundary layer to be described and can be recommended for application in ABL models.
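A power-law altitude dependence of the form σ(z) = a·z^b, as described above, is commonly estimated by a linear fit in log-log coordinates. The sketch below is a minimal illustration with invented profile values; it is not the authors' processing chain.

```python
import numpy as np

def fit_power_law(z, sigma):
    """Fit sigma(z) = a * z**b by linear regression of ln(sigma) on ln(z)."""
    b, ln_a = np.polyfit(np.log(z), np.log(sigma), 1)
    return np.exp(ln_a), b

# Hypothetical mini-sodar levels (m) and standard deviations of one velocity component (m/s)
z = np.array([50, 100, 150, 200, 250, 300], dtype=float)
sigma_x = np.array([0.9, 1.3, 1.6, 1.9, 2.1, 2.3])

a, b = fit_power_law(z, sigma_x)
print(f"sigma_x(z) ~ {a:.3f} * z^{b:.2f}")
```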
Urban Noise Recorded by Stationary Monitoring Stations
NASA Astrophysics Data System (ADS)
Bąkowski, Andrzej; Radziszewski, Leszek; Dekýš, Vladimir
2017-10-01
The paper presents the analysis results of equivalent sound level recorded by two road traffic noise monitoring stations. The stations were located in Kielce (an example of a medium-size town in Poland) at the roads in the town in the direction of Łódź and Lublin. The measurements were carried out through stationary stations monitoring the noise and traffic of motor vehicles. The RMS values based on A-weighted sound level were recorded every 1 s in the buffer and the results were registered every 1 min over the period of investigations. The registered data were the basis for calculating the equivalent sound level for three time intervals: from 6:00 to 18:00, from 18:00 to 22:00 and from 22:00 to 6:00. Analysis included the values of the equivalent sound level recorded for different days of the week split into 24h periods, nights, days and evenings. The data analysed included recordings from 2013. The agreement of the distribution of the variable under analysis with normal distribution was evaluated. It was demonstrated that in most cases (for both roads) there was sufficient evidence to reject the null hypothesis at the significance level of 0.05. It was noted that compared with Łódź Road, in the case of Lublin Road data, more cases were recorded for which the null hypothesis could not be rejected. Uncertainties of the equivalent sound level measurements were compared within the periods under analysis. The standard deviation, the coefficient of variation, the positional coefficient of variation and the quartile deviation were proposed for performing a comparative analysis of the scattering of the obtained data. The investigations indicated that the recorded data varied depending on the traffic routes and time intervals. The differences concerned the values of uncertainties and coefficients of variation of the equivalent sound levels.
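For readers unfamiliar with the metric, the equivalent continuous sound level over an interval is the logarithmic (energy) average of the short-term levels, L_eq = 10·log10((1/N)·Σ 10^(L_i/10)). The sketch below shows this calculation for invented 1-minute values split into the three assessment intervals; it is not the monitoring stations' firmware.

```python
import numpy as np

def equivalent_level(levels_db):
    """Energy-average a sequence of short-term A-weighted levels (dB) into an equivalent level."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

# Hypothetical 1-minute levels for one day, split into day / evening / night intervals
day_levels     = np.random.default_rng(0).normal(68, 3, size=12 * 60)   # 06:00-18:00
evening_levels = np.random.default_rng(1).normal(64, 3, size=4 * 60)    # 18:00-22:00
night_levels   = np.random.default_rng(2).normal(58, 4, size=8 * 60)    # 22:00-06:00

for name, levels in [("day", day_levels), ("evening", evening_levels), ("night", night_levels)]:
    print(name, round(float(equivalent_level(levels)), 1), "dB(A)")
```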
Determining the response of sea level to atmospheric pressure forcing using TOPEX/POSEIDON data
NASA Technical Reports Server (NTRS)
Fu, Lee-Lueng; Pihos, Greg
1994-01-01
The static response of sea level to the forcing of atmospheric pressure, the so-called inverted barometer (IB) effect, is investigated using TOPEX/POSEIDON data. This response, characterized by the rise and fall of sea level to compensate for the change of atmospheric pressure at a rate of -1 cm/mbar, is not associated with any ocean currents and hence is normally treated as an error to be removed from sea level observation. Linear regression and spectral transfer function analyses are applied to sea level and pressure to examine the validity of the IB effect. In regions outside the tropics, the regression coefficient is found to be consistently close to the theoretical value except for the regions of western boundary currents, where the mesoscale variability interferes with the IB effect. The spectral transfer function shows near-IB response at long periods; poleward of 30 degrees the regression coefficient is -0.84 ± 0.29 cm/mbar (1 standard deviation). The deviation from -1 cm/mbar is shown to be caused primarily by the effect of wind forcing on sea level, based on a multivariate linear regression model involving both pressure and wind forcing. The regression coefficient for pressure resulting from the multivariate analysis is -0.96 ± 0.32 cm/mbar. In the tropics the multivariate analysis fails because sea level in the tropics is primarily responding to remote wind forcing. However, after removing from the data the wind-forced sea level estimated by a dynamic model of the tropical Pacific, the pressure regression coefficient improves from -1.22 ± 0.69 cm/mbar to -0.99 ± 0.46 cm/mbar, clearly revealing an IB response. The result of the study suggests that with a proper removal of the effect of wind forcing the IB effect is valid in most of the open ocean at periods longer than 20 days and spatial scales larger than 500 km.
Liyanaarachchi, G V V; Mahanama, K R R; Somasiri, H P P S; Punyasiri, P A N
2018-02-01
The study presents the validation results of the method carried out for analysis of free amino acids (FAAs) in rice using l-theanine as the internal standard (IS) with o-phthalaldehyde (OPA) reagent using high-performance liquid chromatography-fluorescence detection. The detection and quantification limits of the method were in the ranges 2-16 μmol/kg and 3-19 μmol/kg, respectively. The method had a wide working range from 25 to 600 μmol/kg for each individual amino acid, and good linearity with regression coefficients greater than 0.999. Precision, measured in terms of repeatability and reproducibility and expressed as percentage relative standard deviation (% RSD), was below 9% for all the amino acids analyzed. The recoveries obtained after fortification at three concentration levels were in the range 75-105%. In comparison to l-norvaline, findings revealed that l-theanine is suitable as an IS and the validated method can be used for FAA determination in rice. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bismuth as a general internal standard for lead in atomic absorption spectrometry.
Bechlin, Marcos A; Fortunato, Felipe M; Ferreira, Edilene C; Gomes Neto, José A; Nóbrega, Joaquim A; Donati, George L; Jones, Bradley T
2014-06-11
Bismuth was evaluated as internal standard for Pb determination by line source flame atomic absorption spectrometry (LS FAAS), high-resolution continuum source flame atomic absorption spectrometry (HR-CS FAAS) and line source graphite furnace atomic absorption spectrometry (LS GFAAS). Analysis of samples containing different matrices indicated close relationship between Pb and Bi absorbances. Correlation coefficients of calibration curves built up by plotting A(Pb)/A(Bi) versus Pb concentration were higher than 0.9953 (FAAS) and higher than 0.9993 (GFAAS). Recoveries of Pb improved from 52-118% (without IS) to 97-109% (IS, LS FAAS); 74-231% (without IS) to 96-109% (IS, HR-CS FAAS); and 36-125% (without IS) to 96-110% (IS, LS GFAAS). The relative standard deviations (n=12) were reduced from 0.6-9.2% (without IS) to 0.3-4.3% (IS, LS FAAS); 0.7-7.7% (without IS) to 0.1-4.0% (IS, HR-CS FAAS); and 2.1-13% (without IS) to 0.4-5.9% (IS, LS GFAAS). Copyright © 2014 Elsevier B.V. All rights reserved.
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
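The figures of merit quoted above (RSD, R², RMSEP, MRE) are straightforward to compute once predicted and reference concentrations are available. Below is a minimal sketch with invented numbers; it illustrates the metrics only and does not reproduce the authors' spectrum standardization or PLS normalization itself (note that RMSEP is computed here in concentration units, whereas the paper reports a relative value).

```python
import numpy as np

def figures_of_merit(reference, predicted, replicates):
    """RSD of replicate shots, plus R^2, RMSEP and maximum relative error of the predictions."""
    reference, predicted = map(np.asarray, (reference, predicted))
    rsd = 100.0 * np.std(replicates, ddof=1) / np.mean(replicates)
    ss_res = np.sum((reference - predicted) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmsep = np.sqrt(np.mean((predicted - reference) ** 2))   # here in concentration units
    mre = 100.0 * np.max(np.abs(predicted - reference) / reference)
    return rsd, r2, rmsep, mre

# Invented example: Cu concentrations (wt%) in a few brass samples
ref = np.array([58.0, 61.5, 63.0, 65.2, 70.1])
pred = np.array([58.6, 60.9, 63.5, 64.8, 69.3])
reps = np.array([63.2, 63.6, 62.9, 63.4, 63.1])  # repeated shots on one sample

print(figures_of_merit(ref, pred, reps))
```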
N2/O2/H2 Dual-Pump CARS: Validation Experiments
NASA Technical Reports Server (NTRS)
O'Byrne, S.; Danehy, P. M.; Cutler, A. D.
2003-01-01
The dual-pump coherent anti-Stokes Raman spectroscopy (CARS) method is used to measure temperature and the relative species densities of N2, O2 and H2 in two experiments. Average values and root-mean-square (RMS) deviations are determined. Mean temperature measurements in a furnace containing air between 300 and 1800 K agreed with thermocouple measurements within 26 K on average, while mean mole fractions agree to within 1.6 % of the expected value. The temperature measurement standard deviation averaged 64 K while the standard deviation of the species mole fractions averaged 7.8% for O2 and 3.8% for N2, based on 200 single-shot measurements. Preliminary measurements have also been performed in a flat-flame burner for fuel-lean and fuel-rich flames. Temperature standard deviations of 77 K were measured, and the ratios of H2 to N2 and O2 to N2 respectively had standard deviations from the mean value of 12.3% and 10% of the measured ratio.
Lociciro, S; Esseiva, P; Hayoz, P; Dujourdy, L; Besacier, F; Margot, P
2008-05-20
Harmonisation and optimization of analytical and statistical methodologies were carried out between two forensic laboratories (Lausanne, Switzerland and Lyon, France) in order to provide drug intelligence for cross-border cocaine seizures. Part I dealt with the optimization of the analytical method and its robustness. This second part investigates statistical methodologies that will provide reliable comparison of cocaine seizures analysed on two different gas chromatographs interfaced with flame ionisation detectors (GC-FIDs) in two distinct laboratories. Sixty-six statistical combinations (ten data pre-treatments followed by six different distance measurements and correlation coefficients) were applied. One pre-treatment (N+S: the area of each peak is divided by its standard deviation calculated from the whole data set) followed by the Cosine or Pearson correlation coefficient was found to be the best statistical compromise for optimal discrimination of linked and non-linked samples. The centralisation of the analyses in one single laboratory is not a required condition anymore to compare samples seized in different countries. This allows collaboration, but also jurisdictional control over data.
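The 'N+S' pre-treatment and the two retained similarity measures can be sketched as follows. This is an illustrative reading of the description above (each peak area divided by the standard deviation of that peak over the whole data set, then Cosine and Pearson comparisons between pairs of chromatographic profiles); the peak areas are invented.

```python
import numpy as np

def ns_pretreatment(peak_areas):
    """Divide each peak (column) by its standard deviation over the whole data set."""
    X = np.asarray(peak_areas, dtype=float)
    return X / X.std(axis=0, ddof=1)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pearson_correlation(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Invented peak-area table: rows = cocaine seizures, columns = target alkaloid peaks
areas = np.array([
    [120.0, 35.0,  8.0, 410.0, 15.0],
    [118.0, 36.5,  7.5, 395.0, 16.0],
    [ 60.0, 80.0, 20.0, 150.0, 40.0],
    [ 62.0, 78.0, 21.0, 160.0, 39.0],
])
X = ns_pretreatment(areas)

print("linked pair:   cos=%.3f  r=%.3f" % (cosine_similarity(X[0], X[1]), pearson_correlation(X[0], X[1])))
print("unlinked pair: cos=%.3f  r=%.3f" % (cosine_similarity(X[0], X[2]), pearson_correlation(X[0], X[2])))
```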
Polynomial sequences for bond percolation critical thresholds
Scullard, Christian R.
2011-09-22
In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3⁴, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds, pc(4, 6, 12) = 0.69377849... and pc(3⁴, 6) = 0.43437077..., compared with Parviainen's numerical results of pc = 0.69373383... and pc = 0.43430621... . These deviations are of the order 10⁻⁵, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, the root in [0, 1] of which gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.
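Once the method yields a polynomial with integer coefficients, the bond-threshold estimate is its root in [0, 1]. As a generic illustration (the example polynomial below is a well-known one, not the actual (4, 6, 12) or (3⁴, 6) polynomial from this paper), the root can be bracketed by bisection:

```python
def poly_root_in_unit_interval(coeffs, tol=1e-12):
    """Bisection for the root in [0, 1] of a polynomial given by its coefficients
    (highest degree first), assuming a single sign change on the interval."""
    def p(x):
        acc = 0.0
        for c in coeffs:          # Horner evaluation
            acc = acc * x + c
        return acc

    lo, hi = 0.0, 1.0
    if p(lo) == 0.0:
        return lo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p(lo) * p(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: p^3 - 3p + 1 = 0 has the root 2*sin(pi/18) ~ 0.3473,
# the exact bond threshold of the triangular lattice.
print(poly_root_in_unit_interval([1, 0, -3, 1]))
```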
Combined experiment Phase 2 data characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, M.S.; Shipley, D.E.; Young, T.S.
1995-11-01
The National Renewable Energy Laboratory's "Combined Experiment" has yielded a large quantity of experimental data on the operation of a downwind horizontal axis wind turbine under field conditions. To fully utilize this valuable resource and identify particular episodes of interest, a number of databases were created that characterize individual data events and rotational cycles over a wide range of parameters. Each of the 59 five-minute data episodes collected during Phase II of the Combined Experiment has been characterized by the mean, minimum, maximum, and standard deviation of all data channels, except the blade surface pressures. Inflow condition, aerodynamic force coefficient, and minimum leading edge pressure coefficient databases have also been established, characterizing each of nearly 21,000 blade rotational cycles. In addition, a number of tools have been developed for searching these databases for particular episodes of interest. Due to their extensive size, only a portion of the episode characterization databases are included in an appendix, and examples of the cycle characterization databases are given. The search tools are discussed and the FORTRAN or C code for each is included in appendices.
Comparative study of navigated versus freehand osteochondral graft transplantation of the knee.
Koulalis, Dimitrios; Di Benedetto, Paolo; Citak, Mustafa; O'Loughlin, Padhraig; Pearle, Andrew D; Kendoff, Daniel O
2009-04-01
Osteochondral lesions are a common sports-related injury for which osteochondral grafting, including mosaicplasty, is an established treatment. Computer navigation has been gaining popularity in orthopaedic surgery to improve accuracy and precision. Navigation improves angle and depth matching during harvest and placement of osteochondral grafts compared with conventional freehand open technique. Controlled laboratory study. Three cadaveric knees were used. Reference markers were attached to the femur, tibia, and donor/recipient site guides. Fifteen osteochondral grafts were harvested and inserted into recipient sites with computer navigation, and 15 similar grafts were inserted freehand. The angles of graft removal and placement as well as surface congruity (graft depth) were calculated for each surgical group. The mean harvesting angle at the donor site using navigation was 4° (standard deviation, 2.3°; range, 1°-9°) versus 12° (standard deviation, 5.5°; range, 5°-24°) using freehand technique (P < .0001). The recipient plug removal angle using the navigated technique was 3.3° (standard deviation, 2.1°; range, 0°-9°) versus 10.7° (standard deviation, 4.9°; range, 2°-17°) in freehand (P < .0001). The mean navigated recipient plug placement angle was 3.6° (standard deviation, 2.0°; range, 1°-9°) versus 10.6° (standard deviation, 4.4°; range, 3°-17°) with freehand technique (P = .0001). The mean height of plug protrusion under navigation was 0.3 mm (standard deviation, 0.2 mm; range, 0-0.6 mm) versus 0.5 mm (standard deviation, 0.3 mm; range, 0.2-1.1 mm) using a freehand technique (P = .0034). Significantly greater accuracy and precision were observed in harvesting and placement of the osteochondral grafts in the navigated procedures. Clinical studies are needed to establish a benefit in vivo. Improvement in the osteochondral harvest and placement is desirable to optimize clinical outcomes. Navigation shows great potential to improve both harvest and placement precision and accuracy, thus optimizing ultimate surface congruity.
2017-01-01
Anthropometric data collected in clinics and surveys are often inaccurate and unreliable due to measurement error. The Body Imaging for Nutritional Assessment Study (BINA) evaluated the ability of 3D imaging to correctly measure stature, head circumference (HC) and arm circumference (MUAC) for children under five years of age. This paper describes the protocol for and the quality of manual anthropometric measurements in BINA, a study conducted in 2016–17 in Atlanta, USA. Quality was evaluated by examining digit preference, biological plausibility of z-scores, z-score standard deviations, and reliability. We calculated z-scores and analyzed plausibility based on the 2006 WHO Child Growth Standards (CGS). For reliability, we calculated intra- and inter-observer Technical Error of Measurement (TEM) and Intraclass Correlation Coefficient (ICC). We found low digit preference; 99.6% of z-scores were biologically plausible, with z-score standard deviations ranging from 0.92 to 1.07. Total TEM was 0.40 for stature, 0.28 for HC, and 0.25 for MUAC in centimeters. ICC ranged from 0.99 to 1.00. The quality of manual measurements in BINA was high and similar to that of the anthropometric data used to develop the WHO CGS. We attributed high quality to vigorous training, motivated and competent field staff, reduction of non-measurement error through the use of technology, and reduction of measurement error through adequate monitoring and supervision. Our anthropometry measurement protocol, which builds on and improves upon the protocol used for the WHO CGS, can be used to improve anthropometric data quality. The discussion illustrates the need to standardize anthropometric data quality assessment, and we conclude that BINA can provide a valuable evaluation of 3D imaging for child anthropometry because there is comparison to gold-standard, manual measurements. PMID:29240796
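The intra- and inter-observer Technical Error of Measurement used above is conventionally computed as TEM = sqrt(Σ d_i² / 2n) for n subjects measured twice, with d_i the difference between the repeated measurements, and the relative TEM expressed as a percentage of the overall mean. The sketch below is a generic implementation with invented repeat measurements, not the BINA data.

```python
import numpy as np

def technical_error_of_measurement(first, second):
    """TEM for duplicate measurements: sqrt(sum(d^2) / (2n)), plus relative TEM (%)."""
    first = np.asarray(first, dtype=float)
    second = np.asarray(second, dtype=float)
    d = first - second
    tem = np.sqrt(np.sum(d ** 2) / (2 * len(d)))
    rel_tem = 100.0 * tem / np.mean(np.concatenate([first, second]))
    return tem, rel_tem

# Invented duplicate stature measurements (cm) for five children
m1 = np.array([95.2, 102.4, 88.7, 110.1, 99.5])
m2 = np.array([95.6, 102.1, 89.0, 110.5, 99.2])

tem, rel_tem = technical_error_of_measurement(m1, m2)
print(f"TEM = {tem:.2f} cm ({rel_tem:.2f}%)")
```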
Zhu, Xiangyu; Nordstrom, D. Kirk; McCleskey, R. Blaine; Wang, Rucheng
2016-01-01
Arsenic is known to be one of the most toxic inorganic elements, causing worldwide environmental contamination. However, many fundamental properties related to aqueous arsenic species are not well known, which inhibits our ability to understand the geochemical behavior of arsenic (e.g. speciation, transport, and solubility). Here, the electrical conductivity of Na2HAsO4 solutions has been measured over the concentration range of 0.001–1 mol kg⁻¹ and the temperature range of 5–90°C. Ionic strength and temperature-dependent equations were derived for the molal conductivity of HAsO4²⁻ and H2AsO4⁻ aqueous ions. Combined with speciation calculations and the approach used by McCleskey et al. (2012b), these equations can be used to calculate the electrical conductivities of arsenic-rich waters having a large range of effective ionic strengths (0.001–3 mol kg⁻¹) and temperatures (5–90°C). Individual ion activity coefficients for HAsO4²⁻ and H2AsO4⁻ in the form of the Hückel equation were also derived using the mean salt method and the mean activity coefficients of K2HAsO4 (0.001–1 mol kg⁻¹) and KH2AsO4 (0.001–1.3 mol kg⁻¹). A check on these activity coefficients was made by calculating mean activity coefficients for Na2HAsO4 and NaH2AsO4 solutions and comparing them to measured values. At the same time Na-arsenate complexes were evaluated. The NaH2AsO4⁰ ion pair is negligible in NaH2AsO4 solutions up to 1.3 mol kg⁻¹. The NaHAsO4⁻ ion pair is important in Na2HAsO4 solutions >0.1 mol kg⁻¹ and the formation constant of 10^0.69 was confirmed. The enthalpy, entropy, free energy and heat capacity for the second and third arsenic acid dissociation reactions were calculated from pH measurements. These properties have been incorporated into a widely used geochemical calculation code, WATEQ4F, and applied to natural arsenic waters. For arsenic-spiked water samples from Yellowstone National Park, the mean difference between the calculated and measured conductivities has been improved from −18% to −1.0% with a standard deviation of 2.4%, and the mean charge balances have been improved from 28% to 0.6% with a standard deviation of 1.5%.
SU-F-BRE-14: Uncertainty Analysis for Dose Measurements Using OSLD NanoDots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kry, S; Alvarez, P; Stingo, F
2014-06-15
Purpose: Optically stimulated luminescent dosimeters (OSLD) are an increasingly popular dosimeter for research and clinical applications. It is also used by the Radiological Physics Center for remote auditing of machine output. In this work we robustly calculated the reproducibility and uncertainty of the OSLD nanoDot. Methods: For the RPC dose calculation, raw readings are corrected for depletion, element sensitivity, fading, linearity, and energy. System calibration is determined for the experimental OSLD irradiated at different institutions by using OSLD irradiated by the RPC under reference conditions (i.e., standards): 1 Gy in a Cobalt beam. The intra-dot and inter-dot reproducibilities (coefficient of variation) were determined from the history of RPC readings of these standards. The standard deviation of the corrected OSLD signal was then calculated analytically using a recursive formalism that did not rely on the normality assumption of the underlying uncertainties, or on any type of mathematical approximation. This analytical uncertainty was compared to that empirically estimated from >45,000 RPC beam audits. Results: The intra-dot variability was found to be 0.59%, with only a small variation between readers. Inter-dot variability was found to be 0.85%. The uncertainty in each of the individual correction factors was empirically determined. When the raw counts from each OSLD were adjusted for the appropriate correction factors, the analytically determined coefficient of variation was 1.8% over a range of institutional irradiation conditions that are seen at the RPC. This is reasonably consistent with the empirical observations of the RPC, where the coefficient of variation of the measured beam outputs is 1.6% (photons) and 1.9% (electrons). Conclusion: OSLD nanoDots provide sufficiently good precision for a wide range of applications, including the RPC remote monitoring program for megavoltage beams. This work was supported by PHS grant CA10953 awarded by the NIH (DHHS)
Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György
2016-11-01
Background The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficients of variation for analytical imprecision (CVA): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for reference change value, with a combination of Δbias and CVA based on log-Gaussian distributions of CVI expressed as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results Analytical performance specifications for reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion The traditional and simple-to-apply model used to generate analytical performance specifications for reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CVI and CVA, is generally useful.
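A minimal sketch of the two reference change value (RCV) formulations contrasted above is given below: the traditional symmetric Gaussian form, RCV = sqrt(2)·z·sqrt(CVA² + CVI²), and an asymmetric log-normal form in which the combined CV is first converted to a standard deviation on the natural-log scale. The z-value, the CVs and the exact log-normal parametrization are illustrative assumptions, not the authors' derivation.

```python
import math

def rcv_gaussian(cv_a, cv_i, z=1.96):
    """Traditional symmetric RCV (%) assuming Gaussian distributions of CVA and CVI."""
    return math.sqrt(2.0) * z * math.sqrt(cv_a ** 2 + cv_i ** 2)

def rcv_lognormal(cv_a, cv_i, z=1.96):
    """Asymmetric RCV (%) assuming log-Gaussian variation (one common parametrization)."""
    cv_total = math.sqrt(cv_a ** 2 + cv_i ** 2) / 100.0
    sigma_ln = math.sqrt(math.log(cv_total ** 2 + 1.0))      # SD on the natural-log scale
    up = (math.exp(z * math.sqrt(2.0) * sigma_ln) - 1.0) * 100.0
    down = (math.exp(-z * math.sqrt(2.0) * sigma_ln) - 1.0) * 100.0
    return up, down

# Illustrative CVs (%), loosely in the range seen for measurands such as plasma prolactin
cv_analytical, cv_within_subject = 5.0, 20.0
print("Gaussian RCV: +/- %.1f%%" % rcv_gaussian(cv_analytical, cv_within_subject))
print("Log-normal RCV: +%.1f%% / %.1f%%" % rcv_lognormal(cv_analytical, cv_within_subject))
```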
Factors in Variability of Serial Gabapentin Concentrations in Elderly Patients with Epilepsy.
Conway, Jeannine M; Eberly, Lynn E; Collins, Joseph F; Macias, Flavia M; Ramsay, R Eugene; Leppik, Ilo E; Birnbaum, Angela K
2017-10-01
To characterize and quantify the variability of serial gabapentin concentrations in elderly patients with epilepsy. This study included 83 patients (age ≥ 60 yrs) from an 18-center randomized double-blind double-dummy parallel study from the Veterans Affairs Cooperative 428 Study. All patients were taking 1500 mg/day gabapentin. Within-person coefficient of variation (CV) in gabapentin concentrations, measured weekly to bimonthly for up to 52 weeks, then quarterly, was computed. Impact of patient characteristics on gabapentin concentrations (linear mixed model) and CV (linear regression) were estimated. A total of 482 gabapentin concentration measurements were available for analysis. Gabapentin concentrations and intrapatient CVs ranged from 0.5 to 22.6 μg/ml (mean 7.9 μg/ml, standard deviation [SD] 4.1 μg/ml) and 2% to 79% (mean 27.9%, SD 15.3%), respectively, across all visits. Intrapatient CV was higher by 7.3% for those with a body mass index of ≥ 30 kg/m² (coefficient = 7.3, p=0.04). CVs were on average 0.5% higher for each 1-unit higher CV in creatinine clearance (coefficient = 0.5, p=0.03) and 1.2% higher for each 1-hour longer mean time after dose (coefficient = 1.2, p=0.04). Substantial intrapatient variability in serial gabapentin concentration was noted in elderly patients with epilepsy. Creatinine clearance, time of sampling relative to dose, and obesity were found to be positively associated with variability. © 2017 Pharmacotherapy Publications, Inc.
Salmelin, Johanna; Vuori, Kari-Matti; Hämäläinen, Heikki
2015-08-01
The incidence of morphological deformities of chironomid larvae as an indicator of sediment toxicity has been studied for decades. However, standards for deformity analysis are lacking. The authors evaluated whether 25 experts diagnosed larval deformities in a similar manner. Based on high-quality digital images, the experts rated 211 menta of Chironomus spp. larvae as normal or deformed. The larvae were from a site with polluted sediments or from a reference site. The authors revealed this to a random half of the experts, and the rest conducted the assessment blind. The authors quantified the interrater agreement by kappa coefficient, tested whether open and blind assessments differed in deformity incidence and in differentiation between the sites, and identified those deformity types rated most consistently or inconsistently. The total deformity incidence varied greatly, from 10.9% to 66.4% among experts. Kappa coefficient across rater pairs averaged 0.52, indicating insufficient agreement. The deformity types rated most consistently were those missing teeth or with extra teeth. The open and blind assessments did not differ, but differentiation between sites was clearest for raters who counted primarily absolute deformities such as missing and extra teeth and excluded apparent mechanical aberrations or deviations in tooth size or symmetry. The highly differing criteria in deformity assignment have likely led to inconsistent results in midge larval deformity studies and indicate an urgent need for standardization of the analysis. © 2015 SETAC.
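For reference, the chance-corrected agreement between two raters classifying menta as normal or deformed is Cohen's kappa, κ = (p_o − p_e)/(1 − p_e). The sketch below computes pairwise kappa from two invented rating vectors; averaging over all rater pairs gives the kind of mean kappa reported above.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same items (categorical labels)."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(ratings_a) | set(ratings_b)) / n ** 2
    return (observed - expected) / (1.0 - expected)

# Invented ratings for ten menta: "N" = normal, "D" = deformed
rater1 = ["N", "D", "N", "N", "D", "D", "N", "N", "D", "N"]
rater2 = ["N", "D", "N", "D", "D", "N", "N", "N", "D", "N"]
print(round(cohens_kappa(rater1, rater2), 2))
```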
Lu, Hsueh-Kuan; Chen, Yu-Yawn; Yeh, Chinagwen; Chuang, Chih-Lin; Chiang, Li-Ming; Lai, Chung-Liang; Casebolt, Kevin M; Huang, Ai-Chun; Lin, Wen-Long; Hsieh, Kuen-Chang
2017-08-22
The aim of this study was to evaluate leg-to-leg bioelectrical impedance analysis (LBIA) using a four-contact electrode system for measuring abdominal visceral fat area (VFA). The present study recruited 381 (240 male and 141 female) Chinese participants to compare VFA measurements estimated by a standing LBIA system (VFA_LBIA) with computerized tomography (CT) scanned at the L4-L5 vertebrae (VFA_CT). The total mean body mass index (BMI) was 24.7 ± 4.2 kg/m². Correlation analysis, regression analysis, Bland-Altman plots, and paired sample t-tests were used to analyze the accuracy of VFA_LBIA. For all subjects, the regression line was VFA_LBIA = 0.698 VFA_CT + 29.521 (correlation coefficient (r) = 0.789, standard error of estimate (SEE) = 24.470 cm², p < 0.001); Lin's concordance correlation coefficient (CCC) was 0.785; and the limits of agreement (LOA; mean difference ± 2 standard deviations) ranged from -43.950 to 67.951 cm², with LOA% (given as a percentage of the mean value measured by CT) of 48.2%. VFA_LBIA and VFA_CT showed a significant difference (p < 0.001). Collectively, the current study indicates that LBIA has limited potential to accurately estimate visceral fat in a clinical setting.
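The agreement statistics reported here (limits of agreement as mean difference ± 2 SD, and Lin's concordance correlation coefficient) can be computed directly from paired measurements. A minimal sketch with hypothetical VFA pairs, not the study data:

```python
import numpy as np

def bland_altman_loa(method, reference):
    """Mean difference and limits of agreement defined as mean difference ± 2 SD."""
    d = np.asarray(method, float) - np.asarray(reference, float)
    return d.mean(), d.mean() - 2 * d.std(ddof=1), d.mean() + 2 * d.std(ddof=1)

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

# Hypothetical VFA pairs (cm^2): bioimpedance estimate vs CT reference
vfa_bia = [85, 120, 60, 150, 95, 110, 70, 130]
vfa_ct = [80, 135, 55, 170, 90, 100, 75, 145]
bias, lo, hi = bland_altman_loa(vfa_bia, vfa_ct)
print(f"bias = {bias:.1f} cm^2, LOA = [{lo:.1f}, {hi:.1f}] cm^2, "
      f"CCC = {lins_ccc(vfa_bia, vfa_ct):.3f}")
```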
Peng, Lingling; Li, Yi; Feng, Hao
2017-07-14
Reference crop evapotranspiration (ET_o) is a critically important parameter for climatological, hydrological and agricultural management. The FAO56 Penman-Monteith (PM) equation has been recommended as the standardized ET_o (ET_o,s) equation, but it has a high requirement for climatic data. There is a practical need to find the best alternative method for estimating ET_o in regions where full climatic data are lacking. A comprehensive comparison of the spatiotemporal variations, relative errors, standard deviations and Nash-Sutcliffe efficiency coefficients of monthly and annual ET_o,s and ET_o,i (i = 1, 2, …, 10) values estimated by 10 selected methods (i.e., Irmak et al., Makkink, Priestley-Taylor, Hargreaves-Samani, Droogers-Allen, Berti et al., Doorenbos-Pruitt, Wright and Valiantzas, respectively) was carried out using data from 552 sites over 1961-2013 in mainland China. The method proposed by Berti et al. (2014) was selected as the best alternative to FAO56-PM because it is simple to compute, uses only temperature data, describes the spatiotemporal characteristics of ET_o,s in the different sub-regions and in mainland China with generally good accuracy, and correlates linearly with the FAO56-PM method very well. The parameters of the linear correlations between the ET_o values of the two methods were calibrated for each site, with the smallest coefficient of determination being 0.87.
Matrix Summaries Improve Research Reports: Secondary Analyses Using Published Literature
ERIC Educational Resources Information Center
Zientek, Linda Reichwein; Thompson, Bruce
2009-01-01
Correlation matrices and standard deviations are the building blocks of many of the commonly conducted analyses in published research, and AERA and APA reporting standards recommend their inclusion when reporting research results. The authors argue that the inclusion of correlation/covariance matrices, standard deviations, and means can enhance…
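One reason reported correlation matrices and standard deviations are useful is that they allow secondary analyses without the raw data; for example, the covariance matrix and standardized regression coefficients can be recovered directly from them. A minimal sketch with a hypothetical published summary:

```python
import numpy as np

# Hypothetical published summary: correlations among predictors X1, X2 and outcome Y,
# plus their standard deviations (no raw data needed).
R = np.array([[1.00, 0.30, 0.50],
              [0.30, 1.00, 0.40],
              [0.50, 0.40, 1.00]])
sd = np.array([2.0, 5.0, 10.0])

# Covariance matrix: Sigma = D R D, with D = diag(sd)
D = np.diag(sd)
Sigma = D @ R @ D

# Standardized OLS coefficients of Y on X1, X2 recovered from the correlation matrix
Rxx, rxy = R[:2, :2], R[:2, 2]
beta_std = np.linalg.solve(Rxx, rxy)
print("Covariance matrix:\n", Sigma)
print("Standardized regression coefficients:", beta_std)
```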
30 CFR 74.8 - Measurement, accuracy, and reliability requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... concentration, as defined by the relative standard deviation of the distribution of measurements. The relative standard deviation shall be less than 0.1275 without bias for both full-shift measurements of 8 hours or... Standards, Regulations, and Variances, 1100 Wilson Boulevard, Room 2350, Arlington, Virginia 22209-3939...
Lamparczyk, H; Chmielewska, A; Konieczna, L; Plenis, A; Zarzycki, P K
2001-12-01
A rapid and sensitive reversed-phase high performance liquid chromatographic method has been developed for the determination of metoclopramide in serum. The assay was performed after single extraction with ethyl ether using methyl parahydroxybenzoate as internal standard. Chromatographic separations were performed on C(18) stationary phase with a mobile phase composed of methanol-phosphate buffer pH 3 (30:70 v/v). Analytes were detected electrochemically. The quantification limit for metoclopramide in serum was 2 ng mL(-1). Linearity of the method was confirmed in the range of 5-120 ng mL(-1) (correlation coefficient 0.9998). Within-day relative standard deviations (RSDs) ranged from 0.3 to 5.5% and between-day RSDs from 0.8 to 6.0%. The analytical method was successfully applied for the determination of pharmacokinetic parameters after ingestion of 10 mg dose of metoclopramide. Studies were performed on 18 healthy volunteers of both sexes. Copyright 2001 John Wiley & Sons, Ltd.
Deng, Shixin; West, Brett J; Jensen, C Jarakae
2008-11-15
The leaves of Morinda citrifolia L. (noni) have been utilized in a variety of commercial products marketed for their health benefits. This paper reports on a rapid and selective HPLC method for simultaneous characterization and quantitation of four flavonols in an ethanolic extract of noni leaves using dual detection by UV (365 nm) and ESI-MS (negative mode). The limits of detection and quantitation were between 0.012 and 0.165 μg/mL. The intra- and inter-assay precisions, in terms of percent relative standard deviation, were less than 4.38% and 3.50%, respectively. The accuracy, in terms of recovery percentage, ranged from 96.66% to 100.03%. Good linearity (correlation coefficient >0.999) for each calibration curve of standards was achieved in the range investigated. The contents of the four flavonoids in noni leaves varied from 1.16 to 371.6 mg/100 g dry weight. Copyright © 2008 Elsevier Ltd. All rights reserved.
Alamgir, Malik; Khuhawar, Muhammad Yar; Memon, Saima Q; Hayat, Amir; Zounr, Rizwan Ali
2015-01-05
A sensitive and simple spectrofluorimetric method has been developed for the analysis of famotidine in pharmaceutical preparations and biological fluids after derivatization with benzoin. The reaction was carried out in alkaline medium, with fluorescence intensity measured at 446 nm using an excitation wavelength of 286 nm. Linear calibration was obtained over 0.5-15 μg/ml with a coefficient of determination (r²) of 0.997. The factors affecting the fluorescence intensity were optimized. The pharmaceutical additives and amino acids did not interfere with the determination. The mean percentage recovery (n=4) calculated by standard addition from pharmaceutical preparations was 94.8-98.2% with relative standard deviation (RSD) of 1.56-3.34%, and recovery from deproteinized spiked serum and urine of healthy volunteers was 98.6-98.9% and 98.0-98.4% with RSD of 0.34-0.84% and 0.29-0.87%, respectively. Copyright © 2014 Elsevier B.V. All rights reserved.
The effects of auditory stimulation with music on heart rate variability in healthy women.
Roque, Adriano L; Valenti, Vitor E; Guida, Heraldo L; Campos, Mônica F; Knap, André; Vanderlei, Luiz Carlos M; Ferreira, Lucas L; Ferreira, Celso; Abreu, Luiz Carlos de
2013-07-01
There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque musical and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes. The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level.
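The time-domain and Poincaré indices analyzed in this study follow standard definitions and can be computed directly from an RR-interval series. A minimal sketch using a short hypothetical RR series (the SD1/SD2 relations shown are the usual Poincaré formulas, not code from the study):

```python
import numpy as np

def hrv_indices(rr_ms):
    """Time-domain and Poincaré HRV indices from RR intervals (ms)."""
    rr = np.asarray(rr_ms, float)
    diff = np.diff(rr)
    sdnn = rr.std(ddof=1)                        # SD of all normal RR intervals
    rmssd = np.sqrt(np.mean(diff ** 2))          # root-mean-square of successive differences
    pnn50 = 100.0 * np.mean(np.abs(diff) > 50)   # % of successive differences > 50 ms
    sd1 = np.sqrt(0.5) * diff.std(ddof=1)        # Poincaré short-term (beat-by-beat) variability
    sd2 = np.sqrt(max(2 * sdnn ** 2 - sd1 ** 2, 0.0))  # Poincaré long-term variability
    return {"SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50,
            "SD1": sd1, "SD2": sd2, "SD1/SD2": sd1 / sd2}

# Hypothetical 10-beat RR series (ms)
rr = [812, 790, 825, 840, 805, 798, 830, 815, 802, 845]
for name, value in hrv_indices(rr).items():
    print(f"{name}: {value:.2f}")
```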
Camp, Christopher L; Heidenreich, Mark J; Dahm, Diane L; Bond, Jeffrey R; Collins, Mark S; Krych, Aaron J
2016-03-01
Tibial tubercle-trochlear groove (TT-TG) distance is a variable that helps guide surgical decision-making in patients with patellar instability. The purpose of this study was to compare the accuracy and reliability of an MRI TT-TG measuring technique using a simple external alignment method to a previously validated gold standard technique that requires advanced software read by radiologists. TT-TG was calculated by MRI on 59 knees with a clinical diagnosis of patellar instability in a blinded and randomized fashion by two musculoskeletal radiologists using advanced software and by two orthopaedists using the study technique, which utilizes measurements taken on a simple electronic imaging platform. Interrater reliability between the two radiologists and the two orthopaedists and intermethod reliability between the two techniques were calculated using intraclass correlation coefficients (ICC) and concordance correlation coefficients (CCC). ICC and CCC values greater than 0.75 were considered to represent excellent agreement. The mean TT-TG distance was 14.7 mm (standard deviation (SD) 4.87 mm) and 15.4 mm (SD 5.41) as measured by the radiologists and orthopaedists, respectively. Excellent interobserver agreement was noted between the radiologists (ICC 0.941; CCC 0.941), the orthopaedists (ICC 0.978; CCC 0.976), and the two techniques (ICC 0.941; CCC 0.933). The simple TT-TG distance measurement technique analysed in this study resulted in excellent agreement and reliability compared to the gold standard technique. This method can predictably be performed by orthopaedic surgeons without advanced radiologic software. Level of evidence: II.
USL/DBMS NASA/PC R and D project C programming standards
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Moreau, Dennis R.
1984-01-01
A set of programming standards intended to promote reliability, readability, and portability of C programs written for PC research and development projects is established. These standards must be adhered to except where reasons for deviation are clearly identified and approved by the PC team. Any approved deviation from these standards must also be clearly documented in the pertinent source code.
NASA Astrophysics Data System (ADS)
Lu, Xian; Chu, Xinzhao; Li, Haoyu; Chen, Cao; Smith, John A.; Vadas, Sharon L.
2017-09-01
We present the first statistical study of gravity waves with periods of 0.3-2.5 h that are persistent and dominant in the vertical winds measured with the University of Colorado STAR Na Doppler lidar in Boulder, CO (40.1°N, 105.2°W). The probability density functions of the wave amplitudes in temperature and vertical wind, ratios of these two amplitudes, phase differences between them, and vertical wavelengths are derived directly from the observations. The intrinsic period and horizontal wavelength of each wave are inferred from its vertical wavelength, amplitude ratio, and a designated eddy viscosity by applying the gravity wave polarization and dispersion relations. The amplitude ratios are positively correlated with the ground-based periods with a coefficient of 0.76. The phase differences between the vertical winds and temperatures (φ_W − φ_T) follow a Gaussian distribution with 84.2±26.7°, which has a much larger standard deviation than that predicted for non-dissipative waves (∼3.3°). The deviations of the observed phase differences from their predicted values for non-dissipative waves may indicate wave dissipation. The shorter-vertical-wavelength waves tend to have larger phase difference deviations, implying that the dissipative effects are more significant for shorter waves. The majority of these waves have vertical wavelengths ranging from 5 to 40 km with a mean and standard deviation of 18.6 and 7.2 km, respectively. For waves with similar periods, multiple peaks in the vertical wavelengths are identified frequently, and the ones peaking in the vertical wind are statistically longer than those peaking in the temperature. The horizontal wavelengths range mostly from 50 to 500 km with a mean and median of ∼180 and ∼125 km, respectively. Therefore, these waves are mesoscale waves with high-to-medium frequencies. Since they have recently become resolvable in high-resolution general circulation models (GCMs), this statistical study provides an important and timely reference for them.
Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei
2016-05-10
We present a new quantitative index, based on the standard deviation, to measure the homogeneity of spectral lines in a fiber amplifier system and to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework has been established to estimate the SBS threshold when input spectra with different homogeneities are set. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities can be obtained. The experimental results show that the SBS threshold is negatively correlated with the standard deviation of the modulated spectrum, which is in good agreement with the theoretical results. When the phase modulation voltage is confined to 10 V and the modulation frequency is set to 80 MHz, the standard deviation of the modulated spectrum equals 0.0051, the lowest value in our experiment; thus, at this setting, the highest SBS threshold is achieved. This standard deviation can be a good quantitative index for evaluating the power-scaling potential of a fiber amplifier system and a design guideline for better SBS suppression.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hursin, M.; Koeberl, O.; Perret, G.
2012-07-01
High Conversion Light Water Reactors (HCLWR) allow a better usage of fuel resources thanks to a higher breeding ratio than standard LWR. Their use together with the current fleet of LWR constitutes a fuel cycle thoroughly studied in Japan and the US today. However, one of the issues related to HCLWR is their void reactivity coefficient (VRC), which can be positive. Accurate predictions of the void reactivity coefficient in HCLWR conditions and their comparisons with representative experiments are therefore required. In this paper an intercomparison of modern codes and cross-section libraries is performed for a former Benchmark on Void Reactivity Effect in PWRs conducted by the OECD/NEA. It gives an overview of the k-inf values and their associated VRC obtained for infinite lattice calculations with UO2 and highly enriched MOX fuel cells. The codes MCNPX2.5, TRIPOLI4.4 and CASMO-5, in conjunction with the libraries ENDF/B-VI.8, ENDF/B-VII.0, JEF-2.2 and JEFF-3.1, are used. A non-negligible spread of results for voided conditions is found for the high-content MOX fuel. The spreads of eigenvalues for the moderated and voided UO2 fuel are about 200 pcm and 700 pcm, respectively. The standard deviation of the VRCs for the UO2 fuel is about 0.7%, while that for the MOX fuel is about 13%. This work shows that an appropriate treatment of the unresolved resonance energy range is an important issue for the accurate determination of the void reactivity effect for HCLWR. A comparison to experimental results is needed to resolve the presented discrepancies. (authors)
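The void reactivity effect compared across codes here is conventionally expressed in pcm from the nominal and voided multiplication factors. A minimal sketch of that conversion with illustrative k-inf values (not the benchmark's results):

```python
def void_reactivity_pcm(k_nominal, k_voided):
    """Void reactivity effect in pcm: delta_rho = (1/k_nom - 1/k_void) * 1e5."""
    return (1.0 / k_nominal - 1.0 / k_voided) * 1e5

# Illustrative k-inf values for a fuel cell in moderated and voided conditions
k_mod, k_void = 1.15, 1.12
print(f"VRC = {void_reactivity_pcm(k_mod, k_void):.0f} pcm")
```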
SU-E-T-677: Reproducibility of Production of Ionization Chambers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kukolowicz, P; Bulski, W; Ulkowski, P
Purpose: To compare the reproducibility of the production of several cylindrical and plane-parallel chambers popular in Poland in terms of the calibration coefficient. Methods: The investigation was performed for PTW30013 (20 chambers), 30001 (10 chambers) and FC65-G (17 chambers) cylindrical chambers and for PPC05 (14 chambers) and Roos 34001 (8 chambers) plane-parallel chambers. The calibration factors were measured at the same accredited secondary standard laboratory in terms of dose to water. All the measurements were carried out at the same laboratory, by the same staff, in accordance with the same IAEA recommendations. All the chambers were calibrated in a Co-60 beam. Reproducibility was described in terms of the mean value, its standard deviation and the ratio of the maximum and minimum values of the calibration factors for each set of chambers separately. The combined uncertainty (1 SD) of the calibration factor, calculated according to IAEA-TECDOC-1585, was 0.25%. Results: The calibration coefficients for the PTW30013, 30001 and FC65-G chambers were 5.36±0.03, 5.28±0.06 and 4.79±0.015 nC/Gy, respectively, and for the PPC05 and Roos chambers were 59±2 and 8.3±0.1 nC/Gy, respectively. The maximum/minimum ratios of the calibration factors for the PTW30013, 30001, FC65-G, PPC05 and Roos chambers were 1.03, 1.03, 1.01, 1.14 and 1.03, respectively. Conclusion: The production of all ion chambers was very reproducible except for the Markus-type PPC05, for which a maximum/minimum ratio of calibration coefficients of 1.14 was obtained.
Dosimetry for Small and Nonstandard Fields
NASA Astrophysics Data System (ADS)
Junell, Stephanie L.
The proposed small and non-standard field dosimetry protocol from the joint International Atomic Energy Agency (IAEA) and American Association of Physicists in Medicine working group introduces new reference field conditions for ionization chamber based reference dosimetry. Absorbed dose beam quality conversion factors (kQ factors) corresponding to this formalism were determined for three different models of ionization chambers: a Farmer-type ionization chamber, a thimble ionization chamber, and a small volume ionization chamber. Beam quality correction factor measurements were made in a specially developed cylindrical polymethyl methacrylate (PMMA) phantom and a water phantom using thermoluminescent dosimeters (TLDs) and alanine dosimeters to determine dose to water. The TLD system for absorbed dose to water determination in high energy photon and electron beams was fully characterized as part of this dissertation. The behavior of the beam quality correction factor was observed as it transfers the calibration coefficient from the University of Wisconsin Accredited Dosimetry Calibration Laboratory (UWADCL) 60Co reference beam to the small field calibration conditions of the small field formalism. TLD-determined beam quality correction factors for the calibration conditions investigated ranged from 0.97 to 1.30 and had associated standard deviations from 1% to 3%. The alanine-determined beam quality correction factors ranged from 0.996 to 1.293. Volume averaging effects were observed with the Farmer-type ionization chamber in the small static field conditions. The proposed protocol's new composite-field reference condition demonstrated its potential to reduce or remove ionization chamber volume dependencies, but the measured beam quality correction factors were not equal to the standard CoP's kQ, indicating a change in beam quality in the new composite-field reference condition relative to the standard broad beam reference conditions. The TLD- and alanine-determined beam quality correction factors in the composite-field reference conditions were approximately 3% greater than, and differed by more than one standard deviation from, the published TG-51 kQ values for all three chambers.
Nonlinear elastic effects on the energy flux deviation of ultrasonic waves in gr/ep composites
NASA Technical Reports Server (NTRS)
Prosser, William H.; Kriz, R. D.; Fitting, Dale W.
1992-01-01
The effects of nonlinear elasticity on energy flux deviation in unidirectional gr/ep composites are examined. The shift in the flux deviation is modeled using acoustoelasticity theory and the second- and third-order elastic stiffness coefficients for T300/5208 gr/ep. Two conditions of applied uniaxial stress are considered. In the first case, the direction of applied uniaxial stress was along the fiber axis (x3), while in the second case it was perpendicular to the fiber axis along the laminate stacking direction (x1). For both conditions, the change in the energy flux deviation angle from the condition of zero applied stress is computed over the range of propagation directions of 0 to 60 deg from the fiber axis at two-degree intervals. A positive flux deviation angle implies the energy deviates away from the fiber direction toward the x1 axis, while a negative deviation means that the energy deviates toward the fibers. Over this range of fiber orientation angles, the energy of the quasi-longitudinal and pure mode transverse waves deviates toward the fibers, while that of the quasi-transverse mode deviates away from the fibers.
Prediction of Soil pH Hyperspectral Spectrum in Guanzhong Area of Shaanxi Province Based on PLS
NASA Astrophysics Data System (ADS)
Liu, Jinbao; Zhang, Yang; Wang, Huanyuan; Cheng, Jie; Tong, Wei; Wei, Jing
2017-12-01
The soil pH of Fufeng County, Yangling County and Wugong County in Shaanxi Province was studied. The spectral reflectance was measured with an ASD FieldSpec HR portable field spectrometer, and its spectral characteristics were analyzed. The first derivative of the original spectral reflectance of the soil, the second derivative, the reciprocal logarithm, the first-order differential of the reciprocal logarithm and the second-order differential of the reciprocal logarithm were used to establish the soil pH spectral prediction models. The results showed that the correlation between the reflectance spectra after SNV pre-treatment and the soil pH was significantly improved. The optimal prediction model of soil pH established by the partial least squares method was based on the first-order differential of the reciprocal logarithm of the spectral reflectance. The number of principal component factors was 10, the calibration coefficient of determination Rc² = 0.9959, the root mean square error of calibration RMSEC = 0.0076, and the standard error of calibration SEC = 0.0077; the validation coefficient of determination Rv² = 0.9893, the root mean square error of prediction RMSEP = 0.0157, and the standard error of prediction SEP = 0.0160. The model was stable, with high fitting and prediction ability, and the soil pH could be predicted quickly.
NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability in the FDDA input. Variable U_NDG_OLD contains the standard deviation of wind speed (m/s); variable V_NDG_OLD contains the standard deviation of wind direction (deg). This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. Journal of Geophysical Research-Atmospheres, American Geophysical Union, Washington, DC, USA, 120(23): 12,259-12,280, (2015).
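A minimal sketch of reading the two described variables with the netCDF4 Python library; the local file name is hypothetical, and the script assumes the variable names given in the dataset description are present in the file.

```python
from netCDF4 import Dataset
import numpy as np

# Hypothetical local copy of the dataset; the file name is an assumption.
with Dataset("sref_stddev_fdda.nc") as nc:
    u_sd = np.array(nc.variables["U_NDG_OLD"][:])   # SD of wind speed (m/s)
    v_sd = np.array(nc.variables["V_NDG_OLD"][:])   # SD of wind direction (deg)

print("wind-speed SD:     mean =", float(u_sd.mean()))
print("wind-direction SD: mean =", float(v_sd.mean()))
```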
Lim, Wei-Wen; Baumert, Mathias; Neo, Melissa; Kuklik, Pawel; Ganesan, Anand N; Lau, Dennis H; Tsoutsman, Tatiana; Semsarian, Christopher; Sanders, Prashanthan; Saint, David A
2016-01-01
Hypertrophic cardiomyopathy (HCM) is a common heritable cardiac disorder with diverse clinical outcomes including sudden death, heart failure, and stroke. Depressed heart rate variability (HRV), a measure of cardiac autonomic regulation, has been shown to predict mortality in patients with cardiovascular disease. Cardiac autonomic remodelling in animal models of HCM are not well characterised. This study analysed Gly203Ser cardiac troponin-I transgenic (TG) male mice previously demonstrated to develop hallmarks of HCM by age 21 weeks. 33 mice aged 30 and 50 weeks underwent continuous electrocardiogram (ECG) recording for 30 min under anaesthesia. TG mice demonstrated prolonged P-wave duration (P < 0.001) and PR intervals (P < 0.001) compared to controls. Additionally, TG mice demonstrated depressed standard deviation of RR intervals (SDRR; P < 0.01), coefficient of variation of RR intervals (CVRR; P < 0.001) and standard deviation of heart rate (SDHR; P < 0.001) compared to controls. Additionally, total power was significantly reduced in TG mice (P < 0.05). No significant age-related difference in either strain was observed in ECG or HRV parameters. Mice with HCM developed slowed atrial and atrioventricular conduction and depressed HRV. These changes were conserved with increasing age. This finding may be indicative of atrial and ventricular hypertrophy or dysfunction, and perhaps an indication of worse clinical outcome in heart failure progression in HCM patients. © 2015 Wiley Publishing Asia Pty Ltd.
Batterham, Philip J; Bunce, David; Mackinnon, Andrew J; Christensen, Helen
2014-01-01
Very few studies have examined the association between intra-individual reaction time variability and subsequent mortality. Furthermore, the ability of simple measures of variability to predict mortality has not been compared with that of more complex measures. In a prospective cohort study, 896 community-based Australian adults aged 70+ were interviewed up to four times from 1990 to 2002, with vital status assessed until June 2007. From this cohort, 770-790 participants were included in Cox proportional hazards regression models of survival. Vital status and time in study were used to conduct survival analyses. The mean reaction time and three measures of intra-individual reaction time variability were calculated separately across 20 trials of simple and choice reaction time tasks. Models were adjusted for a range of demographic, physical health and mental health measures. Greater intra-individual simple reaction time variability, as assessed by the raw standard deviation (raw SD), coefficient of variation (CV) or the intra-individual standard deviation (ISD), was strongly associated with an increased hazard of all-cause mortality in adjusted Cox regression models. The mean reaction time had no significant association with mortality. Intra-individual variability in simple reaction time appears to have a robust association with mortality over 17 years. Health professionals such as neuropsychologists may benefit in their detection of neuropathology by supplementing neuropsychiatric testing with the straightforward process of testing simple reaction time and calculating raw SD or CV.
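The two simple variability measures highlighted here, the raw standard deviation and the coefficient of variation, are computed within each person across trials; the ISD used in the study additionally adjusts for systematic trial and practice effects, which is not reproduced in this minimal sketch with hypothetical reaction times:

```python
import numpy as np

def rt_variability(reaction_times_ms):
    """Raw SD and coefficient of variation of one person's reaction times across trials."""
    rt = np.asarray(reaction_times_ms, float)
    raw_sd = rt.std(ddof=1)
    cv = raw_sd / rt.mean()
    return raw_sd, cv

# Hypothetical 20-trial simple reaction time series (ms) for one participant
trials = [312, 298, 345, 301, 290, 330, 315, 370, 305, 299,
          320, 288, 340, 310, 295, 360, 308, 302, 325, 317]
raw_sd, cv = rt_variability(trials)
print(f"mean RT = {np.mean(trials):.0f} ms, raw SD = {raw_sd:.1f} ms, CV = {cv:.3f}")
```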
Methodology for the development of normative data for Spanish-speaking pediatric populations.
Rivera, D; Arango-Lasprilla, J C
2017-01-01
To describe the methodology utilized to calculate reliability and generate norms for 10 neuropsychological tests for children in Spanish-speaking countries. The study sample consisted of 4,373 healthy children from nine countries in Latin America (Chile, Cuba, Ecuador, Guatemala, Honduras, Mexico, Paraguay, Peru, and Puerto Rico) and Spain. Inclusion criteria for all countries were an age of 6 to 17 years, an Intelligence Quotient of ≥80 on the Test of Non-Verbal Intelligence (TONI-2), and a score of <19 on the Children's Depression Inventory. Participants completed 10 neuropsychological tests. Reliability and norms were calculated for all tests. Test-retest analysis showed excellent or good reliability on all tests (r's>0.55; p's<0.001) except M-WCST perseverative errors, whose coefficient magnitude was fair. All scores were normed using multiple linear regressions and the standard deviations of residual values. Age, age², sex, and mean level of parental education (MLPE) were included as predictors in the models for each country. Non-significant variables (p > 0.05) were removed and the analyses were run again. This is the largest normative study of Spanish-speaking children and adolescents in the world. For the generation of normative data, the method based on linear regression models and the standard deviation of residual values was used. This method allows determination of the specific variables that predict test scores, helps identify and control for collinearity of predictive variables, and generates continuous and more reliable norms than those of traditional methods.
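Regression-based norms of this kind convert a raw score into a standardized score by subtracting the value predicted from the demographic model and dividing by the standard deviation of the model's residuals. A minimal sketch with hypothetical coefficients and residual SD (not the published norms):

```python
# Hypothetical normative model: predicted score = b0 + b1*age + b2*age^2 + b3*sex + b4*MLPE
coef = {"intercept": 10.0, "age": 1.2, "age2": -0.02, "sex": 0.5, "mlpe": 0.8}
sd_residuals = 4.0   # standard deviation of the residuals of the normative model (assumed)

def normative_z(raw_score, age, sex, mlpe):
    """Standardized score: (observed - predicted) / SD of residuals."""
    predicted = (coef["intercept"] + coef["age"] * age + coef["age2"] * age ** 2
                 + coef["sex"] * sex + coef["mlpe"] * mlpe)
    return (raw_score - predicted) / sd_residuals

# A hypothetical 10-year-old child (sex coded 1) with mean parental education level 3
print(f"z = {normative_z(raw_score=20, age=10, sex=1, mlpe=3):.2f}")
```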
Lin, Yan; Xu, Guanhong; Wei, Fangdi; Zhang, Aixia; Yang, Jing; Hu, Qin
2016-03-20
In the present work, a rapid and simple method to detect carcinoembryonic antigen (CEA) was developed using surface-enhanced Raman spectroscopy (SERS) coupled with antibody-modified Au and γ-Fe2O3@Au nanoparticles. First, Au@Raman reporter and γ-Fe2O3@Au were prepared and then modified with CEA antibody. When CEA was present, the immuno-Au@Raman reporter and immuno-γ-Fe2O3@Au formed a complex through antibody-antigen-antibody interaction. Selective and sensitive detection of CEA could be achieved by SERS after magnetic separation. Under the optimal conditions, a linear relationship was observed between the Raman peak intensity and the concentration of CEA in the range of 1-50 ng mL(-1), with an excellent correlation coefficient of 0.9942. The limit of detection, based on a signal-to-noise ratio of 2, was 0.1 ng/mL. The recoveries of CEA standard solution spiked into human serum samples were in the range of 88.5-105.9%, with relative standard deviations less than 17.4%. The method was applied to the detection of CEA in human serum, and the relative deviations between the results of the present method and electrochemiluminescence immunoassay were all less than 16.6%. The proposed method is practical and has potential for clinical testing of CEA. Copyright © 2016 Elsevier B.V. All rights reserved.
Automated Ecological Assessment of Physical Activity: Advancing Direct Observation.
Carlson, Jordan A; Liu, Bo; Sallis, James F; Kerr, Jacqueline; Hipp, J Aaron; Staggs, Vincent S; Papa, Amy; Dean, Kelsey; Vasconcelos, Nuno M
2017-12-01
Technological advances provide opportunities for automating direct observations of physical activity, which allow for continuous monitoring and feedback. This pilot study evaluated the initial validity of computer vision algorithms for ecological assessment of physical activity. The sample comprised 6630 seconds per camera (three cameras in total) of video capturing up to nine participants engaged in sitting, standing, walking, and jogging in an open outdoor space while wearing accelerometers. Computer vision algorithms were developed to assess the number and proportion of people in sedentary, light, moderate, and vigorous activity, and group-based metabolic equivalents of tasks (MET)-minutes. Means and standard deviations (SD) of bias/difference values, and intraclass correlation coefficients (ICC) assessed the criterion validity compared to accelerometry separately for each camera. The number and proportion of participants sedentary and in moderate-to-vigorous physical activity (MVPA) had small biases (within 20% of the criterion mean) and the ICCs were excellent (0.82-0.98). Total MET-minutes were slightly underestimated by 9.3-17.1% and the ICCs were good (0.68-0.79). The standard deviations of the bias estimates were moderate-to-large relative to the means. The computer vision algorithms appeared to have acceptable sample-level validity (i.e., across a sample of time intervals) and are promising for automated ecological assessment of activity in open outdoor settings, but further development and testing is needed before such tools can be used in a diverse range of settings.
Singh, R P; Sabarinath, S; Gautam, N; Gupta, R C; Singh, S K
2009-07-15
The present manuscript describes development and validation of LC-MS/MS assay for the simultaneous quantitation of 97/78 and its active in-vivo metabolite 97/63 in monkey plasma using alpha-arteether as internal standard (IS). The method involves a single step protein precipitation using acetonitrile as extraction method. The analytes were separated on a Columbus C(18) (50 mm x 2 mm i.d., 5 microm particle size) column by isocratic elution with acetonitrile:ammonium acetate buffer (pH 4, 10 mM) (80:20 v/v) at a flow rate of 0.45 mL/min, and analyzed by mass spectrometry in multiple reaction-monitoring (MRM) positive ion mode. The chromatographic run time was 4.0 min and the weighted (1/x(2)) calibration curves were linear over a range of 1.56-200 ng/mL. The method was linear for both the analytes with correlation coefficients >0.995. The intra-day and inter-day accuracy (% bias) and precisions (% RSD) of the assay were less than 6.27%. Both analytes were stable after three freeze-thaw cycles (% deviation <8.2) and also for 30 days in plasma (% deviation <6.7). The absolute recoveries of 97/78, 97/63 and internal standard (IS), from spiked plasma samples were >90%. The validated assay method, described here, was successfully applied to the pharmacokinetic study of 97/78 and its active in-vivo metabolite 97/63 in Rhesus monkeys.
Singh, Dhruv K; Mishra, Shraddha
2009-06-30
Ion-imprinted polymers (IIPs) were prepared for uranyl ion (imprint ion) by formation of binary (salicylaldoxime (SALO) or 4-vinylpyridine (VP)) or ternary (salicylaldoxime and 4-vinylpyridine) complex in 2-methoxy ethanol (porogen) following copolymerization with methacrylic acid (MAA) as a functional monomer and ethylene glycol dimethacrylate (EGDMA) as crosslinking monomer using 2,2'-azobisisobutyronitrile as initiator. Control polymers (CPs) were also prepared under identical experimental conditions without using imprint ion. The above synthesized polymers were characterized by surface area measurement, microanalysis and FT-IR analysis techniques. The imprinted polymer formed with ternary complex of UO(2)(2+)-SALO-VP (1:2:2, IIP3) showed quantitative enrichment of uranyl ion from dilute aqueous solution and hence was chosen for detailed studies. The optimal pH for quantitative enrichment is 3.5-6.5. The adsorbed UO(2)(2+) was completely eluted with 10 mL of 1.0 M HCl. The retention capacity of IIP3 was found to be 0.559 mmol g(-1). Further, the distribution ratio and selectivity coefficients of uranium and other selected inorganic ions were also evaluated. Five replicate determinations of 25 microg L(-1) of uranium(VI) gave a mean absorbance of 0.032 with a relative standard deviation of 2.20%. The detection limit corresponding to three times the standard deviation of the blank was found to be 5 microg L(-1). IIP3 was tested for preconcentration of uranium(VI) from ground, river and sea water samples.
Rathee, S; Tu, D; Monajemi, T T; Rickey, D W; Fallone, B G
2006-04-01
We describe the components of a bench-top megavoltage computed tomography (MVCT) scanner that uses an 80-element detector array consisting of CdWO4 scintillators coupled to photodiodes. Each CdWO4 crystal is 2.75 x 8 x 10 mm3. The detailed design of the detector array, timing control, and multiplexer are presented. The detectors show a linear response to dose (dose rate was varied by changing the source to detector distance) with a correlation coefficient (R2) nearly unity with the standard deviation of signal at each dose being less than 0.25%. The attenuation of a 6 MV beam by solid water measured by this detector array indicates a small, yet significant spectral hardening that needs to be corrected before image reconstruction. The presampled modulation transfer function is strongly affected by the detector's large pitch and a large improvement can be obtained by reducing the detector pitch. The measured detective quantum efficiency at zero spatial frequency is 18.8% for 6 MV photons which will reduce the dose to the patient in MVCT applications. The detector shows a less than a 2% reduction in response for a dose of 24.5 Gy accumulated in 2 h; however, the lost response is recovered on the following day. A complete recovery can be assumed within the experimental uncertainty (standard deviation <0.5%); however, any smaller permanent damage could not be assessed.
75 FR 67093 - Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-01
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2010-P-0517] Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... from the requirements of the standards of identity issued under section 401 of the Federal Food, Drug...
78 FR 2273 - Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-10
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2012-P-1189] Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... interstate shipment of experimental packs of food varying from the requirements of standards of identity...
Upgraded FAA Airfield Capacity Model. Volume 2. Technical Description of Revisions
1981-02-01
[Extraction fragment from the report's figure and glossary pages: Figure 3-1, time axis diagram of single runway operations; SIGMAR — the standard deviation of the interarrival time; SIGMAR — the standard deviation of the arrival runway occupancy time; SINGLE — program subroutine for …]
Existence of consistent hypo- and hyperresponders to dietary cholesterol in man.
Katan, M B; Beynen, A C; de Vries, J H; Nobels, A
1986-02-01
Hyper- and hyporesponsiveness of serum cholesterol to dietary cholesterol is an established concept in animals but not in man. The authors studied the stability of the individual response of serum cholesterol to dietary cholesterol in three controlled experiments in 1982. The subjects were volunteers from the general population living in or near Wageningen, the Netherlands. Each experiment had a low-cholesterol baseline period (121, 106, and 129 mg/day in experiments 1, 2, and 3, respectively) and a high-cholesterol test period (625, 673, and 989 mg/day). Duplicate portion analysis showed that dietary cholesterol was the only variable. The 94 healthy men and women who completed experiment 1 showed an increase (mean +/- standard deviation (SD)) in serum cholesterol of 0.50 +/- 0.39 mmol/liter (19 +/- 15 mg/dl). Seventeen putative hyperresponders, defined by their response in experiment 1, were retested in experiments 2 and 3; they showed responses of 0.28 +/- 0.38 mmol/liter (11 +/- 15 mg/dl) and 0.82 +/- 0.35 mmol/liter (32 +/- 14 mg/dl), respectively. Fifteen hyporesponders, selected in experiment 1, showed responses in experiments 2 and 3 of 0.06 +/- 0.35 mmol/liter (2 +/- 14 mg/dl) and 0.47 +/- 0.26 mmol/liter (18 +/- 10 mg/dl), significantly lower than the corresponding values for hyperresponders. The standardized regression coefficient for individual responses in experiment 2 on those in experiment 1 was beta = 0.34 (p = 0.03, n = 32); the corresponding regression coefficient for experiment 3 and experiment 1 was 0.53 (p less than 0.01). After correction for intraindividual fluctuations the true responsiveness distribution was found to have a between-subject standard deviation of about 0.29 mmol/liter (11 mg/dl). This implies that if the mean response to a certain dietary cholesterol load amounts to, e.g., 0.58 mmol/liter (22 mg/dl), then the 16% of subjects least susceptible to diet will experience a rise of only 0.29 mmol/liter (11 mg/dl) or less, while in the 16% of subjects most susceptible to diet, serum cholesterol will rise by 0.87 mmol/liter (34 mg/dl) or more. The authors conclude that modest differences in responsiveness of serum cholesterol to dietary cholesterol do exist in man, and that the wide scatter of responses observed in single experiments is largely due to chance fluctuations.
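The "true" between-subject standard deviation of responsiveness reported here comes from subtracting the intra-individual (occasion-to-occasion) variance from the observed between-subject variance. A minimal sketch of that decomposition with illustrative values, not the study's raw data:

```python
import math

def true_between_subject_sd(observed_sd, within_subject_sd):
    """Between-subject SD of the true response after removing intra-individual
    (measurement/occasion) variance: sqrt(SD_obs^2 - SD_within^2)."""
    return math.sqrt(max(observed_sd ** 2 - within_subject_sd ** 2, 0.0))

# Illustrative values (mmol/L), chosen for the sketch rather than taken from the paper
observed_sd = 0.39   # SD of individual responses observed in a single experiment
within_sd = 0.26     # assumed intra-individual fluctuation between experiments
print(f"true between-subject SD ~ {true_between_subject_sd(observed_sd, within_sd):.2f} mmol/L")
```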
SU-E-J-161: Inverse Problems for Optical Parameters in Laser Induced Thermal Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahrenholtz, SJ; Stafford, RJ; Fuentes, DT
Purpose: Magnetic resonance-guided laser-induced thermal therapy (MRgLITT) is investigated as a neurosurgical intervention for oncological applications throughout the body in active post-market studies. Real-time MR temperature imaging is used to monitor ablative thermal delivery in the clinic. Additionally, brain MRgLITT could improve through effective planning of laser fiber placement. Mathematical bioheat models have been extensively investigated but require reliable patient-specific physical parameter data, e.g. optical parameters. This abstract applies an inverse problem algorithm to characterize optical parameter data obtained from previous MRgLITT interventions. Methods: The implemented inverse problem has three primary components: a parameter-space search algorithm, a physics model, and training data. First, the parameter-space search algorithm uses a gradient-based quasi-Newton method to optimize the effective optical attenuation coefficient, μ_eff. A parameter reduction reduces the amount of optical parameter-space the algorithm must search. Second, the physics model is a simplified bioheat model for homogeneous tissue in which closed-form Green's functions represent the exact solution. Third, the training data were temperature imaging data from 23 MRgLITT oncological brain ablations (980 nm wavelength) from seven different patients. Results: To three significant figures, the descriptive statistics for μ_eff were 1470 m⁻¹ mean, 1360 m⁻¹ median, 369 m⁻¹ standard deviation, 933 m⁻¹ minimum and 2260 m⁻¹ maximum. The standard deviation normalized by the mean was 25.0%. The inverse problem took <30 minutes to optimize all 23 datasets. Conclusion: As expected, the inferred average is biased by the underlying physics model. However, the standard deviation normalized by the mean is smaller than literature values and indicates increased precision in the characterization of the optical parameters needed to plan MRgLITT procedures. This investigation demonstrates the potential for the optimization and validation of more sophisticated bioheat models that incorporate the uncertainty of the data into the predictions, e.g. stochastic finite element methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheong, K; Lee, M; Kang, S
2014-06-01
Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion compensation treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a power-of-cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: the sample standard deviation of the respiration period, the sample standard deviation of the amplitude, and the results of a simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as the Euclidean norm of a new variable derived by principal component analysis (PCA) from the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ=ln(1+(1/δ))/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; patients with ρ<0.3 showed worse regularity than the others, whereas ρ>0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Conclusions: Respiration regularity could be objectively determined using the respiration regularity index ρ. Such single-index testing of respiration regularity can facilitate determination of RGRT availability in clinical settings, especially for free-breathing cases. This work was supported by a Korea Science and Engineering Foundation (KOSEF) grant funded by the Korean Ministry of Science, ICT and Future Planning (No. 2013043498)
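Given the published formula ρ = ln(1 + 1/δ)/2, the index can be sketched directly; note that the paper derives δ from a PCA-transformed variable, whereas this simplified stand-in combines the four (assumed already standardized) fluctuation parameters directly, with hypothetical input values:

```python
import numpy as np

def regularity_index(period_sd, amplitude_sd, drift_slope, drift_residual_sd):
    """Respiration regularity index rho = ln(1 + 1/delta) / 2, where delta is the
    Euclidean norm of the four fluctuation parameters.  The paper derives delta
    from a PCA-transformed variable; here the raw (standardized) parameters are
    combined directly as a simplified stand-in."""
    delta = np.linalg.norm([period_sd, amplitude_sd, drift_slope, drift_residual_sd])
    return np.log(1.0 + 1.0 / delta) / 2.0

# Hypothetical fluctuation parameters for a regular and an irregular breather
print(f"regular:   rho = {regularity_index(0.12, 0.18, 0.05, 0.12):.2f}")
print(f"irregular: rho = {regularity_index(0.90, 1.30, 0.30, 0.80):.2f}")
```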
A Taxonomy of Delivery and Documentation Deviations During Delivery of High-Fidelity Simulations.
McIvor, William R; Banerjee, Arna; Boulet, John R; Bekhuis, Tanja; Tseytlin, Eugene; Torsher, Laurence; DeMaria, Samuel; Rask, John P; Shotwell, Matthew S; Burden, Amanda; Cooper, Jeffrey B; Gaba, David M; Levine, Adam; Park, Christine; Sinz, Elizabeth; Steadman, Randolph H; Weinger, Matthew B
2017-02-01
We developed a taxonomy of simulation delivery and documentation deviations noted during a multicenter, high-fidelity simulation trial that was conducted to assess practicing physicians' performance. Eight simulation centers sought to implement standardized scenarios over 2 years. Rules, guidelines, and detailed scenario scripts were established to facilitate reproducible scenario delivery; however, pilot trials revealed deviations from those rubrics. A taxonomy with hierarchically arranged terms that define a lack of standardization of simulation scenario delivery was then created to aid educators and researchers in assessing and describing their ability to reproducibly conduct simulations. Thirty-six types of delivery or documentation deviations were identified from the scenario scripts and study rules. Using a Delphi technique and open card sorting, simulation experts formulated a taxonomy of high-fidelity simulation execution and documentation deviations. The taxonomy was iteratively refined and then tested by 2 investigators not involved with its development. The taxonomy has 2 main classes, simulation center deviation and participant deviation, which are further subdivided into as many as 6 subclasses. Inter-rater classification agreement using the taxonomy was 74% or greater for each of the 7 levels of its hierarchy. Cohen kappa calculations confirmed substantial agreement beyond that expected by chance. All deviations were classified within the taxonomy. This is a useful taxonomy that standardizes terms for simulation delivery and documentation deviations, facilitates quality assurance in scenario delivery, and enables quantification of the impact of deviations upon simulation-based performance assessment.
Determination of total phenolic compounds in compost by infrared spectroscopy.
Cascant, M M; Sisouane, M; Tahiri, S; Krati, M El; Cervera, M L; Garrigues, S; de la Guardia, M
2016-06-01
Middle and near infrared (MIR and NIR) spectroscopy were applied to determine the total phenolic compounds (TPC) content in compost samples based on models built using partial least squares (PLS) regression. Multiplicative scatter correction, standard normal variate and first derivative were employed as spectral pretreatments, and the number of latent variables was optimized by leave-one-out cross-validation. The performance of the PLS-ATR-MIR and PLS-DR-NIR models was evaluated according to the root mean square errors of cross-validation and prediction (RMSECV and RMSEP), the coefficient of determination for prediction (Rpred²) and the residual predictive deviation (RPD), with RPD values of 5.83 and 8.26 obtained for MIR and NIR, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
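A minimal sketch of the general workflow described here, standard normal variate pretreatment followed by PLS regression and an RPD computed as the ratio of the reference-value SD to RMSEP, using synthetic stand-in data and scikit-learn rather than the authors' software:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    x = np.asarray(spectra, float)
    return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

rng = np.random.default_rng(0)
# Synthetic stand-in data: 40 "spectra" of 200 wavelengths and a reference TPC value
X = rng.normal(size=(40, 200))
y = X[:, 50] * 2.0 + X[:, 120] + rng.normal(scale=0.2, size=40)

X_train, X_test = snv(X[:30]), snv(X[30:])
y_train, y_test = y[:30], y[30:]

pls = PLSRegression(n_components=5).fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

rmsep = np.sqrt(np.mean((y_test - y_pred) ** 2))
rpd = y_test.std(ddof=1) / rmsep      # residual predictive deviation
print(f"RMSEP = {rmsep:.3f}, RPD = {rpd:.2f}")
```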
He, Lei; Fan, Tao; Hu, Jianguo; Zhang, Lijin
2015-01-01
In this study, a kind of green solvent named polyethylene glycol (PEG) was developed for the ultrasound-assisted extraction (UAE) of magnolol and honokiol from Cortex Magnoliae Officinalis. The effects of PEG molecular weight, PEG concentration, sample size, pH, ultrasonic power and extraction time on the extraction of magnolol and honokiol were investigated to optimise the extraction conditions. Under the optimal extraction conditions, the PEG-based UAE supplied higher extraction efficiencies of magnolol and honokiol than the ethanol-based UAE and traditional ethanol-reflux extraction. Furthermore, the correlation coefficient (R(2)), repeatability (relative standard deviation, n = 6) and recovery confirmed the validation of the proposed extraction method, which were 0.9993-0.9996, 3.1-4.6% and 92.3-106.8%, respectively.
UV Spectrophotometric Method for Estimation of Polypeptide-K in Bulk and Tablet Dosage Forms
NASA Astrophysics Data System (ADS)
Kaur, P.; Singh, S. Kumar; Gulati, M.; Vaidya, Y.
2016-01-01
An analytical method for estimation of polypeptide-k using UV spectrophotometry has been developed and validated for bulk as well as tablet dosage form. The developed method was validated for linearity, precision, accuracy, specificity, robustness, detection, and quantitation limits. The method has shown good linearity over the range from 100.0 to 300.0 μg/ml with a correlation coefficient of 0.9943. The percentage recovery of 99.88% showed that the method was highly accurate. The precision demonstrated relative standard deviation of less than 2.0%. The LOD and LOQ of the method were found to be 4.4 and 13.33, respectively. The study established that the proposed method is reliable, specific, reproducible, and cost-effective for the determination of polypeptide-k.
Determination of teicoplanin concentrations in serum by high-pressure liquid chromatography.
Joos, B; Lüthy, R
1987-01-01
An isocratic reversed-phase high-pressure liquid chromatographic method for the determination of six components of the teicoplanin complex in biological fluid was developed. By using fluorescence detection after precolumn derivatization with fluorescamine, the assay is specific and highly sensitive, with reproducibility studies yielding coefficients of variation ranging from 1.5 to 8.5% (at 5 to 80 micrograms/ml). Response was linear from 2.5 to 80 micrograms/ml (r = 0.999); the recovery from spiked human serum was 76%. An external quality control was performed to compare this high-pressure liquid chromatographic method (H) with a standard microbiological assay (M); no significant deviation from slope = 1 and intercept = 0 was found by regression analysis (H = 1.03M - 0.45; n = 15). PMID:2957953
NASA Astrophysics Data System (ADS)
Shimizu, Takashi; Kuwahara, Masashi
2014-05-01
We studied the optical properties of In-Ga-Zn-O (IGZO) films and found a very low extinction coefficient of the films. For the potential application of the films, we propose an optical waveguide device made of IGZO. We have succeeded in producing a submicron-scale rectangular-bar structure of IGZO using our newly developed dry etching process. Simulation results showed an ˜5 dB/cm propagation loss of a 400 × 400 nm2 square optical waveguide device of amorphous IGZO at a wavelength of 1.55 µm, when a standard deviation of ˜4 nm and a correlation length of ˜100 nm of sidewall roughness were achieved.
Bayram, Ezgi; Akyilmaz, Erol
2014-12-01
In the biosensor construction, 3-mercaptopropionic acid (3-MPA) and 6-aminocaproic acid (6-ACA) were used to form a self-assembled monolayer (SAM) on a gold disc electrode, and pyruvate oxidase was immobilized on the modified electrode surface using glutaraldehyde. The biosensor response is linearly related to pyruvate concentration over 2.5-50 μM, the detection limit is 1.87 μM, and the response time of the biosensor is 6 s for differential pulse voltammograms. Repeatability studies (n = 6) for 30.0 μM pyruvate gave an average value, standard deviation (SD) and coefficient of variation (CV%) of 31.02 μM, ±0.1914 μM and 0.62%, respectively.
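For reference, the three repeatability statistics quoted above follow directly from the replicate measurements; the sketch below uses hypothetical replicate values, not the paper's data.

```python
# Sketch: mean, sample standard deviation, and coefficient of variation for a
# repeatability series; the replicate values below are hypothetical.
import numpy as np

replicates = np.array([30.9, 31.2, 31.0, 30.8, 31.3, 30.9])  # hypothetical, in uM

mean = replicates.mean()
sd = replicates.std(ddof=1)          # sample standard deviation
cv_percent = 100.0 * sd / mean       # CV% = SD / mean * 100

print(f"mean = {mean:.2f} uM, SD = {sd:.3f} uM, CV = {cv_percent:.2f}%")
```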
Shaffer, F. Butler
1976-01-01
Statistics on streamflow for selected periods of time are presented for 28 gaging sites in the Nebraska part of the North and South Platte River basins. Monthly mean discharges, monthly means in percent of annual runoff, standard deviations, coefficients of variation, and monthly extremes are given. Also tabulated are probabilities of high discharges for 1 day and for 3, 7, 15, 30, and 60 consecutive days and of low discharges for 1 day and for 3, 7, 14, 30, and 60 consecutive days. All statistics are based on records that are representative of 1973 conditions of streamflow. Brief historical data are given for 27 of the principal irrigation canals diverting from the North and South Platte Rivers. (Woodard-USGS)
Zhang, Xiuli; Martens, Dieter; Krämer, Petra M; Kettrup, Antonius A; Liang, Xinmiao
2006-01-13
An immunosorbent was fabricated by encapsulation of monoclonal anti-isoproturon antibodies in sol-gel matrix. The immunosorbent-based loading, rinsing and eluting processes were optimized. Based on these optimizations, the sol-gel immunosorbent (SG-IS) selectively extracted isoproturon from an artificial mixture of 68 pesticides. In addition to this high selectivity, the SG-IS proved to be reusable. The SG-IS was combined with liquid chromatography-tandem mass spectrometry (LC-MS-MS) to determine isoproturon in surface water, and the linear range was up to 2.2 microg/l with correlation coefficient higher than 0.99 and relative standard deviation (RSD) lower than 5% (n=8). The limit of quantitation (LOQ) for 25-ml surface water sample was 5 ng/l.
Bauer, Katharina Christin; Göbel, Mathias; Schwab, Marie-Luise; Schermeyer, Marie-Therese; Hubbuch, Jürgen
2016-09-10
The colloidal stability of a protein solution during downstream processing, formulation, and storage is a key issue for the biopharmaceutical production process. Thus, knowledge about colloidal solution characteristics, such as the tendency to form aggregates or high viscosity, under various processing conditions is of interest. This work correlates changes in the apparent diffusion coefficient, as a parameter of protein interactions, with observed protein aggregation and the dynamic viscosity of the respective protein samples. For this purpose, the diffusion coefficient, the protein phase behavior, and the dynamic viscosity in various systems containing the model proteins α-lactalbumin, lysozyme, and glucose oxidase were studied. Each of these experiments revealed a wide range of variations in protein interactions depending on protein type, protein concentration, pH, and NaCl concentration. All these variations were mirrored by changes in the apparent diffusion coefficient in the respective samples. Whereas stable samples with relatively low viscosity showed an almost linear concentration dependence, deviation from this concentration-dependent linearity indicated both an increase in sample viscosity and an increased probability of protein aggregation. This deviation of the apparent diffusion coefficient from concentration-dependent linearity was independent of protein type and solution properties in this study. Thus, this single parameter shows potential to act as a prognostic tool for the colloidal stability of protein solutions. Copyright © 2016 Elsevier B.V. All rights reserved.
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
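To make the ABC idea concrete, here is a minimal rejection-ABC sketch for one of the summary-statistic sets discussed above (median, minimum, maximum, and sample size). This illustrates the general technique under a normal data model with flat priors; it is not the authors' implementation, and all numbers in the example are made up.

```python
# Minimal rejection-ABC sketch: recover a study mean and SD from a reported
# median, minimum, maximum, and sample size. Priors, data model, distance, and
# acceptance rule are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def abc_mean_sd(median, minimum, maximum, n, n_draws=50_000, keep=500):
    obs = np.array([median, minimum, maximum])
    scale = maximum - minimum
    draws = []
    for _ in range(n_draws):
        mu = rng.uniform(minimum, maximum)        # flat prior on the mean
        sigma = rng.uniform(1e-6, scale)          # flat prior on the SD
        sample = rng.normal(mu, sigma, size=n)    # assumed normal data model
        sim = np.array([np.median(sample), sample.min(), sample.max()])
        dist = np.sqrt(np.mean(((sim - obs) / scale) ** 2))
        draws.append((dist, mu, sigma))
    draws.sort(key=lambda t: t[0])                # keep the closest simulations
    kept = np.array([(mu, sigma) for _, mu, sigma in draws[:keep]])
    return kept[:, 0].mean(), kept[:, 1].mean()   # posterior means of mu, sigma

# Example with made-up summary statistics: median 12, range 4-25, n = 40.
print(abc_mean_sd(median=12.0, minimum=4.0, maximum=25.0, n=40))
```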
Barbado, David; Moreside, Janice; Vera-Garcia, Francisco J
2017-03-01
Although unstable seat methodology has been used to assess trunk postural control, the reliability of the variables that characterize it remains unclear. To analyze reliability and learning effect of center of pressure (COP) and kinematic parameters that characterize trunk postural control performance in unstable seating. The relationships between kinematic and COP parameters also were explored. Test-retest reliability design. Biomechanics laboratory setting. Twenty-three healthy male subjects. Participants volunteered to perform 3 sessions at 1-week intervals, each consisting of five 70-second balancing trials. A force platform and a motion capture system were used to measure COP and pelvis, thorax, and spine displacements. Reliability was assessed through standard error of measurement (SEM) and intraclass correlation coefficients (ICC 2,1 ) using 3 methods: (1) comparing the last trial score of each day; (2) comparing the best trial score of each day; and (3) calculating the average of the three last trial scores of each day. Standard deviation and mean velocity were calculated to assess balance performance. Although analyses of variance showed some differences in balance performance between days, these differences were not significant between days 2 and 3. Best result and average methods showed the greatest reliability. Mean velocity of the COP showed high reliability (0.71 < ICC < 0.86; 10.3 < SEM < 13.0), whereas standard deviation only showed a low to moderate reliability (0.37 < ICC < 0.61; 14.5 < SEM < 23.0). Regarding the kinematic variables, only pelvis displacement mean velocity achieved a high reliability using the average method (0.62 < ICC < 0.83; 18.8 < SEM < 23.1). Correlations between COP and kinematics were high only for mean velocity (0.45
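For readers unfamiliar with the two reliability statistics used above, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single measure) from a subjects-by-sessions score matrix and derives SEM as SD·sqrt(1 − ICC), one common definition; the data matrix is hypothetical, not the study's.

```python
# Sketch: ICC(2,1) and SEM from a matrix of scores (rows = subjects, columns =
# test sessions); the example matrix below is hypothetical.
import numpy as np

def icc_2_1_and_sem(Y):
    Y = np.asarray(Y, float)
    n, k = Y.shape
    grand = Y.mean()
    ss_total = ((Y - grand) ** 2).sum()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between sessions
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    icc = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
    sem = Y.std(ddof=1) * np.sqrt(1.0 - icc)              # one common SEM definition
    return icc, sem

scores = np.array([[12.1, 11.8, 11.5],    # hypothetical balance scores, 3 sessions
                   [15.3, 14.9, 15.1],
                   [10.2, 10.8, 10.4],
                   [13.7, 13.2, 13.5]])
print(icc_2_1_and_sem(scores))
```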
Berenguer, Roberto; Pastor-Juan, María Del Rosario; Canales-Vázquez, Jesús; Castro-García, Miguel; Villas, María Victoria; Legorburo, Francisco Mansilla; Sabater, Sebastià
2018-04-24
Purpose To identify the reproducible and nonredundant radiomics features (RFs) for computed tomography (CT). Materials and Methods Two phantoms were used to test RF reproducibility by using test-retest analysis, by changing the CT acquisition parameters (hereafter, intra-CT analysis), and by comparing five different scanners with the same CT parameters (hereafter, inter-CT analysis). Reproducible RFs were selected by using the concordance correlation coefficient (as a measure of the agreement between variables) and the coefficient of variation (defined as the ratio of the standard deviation to the mean). Redundant features were grouped by using hierarchical cluster analysis. Results A total of 177 RFs including intensity, shape, and texture features were evaluated. The test-retest analysis showed that 91% (161 of 177) of the RFs were reproducible according to concordance correlation coefficient. Reproducibility of intra-CT RFs, based on coefficient of variation, ranged from 89.3% (151 of 177) to 43.1% (76 of 177) where the pitch factor and the reconstruction kernel were modified, respectively. Reproducibility of inter-CT RFs, based on coefficient of variation, also showed large material differences, from 85.3% (151 of 177; wood) to only 15.8% (28 of 177; polyurethane). Ten clusters were identified after the hierarchical cluster analysis and one RF per cluster was chosen as representative. Conclusion Many RFs were redundant and nonreproducible. If all the CT parameters are fixed except field of view, tube voltage, and milliamperage, then the information provided by the analyzed RFs can be summarized in only 10 RFs (each representing a cluster) because of redundancy. © RSNA, 2018 Online supplemental material is available for this article.
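The two selection criteria named above can be written compactly; the sketch below implements Lin's concordance correlation coefficient for test-retest feature values and the coefficient of variation for repeated measurements (sample, n − 1 variances are used throughout). The input arrays are placeholders for radiomics feature values.

```python
# Sketch: concordance correlation coefficient (agreement) and coefficient of
# variation (precision); input arrays are placeholders for feature values.
import numpy as np

def concordance_cc(x, y):
    """Lin's CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

def coefficient_of_variation(values):
    """CV = SD / mean of repeated measurements of one feature."""
    values = np.asarray(values, float)
    return values.std(ddof=1) / values.mean()
```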
NASA Astrophysics Data System (ADS)
Hurdebise, Quentin; Heinesch, Bernard; De Ligne, Anne; Vincke, Caroline; Aubinet, Marc
2017-04-01
Long-term data series of carbon dioxide and other gas exchanges between terrestrial ecosystems and the atmosphere are becoming more and more numerous. Long-term analyses of such exchanges require a good understanding of measurement conditions during the investigated period. Independently of climate drivers, measurements may indeed be influenced by measurement conditions that are themselves subject to long-term variability due to vegetation growth or set-up changes. The present research refers to the Vielsalm Terrestrial Observatory (VTO), an ICOS candidate site located in a mixed forest (beech, silver fir, Douglas fir, Norway spruce) in the Belgian Ardenne. Fluxes of momentum, carbon dioxide and sensible heat have been continuously measured there by eddy covariance for more than 20 years. During this period, changes in canopy height and measurement height occurred. The correlation coefficients (for momentum, sensible heat and CO2) and the normalized standard deviations measured for the past 20 years at the VTO were analysed in order to define how the fluxes, independently of climate conditions, were affected by the evolution of the surrounding environment, including tree growth, forest thinning and tower height change. A relationship between canopy aerodynamic distance and the momentum correlation coefficient was found which is characteristic of the roughness sublayer, and suggests that momentum transport processes were affected by z-d. In contrast, no relationship was found for the sensible heat and CO2 correlation coefficients, suggesting that the observed z-d variability did not affect their turbulent transport. There were strong differences in these coefficients, however, between two wind sectors characterized by contrasting stands (height differences, homogeneity), and different hypotheses were raised to explain this. This study highlighted the importance of taking the variability of the surrounding environment into account in order to ensure the spatio-temporal consistency of datasets.
A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY
Large-scale laboratory- and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT=[experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
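A small sketch of the HORRAT calculation as commonly applied. The truncated text above cuts off the definition of the denominator; the Horwitz-predicted relative standard deviation, PRSD_R = 2·C^(−0.1505) with C expressed as a mass fraction, is the usual choice and is assumed here.

```python
# Sketch: HORRAT = found among-laboratories RSD / Horwitz-predicted RSD, with the
# Horwitz equation PRSD_R(%) = 2 * C**(-0.1505) and C a mass fraction (assumed).
def horrat(found_rsd_percent, concentration_mass_fraction):
    predicted_rsd_percent = 2.0 * concentration_mass_fraction ** (-0.1505)
    return found_rsd_percent / predicted_rsd_percent

# Example with illustrative values: a found among-laboratories RSD of 8% at an
# analyte level of 1 mg/kg (C = 1e-6) gives HORRAT ~ 8 / 16 = 0.5.
print(horrat(8.0, 1e-6))
```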
Morikawa, Kei; Kurimoto, Noriaki; Inoue, Takeo; Mineshita, Masamichi; Miyazawa, Teruomi
2015-01-01
Endobronchial ultrasonography using a guide sheath (EBUS-GS) is an increasingly common bronchoscopic technique, but currently no methods have been established to quantitatively evaluate EBUS images of peripheral pulmonary lesions. The purpose of this study was to evaluate whether histogram data collected from EBUS-GS images can contribute to the diagnosis of lung cancer. Histogram-based analyses focusing on the brightness of EBUS images were retrospectively conducted: 60 patients (38 with lung cancer; 22 with inflammatory diseases) with clear EBUS images were included. For each patient, a 400-pixel region of interest, typically located at a 3- to 5-mm radius from the probe, was selected from EBUS images recorded during bronchoscopy. Histogram height, width, height/width ratio, standard deviation, kurtosis and skewness were investigated as diagnostic indicators. Median histogram height, width, height/width ratio and standard deviation were significantly different between lung cancer and benign lesions (all p < 0.01). With a cutoff value for standard deviation of 10.5, lung cancer could be diagnosed with an accuracy of 81.7%. The other characteristics investigated were inferior when compared to the histogram standard deviation. Histogram standard deviation appears to be the most useful characteristic for diagnosing lung cancer using EBUS images. © 2015 S. Karger AG, Basel.
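A sketch of the kind of histogram descriptors listed above, computed from a region-of-interest brightness array. How the authors defined histogram height and width is not spelled out in the abstract, so the definitions below (tallest bin count and brightness range, over an assumed 8-bit scale) are illustrative assumptions.

```python
# Sketch: brightness-histogram descriptors for a region of interest; `roi` is a
# placeholder array of pixel brightness values (assumed 0-255 scale).
import numpy as np
from scipy import stats

def histogram_features(roi):
    values = np.asarray(roi, dtype=float).ravel()
    counts, _ = np.histogram(values, bins=256, range=(0, 255))
    return {
        "height": int(counts.max()),                  # tallest histogram bin (assumed definition)
        "width": float(values.max() - values.min()),  # brightness range covered (assumed definition)
        "std": float(values.std(ddof=1)),
        "kurtosis": float(stats.kurtosis(values)),
        "skewness": float(stats.skew(values)),
    }
```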
Role of the standard deviation in the estimation of benchmark doses with continuous data.
Gaylor, David W; Slikker, William
2004-12-01
For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
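A short worked note on the variance decomposition behind this point, writing s_a for the among-animal and s_m for the measurement-error standard deviation and assuming a single measurement per animal (the simplest case; not taken from the paper):

```latex
s_o^2 = s_a^2 + s_m^2
\quad\Longrightarrow\quad
s_o = s_a\sqrt{1 + (s_m/s_a)^2},
\qquad
s_m = s_a/3 \;\Rightarrow\; s_o = s_a\sqrt{10/9} \approx 1.054\,s_a .
```

So when s_m is below about one third of s_a, substituting the overall s_o for s_a inflates the apparent among-animal spread by only about 5%, which is consistent with the small-bias condition stated above.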
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
NASA Technical Reports Server (NTRS)
Wu, J.; Yu, K. M.; Walukiewicz, W.; Shan, W.; Ager, J. W., III; Haller, E. E.; Miotkowski, I.; Ramdas, A. K.; Su, Ching-Hua
2003-01-01
Optical absorption experiments have been performed using diamond anvil cells to measure the hydrostatic pressure dependence of the fundamental bandgap of ZnSe(1-x)Te(x) alloys over the entire composition range. The first- and second-order pressure coefficients are obtained as a function of composition. Starting from the ZnSe side, the magnitude of both coefficients increases slowly until x approx. 0.7, where the ambient-pressure bandgap reaches a minimum. For larger values of x the coefficients rapidly approach the values of ZnTe. The large deviations of the pressure coefficients from the linear interpolation between ZnSe and ZnTe are explained in terms of the band anticrossing model.
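For reference, "first- and second-order pressure coefficients" here refers to a quadratic fit of the bandgap versus hydrostatic pressure; the form below is the conventional parameterization, written as an assumption since the abstract does not restate it.

```latex
E_g(x, P) \;\approx\; E_g(x, 0) \;+\; a(x)\,P \;+\; b(x)\,P^2 ,
```

with a(x) and b(x) the composition-dependent first- and second-order pressure coefficients.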
Optimizing Noble Gas-Water Interactions via Monte Carlo Simulations.
Warr, Oliver; Ballentine, Chris J; Mu, Junju; Masters, Andrew
2015-11-12
In this work we present optimized noble gas-water Lennard-Jones 6-12 pair potentials for each noble gas. Given the significantly different atomic nature of water and the noble gases, the standard Lorentz-Berthelot mixing rules produce inaccurate unlike molecular interactions between these two species. Consequently, we find simulated Henry's coefficients deviate significantly from their experimental counterparts for the investigated thermodynamic range (293-353 K at 1 and 10 atm), due to a poor unlike potential well term (εij). Where εij is too high or low, so too is the strength of the resultant noble gas-water interaction. This observed inadequacy in using the Lorentz-Berthelot mixing rules is countered in this work by scaling εij for helium, neon, argon, and krypton by factors of 0.91, 0.8, 1.1, and 1.05, respectively, to reach a much improved agreement with experimental Henry's coefficients. Due to the highly sensitive nature of the xenon εij term, coupled with the reasonable agreement of the initial values, no scaling factor is applied for this noble gas. These resulting optimized pair potentials also accurately predict partitioning within a CO2-H2O binary phase system as well as diffusion coefficients in ambient water. This further supports the quality of these interaction potentials. Consequently, they can now form a well-grounded basis for the future molecular modeling of multiphase geological systems.
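To make the scaling concrete, the sketch below applies the Lorentz-Berthelot rules with the gas-specific epsilon scaling factors quoted above (xenon left unscaled); the sigma/epsilon inputs themselves are placeholders, since the underlying water and noble-gas parameters are not given in the abstract.

```python
# Sketch of scaled Lorentz-Berthelot combination: arithmetic-mean sigma rule kept,
# geometric-mean epsilon rule multiplied by a gas-specific factor (values from the
# abstract); the individual sigma/epsilon inputs are placeholders.
import math

EPSILON_SCALE = {"He": 0.91, "Ne": 0.80, "Ar": 1.10, "Kr": 1.05, "Xe": 1.00}

def mixed_lj(sigma_gas, epsilon_gas, sigma_water, epsilon_water, gas):
    sigma_ij = 0.5 * (sigma_gas + sigma_water)                                 # Lorentz rule
    epsilon_ij = EPSILON_SCALE[gas] * math.sqrt(epsilon_gas * epsilon_water)   # scaled Berthelot rule
    return sigma_ij, epsilon_ij
```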
Xiong, Jianyin; Yao, Yuan; Zhang, Yinping
2011-04-15
The initial emittable concentration (C(m,0)), the diffusion coefficient (D(m)), and the material/air partition coefficient (K) are the three characteristic parameters influencing emissions of formaldehyde and volatile organic compounds (VOCs) from building materials or furniture. It is necessary to determine these parameters to understand emission characteristics and how to control them. In this paper we develop a new method, the C-history method for a closed chamber, to measure these three parameters. Compared to the available methods of determining the three parameters described in the literature, our approach has the following salient features: (1) the three parameters can be simultaneously obtained; (2) it is time-saving, generally taking less than 3 days for the cases studied (the available methods tend to need 7-28 days); (3) the maximum relative standard deviations of the measured C(m,0), D(m) and K are 8.5%, 7.7%, and 9.8%, respectively, which are acceptable for engineering applications. The new method was validated by using the characteristic parameters determined in the closed chamber experiment to predict the observed emissions in a ventilated full scale chamber experiment, proving that the approach is reliable and convincing. Our new C-history method should prove useful for rapidly determining the parameters required to predict formaldehyde and VOC emissions from building materials as well as for furniture labeling.
NASA Astrophysics Data System (ADS)
Atieh, M.; Mehltretter, S. L.; Gharabaghi, B.; Rudra, R.
2015-12-01
One of the most uncertain modeling tasks in hydrology is the prediction of ungauged stream sediment load and concentration statistics. This study presents integrated artificial neural networks (ANN) models for prediction of sediment rating curve parameters (rating curve coefficient α and rating curve exponent β) for ungauged basins. The ANN models integrate a comprehensive list of input parameters to improve the accuracy achieved; the input parameters used include: soil, land use, topographic, climatic, and hydrometric data sets. The ANN models were trained on the randomly selected 2/3 of the dataset of 94 gauged streams in Ontario, Canada and validated on the remaining 1/3. The developed models have high correlation coefficients of 0.92 and 0.86 for α and β, respectively. The ANN model for the rating coefficient α is directly proportional to rainfall erosivity factor, soil erodibility factor, and apportionment entropy disorder index, whereas it is inversely proportional to vegetation cover and mean annual snowfall. The ANN model for the rating exponent β is directly proportional to mean annual precipitation, the apportionment entropy disorder index, main channel slope, standard deviation of daily discharge, and inversely proportional to the fraction of basin area covered by wetlands and swamps. Sediment rating curves are essential tools for the calculation of sediment load, concentration-duration curve (CDC), and concentration-duration-frequency (CDF) analysis for more accurate assessment of water quality for ungauged basins.
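For context, the sediment rating curve whose parameters (α, β) the ANN models predict is conventionally written as a power law in discharge; the concentration form shown below is an assumption, since the abstract does not restate the exact variables used.

```latex
C_s \;=\; \alpha\, Q^{\beta}
\quad\Longleftrightarrow\quad
\log C_s \;=\; \log\alpha \;+\; \beta \log Q ,
```

where C_s is the suspended sediment concentration and Q the stream discharge.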
The Space Time Asymmetry Research Mission
NASA Astrophysics Data System (ADS)
Scargle, Jeffrey; Goebel, John; Buchman, Sasha; Byer, Robert; Sun, Ke-Xun; Lipa, John; Chu-Thielbar, Lisa; Hall, John
We will use precision molecular-iodine-stabilized Nd:YAG laser interferometers to search for small deviations from Lorentz Invariance, a cornerstone of relativity and particle physics, and thus of our understanding of the Universe. A Lorentz violation would have profound implications for cosmology and particle physics. An improved null result will constrain theories attempting to unite particle physics and gravity. Science objectives: measure the absolute anisotropy of the velocity of light to 10^-18 (100-fold improvement); derive the Michelson-Morley coefficient to 10^-12 (100-fold improvement); derive the Kennedy-Thorndike coefficient to 7×10^-10 (400-fold improvement); and derive the coefficients of Lorentz violation in the Standard Model Extension in the range 7×10^-18 to 10^-14 (50- to 500-fold improvement). Thermal control, stabilization and uniformity are great concerns, so new technology has been devised that keeps these parameters within strict specified limits. Thereby STAR is able to operate effectively in all possible orbits. The spacecraft is based on a bus developed by NASA Ames Research Center. STAR is designed to fly as a secondary payload on a Delta IV launch vehicle with an ESPA ring into an 850 km circular orbit. It will have a one-year mission and is capable of even longer duration. Other orbit options are possible depending on the launch opportunities available. The STAR project is a partnership between Stanford University, NASA Ames Research Center and NASA Goddard Space Flight Center.
Measurement of CO2, CO, SO2, and NO emissions from coal-based thermal power plants in India
NASA Astrophysics Data System (ADS)
Chakraborty, N.; Mukherjee, I.; Santra, A. K.; Chowdhury, S.; Chakraborty, S.; Bhattacharya, S.; Mitra, A. P.; Sharma, C.
Measurements of CO2 (a direct GHG) and CO, SO2, and NO (indirect GHGs) were conducted on-line at some of the coal-based thermal power plants in India. The objective of the study was three-fold: to quantify the measured emissions in terms of emission coefficients per kg of coal and per kWh of electricity, to calculate the total possible emissions from Indian thermal power plants, and subsequently to compare them with some previous studies. An IMR 2800P flue gas analyzer was used on-line to measure the emission rates of CO2, CO, SO2, and NO at 11 generating units of different ratings. Quality assurance (QA) and quality control (QC) techniques were adopted in gathering the data so as to avoid ambiguity in subsequent data interpretation. To aid interpretation, the requisite statistical parameters (standard deviation and arithmetic mean) for the measured emissions have also been calculated. The emission coefficients determined for CO2, CO, SO2, and NO have been compared with the corresponding values obtained in studies conducted by other groups. The total emissions of CO2, CO, SO2, and NO calculated on the basis of the emission coefficients for the year 2003-2004 have been found to be 465.667, 1.583, 4.058, and 1.129 Tg, respectively.
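A trivial sketch of the scaling step described above: an emission coefficient in kg of gas per kg of coal burnt multiplied by national coal consumption gives a total in teragrams. All numbers below are placeholders, not the study's values.

```python
# Sketch: scale a measured emission coefficient (kg of gas per kg of coal) to a
# national total in Tg; coefficient and coal tonnage are placeholders.
def total_emission_tg(emission_coeff_kg_per_kg_coal, coal_burnt_tonnes):
    kg_emitted = emission_coeff_kg_per_kg_coal * coal_burnt_tonnes * 1_000.0  # tonnes -> kg
    return kg_emitted / 1e9                                                   # kg -> Tg (1 Tg = 1e9 kg)

print(total_emission_tg(1.5, 300e6))  # e.g. 1.5 kg CO2/kg coal, 300 Mt coal -> 450 Tg
```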
Is reticular temperature a useful indicator of heat stress in dairy cattle?
Ammer, S; Lambertz, C; Gauly, M
2016-12-01
The present study investigated whether reticular temperature (RT) in dairy cattle is a useful indicator of heat stress considering the effects of milk yield and water intake (WI). In total, 28 Holstein-Friesian dairy cows raised on 3 farms in Lower Saxony, Germany, were studied from March to December 2013. During the study, RT and barn climate parameters (air temperature, relative humidity) were measured continuously and individual milk yield was recorded daily. Both the daily temperature-humidity index (THI) and the daily median RT per cow were calculated. Additionally, the individual WI (amount and frequency) of 10 cows during 100d of the study was recorded on 1 farm. Averaged over all farms, daily THI ranged between 35.4 and 78.9 with a mean (±standard deviation) of 60.2 (±8.7). Dairy cows were on average (±standard deviation) 110.9d in milk (±79.3) with a mean (±standard deviation) milk yield of 35.2kg/d (±9.1). The RT was affected by THI, milk yield, days in milk, and WI. Up to a THI threshold of 65, RT remained constant at 39.2°C. Above this threshold, RT increased to 39.3°C and further to 39.4°C when THI ≥70. The correlation between THI ≥70 and RT was 0.22, whereas the coefficient ranged between r=-0.08 to +0.06 when THI <70. With increasing milk yield, RT decreased slightly from 39.3°C (<30kg/d) to 39.2°C (≥40kg/d). For daily milk yields of ≥40kg, the median RT and daily milk yield were correlated at r=-0.18. The RT was greater when dairy cows yielded ≥30kg/d and THI ≥70 (39.5°C) compared with milk yields <30kg and THI <70 (39.3°C). The WI, which averaged (±standard deviation) 11.5 l (±5.7) per drinking bout, caused a mean decrease in RT of 3.2°C and was affected by the amount of WI (r=0.60). After WI, it took up to 2h until RT reached the initial level before drinking. In conclusion, RT increased when the THI threshold of 65 was exceeded. A further increase was noted when THI ≥70. Nevertheless, the effects of WI and milk yield have to be considered carefully when RT is used to detect hyperthermia in dairy cattle. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
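The abstract does not state which THI formulation was used; one widely cited NRC-type formula combining dry-bulb temperature (°C) and relative humidity (%) is sketched below as an assumption.

```python
# One common temperature-humidity index formulation (assumed here; the study's
# exact formula is not given in the abstract): T in deg C, RH in percent.
def thi(temp_c, rel_humidity_percent):
    t_f = 1.8 * temp_c + 32.0
    return t_f - (0.55 - 0.0055 * rel_humidity_percent) * (t_f - 58.0)

# Example: 25 deg C at 50% relative humidity gives THI ~ 71.8, just above the
# THI >= 70 level discussed above.
print(round(thi(25.0, 50.0), 1))
```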
NASA Astrophysics Data System (ADS)
Gülnahar, Murat
2014-12-01
In this study, the current-voltage (I-V) and capacitance-voltage (C-V) measurements of an Au/4H-SiC Schottky diode are characterized as a function of temperature in the 50-300 K range. Experimental parameters such as the ideality factor and the apparent barrier height prove to be strongly temperature dependent: the ideality factor increases and the apparent barrier height decreases with decreasing temperature, whereas the barrier height values obtained from the C-V data increase with temperature. Likewise, the Richardson plot deviates at low temperatures. These anomalous behaviors observed for Au/4H-SiC are attributed to Schottky barrier inhomogeneities. The barrier anomaly, which relates to the Au/4H-SiC interface, is also confirmed by frequency-dependent C-V measurements at 300 K and is interpreted by both Tung's lateral inhomogeneity model and a multi-Gaussian distribution approach. The values of the weighting coefficients, standard deviations and mean barrier height are calculated for each distribution region of Au/4H-SiC using the multi-Gaussian distribution approach. In addition, the total effective area of the patches, NAe, is obtained at separate temperatures; the result indicates that the low-barrier regions contribute significantly to the current transport at the junction. The homogeneous barrier height is calculated from the correlation between the ideality factor and the barrier height, and the standard deviation values obtained from the ideality factor versus q/3kT curve are in close agreement with those obtained from the barrier height versus q/2kT variation. It can be concluded that the temperature-dependent electrical characteristics of Au/4H-SiC can be successfully explained on the basis of thermionic emission theory combined with both models.
NASA Astrophysics Data System (ADS)
Monaghan, Kari L.
The problem addressed was the relationship between aircraft safety rates and the rate of maintenance outsourcing. Data gathered from 14 passenger airlines (AirTran, Alaska, America West, American, Continental, Delta, Frontier, Hawaiian, JetBlue, Midwest, Northwest, Southwest, United, and USAir) covered the years 1996 through 2008. A quantitative correlational design utilizing Pearson's correlation coefficient and the coefficient of determination was used in the present study to measure the correlation between variables. Elements of passenger airline aircraft maintenance outsourcing and aircraft accidents, incidents, and pilot deviations within domestic passenger airline operations were analyzed and evaluated. Rates of maintenance outsourcing were analyzed to determine their association with accident, incident, and pilot deviation rates. Maintenance outsourcing rates used in the evaluation were the yearly dollar expenditures of passenger airlines for aircraft maintenance outsourcing relative to total airline aircraft maintenance expenditures. Aircraft accident, incident, and pilot deviation rates used in the evaluation were the yearly number of accidents, incidents, and pilot deviations per mile flown. Pearson r-values were calculated to measure the strength of the linear relationship between the variables. There were no statistically significant correlations for accidents, r(174)=0.065, p=0.393, or incidents, r(174)=0.020, p=0.793. However, there was a statistically significant correlation for pilot deviation rates, r(174)=0.204, p=0.007, indicating a statistically significant association between maintenance outsourcing rates and pilot deviation rates. The calculated R-square value of 0.042 represents the variance in aircraft pilot deviation rates that can be accounted for by the variance in aircraft maintenance outsourcing rates; accordingly, 95.8% of the variance is unexplained. Suggestions for future research include replication of the present study with the inclusion of maintenance outsourcing rate data for all airlines, differentiated between domestic and foreign repair station utilization. Replication of the present study every five years is also encouraged to continue evaluating the impact of maintenance outsourcing practices on passenger airline safety.
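A minimal sketch of the correlational analysis described above: Pearson's r with its p-value, and the coefficient of determination R^2 = r^2. The two rate series are synthetic placeholders for outsourcing rates and pilot-deviation rates, not the study's data.

```python
# Sketch: Pearson correlation and coefficient of determination for two rate
# series; the data below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
outsourcing_rate = rng.uniform(0.2, 0.7, size=176)                        # hypothetical
deviation_rate = 0.05 * outsourcing_rate + rng.normal(0, 0.02, size=176)  # hypothetical

r, p = stats.pearsonr(outsourcing_rate, deviation_rate)
r_squared = r ** 2      # share of variance in deviation rates accounted for
print(f"r = {r:.3f}, p = {p:.3f}, R^2 = {r_squared:.3f}")
```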
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Barnes, Robert A.; Eplee, Robert E., Jr.; Biggar, Stuart F.; Thome, Kurtis J.; Zalewski, Edward F.; Slater, Philip N.; Holmes, Alan W.
1999-01-01
The solar radiation-based calibration (SRBC) of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) was performed on 1 November 1993. Measurements were made outdoors in the courtyard of the instrument manufacturer. SeaWiFS viewed the solar irradiance reflected from the sensor's diffuser in the same manner as viewed on orbit. The calibration included measurements using a solar radiometer designed to determine the transmittances of principal atmospheric constituents. The primary uncertainties in the outdoor measurements are the transmission of the atmosphere and the reflectance of the diffuser. Their combined uncertainty is about 5 or 6%. The SRBC also requires knowledge of the extraterrestrial solar spectrum. Four solar models are used. When averaged over the responses of the SeaWiFS bands, the irradiance models agree at the 3.6% level, with the greatest difference for SeaWiFS band 8. The calibration coefficients from the SRBC are lower than those from the laboratory calibration of the instrument in 1997. For a representative solar model, the ratios of the SRBC coefficients to laboratory values average 0.962 with a standard deviation of 0.012. The greatest relative difference is 0.946 for band 8. These values are within the estimated uncertainties of the calibration measurements. For the transfer-to-orbit experiment, the measurements in the manufacturer's courtyard are used to predict the digital counts from the instrument on its first day on orbit (August 1, 1997). This experiment requires an estimate of the relative change in the diffuser response for the period between the launch of the instrument and its first solar measurements on orbit (September 9, 1997). In relative terms, the counts from the instrument on its first day on orbit averaged 1.3% higher than predicted, with a standard deviation of 1.2% and a greatest difference of 2.4% for band 7. The estimated uncertainty for the transfer-to-orbit experiment is about 3 or 4%.